
Russ P. answered 10/10/14
Patient MIT Grad For Math and Science Tutoring
Dawn,
I assume you mean the greatest variability in their scores, not the greatest variable?
In statistics, variability is the dispersion of the scores about their mean, and it is measured by the standard deviation around that mean. I'll work through Ed's scores to show you how it's done. Then you do the same for the other 4 people's scores and compare their standard deviations. Note that a person with the same score each year has a mean (that identical score) but zero variability. The person with the highest standard deviation has the highest variability (i.e., inconsistency), and that person is the answer to your problem.
Ed's scores: 1, 4, 2, 4, 1
The mean is found by adding up all 5 scores and dividing by 5: (1/5)(1 + 4 + 2 + 4 + 1) = (1/5)(12) = 2.4
So on average, Ed has a score of 2.4 across those 5 years. If he were totally consistent, that is what he would score each year (if decimal ratings were allowed). BTW, if Ed had 22 years' worth of scores, you would add up all 22 scores and divide the sum by 22. So calculating the mean depends on the number of data points you have.
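If you'd like to check the arithmetic with a computer, here is a short Python sketch of the mean calculation (the scores are just Ed's from this problem):

```python
# Ed's scores over the 5 years
scores = [1, 4, 2, 4, 1]

# The mean: add up all the scores and divide by how many there are
mean = sum(scores) / len(scores)
print(mean)  # 2.4
```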
To estimate variability, take each score, subtract the mean from it to get its deviation from the mean, square that deviation, add up all these squared numbers, and divide by the sample size, 5. Finally, take the square root to get the standard deviation. BTW, the reason for the squaring is that you're interested in the size of the deviations from the mean and don't care whether they are + or - deviations; without the squaring, some +'s and -'s might cancel out. The square root then "undoes" the squaring, in a sense, to keep the result in the same units as the scores.
s² = (1/5)[(1 − 2.4)² + (4 − 2.4)² + (2 − 2.4)² + (4 − 2.4)² + (1 − 2.4)²]
s² = (1/5)[1.96 + 2.56 + 0.16 + 2.56 + 1.96] = (1/5)(9.2)
s² = 1.84
s = √1.84 ≈ 1.36 is our estimate for Ed's standard deviation over his 5 years of scores.
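The whole calculation for Ed can also be checked in a few lines of Python, following exactly the steps above (note it divides by the sample size 5, as in the worked example, rather than the n − 1 some textbooks use):

```python
import math

scores = [1, 4, 2, 4, 1]
n = len(scores)

# Step 1: the mean
mean = sum(scores) / n  # 2.4

# Step 2: square each deviation from the mean and add them up
sum_sq_dev = sum((x - mean) ** 2 for x in scores)  # 9.2

# Step 3: divide by the sample size, then take the square root
variance = sum_sq_dev / n      # 1.84
std_dev = math.sqrt(variance)  # about 1.36
print(variance, round(std_dev, 2))
```

Running the same code on each of the other people's scores lets you compare the standard deviations directly.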
Now you know why and what to do for the rest of your data.