
Timothy M. answered 09/18/15
Ph.D. in neuroscience with specialty in statistical analysis
Hello,
This is a pretty tricky question. If we find the mean and median for these errors, we end up with the following:
One day:
Mean = 0.5
Median = 0
Five days:
Mean = -0.3
Median = 1.0
Since these are errors, the mean suggests the five-day predictions have less error (|-0.3| is smaller than 0.5), while the median suggests the one-day predictions are less error-prone. However, this is not how we typically evaluate errors. To see why, suppose we have temperature errors for two days: on day one the forecast was off by 1 degree, and on day two by -1 degree. Averaging those gives 0, because the positive and negative errors cancel. So despite the fact that each day was off by 1 degree, we would be reporting 0 error. To get around this, we typically square the error terms first. In my example, that means 1^2 and (-1)^2, which gives an average squared error of 1 and makes much more sense.
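Here is a minimal Python sketch of that two-day example, comparing the plain average of the signed errors with the square-first approach (this is the idea behind the root-mean-square error, or RMSE):

    import math

    errors = [1, -1]  # off by +1 degree on day one, -1 degree on day two

    # Plain mean: the +1 and -1 cancel, so it reports zero error
    plain_mean = sum(errors) / len(errors)
    print(plain_mean)  # 0.0

    # Square first so the signs can't cancel, average the squares,
    # then take the square root to get back to degrees
    rmse = math.sqrt(sum(e ** 2 for e in errors) / len(errors))
    print(rmse)  # 1.0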
Let's apply that to your example. First, we square each of those numbers. Then we 1) take the average of the squared numbers and 2) take the median of the squared numbers (this is somewhat unusual to do for a median, but the question calls for it).
For one-day predictions:
Mean = 2.1
Median = 0
One last step is to take the square root (since we squared the errors earlier, taking the square root brings the units back to degrees). This gives us:
Mean = 1.449
Median = 0
For five-day predictions:
Mean = 14.7
Median = 5
Again, we have to take the square root. This gives us:
Mean = 3.834
Median = 2.236
When we do it this way, both the mean and the median say that one-day predictions had less error than five-day predictions.
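If you want to check the arithmetic, here is a small Python sketch of the full procedure. The original lists of forecast errors aren't reproduced in this answer, so the demo below reuses the two-day example; substituting the one-day and five-day error lists from your question should give the values worked out above.

    import math
    from statistics import mean, median

    def root_mean_square(errors):
        # Square each error, average the squares, then take the square root
        return math.sqrt(mean(e ** 2 for e in errors))

    def root_median_square(errors):
        # Same procedure, but using the median of the squared errors
        # (unusual, as noted above, but the question calls for it)
        return math.sqrt(median(e ** 2 for e in errors))

    demo = [1, -1]  # stand-in for your actual error lists
    print(root_mean_square(demo))    # 1.0
    print(root_median_square(demo))  # 1.0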