Training your intuition

For most people, solving a problem is not difficult if they have a model to follow and the correct data to plug into it. Take one of the most basic tasks: paying for something at a cash register. If the cashier tells you the Happy Meal costs $4.23 (with tax) and you hand the cashier a $10.00 bill, I suspect that most cashiers will give, and most people will expect, $5.77 in change. Oh, you can confuse people and make the problem harder (seven dimes, a nickel, and two pennies rather than three quarters and two pennies), but these are just "tricks." This works because, for the vast majority of people, this is an "ordinary" occurrence: something we've either done or witnessed hundreds of times, so we can intuitively extend our addition and subtraction rules to a new problem.
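The change-making intuition above can be sketched in a few lines of code. This is just an illustration of the arithmetic, working in cents to avoid floating-point rounding; the function name and the set of denominations are assumptions for the example, not anything from the original text.

```python
def make_change(paid_cents, cost_cents):
    """Return change as a dict of denomination (in cents) -> count, largest first."""
    denominations = [500, 100, 25, 10, 5, 1]  # $5 bill down to pennies
    remaining = paid_cents - cost_cents
    change = {}
    for d in denominations:
        count, remaining = divmod(remaining, d)
        if count:
            change[d] = count
    return change

# $10.00 paid for a $4.23 Happy Meal -> $5.77 back:
# a $5 bill, three quarters, and two pennies.
print(make_change(1000, 423))  # {500: 1, 25: 3, 1: 2}
```

The greedy largest-denomination-first approach mirrors what a cashier actually does, which is why the answer feels "obvious" to anyone who has watched it hundreds of times.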

Unfortunately, most classroom topics are taught like the math example above, using clear, intuitive, easily understood examples, but then tested using confusing tricks that render our intuition useless. Thus, one of the most important things a tutor can provide is help "training" a student's intuition to identify when an answer might be incorrect and why. Take a statistics/algebra problem. Suppose you are asked about the frequencies, mean, and standard deviation of 250 individual results on a test (graded from 0 to 100), and about what those numbers suggest regarding how those individuals performed relative to a population of 10,000 students who took the exam. Your student does the math but makes a mistake, determining that the mean score was 75 and the standard deviation was about 5 points, and reports frequency data showing about 25 scores below 60 and 25 scores above 90. At first blush, this might make sense, as the low scores "cancel out" the high scores, so a mean of 75 might be reasonable.

But these results conflict with some basic rules about how samples behave, rules that need to become intuitive. First, the central limit theorem implies that the sampling distribution of the mean, no matter how unusual the underlying distribution, becomes approximately normal (i.e., bell-curve shaped) as the sample size increases (Sullivan, 2010). Second, a normal distribution is bell shaped, with its highest peak at the mean, and empirically about 68% of the data lies within +/- 1 standard deviation of the mean, 95% within +/- 2 standard deviations, and 99.7% within +/- 3 standard deviations (Sullivan, 2010). Third, beyond the empirical rule, Chebyshev's inequality requires, as a matter of the mathematics of the mean and standard deviation, that for any distribution, no matter how skewed, at least 75% of the data lie within 2 standard deviations of the mean and at least 88.9% within 3 standard deviations (Sullivan, 2010).
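The two rules of thumb can be sketched numerically. Below is a minimal illustration using the student's reported figures (mean 75, standard deviation 5); the function name is my own for the example.

```python
def chebyshev_min_fraction(k):
    """Chebyshev's inequality: for ANY distribution, at least 1 - 1/k**2
    of the data lies within k standard deviations of the mean (k > 1)."""
    return 1 - 1 / k**2

mean, sd = 75, 5  # the student's reported statistics

# Empirical rule (normal distributions only): roughly 68 / 95 / 99.7 percent.
for k, pct in [(1, 68), (2, 95), (3, 99.7)]:
    print(f"normal: ~{pct}% within [{mean - k*sd}, {mean + k*sd}]")

# Chebyshev's guarantee, which holds with no normality assumption at all.
for k in [2, 3]:
    print(f"any distribution: at least {chebyshev_min_fraction(k):.1%} "
          f"within [{mean - k*sd}, {mean + k*sd}]")
```

Note the difference in strength: the empirical rule is an approximation that assumes normality, while Chebyshev's bound (75% within 2 standard deviations, 88.9% within 3) is a hard mathematical floor.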

Using these ideas, we would expect the sample above to be roughly normal, and thus that almost all the data (99.7%) should fall within 75 +/- 15 (3 x 5), that is, between 60 and 90. If 20% of the data lies outside that range (10% below 60 and 10% above 90), we should be suspicious. Furthermore, if the mean is 75 and the standard deviation is 5, then mathematically at least 88.9% of the data must lie between 60 and 90. But the reported frequencies indicate that 20% of the data is outside that range, i.e., that only 80% is within it. So what began as an empirical suspicion that something is wrong becomes a mathematical certainty.
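This sanity check can itself be written as a few lines of arithmetic. The sketch below simply compares the reported tail fraction against Chebyshev's hard cap; all the numbers come from the worked example above.

```python
n = 250
below_60, above_90 = 25, 25      # reported counts more than 3 sd from the mean

fraction_outside = (below_60 + above_90) / n   # 0.20
chebyshev_cap = 1 / 3**2                       # at most 1/k**2 may lie outside k sd

print(f"reported fraction outside 3 sd: {fraction_outside:.0%}")   # 20%
print(f"Chebyshev maximum outside 3 sd: {chebyshev_cap:.1%}")      # 11.1%

# 20% > 11.1%, so the mean, standard deviation, and frequencies
# cannot all be correct: at least one of them contains an error.
assert fraction_outside > chebyshev_cap
```

A student who internalizes this three-line check has a fast, calculator-friendly way to catch the error before handing in the answer.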

Thus, once our intuition is trained to look for such anomalies, we can look back over our work and ask where the mistake was made. Clearly, the standard deviation is too small to be correct, so we should focus our error checking on how we might have made it smaller: Did we divide instead of multiply somewhere? Did we subtract when we should have added? Did we transpose digits, making numbers smaller? Furthermore, because the standard deviation is calculated from the mean, an error in calculating the mean might have influenced this result as well.
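We can even quantify "too small." The sketch below derives a lower bound on the standard deviation implied by the reported tails, under the assumption (mine, for illustration) that the reported mean of 75 is itself correct: if a fraction p of scores lies at least d points from the mean, the variance is at least p * d**2, so the standard deviation is at least d * sqrt(p).

```python
import math

p = 50 / 250   # 20% of the 250 scores fall below 60 or above 90
d = 15         # every such score is at least 15 points from the mean of 75

# variance >= p * d**2  implies  sd >= d * sqrt(p)
sd_lower_bound = d * math.sqrt(p)
print(round(sd_lower_bound, 2))   # roughly 6.71, well above the reported 5
```

So with those tail frequencies, no data set could have a standard deviation of 5; the true value must be at least about 6.7, confirming that the error-hunt should start with the standard deviation calculation.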

In helping students, it is just as important to teach them to recognize on their own when they are making a mistake as when they have calculated the right answer. Today, with the rise of computer calculation, people often see only the end results and treat them as a kind of "deus ex machina," the god from the machine, assuming the numbers must be right. An important part of reviewing numbers is to understand, intuitively and on a human level, why those numbers make sense beyond the fact that the computer produced them, and that is what students need to succeed on their own during exams, particularly standardized tests like the ACT, SAT, and MCAT.
