
Arianna R. answered 05/29/19
Patient and Knowledgeable Science, Math & English Tutor
In statistics, a confidence interval is an expression of the degree of uncertainty associated with a sample statistic. It's an interval estimate paired with a probability statement. Confidence intervals are preferred to plain point estimates because they indicate both the precision and the uncertainty of the estimate.
For example, suppose you conduct a survey and compute a 95% confidence interval for your result. In other words, if someone else used the same sampling method to select different samples and computed an interval estimate for each sample, they would expect the true population parameter to fall within those interval estimates 95% of the time.
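To see what that "95% of the time" means in practice, here is a minimal simulation sketch in Python (the population mean, standard deviation, and sample size below are made-up numbers chosen just for illustration). It repeatedly draws samples, builds a 95% interval from each, and counts how often the interval actually captures the true mean:

```python
import numpy as np

# Hypothetical population parameters (illustrative values, not from the answer above)
true_mean, true_sd = 50, 10
n, trials = 100, 10_000

rng = np.random.default_rng(0)
covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, true_sd, n)
    se = sample.std(ddof=1) / np.sqrt(n)          # standard error of the sample mean
    lo = sample.mean() - 1.96 * se                # lower bound of the 95% interval
    hi = sample.mean() + 1.96 * se                # upper bound of the 95% interval
    covered += lo <= true_mean <= hi

print(f"Coverage: {covered / trials:.3f}")        # typically very close to 0.95
```

The printed coverage lands near 0.95, which is exactly the long-run guarantee a 95% confidence level makes.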
You can also construct confidence intervals based on different levels of significance. The level of significance is how willing you are to be wrong: with a 95% confidence interval, you accept a 5% chance of being wrong; with a 90% confidence interval, a 10% chance. Confidence intervals that are wide relative to the estimate itself indicate an unstable (imprecise) estimate, while narrow intervals indicate a stable one.
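If it helps to see where the constants for different confidence levels come from, here is a short sketch using SciPy's standard normal distribution (assuming the usual two-sided setup, where the significance level alpha is 1 minus the confidence level):

```python
from scipy.stats import norm

# For each confidence level, the constant is the two-sided critical value
# of the standard normal distribution at 1 - alpha/2.
for confidence in (0.90, 0.95, 0.99):
    alpha = 1 - confidence
    z = norm.ppf(1 - alpha / 2)
    print(f"{confidence:.0%} confidence -> constant z = {z:.3f}")
# 90% -> 1.645, 95% -> 1.960, 99% -> 2.576
```

Notice that higher confidence means a larger constant, and therefore a wider interval.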
You calculate your confidence interval from the standard error of a measurement. That is, you multiply the standard error by a constant that reflects the level of significance desired, based on the normal distribution, and then add and subtract that margin from your estimate. The constant for 95% confidence intervals, for instance, is 1.96.
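As a worked sketch with made-up numbers (a sample mean of 50, a sample standard deviation of 10, and 100 observations), the calculation looks like this:

```python
import numpy as np

mean, sd, n = 50.0, 10.0, 100          # hypothetical sample statistics
se = sd / np.sqrt(n)                   # standard error of the mean = 1.0
z = 1.96                               # constant for a 95% confidence interval
lower, upper = mean - z * se, mean + z * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")   # prints (48.04, 51.96)
```

So the 95% confidence interval here runs from about 48.04 to 51.96, i.e. the estimate plus or minus 1.96 standard errors.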