John M. answered • 04/24/14

Vivian,

Generally, margin of error and confidence intervals require knowing not only the mean and the sample size, but also the standard deviation (of either the population or the sample).

The formula for the standard error of the mean is the standard deviation divided by the square root of the sample size, i.e., s/√n. To determine the margin of error for a 95% confidence interval, you use the t- or z-table value for α/2 = 0.025. Given your sample sizes (900 and 1600), the t- and z-table results are essentially identical: t ≈ z = 1.96. So the 95% confidence interval is M ± (z·s/√n).

Looking at the formula, the margin of error gets smaller as the sample size (n) increases, because z·s is always divided by the square root of the sample size. For a sample size of 1600, √n = 40, whereas for a sample size of 900, √n = 30. Assuming the standard deviations are the same, the margin of error for the sample of 1600 will be smaller, because z·s divided by 40 is always a smaller number than the same z·s divided by 30.
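To make the comparison concrete, here is a quick sketch of the calculation. The standard deviation value (s = 10.0) is hypothetical, just to show that the same s divided by √1600 gives a smaller margin than divided by √900:

```python
import math

def margin_of_error(s, n, z=1.96):
    """Margin of error for a 95% CI: z * s / sqrt(n)."""
    return z * s / math.sqrt(n)

s = 10.0  # hypothetical standard deviation, same for both samples

me_900 = margin_of_error(s, 900)    # 1.96 * 10 / 30 ≈ 0.653
me_1600 = margin_of_error(s, 1600)  # 1.96 * 10 / 40 = 0.490

print(round(me_900, 3))             # prints 0.653
print(round(me_1600, 3))            # prints 0.49
print(me_1600 < me_900)             # prints True
```

Whatever value of s you plug in, the n = 1600 margin comes out at 30/40 = 75% of the n = 900 margin.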