Originally posted by sh76
I figure that mathematics is a science, so I hope I have the right forum.
I need to determine what level of sample size is necessary to generate statistically reliable data for exam results. In other words, if I give an exam to X random college students, I want to be able to assert that based on those results, I can be confident that Y% of random students wi ...[text shortened]... or; and
2) Someone can give me a layman's tip on how to make that determination.
Thanks!
I think this is what you originally meant, but I don't know if you still want it.
Let's say M is the true mean. If I understand you correctly, you want to know how big your sample size should have been to get a confidence interval of [M-Z, M+Z] with 95% confidence.
The formula for a 95% confidence interval for the mean is [M - 2*SD/sqrt(N), M + 2*SD/sqrt(N)] (strictly, the interval is centered at the sample mean rather than at M, and the 2 is really 1.96, but neither matters much here).
So you want 2*SD/sqrt(N) = Z. Suppose that, by good fortune, the population SD is exactly the SD you got in your sample, 13.01. Solving for N gives the following formula for N as a function of Z:
N = (2*SD/Z)^2 = (2*13.01/Z)^2.
For example, here is how big the sample would need to be to get a 95% confidence interval of [M-Z, M+Z], rounding N up to a whole number:
Z=1: N = (26.02/1)^2 ≈ 677.0, so N = 678
Z=2: N = (26.02/2)^2 ≈ 169.3, so N = 170
Z=3: N = (26.02/3)^2 ≈ 75.2, so N = 76
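If you want to play with the numbers yourself, here's a small Python sketch of the same calculation (the function name is mine, the SD of 13.01 is just the sample SD from above, and I use 2 rather than the exact 1.96 to match the rough formula):

import math

def required_sample_size(sd, half_width, multiplier=2.0):
    # Smallest N such that multiplier * sd / sqrt(N) <= half_width,
    # i.e. the 95% CI is no wider than [M - Z, M + Z].
    return math.ceil((multiplier * sd / half_width) ** 2)

sd = 13.01  # the sample SD from above
for z in (1, 2, 3):
    print(f"Z={z}: N = {required_sample_size(sd, z)}")

This prints N = 678, 170 and 76 for Z = 1, 2 and 3.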
Of course the sample SD is not the true SD, so plugging it in like this isn't strictly correct, but I'd guess it's close enough for what you need. Just make sure you use the sample standard deviation (which divides by N-1 and gives an unbiased estimate of the variance), not the population formula applied to your sample (which divides by N and is biased downward); in Excel those are STDEV and STDEVP respectively.
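If you do the calculation outside Excel, Python's statistics module makes the same distinction explicit (the scores below are made up purely for illustration):

import statistics

scores = [72, 85, 78, 91, 66, 80, 74, 88]  # made-up exam scores

# Sample standard deviation: divides by N-1 (the one you want here).
print(statistics.stdev(scores))

# Population standard deviation: divides by N; biased downward
# when computed from a sample.
print(statistics.pstdev(scores))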