What Sample Size Is Needed for a 95% Confidence Interval?

For a 95% confidence interval with a 5% margin of error, you need approximately 385 respondents, assuming a large population and no prior estimate of the proportion you’re measuring. That number is the most commonly cited benchmark, but your actual required sample size can range from under 200 to well over 1,000 depending on how precise you need your results to be and what you already know about your population.

The Core Formula

Sample size for a 95% confidence interval is calculated using this formula:

n = p × (1 − p) × (z / E)²

Here’s what each piece means:

  • n = the sample size you need
  • p = your best estimate of the population proportion (the percentage you expect to find)
  • z = 1.96, the critical value for 95% confidence
  • E = your margin of error (how much wiggle room you’ll accept)

If you’re estimating a proportion (like what percentage of customers prefer a product), you plug in your best guess for p. If you have no idea what p might be, use 0.5. This is the most conservative choice because 0.5 × 0.5 = 0.25, which is the largest possible product of p and (1 − p). It guarantees your sample will be big enough no matter what the true proportion turns out to be.

You always round up to the next whole number. You can’t survey 0.3 of a person.
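The formula translates directly into a few lines of Python (function and parameter names are my own):

```python
import math

def sample_size_proportion(p, margin_of_error, z=1.96):
    """Required sample size for estimating a proportion.

    p: best estimate of the population proportion (use 0.5 if unknown)
    margin_of_error: acceptable error as a decimal (0.05 for ±5%)
    z: critical value (1.96 for 95% confidence)
    """
    n = p * (1 - p) * (z / margin_of_error) ** 2
    return math.ceil(n)  # always round up; you can't survey 0.3 of a person

# Conservative default: unknown proportion, ±5% margin of error
print(sample_size_proportion(0.5, 0.05))  # 385
```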

How Margin of Error Changes Everything

The margin of error you choose has the single biggest impact on sample size. Cutting the margin of error in half roughly quadruples the number of people you need, because E sits in the denominator of the squared (z / E) term. Here’s how the numbers shake out at 95% confidence using the conservative estimate of p = 0.5:

  • ±5% margin of error: 385 people
  • ±3% margin of error: 1,068 people
  • ±1% margin of error: 9,604 people

This is why most surveys and polls settle on a 3% to 5% margin of error. Going below 3% demands a sample size that gets expensive fast, with diminishing returns on precision. Major polling organizations like Quinnipiac University routinely survey over 1,000 respondents, which lands them in that ±3% range for national polls.
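A quick loop over the same arithmetic reproduces the list above and makes the squaring effect visible (variable names are my own):

```python
import math

# Required n at 95% confidence, conservative p = 0.5
results = {m: math.ceil(0.25 * (1.96 / m) ** 2) for m in (0.05, 0.03, 0.01)}

for margin, n in results.items():
    print(f"±{margin:.0%} margin of error: {n:,} people")
# ±5% margin of error: 385 people
# ±3% margin of error: 1,068 people
# ±1% margin of error: 9,604 people
```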

Why 0.5 Is the Safe Default

When you don’t know what proportion to expect, statisticians default to p = 0.5. This gives the largest possible sample size for any given margin of error, which means you’re covered regardless of the actual result. But if you have prior data or a reasonable estimate, using it can save you significant effort.

For example, say previous research suggests about 73% of people in your target group have a certain characteristic. Using p = 0.73 with a 3% margin of error at 95% confidence, the formula gives you 842 people instead of 1,068. That’s roughly 20% fewer participants, simply because you had a better starting estimate. The further p is from 0.5 in either direction, the smaller the required sample.
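The savings from a prior estimate are easy to check with the same formula (function name is my own):

```python
import math

def sample_size_proportion(p, margin_of_error, z=1.96):
    """Required sample size for estimating a proportion at the given confidence."""
    return math.ceil(p * (1 - p) * (z / margin_of_error) ** 2)

# No prior estimate vs. a prior estimate of 73%, both at ±3% and 95% confidence
print(sample_size_proportion(0.5, 0.03))   # 1068
print(sample_size_proportion(0.73, 0.03))  # 842
```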

Adjusting for Small Populations

The standard formula assumes your population is very large relative to your sample. If you’re surveying a company of 500 employees or a school of 2,000 students, you can reduce the required sample size using a finite population correction. The adjusted formula is:

adjusted n = n / (1 + (n − 1) / N)

Here, n is the sample size from the standard formula and N is the total population. When N is very large (say, over 100,000), this correction barely changes anything. When N is small, it makes a real difference. For a population of 500, the corrected sample at 95% confidence and 5% margin of error drops from 385 to about 218.

A useful rule of thumb: if your sample would be less than about 5% of the total population, the correction isn’t worth bothering with.
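The correction is a one-line adjustment in code (function name is my own):

```python
import math

def finite_population_correction(n, population_size):
    """Adjust a standard-formula sample size n for a finite population of size N."""
    return math.ceil(n / (1 + (n - 1) / population_size))

# 385 from the standard formula (95% confidence, ±5%), company of 500 employees
print(finite_population_correction(385, 500))  # 218
```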

Means vs. Proportions

The formula above works when you’re estimating a proportion, like a percentage or a yes/no outcome. If you’re measuring an average (mean blood pressure, average test score, mean weight), the formula changes slightly:

n = (z × s / E)²

Here, s is the standard deviation of the measurement you’re interested in, and E is now expressed in the same units as that measurement. For instance, if you want to estimate average blood glucose levels within ±5 mg/dL at 95% confidence and preliminary data shows a standard deviation of 32 mg/dL, you’d need about 158 subjects. The logic is identical: more variability in the data or a tighter margin of error means more people.
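The blood glucose example works out like this (function name is my own):

```python
import math

def sample_size_mean(std_dev, margin_of_error, z=1.96):
    """Required n to estimate a mean within ±margin_of_error.

    std_dev and margin_of_error must be in the same units as the measurement.
    """
    return math.ceil((z * std_dev / margin_of_error) ** 2)

# Blood glucose: s = 32 mg/dL, target precision ±5 mg/dL, 95% confidence
print(sample_size_mean(32, 5))  # 158
```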

Confidence Intervals vs. Statistical Power

If you’re designing an experiment to compare two groups (like a treatment vs. a placebo), the sample size calculation works differently. Confidence interval calculations are about precision: how closely your estimate matches the true value. Power calculations are about detection: whether your study can reliably spot a real difference between groups if one exists.

Power calculations add an extra variable, the desired statistical power, typically set at 80% or 90%. A study powered at 80% has a 20% chance of missing a real effect. Both types of calculations use the 1.96 value for 95% confidence, but power calculations also factor in the size of the effect you’re trying to detect and require an additional critical value (0.84 for 80% power, 1.28 for 90% power). This generally results in larger sample sizes than precision-based calculations alone.
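As a sketch of how the extra critical value enters, here is the standard approximation for comparing two proportions. The formula itself is not given in the text above, but it combines the same 1.96 and 0.84 values just mentioned; function and parameter names are my own:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size to detect a difference between two
    proportions. z_alpha = 1.96 for 95% confidence; z_beta = 0.84 for 80%
    power, 1.28 for 90% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 50% vs. 60% response rate at 80% power, 95% confidence
print(n_per_group(0.5, 0.6))  # 385 per group
```

Note that the result is per group, so the total study needs roughly double, which is one reason powered comparisons demand more participants than single-estimate surveys.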

Quick Reference for Common Scenarios

These assume a large population, unknown proportion (p = 0.5), and 95% confidence:

  • Quick opinion poll (±10%): 97 people
  • Standard survey (±5%): 385 people
  • Detailed research (±3%): 1,068 people
  • High-precision study (±1%): 9,604 people

If you already have a reasonable estimate for your proportion and it’s not close to 50%, you can safely use a smaller sample. If your population is under a few thousand, apply the finite population correction to avoid over-sampling. And if you’re comparing groups rather than estimating a single value, you’ll need a power analysis instead of (or in addition to) these calculations.