What Is the Margin of Error in Statistics?

The margin of error is a number that tells you how much a survey or poll result might differ from the true value for the entire population. If a poll says 52% of voters support a candidate with a margin of error of ±3%, the actual support in the full population likely falls somewhere between 49% and 55%. That range is the confidence interval, and the margin of error is its radius.

How the Margin of Error Works

Any time you survey a sample instead of counting every single person in a population, your results will be slightly off. The margin of error quantifies that gap. It captures the random variation you’d expect if you ran the same survey hundreds of times with different random samples. Some of those samples would skew a little high, others a little low, and the margin of error tells you how far most of them would land from the true value.

The larger the margin of error, the less confident you should be that the result reflects what a full census would show. A poll with a ±1% margin of error is far more precise than one with ±5%, even if both report the same headline number.

The Formula Behind It

For surveys that measure percentages (like the share of people who prefer a product or plan to vote a certain way), the margin of error is calculated as:

Margin of error = z × √(p(1−p) / n)

Three components drive the result:

  • z (critical value): A multiplier tied to how confident you want to be. For 95% confidence, it’s 1.96. For 90% confidence, it’s 1.645. For 99%, it’s 2.576. Higher confidence means a wider margin.
  • p (sample proportion): The percentage your survey found. A result near 50% produces the largest margin of error because that’s where uncertainty is greatest. Results near 0% or 100% shrink it.
  • n (sample size): The number of people surveyed. Bigger samples produce smaller margins of error, but with diminishing returns.
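The formula translates directly into a few lines of code. This is a minimal sketch of the calculation above; the function name and defaults are illustrative, not from any particular library:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion.

    p: sample proportion (e.g. 0.52 for 52% support)
    n: sample size
    z: critical value (1.96 for 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 people finding 52% support:
moe = margin_of_error(0.52, 1000)
print(f"±{moe * 100:.1f}%")  # about ±3.1%
```

Plugging in p = 0.5 instead gives the worst-case (largest) margin, which is why pollsters often quote that figure regardless of the actual result.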

Standard Error vs. Margin of Error

These two terms get confused often, but they’re different. The standard error is a raw measure of how much your sample estimate would bounce around if you repeated the survey many times. For a mean, it’s the sample’s standard deviation divided by the square root of the sample size; for a proportion, it works out to √(p(1−p)/n), the same expression inside the formula above. The margin of error takes that standard error and multiplies it by the critical value for your chosen confidence level. At 95% confidence, that multiplier is 1.96, so the margin of error is always larger than the standard error. Think of the standard error as the building block and the margin of error as the finished product you report.
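The relationship between the two is easiest to see side by side. A minimal sketch, using the proportion case from this article:

```python
import math

def standard_error(p, n):
    # Standard error of a sample proportion: sqrt(p(1-p)/n)
    return math.sqrt(p * (1 - p) / n)

p, n = 0.50, 1000
se = standard_error(p, n)
moe = 1.96 * se  # margin of error = critical value x standard error

print(f"standard error:  {se:.4f}")   # 0.0158
print(f"margin of error: {moe:.4f}")  # 0.0310
```

The margin of error here is just the standard error scaled up by 1.96, exactly the “building block” relationship described above.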

Why Sample Size Matters So Much

Sample size sits under a square root in the formula, which creates a pattern of diminishing returns. Going from 100 respondents to 1,000 dramatically shrinks the margin of error. But going from 1,000 to 2,000 only reduces it by about one percentage point.

The numbers make this concrete. A national poll of 1,000 people (in a country of 132 million voters) produces a margin of error around 3.1%. Double the sample to 2,000 and it drops to about 2.2%. Push to 5,000 and you get roughly 1.4%. It takes a full 10,000-person sample to reach 1%. That’s why most major polls settle for sample sizes between 1,000 and 2,000. Beyond that, the cost of surveying more people far outweighs the small gain in precision.
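Those diminishing returns are easy to reproduce. This short loop recomputes the figures quoted above, using the worst-case proportion p = 0.5 at 95% confidence:

```python
import math

# Margin of error at 95% confidence, worst case p = 0.5
for n in (100, 1000, 2000, 5000, 10000):
    moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
    print(f"n = {n:>6}: ±{moe * 100:.1f}%")
# n =    100: ±9.8%
# n =   1000: ±3.1%
# n =   2000: ±2.2%
# n =   5000: ±1.4%
# n =  10000: ±1.0%
```

The first tenfold jump in sample size cuts the margin by two thirds; the next tenfold jump buys only about two more percentage points.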

When Population Size Enters the Picture

You might wonder why the total population size barely appears in these calculations. For most national polls, the sample is such a tiny fraction of the population that the total doesn’t matter. But when your sample exceeds 5% of the population, as it might in a survey of a small company or a niche community, a finite population correction factor adjusts the formula downward. In those cases, you’re covering enough of the group that your estimate is naturally more precise than the standard formula would suggest.
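The finite population correction multiplies the standard margin by √((N−n)/(N−1)). A sketch of that adjustment, with an illustrative small-company example (the function name and numbers are hypothetical):

```python
import math

def moe_with_fpc(p, n, N, z=1.96):
    """Margin of error with the finite population correction.

    N is the total population size. The factor
    sqrt((N - n) / (N - 1)) shrinks the margin when the
    sample covers a meaningful share of the population.
    """
    moe = z * math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((N - n) / (N - 1))
    return moe * fpc

# Surveying 200 employees of a 500-person company:
print(f"{moe_with_fpc(0.5, 200, 500):.3f}")  # about 0.054
print(f"{1.96 * math.sqrt(0.25 / 200):.3f}")  # 0.069 without the correction
```

With 40% of the population sampled, the correction trims the margin from roughly ±6.9% to ±5.4%. For a national poll, N is so large that the factor is essentially 1 and can be ignored.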

Reading Polls Like a Pro

When news outlets report a poll showing Candidate A at 48% and Candidate B at 46% with a margin of error of ±3%, those two candidates’ ranges overlap substantially. Candidate A’s true support could be anywhere from 45% to 51%, and Candidate B’s could be 43% to 49%. Because those ranges overlap, the poll can’t distinguish a real lead from random noise. Journalists often call this a “statistical tie,” though the more precise description is that the difference isn’t statistically significant.
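The overlap check from that example can be written out directly. This is a simplified sketch of the interval comparison (overlapping intervals don’t strictly prove insignificance, but they’re the quick test described above):

```python
def interval(pct, moe):
    # Confidence interval as (low, high) around a point estimate
    return (pct - moe, pct + moe)

a = interval(48, 3)  # Candidate A: (45, 51)
b = interval(46, 3)  # Candidate B: (43, 49)

# Two intervals overlap if each starts before the other ends
overlap = a[0] <= b[1] and b[0] <= a[1]
print(overlap)  # True: the poll can't separate the two candidates
```

With a ±1% margin instead, the intervals would be (47, 49) and (45, 47), touching only at a single point, and the lead would start to look real.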

The confidence level matters here too. Most polls use 95% confidence, meaning they expect the true value to fall within the stated range 95 times out of 100. That still leaves a 5% chance the true value sits outside the margin. If a poll instead used 99% confidence, the margin would widen, but you’d be more certain the true answer falls inside it. The tradeoff is always between precision and certainty.
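The precision-versus-certainty tradeoff shows up directly in the numbers. Recomputing the margin for a 1,000-person poll at the three confidence levels listed earlier:

```python
import math

p, n = 0.5, 1000
for label, z in (("90%", 1.645), ("95%", 1.96), ("99%", 2.576)):
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"{label} confidence: ±{moe * 100:.1f}%")
# 90% confidence: ±2.6%
# 95% confidence: ±3.1%
# 99% confidence: ±4.1%
```

Same poll, same data: demanding more certainty simply stretches the reported range.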

What Margin of Error Doesn’t Cover

The margin of error only accounts for random sampling error, the natural variation that comes from asking a subset instead of everyone. It does not capture other sources of error that can be far more damaging to a poll’s accuracy: poorly worded questions, people who refuse to respond, respondents who don’t answer honestly, or a sample that fails to represent the population. A poll could have a tight ±2% margin of error and still miss badly if the people who agreed to participate differ systematically from those who didn’t.

This is why two polls conducted around the same time can report different numbers even when both claim small margins of error. Their sampling methods, question wording, or response rates may differ in ways the margin of error never reflects. The margin tells you about one specific type of uncertainty and stays silent on the rest.

Putting It All Together

When you see a confidence interval written as “52% ± 3%,” you’re looking at a point estimate (52%) and a margin of error (3%) that together form the range 49% to 55%. That range represents your best guess at where the true population value sits, given the confidence level chosen. If the confidence intervals of two groups don’t overlap at all, the difference between them is statistically significant. If they overlap substantially, the data can’t confirm a real difference exists.

The margin of error turns a single number into an honest range. It’s a reminder that no sample perfectly mirrors a population, and it gives you a way to judge how seriously to take the gap between what a survey found and what reality might be.