The margin of error is the range of uncertainty around a survey or poll result. When a poll says 52% of voters support a candidate “with a margin of error of ±3%,” it means the true level of support in the full population likely falls somewhere between 49% and 55%. It’s the statistical way of acknowledging that measuring a sample instead of the entire population introduces some imprecision, and it quantifies exactly how much.
How the Margin of Error Works
Every survey measures a sample, not the whole population. If you polled every single voter in a country, you’d have exact numbers with no margin of error. But that’s impractical, so pollsters survey a smaller group and use that data to estimate what the full population thinks. The margin of error tells you how far the sample result is likely to stray, at a given confidence level, from the true population value.
The calculation depends on three things: how confident you want to be in your result, how spread out the data is, and how many people you surveyed. For percentage-based data like polls, the formula looks like this:
Margin of error = z × √(p(1−p) / n)
Here, z is a number tied to your confidence level (more on that below), p is the proportion you measured (like 0.52 for 52%), and n is your sample size. The result is the “±” number you see reported alongside poll results.
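The formula is a one-liner in code. The sketch below (in Python; the function name is just an illustrative choice) computes the margin of error for the 52%-of-1,000-voters poll from the introduction:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion: z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# 52% support measured in a poll of 1,000 voters, at 95% confidence
moe = margin_of_error(0.52, 1000)
print(f"±{moe * 100:.1f}%")  # → ±3.1%
```

Running it gives roughly ±3.1%, which is why national polls of about 1,000 respondents are typically reported with a ±3% margin of error.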
Confidence Levels and What They Mean
The confidence level describes how reliable the estimation procedure is. A 95% confidence level, the most common standard in polling and research, means that if you repeated the same survey 100 times, about 95 of those surveys would produce a range that captures the true population value.
Higher confidence requires a wider margin of error. The z-score in the formula reflects this tradeoff:
- 90% confidence: z = 1.645
- 95% confidence: z = 1.96
- 99% confidence: z = 2.575
Moving from 95% to 99% confidence doesn’t sound like a big jump, but it increases the z-score by about 31%, which widens your margin of error considerably. Most polls and published research use 95% because it balances precision with reliability. When you see a margin of error reported without a stated confidence level, it’s almost always 95%.
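To see the tradeoff numerically, this short sketch applies the three z-scores listed above to the same poll result (p = 0.52, n = 1,000):

```python
import math

p, n = 0.52, 1000
z_values = {"90%": 1.645, "95%": 1.96, "99%": 2.575}

# Higher confidence means a larger z, and so a wider margin of error
for level, z in z_values.items():
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"{level} confidence: ±{moe * 100:.2f}%")
```

The margin widens from about ±2.6% at 90% confidence to about ±4.1% at 99%: the price of a stronger guarantee is a less precise estimate.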
Why Sample Size Matters So Much
Sample size is the single biggest lever for shrinking the margin of error. A national poll of about 1,000 voters typically produces a margin of error around ±3%. That’s the industry standard for most major political polls.
But the relationship between sample size and margin of error isn’t linear: the margin of error is inversely proportional to the square root of the sample size. Doubling your sample size therefore doesn’t cut the margin of error in half; it shrinks it by a factor of 1/√2, about a 30% reduction. To actually halve the margin of error, you need to quadruple the sample size. Going from 1,000 respondents to 4,000 would shrink the margin from roughly ±3% to roughly ±1.5%.
This square root relationship explains why pollsters don’t just survey 10,000 or 50,000 people. The cost of each additional respondent grows while the payoff in precision shrinks. There’s a point of diminishing returns where the added expense isn’t worth the marginal improvement in accuracy.
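The square-root relationship is easy to verify directly. This sketch assumes the worst-case proportion p = 0.5 and tabulates the margin of error as the sample size grows:

```python
import math

def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Margin of error at p = 0.5 (worst case) for increasing sample sizes
for n in (1000, 2000, 4000, 10000):
    print(f"n = {n:>6}: ±{margin_of_error(0.5, n) * 100:.2f}%")
```

Doubling from 1,000 to 2,000 only trims the margin from about ±3.1% to about ±2.2%, while quadrupling to 4,000 halves it, matching the diminishing returns described above.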
Reading Polls With Overlapping Margins
One of the most common situations where margin of error matters is comparing two candidates or two options in a poll. If Candidate A is at 48% and Candidate B is at 45%, with a ±3% margin of error, the ranges overlap (45–51% for A, 42–48% for B). Many people interpret this as a “statistical tie,” but that conclusion is more complicated than it sounds.
GraphPad’s statistics guides highlight an important nuance: overlapping confidence intervals don’t automatically mean there’s no real difference. Two results can overlap and still be statistically significant, or they can fail to overlap and still not reach significance, depending on sample sizes and other factors. The one reliable shortcut is that when two 95% confidence intervals don’t overlap at all, and the sample sizes are roughly equal, the difference between them is statistically significant. Beyond that, a proper statistical test is needed to draw firm conclusions.
So when a news anchor calls a race “within the margin of error,” they’re signaling that the lead is small enough that the trailing candidate could plausibly be ahead. It doesn’t mean the race is tied, and it doesn’t mean the lead is meaningless. It means there’s genuine uncertainty.
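When a firm conclusion is needed, the proper tool is a significance test rather than eyeballing overlap. The sketch below runs a two-proportion z-test on the 48%-versus-45% example, treating the two shares as if they came from independent samples of 1,000 each. That independence assumption is a simplification here (both shares come from the same poll), but it illustrates the shape of the test:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sample z-test for a difference between proportions (pooled SE)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Candidate A at 48%, Candidate B at 45%, samples of 1,000 each
z, p = two_proportion_z(0.48, 1000, 0.45, 1000)
print(f"z = {z:.2f}, p-value = {p:.3f}")
```

With these numbers the p-value comes out well above 0.05, so the 3-point gap would not be statistically significant on its own, which is exactly the “genuine uncertainty” the overlapping intervals hint at.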
What the Margin of Error Doesn’t Cover
Here’s the part that often gets overlooked: the margin of error only measures one type of error, called sampling error. That’s the natural imprecision that comes from surveying a portion of a population instead of everyone. It’s mathematically predictable, and it’s what the formula calculates.
But surveys can go wrong in ways that have nothing to do with sample size. These are called non-sampling errors, and the margin of error says nothing about them. Some major sources include:
- Non-response bias: People who refuse to participate may have systematically different views from those who respond. Increasing the sample size doesn’t fix this, because you’re still missing the same type of person.
- Coverage bias: If your survey only reaches people with landlines, or only people online, you’re missing portions of the population entirely. The sampling frame doesn’t match the target population.
- Question wording: A badly designed questionnaire can push respondents toward certain answers, creating a systematic slant in the data.
- Respondent bias: People sometimes give answers they think are socially acceptable rather than honest ones, particularly on sensitive topics.
These non-sampling errors can distort results far more than the stated margin of error suggests. A poll with a ±3% margin of error might actually be off by 6 or 7 points if it has significant non-response bias. The Australian Bureau of Statistics notes that non-sampling error “can occur at any stage of a sample survey” and, unlike sampling error, is difficult to measure mathematically. This is why some polls with small margins of error still produce wildly inaccurate predictions.
Small Populations Change the Math
The standard margin of error formula assumes your sample is tiny compared to the overall population. For a national poll of 1,000 people out of millions of voters, this assumption holds perfectly. But when you’re surveying a large fraction of a small population, say 200 out of 500 employees, the standard formula overstates the uncertainty.
In these cases, a correction factor is applied. It accounts for the fact that you’ve already measured a substantial chunk of the group, so there’s less unknown territory. The adjustment multiplies the standard formula by √((N−n)/(N−1)), where N is the total population and n is your sample. As n gets closer to N, this factor shrinks toward zero, reflecting the fact that if you surveyed everyone, there’d be no uncertainty at all. For most large-scale polls and studies, the correction is unnecessary because the sample is a tiny fraction of the population.
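The correction is straightforward to apply in code. This sketch compares the standard and corrected margins for the 200-of-500-employees example, assuming a measured proportion of 50%:

```python
import math

def moe_finite(p, n, N, z=1.96):
    """Margin of error with the finite population correction applied."""
    moe = z * math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((N - n) / (N - 1))  # shrinks toward 0 as n approaches N
    return moe * fpc

# 200 of 500 employees surveyed, 50% answering yes
uncorrected = 1.96 * math.sqrt(0.25 / 200)
print(f"without correction: ±{uncorrected * 100:.1f}%")
print(f"with correction:    ±{moe_finite(0.5, 200, 500) * 100:.1f}%")
```

Having already surveyed 40% of the population shrinks the margin from about ±6.9% to about ±5.4%, even though the sample size itself hasn’t changed.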
A Quick Example
Suppose you survey 600 people and find that 40% prefer a particular product. Using a 95% confidence level (z = 1.96), the margin of error would be:
1.96 × √(0.40 × 0.60 / 600) = 1.96 × √(0.0004) = 1.96 × 0.02 = 0.0392, or about ±3.9%
So you’d report that between 36.1% and 43.9% of the population likely prefers the product, with 95% confidence. If you needed that margin closer to ±2%, you’d need roughly 2,400 respondents, four times as many, because of the square root relationship.
The margin of error is largest when p is close to 50%, because that’s the point of maximum uncertainty. If your measured proportion were 10% or 90% instead of 40%, the margin of error would be smaller with the same sample size. This is why pollsters often use 50% as a worst-case assumption when planning how many people to survey: it guarantees the margin of error won’t exceed their target no matter what the actual result turns out to be.
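A short sketch makes both points concrete: how the margin shrinks as p moves away from 50%, and how the worst-case assumption p = 0.5 pins down the sample size needed for a target margin:

```python
import math

def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Same sample size (600), different measured proportions
for p in (0.5, 0.4, 0.1):
    print(f"p = {p:.0%}: ±{margin_of_error(p, 600) * 100:.1f}%")

# Sample size needed for a target margin, planning at worst-case p = 0.5
target = 0.02
n_needed = math.ceil(1.96**2 * 0.25 / target**2)
print(f"n for ±2% at p = 0.5: {n_needed}")
```

With 600 respondents, the margin at p = 10% is noticeably tighter than at p = 50%, and planning at p = 0.5 yields the familiar requirement of roughly 2,400 respondents for a ±2% margin.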

