The margin of error accounts for one specific thing: the uncertainty that comes from surveying a sample of people instead of the entire population. That’s it. When a poll reports “±3 percentage points,” it’s quantifying the random luck of the draw, the chance that the particular group of people selected for the survey doesn’t perfectly reflect the whole population. It does not account for many other sources of error that can affect a survey’s accuracy.
Sampling Error: The Core of Margin of Error
Every time you survey a subset of a population rather than every single person, there’s a gap between your sample’s results and what you’d find if you could ask everyone. This gap is called sampling error, and it’s the only thing the margin of error measures.
Think of it this way: if you randomly selected 1,000 voters to ask about a candidate, and then randomly selected a different 1,000 voters, you’d get slightly different results each time. The margin of error captures that natural variation. It tells you the range within which the true population value likely falls, given that you only talked to a fraction of the group. A poll showing 52% support with a ±3 point margin of error means the real number is probably somewhere between 49% and 55%.
The reason sampling error gets its own dedicated measure is simple: it can be calculated mathematically. As the Australian Bureau of Statistics puts it, sampling error can be measured precisely using the standard error, which indicates how close a survey estimate is to the result you’d get from counting the entire population. That mathematical tractability is what makes the margin of error possible in the first place.
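To make that concrete, here is a minimal sketch of the calculation in Python, using the standard normal-approximation formula for a proportion and pairing the 52%-support example with the 1,000-voter sample from the thought experiment above:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion at ~95% confidence (z = 1.96)."""
    standard_error = math.sqrt(p * (1 - p) / n)  # typical gap between the
    return z * standard_error                    # sample estimate and the truth

# The 52%-support example, with a sample of 1,000 voters:
moe = margin_of_error(p=0.52, n=1000)
print(f"±{moe * 100:.1f} points")  # ±3.1 points, i.e. roughly 49% to 55%
```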
What Margin of Error Does Not Account For
This is where most people get tripped up. The margin of error leaves out every other source of inaccuracy in a survey, and those other sources can be far more damaging than sampling error alone. Survey statisticians recognize several categories of what’s called non-sampling error, including nonresponse error, measurement error, and coverage error. When polling professionals compute the margin of error, they account for sampling imprecision alone; none of these other categories enters the calculation.
Nonresponse error occurs when certain types of people are less likely to answer the survey. If younger voters hang up the phone at higher rates, your sample is skewed toward older voters, and no sample-size calculation will fix that.
Coverage error happens when the method of reaching people systematically misses part of the population. A phone survey that only calls landlines misses people who only use cell phones. The margin of error says nothing about this gap.
Measurement error comes from the survey itself: confusing questions, socially desirable answers, or respondents who misunderstand what’s being asked. If a question is poorly worded, you’ll get precise but wrong data.
In election polling, there’s an additional problem the margin of error ignores entirely: forecasting who will actually show up to vote. A poll might perfectly capture public opinion, but if its “likely voter” model is wrong, the final prediction will miss the mark for reasons the margin of error never touches.
How Sample Size Affects It
The margin of error shrinks as your sample size grows, but the relationship follows a square-root pattern: the margin of error is proportional to one over the square root of the sample size, so you hit diminishing returns quickly. Going from 100 respondents to 400 cuts the margin of error roughly in half. But going from 1,000 to 4,000 likewise only cuts it in half, and this time the same gain costs 3,000 extra respondents instead of 300. Each incremental improvement in precision costs dramatically more in survey effort.
This is why most national polls settle on sample sizes of around 1,000 to 1,500 people. At that range, you typically get a margin of error around ±3 percentage points at the 95% confidence level, which is a practical sweet spot. Pushing to ±1 point would require roughly 10,000 respondents, a massive increase in cost for a relatively small gain in precision.
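Inverting the same formula gives the sample size required for a target margin of error. A minimal sketch, assuming the conservative worst case of a 50/50 split (which maximizes the variance term):

```python
import math

def required_sample_size(moe: float, z: float = 1.96, p: float = 0.5) -> int:
    """Respondents needed for a target margin of error.

    Uses p = 0.5, the worst case, which maximizes p * (1 - p).
    """
    return math.ceil(z**2 * p * (1 - p) / moe**2)

for target in (0.05, 0.03, 0.02, 0.01):
    print(f"±{target:.0%} -> {required_sample_size(target):,} respondents")
# ±5% -> 385
# ±3% -> 1,068
# ±2% -> 2,401
# ±1% -> 9,604   (the "roughly 10,000" figure above)
```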
The Role of Confidence Level
The margin of error is always paired with a confidence level, usually 95%. This means that if you repeated the same survey 100 times with fresh random samples, about 95 of those samples would produce results within the stated margin of error of the true value.
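You can check that repeated-sampling interpretation directly by simulation. A minimal sketch, assuming a hypothetical true support level of 52% and polls of 1,000 respondents each:

```python
import numpy as np

rng = np.random.default_rng(0)
true_p, n, z = 0.52, 1000, 1.96    # hypothetical true support and poll size
covered = 0
for _ in range(100):               # 100 repeated polls, fresh samples each time
    sample = rng.binomial(1, true_p, size=n)
    p_hat = sample.mean()
    moe = z * np.sqrt(p_hat * (1 - p_hat) / n)
    if p_hat - moe <= true_p <= p_hat + moe:
        covered += 1
print(covered, "of 100 polls bracketed the true value")  # typically ~95
```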
Choosing a higher confidence level widens the margin of error. At 90% confidence, the margin is narrower because you’re accepting a greater chance of being wrong. At 99% confidence, it’s wider because you want more certainty. The multiplier used in the calculation is about 1.65 for 90% confidence, 1.96 for 95%, and 2.58 for 99%. Most polls and studies default to 95% because it strikes a practical balance between precision and reliability.
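These multipliers are quantiles of the standard normal distribution, so they can be computed rather than memorized; a quick sketch using scipy:

```python
from scipy.stats import norm

# Two-sided multiplier: put half the allowed error probability in each tail
for confidence in (0.90, 0.95, 0.99):
    z = norm.ppf(1 - (1 - confidence) / 2)
    print(f"{confidence:.0%}: z = {z:.3f}")
# 90%: z = 1.645
# 95%: z = 1.960
# 99%: z = 2.576
```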
Population Size Matters Less Than You’d Think
A common misconception is that surveying a country of 330 million people requires a vastly larger sample than surveying a city of 100,000. In practice, the total population size barely affects the margin of error once the population is large relative to the sample.
Data from Penn State illustrates this clearly. To estimate a population proportion within ±3 percentage points at 95% confidence, you’d need about 1,068 respondents whether the population is 1 million or 10 million. Even at a population of 100,000, you’d need 1,057. The numbers only start shifting meaningfully when the population drops to around 10,000 (where you’d need 966) or 1,000 (where you’d need 517). This is why a well-designed national poll of 1,200 people can be just as statistically valid as a city-level poll of the same size.
When the sample represents a large fraction of the total population, a correction factor known as the finite population correction kicks in and actually reduces the margin of error. But for most real-world surveys, the sample is such a tiny fraction of the population that this correction is negligible.
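A sketch of that adjustment, using the standard finite population correction, comes close to reproducing the Penn State figures (small rounding differences are possible depending on the exact formula used):

```python
import math

def sample_size_fpc(moe: float, population: int,
                    z: float = 1.96, p: float = 0.5) -> int:
    """Required sample size with the finite population correction applied."""
    n0 = math.ceil(z**2 * p * (1 - p) / moe**2)       # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

for N in (10_000_000, 1_000_000, 100_000, 10_000, 1_000):
    print(f"N = {N:>10,}: n = {sample_size_fpc(0.03, N):,}")
# N = 10,000,000: n = 1,068
# N =  1,000,000: n = 1,067
# N =    100,000: n = 1,057
# N =     10,000: n = 966
# N =      1,000: n = 517
```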
Why This Distinction Matters in Practice
Understanding what the margin of error does and doesn’t capture changes how you should interpret any survey result. A poll with a ±2 point margin of error sounds impressively precise, but if 40% of the people contacted refused to participate, the nonresponse bias could easily dwarf that ±2 points. The stated margin of error would give you false confidence in a potentially misleading number.
In medical research, this same logic applies through confidence intervals, which are built from the same margin of error formula. Clinical trials use confidence intervals to estimate the actual size of a treatment effect rather than just whether a difference exists. A drug trial might find that a treatment lowers blood pressure by 8 points with a 95% confidence interval of 5 to 11 points. That range accounts for sampling variability, but it doesn’t account for problems like patients dropping out of the study at unequal rates or inconsistent measurement techniques across different study sites.
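The interval arithmetic is the same as in polling. A minimal sketch, with the standard error back-computed from the reported 5-to-11 range (an assumed value for illustration, since trials report the interval rather than the standard error):

```python
# Treatment effect: mean blood-pressure reduction of 8 points.
# Standard error of ~1.53 is back-computed from the 5-to-11 interval.
effect, se, z = 8.0, 1.53, 1.96
low, high = effect - z * se, effect + z * se
print(f"95% CI: {low:.0f} to {high:.0f} points")  # 5 to 11
```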
Researchers at Northwestern have proposed a concept called “total margin of error” that would fold in estimates of non-sampling error alongside the traditional sampling-based margin. This approach uses a broader measure that sums up both random variation and systematic bias. It hasn’t become standard practice in polling yet, but the concept highlights just how much the traditional margin of error leaves on the table.
The bottom line: the margin of error is a useful but narrow tool. It answers one question well: how much could random chance have affected these results? It says nothing about whether the right questions were asked, whether the right people were reached, or whether respondents told the truth.

