Margin of error is controlled by three main factors: sample size, how much variation exists in the population, and the confidence level you choose. Change any one of these and the margin of error shifts, sometimes dramatically. Understanding how each factor works gives you a much clearer sense of what poll numbers, survey results, and study findings actually mean.
The Basic Formula
For survey percentages, the margin of error formula is: multiply the critical value (a number tied to your confidence level) by the square root of (p × (1 − p) / N), where p is the proportion giving a particular answer and N is the sample size. For averages, the formula swaps in the standard deviation divided by the square root of N, then multiplied by the critical value.
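Both formulas can be written as a short sketch (Python; the function names are illustrative, and 1.96 is the critical value for 95% confidence):

```python
import math

def moe_proportion(p, n, z=1.96):
    """Margin of error for a survey percentage: z * sqrt(p(1-p)/N)."""
    return z * math.sqrt(p * (1 - p) / n)

def moe_mean(std_dev, n, z=1.96):
    """Margin of error for an average: z * (standard deviation / sqrt(N))."""
    return z * std_dev / math.sqrt(n)

# A 50/50 split among 1,000 respondents at 95% confidence:
print(round(moe_proportion(0.50, 1000) * 100, 1))  # prints 3.1 (percentage points)
```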
Every variable in that formula represents a lever. Pull one and the margin of error widens or narrows. The sections below break down each lever individually.
Sample Size Has the Biggest Practical Impact
Sample size sits in the denominator of the formula, under a square root. That square root creates a relationship that surprises most people: doubling your sample size does not cut the margin of error in half. You need to quadruple the sample size to halve the margin of error. Going from 400 respondents to 1,600 respondents, for instance, cuts a roughly 5% margin of error (at 95% confidence) down to about 2.5%.
This square-root relationship means early gains are large and later gains are expensive. Moving from 100 to 400 respondents dramatically improves precision. Moving from 4,000 to 16,000 improves it by the same factor but costs four times as many interviews. That diminishing return is why most national polls settle on sample sizes between 1,000 and 1,500. Beyond that range, each additional respondent buys very little extra precision relative to the cost.
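A quick sketch makes the square-root relationship concrete (assuming maximum variability, p = 0.5, at 95% confidence):

```python
import math

def moe(n, p=0.50, z=1.96):
    """95% margin of error for a proportion with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1600):
    print(n, round(moe(n) * 100, 2))
# 100 → ~9.8, 400 → ~4.9, 1600 → ~2.45: each quadrupling halves the margin
```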
One important caveat: total population size almost never matters. As long as your sample is less than 5% of the total population, the math works the same whether you’re surveying a city of 100,000 or a country of 300 million. Only when the sample represents a large fraction of the population does a correction factor come into play.
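The correction factor mentioned here is the finite population correction. A hedged sketch of how it behaves, using the standard form sqrt((N − n)/(N − 1)):

```python
import math

def moe_with_fpc(n, population, p=0.50, z=1.96):
    """Margin of error with the finite population correction applied."""
    base = z * math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((population - n) / (population - 1))
    return base * fpc

# 1,000 respondents drawn from a huge country vs. a small community:
print(round(moe_with_fpc(1000, 300_000_000) * 100, 2))  # ~3.1: correction is negligible
print(round(moe_with_fpc(1000, 2_000) * 100, 2))        # ~2.19: sample is half the population
```

Only in the second case, where the sample is a large fraction of the population, does the correction meaningfully shrink the margin.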
Confidence Level Sets the Precision Tradeoff
The confidence level is the probability that the interval defined by your margin of error actually captures the true value. Most polls and studies use 95%, which means that if you repeated the same survey 100 times, about 95 of the resulting intervals would contain the real number. That 95% confidence level uses a critical value (a z-score) of 1.96, which gets multiplied into the margin of error calculation.
Choosing a different confidence level changes that multiplier directly:
- 90% confidence: critical value of 1.645, producing a narrower margin of error
- 95% confidence: critical value of 1.96, the most common standard
- 99% confidence: critical value of 2.58, producing a wider margin of error
Switching from 95% to 99% confidence increases the margin of error by about 31% (the ratio 2.576/1.96), even with the exact same data. You’re more certain the interval contains the truth, but the interval itself is wider and less precise. Dropping to 90% confidence narrows things, but you accept a 1-in-10 chance of being wrong instead of 1-in-20. In medical research, the 95% standard is nearly universal, corresponding to the convention that a result needs a p-value below 0.05 to count as statistically significant.
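These critical values come from the standard normal distribution, and Python's standard library can reproduce them (a sketch using statistics.NormalDist):

```python
from statistics import NormalDist

def critical_value(confidence):
    """Two-sided z critical value for a given confidence level."""
    return NormalDist().inv_cdf((1 + confidence) / 2)

for conf in (0.90, 0.95, 0.99):
    print(f"{conf:.0%}: {critical_value(conf):.3f}")
# 90%: 1.645, 95%: 1.960, 99%: 2.576
print(round(critical_value(0.99) / critical_value(0.95), 3))  # 1.314 → ~31% wider
```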
Population Variability Widens the Range
When the thing you’re measuring varies a lot from person to person, the margin of error grows. This factor shows up differently depending on whether you’re measuring a percentage or an average.
For percentages, the variability is built into the formula as p × (1 − p). This expression is largest when p equals 0.50, meaning the population is split right down the middle. A poll showing a 50/50 split has the maximum possible margin of error for that sample size. As results move toward 10/90 or 90/10, the margin of error shrinks because there’s simply less variation to account for. This is why pre-election polls in tight races carry wider margins of error than polls on issues where public opinion is lopsided.
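The effect of p × (1 − p) is easy to see numerically (a sketch at a fixed sample size of 1,000 and 95% confidence):

```python
import math

def moe(p, n=1000, z=1.96):
    """Margin of error for a proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

for p in (0.50, 0.70, 0.90, 0.99):
    print(f"p = {p:.2f}: {moe(p) * 100:.2f} points")
# p = 0.50 gives ~3.10 points; p = 0.90 gives ~1.86; p = 0.99 gives ~0.62
```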
For averages (like household income or test scores), variability is captured by the standard deviation. A greater standard deviation means the data points are more spread out, which pushes the standard error higher and widens the margin of error. If you’re surveying incomes in a neighborhood where everyone earns between $50,000 and $70,000, you’ll get a tight margin of error. Survey a mixed-income city and the same sample size produces a much wider one.
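The same point for averages, using illustrative (made-up) standard deviations for the two income scenarios:

```python
import math

def moe_mean(std_dev, n, z=1.96):
    """Margin of error for a sample mean: z * std_dev / sqrt(n)."""
    return z * std_dev / math.sqrt(n)

# Same sample size, very different spreads (both figures hypothetical):
print(round(moe_mean(5_000, 500)))   # uniform neighborhood: ~$438
print(round(moe_mean(40_000, 500)))  # mixed-income city: ~$3,506
```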
What Margin of Error Doesn’t Cover
The reported margin of error in any poll or survey only accounts for one source of uncertainty: the randomness of who happened to land in your sample. Survey statisticians call this “sampling imprecision,” and it’s the only thing that standard margin-of-error calculations address. Several other error sources can be just as large, or larger, but they never show up in that plus-or-minus number.
Non-response bias is often the most damaging. When only a fraction of sampled people actually answer the survey, there’s no guarantee that respondents think the same way as those who declined. If people who support a particular candidate are less likely to pick up the phone, the poll skews without the margin of error reflecting it. Coverage error is a related problem: if the sampling method can’t reach certain groups at all (people without internet access in an online poll, for example), those voices are missing entirely.
Measurement error, including poorly worded questions and social desirability bias (where people give the answer they think sounds good rather than the truth), also falls outside the margin of error. A researcher at Northwestern University has documented that polling professionals routinely mention these issues qualitatively but quantify only sampling imprecision when reporting results. The reported margin of error is, in practice, a best-case scenario for how uncertain a result truly is.
How These Factors Work Together
In real survey design, these variables interact as a set of tradeoffs. You want a small margin of error, but achieving it requires a larger sample (more expensive), a lower confidence level (more risk of being wrong), or a population that happens to be uniform (not something you control). Most researchers start by picking a confidence level, usually 95%, then estimate the expected variability, and finally calculate how large a sample they need to hit their target margin of error.
For a quick rule of thumb on national polls: a random sample of about 1,000 people produces a margin of error near 3 percentage points at 95% confidence, assuming maximum variability (a 50/50 split). Bumping to 2,500 respondents brings that down to about 2 percentage points. Getting to 1 percentage point would require roughly 10,000 respondents, which is why you rarely see margins that tight outside of major government surveys.
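The rule-of-thumb numbers check out against the formula (maximum variability, 95% confidence):

```python
import math

def moe_points(n, z=1.96):
    """Worst-case (p = 0.5) margin of error in percentage points."""
    return z * math.sqrt(0.25 / n) * 100

for n in (1000, 2500, 10000):
    print(n, round(moe_points(n), 1))
# 1000 → 3.1, 2500 → 2.0, 10000 → 1.0
```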
Knowing which factors are in play helps you read results more critically. A poll touting a small margin of error but based on an opt-in online panel may have low sampling error on paper while carrying substantial non-sampling error that goes unreported. The margin of error is a useful starting point for evaluating precision, but it’s only one piece of the picture.

