There is no single percentage of a population that qualifies as a good sample size. A common assumption is that you need to survey 10% or even 20% of a population to get reliable results, but that’s a myth. National polls routinely measure the opinions of hundreds of millions of people using only 800 to 1,500 respondents, which is a fraction of a fraction of one percent. What actually determines a good sample size is how much error you’re willing to accept, how confident you need to be in the results, and how varied the population is.
Why Percentage Is the Wrong Way to Think About It
Sample size math is counterintuitive. Once a population gets large enough, the number of people you need to survey barely changes whether the population is 500,000 or 500 million. That’s because statistical precision depends on the absolute number of responses, not the proportion of the population sampled. A survey of 1,000 people gives you roughly the same margin of error whether you’re studying a city of 100,000 or a country of 300 million.
This is why professional pollsters don’t think in percentages. The New York Times/Siena College Poll, one of the most respected election polls in the U.S., typically surveys around 1,500 people to represent the entire American electorate. That produces a margin of error of about 2.5 to 2.8 percentage points. A sample of 800 respondents yields a margin of error of roughly plus or minus 3.5 points. These are tiny fractions of the population, yet they produce results accurate enough to call elections.
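The relationship the pollsters rely on can be sketched with the standard margin-of-error formula for a proportion. This is a simplified model that assumes simple random sampling (real polls also apply weighting), with p = 0.5 as the worst-case variability:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion at 95% confidence.

    Assumes simple random sampling and a large population; p = 0.5 is
    the worst case (maximum variability).
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (800, 1500):
    print(f"n={n}: about ±{margin_of_error(n) * 100:.1f} points")
```

Note that the population size never appears in the formula: only the absolute number of respondents, n, drives the precision.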
The Three Factors That Actually Determine Sample Size
Instead of a percentage, sample size calculations depend on three inputs that you choose based on how precise you need your results to be.
- Margin of error is how far off your result might be from the true answer. If your survey finds that 60% of people prefer a product, a margin of error of 5% means the real number is likely between 55% and 65%. Smaller margins require larger samples.
- Confidence level is how sure you want to be that your result falls within that margin. The standard in most research is 95%, meaning if you repeated the survey 100 times, 95 of those results would land within your margin of error. Bumping this to 99% increases the sample size you need.
- Population variability reflects how spread out the answers are. If nearly everyone in a group thinks the same way, you don’t need many responses to capture that. If opinions are split 50/50, variability is at its maximum and you need the largest sample. When variability is unknown in advance, researchers assume a 50/50 split as the worst case.
Plugging in the most common defaults (95% confidence, 5% margin of error, maximum variability), you get a required sample of about 385 people for any large population. That’s the number you’ll see in most online sample size calculators, and it holds whether the population is 50,000 or 50 million.
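That 385 figure falls out of Cochran’s formula for estimating a proportion. A minimal sketch with the default inputs above:

```python
import math

def required_sample(margin=0.05, p=0.5, z=1.96):
    """Cochran's formula for the sample size needed to estimate a
    proportion, assuming a large (effectively infinite) population.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the
    maximum-variability assumption.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(required_sample())  # 385 with the common defaults
```

Raising the confidence level to 99% (z = 2.576) or tightening the margin both push the required sample up, exactly as the three factors above describe.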
When Population Size Does Matter
Percentage starts to matter when your population is small. If you’re surveying employees at a 200-person company, or students at a school of 500, the math changes. Statisticians apply what’s called a finite population correction, which adjusts the required sample downward because each person you survey represents a larger share of the whole group.
As a rough guide, if your sample would be more than about 5% of the total population, you should use this correction. For a town of 2,000 people, for instance, you wouldn’t need nearly as many respondents as the standard formula suggests. The correction essentially acknowledges that once you’ve surveyed a meaningful chunk of a small group, each additional response gives you diminishing returns. For very small populations (under a few hundred), you may need to survey a large percentage, sometimes 30% to 50% or more, to get reliable results.
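One common form of the correction (a sketch; textbooks write it in several equivalent ways) divides the infinite-population sample size by a factor that grows with the sampling fraction:

```python
def corrected_sample(population, margin=0.05, p=0.5, z=1.96):
    """Finite population correction applied to Cochran's formula:
    n = n0 / (1 + n0 / N), where n0 is the infinite-population size
    and N is the population. Shrinks the requirement for small N."""
    n0 = z**2 * p * (1 - p) / margin**2  # about 384 with these defaults
    return round(n0 / (1 + n0 / population))

print(corrected_sample(2000))  # the town of 2,000: far fewer than 385
print(corrected_sample(200))   # a 200-person company: a large share of everyone
```

For the 2,000-person town this lands around 320 respondents, and for the 200-person company the corrected figure is well over half the staff, consistent with the 30% to 50% or more guideline for very small populations.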
Sample Sizes in Practice Across Industries
Real-world sample sizes vary enormously depending on the stakes and the type of research being done.
Consumer market research projects typically start at 1,000 respondents, with many large-scale studies surveying more than 10,000. Business-to-business research operates on a completely different scale. A study targeting advertising decision-makers in specific industries might use just 40 people per segment and still produce representative results, because the total population of people in that role is small and relatively homogeneous.
Clinical drug trials follow a structured progression. Phase 1 trials test safety in just 20 to 100 volunteers. Phase 2 expands to a few hundred patients to look for effectiveness signals. Phase 3, the final stage before approval, enrolls 300 to 3,000 participants. These numbers are driven not by population percentages but by the statistical power needed to detect whether a drug actually works. The standard threshold is 80% power, meaning the study has an 80% chance of detecting a real effect if one exists.
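The power logic can be sketched with a textbook two-proportion sample size formula. This is an illustrative calculation, not the design of any specific trial; 1.96 and 0.84 are the z-values for 95% confidence and 80% power, and the 40%-to-50% response rates are hypothetical:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate participants per arm needed to detect a difference
    between two proportions at the given confidence and power."""
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta)**2 * pooled_var / (p1 - p2)**2)

# Detecting an improvement from a 40% to a 50% response rate
print(n_per_group(0.5, 0.4))
```

The key point the formula makes concrete: the required enrollment is driven by the size of the effect you want to detect, not by any percentage of the patient population.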
Qualitative research (interviews, focus groups) works entirely differently from surveys. A systematic review of empirical studies found that researchers consistently reached the point where no new information emerged, known as data saturation, within 9 to 17 interviews or 4 to 8 focus group discussions. These numbers apply to studies with relatively homogeneous participants and focused research questions; multi-country studies or those exploring broad themes needed more.
Quick Reference by Population Size
These numbers assume a 95% confidence level, 5% margin of error, and maximum variability. They give you a practical starting point.
- Population of 500: about 217 responses needed (43%)
- Population of 1,000: about 278 (28%)
- Population of 5,000: about 357 (7%)
- Population of 10,000: about 370 (3.7%)
- Population of 100,000: about 383 (0.4%)
- Population of 1,000,000 or more: about 384 to 385 (well under 0.1%)
Notice how the percentage drops rapidly as the population grows, while the actual number of responses barely changes past 10,000. This is the core insight: for large populations, the percentage is essentially irrelevant. For small populations, it can be quite high.
How to Increase Precision Without Huge Samples
If you need tighter results but can’t afford a massive sample, a few strategies help. Stratified sampling, where you divide your population into subgroups (by age, region, or income, for example) and sample within each, often produces more precise estimates than randomly surveying the same total number of people. This works because it ensures every subgroup is represented proportionally, reducing the impact of variability.
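Proportional stratified allocation is straightforward to sketch. The helper and the regional breakdown below are hypothetical, just to show the mechanics of splitting a fixed sample across subgroups:

```python
def proportional_allocation(strata_sizes, total_sample):
    """Allocate a fixed sample across strata in proportion to each
    stratum's share of the population. Largest-remainder rounding
    keeps the allocations summing exactly to total_sample."""
    total_pop = sum(strata_sizes.values())
    raw = {name: total_sample * size / total_pop
           for name, size in strata_sizes.items()}
    alloc = {name: int(x) for name, x in raw.items()}
    # Hand leftover units to the strata with the largest remainders
    leftover = total_sample - sum(alloc.values())
    for name in sorted(raw, key=lambda k: raw[k] - alloc[k],
                       reverse=True)[:leftover]:
        alloc[name] += 1
    return alloc

# Hypothetical regional breakdown of a 100,000-person customer base
print(proportional_allocation({"north": 42000, "south": 31000,
                               "west": 27000}, 1000))
```

Sampling within each stratum like this guarantees no subgroup is missed by chance, which is where the precision gain over simple random sampling comes from.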
Narrowing your margin of error from 5% to 3% roughly triples the required sample size (from about 385 to about 1,067 for large populations). Going from 3% to 1% increases it by another factor of nine. So there’s a steep cost to precision, and for most practical purposes, a 5% margin is considered acceptable.
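Those multipliers follow directly from the formula: the required sample scales with the inverse square of the margin, so halving the margin quadruples the sample. A quick check of the ratios:

```python
def required_sample(margin, p=0.5, z=1.96):
    """Cochran's formula for a large population (unrounded)."""
    return z**2 * p * (1 - p) / margin**2

n5, n3, n1 = (required_sample(e) for e in (0.05, 0.03, 0.01))
print(round(n3 / n5, 2))  # (5/3)^2 ≈ 2.78: going from 5% to 3% nearly triples it
print(round(n1 / n3, 2))  # (3/1)^2 = 9.0: going from 3% to 1%
```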
Response rate also matters more than raw sample size in many cases. The New York Times/Siena poll reaches fewer than 2% of the people it contacts. Low response rates can introduce bias that no amount of additional respondents will fix, because the people who respond may differ systematically from those who don’t. A smaller sample of the right people often beats a larger sample of self-selected volunteers.

