What Is a Large Enough Sample Size?

There is no single number that qualifies as a “large enough” sample size for every situation. The right number depends on what you’re studying, how precise your results need to be, and how large an effect you’re trying to detect. A quick poll might work fine with a few hundred responses, while a clinical trial testing a subtle drug benefit could need thousands of participants. Understanding the factors that drive this number will help you land on the right target for your specific project.

The Four Factors That Determine Sample Size

Every sample size calculation comes down to four interconnected variables: the effect size you’re looking for, the statistical power you want, the alpha level (false-positive risk) you’ll accept, and your margin of error. Change any one of these, and the number of participants you need shifts, sometimes dramatically.

Effect size is the magnitude of the difference or relationship you’re trying to detect. If you’re comparing two groups and expect a big difference between them, you need fewer people. If the difference is subtle, you need far more. To put this in concrete terms: researchers studying a dental measurement found that detecting a difference as small as 0.1 degrees would require thousands of patients, while detecting a 1-degree difference would cut the required sample drastically. This inverse relationship between effect size and sample size is one of the most important dynamics in research design.
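The arithmetic behind that drop-off is worth seeing directly. Because required sample size scales with the inverse square of the effect size (holding power, alpha, and variance constant), shrinking the detectable difference by a factor of ten multiplies the sample by a hundred. A minimal sketch; the 64-per-group baseline is a hypothetical figure for illustration, not a number from the dental study:

```python
# Required sample size scales with 1 / effect_size^2 when power, alpha,
# and variance are held constant. The 64-per-group baseline below is a
# hypothetical figure for illustration, not from the dental study.
base_n, base_diff = 64, 1.0  # participants per group to detect a 1-degree difference

for diff in (1.0, 0.5, 0.1):
    n = base_n * (base_diff / diff) ** 2
    print(f"difference of {diff} degrees -> ~{n:,.0f} per group")

# difference of 1.0 degrees -> ~64 per group
# difference of 0.5 degrees -> ~256 per group
# difference of 0.1 degrees -> ~6,400 per group
```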

Statistical power is the probability that your study will catch a real effect if one exists. The standard target is 80%, meaning you accept a 20% chance of missing a true effect. Raising power to 90% gives you more confidence but requires a larger sample.

Alpha level is the risk you’re willing to take of finding a result that looks real but isn’t. Most studies set this at 0.05, meaning a 5% chance of a false positive. Setting it lower (say, 0.01) makes your standard stricter and pushes the required sample size up.

Margin of error describes how close your sample’s results will be to the true population value. A survey of 400 people yields a margin of error around 5% at a 95% confidence level. Here’s the key relationship to remember: to cut your margin of error in half, you need to quadruple your sample size. Going from 5% precision to 2.5% precision means jumping from roughly 400 to 1,600 respondents.
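Both the 400-respondent figure and the quadrupling rule fall out of the standard formula for estimating a proportion, n = z^2 * p(1 - p) / e^2. A minimal sketch using SciPy, assuming the conservative worst case p = 0.5:

```python
import math
from scipy.stats import norm

def sample_size_for_moe(moe, confidence=0.95, p=0.5):
    """Respondents needed to estimate a proportion within +/- moe.

    Uses n = z^2 * p * (1 - p) / e^2, with p = 0.5 as the
    worst-case (largest-variance) assumption.
    """
    z = norm.ppf(1 - (1 - confidence) / 2)  # 1.96 at 95% confidence
    return math.ceil(z**2 * p * (1 - p) / moe**2)

for moe in (0.05, 0.025):
    print(f"{moe:.1%} margin of error -> {sample_size_for_moe(moe)} respondents")

# 5.0% margin of error -> 385 respondents
# 2.5% margin of error -> 1537 respondents (halving the margin roughly quadruples n)
```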

Where the “Rule of 30” Comes From

You may have heard that 30 is the minimum sample size for statistical analysis. This comes from the central limit theorem, which states that as your sample grows, the distribution of sample averages starts to look like a bell curve, regardless of how the underlying data is distributed. At around 30 observations, this approximation becomes reliable enough for many standard statistical tests.

But 30 is a floor for mathematical convenience, not a target for good research. It means your statistical tools will technically work at that size. It does not mean your results will be precise, powerful, or convincing. For most practical purposes, 30 is far too small to detect anything but the most obvious effects. Think of it as the minimum to get your engine running, not the fuel to reach your destination.
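You can watch the central limit theorem kick in with a quick simulation: draw repeated samples from a clearly non-normal distribution and check how symmetric the sample means become. A sketch using NumPy; the exponential distribution is just an arbitrary skewed choice:

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw 10,000 samples of size n from a heavily right-skewed distribution
# (the exponential has skewness 2), then measure how symmetric the
# distribution of sample means looks at each n.
for n in (2, 5, 30, 100):
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    skew = ((means - means.mean()) ** 3).mean() / means.std() ** 3
    print(f"n = {n:>3}: skewness of sample means = {skew:.2f}")

# The skewness shrinks roughly as 2 / sqrt(n): near 1.4 at n = 2, down to
# about 0.37 by n = 30, which is why ~30 is often "close enough" to normal.
```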

Sample Sizes in Clinical Trials

Drug development offers a useful window into how sample sizes scale with the stakes involved. The FDA outlines three phases of human testing, each with a different purpose and participant range:

  • Phase 1 trials typically enroll 20 to 80 people. The goal is safety, not effectiveness, so a small group is enough to identify major side effects.
  • Phase 2 trials expand to a few dozen up to about 300 people. These start testing whether the drug actually works and refine the dosing.
  • Phase 3 trials involve several hundred to about 3,000 people. This is where researchers confirm effectiveness and monitor side effects across a diverse population.

The progression makes intuitive sense. Early on, you just need enough people to spot problems. Later, when you’re trying to measure a precise treatment effect and make sure it holds across different types of patients, you need far more.

Sample Sizes for Surveys

If you’re designing a survey, sample size calculations involve one extra wrinkle: response rates. Not everyone you contact will respond, so you need to send your survey to more people than your target sample. Email surveys without follow-up reminders typically get only 25% to 30% response rates. With multiple follow-up contacts across different channels, you can push response rates up to around 60% to 70%.

This matters because low response rates introduce bias. If only 30% of people respond, you’re missing input from 70% of your sample, and those non-responders may differ from responders in ways that skew your results. A good benchmark for most survey research is a 60% response rate. If you need 400 completed surveys and expect a 50% response rate, you should plan to contact at least 800 people.
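The oversampling arithmetic is simple: divide the number of completes you need by the response rate you expect, then round up. A minimal sketch:

```python
import math

def contacts_needed(target_completes, expected_response_rate):
    """People to contact so the expected completes hit the target."""
    return math.ceil(target_completes / expected_response_rate)

print(contacts_needed(400, 0.50))  # 800 contacts at a 50% response rate
print(contacts_needed(400, 0.25))  # 1600 contacts at a 25% response rate
```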

One more consideration for surveys: when your total population is small, you can sometimes get away with a smaller sample. The general rule is that if your sample is less than 5% of the total population, population size doesn’t meaningfully affect the calculation. But if you’re surveying, say, 200 employees at a company and planning to sample 50 of them (25% of the population), you can apply a correction that reduces the sample you actually need.
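That adjustment is the standard finite population correction, n_adj = n0 / (1 + (n0 - 1) / N), where n0 is the sample size calculated as if the population were unlimited and N is the actual population size. A minimal sketch:

```python
import math

def fpc_adjusted(n0, population):
    """Finite population correction: n0 / (1 + (n0 - 1) / N)."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# 385 respondents is the usual requirement for a 5% margin of error
# at 95% confidence when the population is effectively unlimited.
print(fpc_adjusted(385, 1_000_000))  # 385: a huge population changes nothing
print(fpc_adjusted(385, 200))        # 132: a 200-person company needs far fewer
```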

Sample Sizes in Qualitative Research

Not all research uses numbers. Qualitative studies based on interviews or focus groups follow a completely different logic. Instead of statistical power, the goal is “saturation,” the point where additional interviews stop revealing new themes or insights.

A systematic review of empirical studies found that saturation typically occurs within 9 to 17 interviews or 4 to 8 focus group discussions, particularly when the study population is relatively similar and the research question is focused. Studies with more diverse populations or broader objectives tend to need more: multi-country research and studies exploring deeper layers of meaning in people’s responses push beyond these ranges.

How to Pick the Right Number for Your Project

The practical starting point is to define what you’re trying to measure and how precise you need to be. A few guidelines that apply across most situations:

  • Smaller expected effects need bigger samples. If the thing you’re measuring is subtle (a small difference between groups, a weak correlation), you’ll need hundreds or thousands of observations to detect it reliably.
  • Higher confidence requires more data. Moving from 90% to 95% confidence, or from 80% to 90% power, always costs more participants.
  • Budget and timeline matter. Increasing sample size reduces statistical errors but increases cost and time. There is always a tradeoff between precision and practicality.
  • Attrition and non-response require oversampling. Whatever your target number, recruit more than you need. People drop out of studies, skip questions, and ignore survey invitations.

For a rough sense of scale: with standard settings of 80% power, a 5% significance level, and a moderate effect size, comparing two groups often lands somewhere around 30 to 80 participants per group. Detecting a small effect with those same settings pushes requirements to roughly 400 per group. Surveys aiming for a 5% margin of error at 95% confidence typically need around 400 respondents. These are ballpark figures, and your specific situation will shift them, but they give you a reasonable frame of reference before running a formal calculation.
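If you want to reproduce these ballparks yourself, the standard normal-approximation formula for comparing two group means is n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2, where d is the standardized effect size. A minimal sketch using SciPy; the small/medium/large labels follow Cohen’s conventional values of 0.2, 0.5, and 0.8:

```python
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for comparing two group means.

    Normal approximation: n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2.
    An exact t-test calculation adds a participant or two per group.
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 at alpha = 0.05
    z_beta = norm.ppf(power)           # 0.84 at 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label} effect (d = {d}): ~{n_per_group(d)} per group")

# small effect (d = 0.2): ~393 per group
# medium effect (d = 0.5): ~63 per group
# large effect (d = 0.8): ~25 per group
```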