What Is the Minimum Sample Size? The 30 Rule Explained

There is no single minimum sample size that works for every study. The number you need depends on what you’re studying, how precise your results need to be, and what type of research you’re doing. That said, 30 is the most commonly cited starting threshold in statistics, and for good reason: at a sample size of 30, the distribution of sample averages closely approximates a normal (bell-curve) distribution, which is the foundation most statistical tests rely on.

But 30 is a rule of thumb, not a universal answer. A survey of a population of 10,000 might need only about 400 respondents. A clinical drug trial might need 3,000. A qualitative interview study might need only 12. The right minimum depends entirely on context.

Why 30 Keeps Coming Up

The number 30 traces back to the central limit theorem, one of the most important principles in statistics. It says that if you take sufficiently large random samples from any population, the averages of those samples will form a bell curve, regardless of how the original population is distributed. It is the size of each sample, not the number of samples, that drives this: at around 30 observations, the approximation becomes reliable enough for most standard tests to work properly.

This matters practically because many common statistical tests, like the t-test, assume your data follows a normal distribution. When your sample hits 30, the t-distribution (used for smaller samples where you don’t know the true population spread) becomes nearly identical to the standard normal distribution. Below 30, your results become more sensitive to the shape of the underlying data, and you may need to use alternative methods that don’t assume normality.

So 30 isn’t magical. It’s the point where the math becomes forgiving enough that you can trust standard tools. If your data is heavily skewed or has extreme outliers, you may need more than 30 even for basic analyses.
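The convergence described above can be checked directly. This sketch (using SciPy, an assumption; any library with t and normal quantile functions works) compares two-sided 95% critical values of the t-distribution against the standard normal at several sample sizes:

```python
# Compare 95% two-sided critical values of the t-distribution
# (df = n - 1) against the standard normal as n grows.
from scipy.stats import norm, t

z = norm.ppf(0.975)  # standard normal critical value, ~1.96
for n in (5, 10, 30, 100):
    t_crit = t.ppf(0.975, df=n - 1)
    print(f"n={n:>3}  t critical={t_crit:.3f}  gap vs normal={t_crit - z:.3f}")
```

At n = 5 the gap is large (roughly 0.8), but by n = 30 it has shrunk below 0.1, which is why standard tests behave well from that point on.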

The Four Factors That Determine Your Sample Size

When researchers formally calculate a minimum sample size, they use something called a power analysis. Four variables interact to produce the number you need:

  • Effect size: How large is the difference or relationship you’re trying to detect? Smaller effects require dramatically larger samples. Detecting a large effect (say, a treatment that doubles recovery speed) might require only 8 participants per group. Detecting a subtle effect (a 5% improvement) could require 788.
  • Statistical power: This is the probability that your study will detect a real effect if one exists. The standard target is 80%, meaning you accept a 20% chance of missing a true effect. Raising power to 90% increases the sample you need.
  • Significance level (alpha): This is your tolerance for a false positive, typically set at 5%. A stricter threshold like 1% demands more participants.
  • Variability: The more spread out your data naturally is, the harder it becomes to distinguish a real signal from noise, and the more observations you need.

To illustrate how dramatically these interact: with a large effect size of 1.0, 80% power, and a 5% significance level, you need roughly 34 participants (17 per group). Drop the effect size to 0.2 (a small effect), and that number jumps to 788 (394 per group). The relationship between effect size and sample size is inverse and steep.
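These figures can be reproduced with a textbook formula. The sketch below uses the normal approximation for a two-sided, two-sample comparison, n per group = 2((z₁₋α/₂ + z₁₋β)/d)²; exact t-based calculations (for example, statsmodels' TTestIndPower) give marginally larger numbers:

```python
# Per-group sample size for comparing two groups, via the normal
# approximation: n = 2 * ((z_alpha/2 + z_beta) / d)^2.
import math

from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.960 for alpha = 0.05
    z_beta = norm.ppf(power)           # ~0.842 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(1.0))  # large effect: 16 per group (~32-34 total)
print(n_per_group(0.2))  # small effect: 393 per group (~786-788 total)
```

Halving the effect size quadruples the required sample, which is the "inverse and steep" relationship in action.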

Minimum Sample Sizes for Surveys

If you’re designing a survey, the calculation works differently than in experimental research. The key inputs are your desired margin of error, your confidence level, and the size of the population you’re surveying.

For a standard survey targeting 95% confidence with a 5% margin of error, you need approximately 385 respondents, a figure commonly rounded up to 400. This holds true whether your population is 50,000 or 5 million, because once a population gets large enough, the required sample size stabilizes. Surveying a city of 100,000 and a country of 100 million requires roughly the same sample for the same precision.

Small populations are the exception. If your total population is under 100, you’ll need to survey a proportionally larger share of it. In some cases, if the calculated sample size approaches the total population, it makes more sense to simply survey everyone.

The standard formula for large populations is Cochran’s formula: the squared z-score for your confidence level, multiplied by the estimated proportion of the attribute you’re measuring and by one minus that proportion, divided by the squared margin of error. For small or finite populations, a correction factor adjusts this number downward. In practice, most online sample size calculators handle both formulas automatically.
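In code, Cochran's formula and the finite-population correction look like this (a sketch; p = 0.5 is the conservative default when the true proportion is unknown, since it maximizes the required sample):

```python
# Cochran's formula for survey sample size, with the
# finite-population correction for small populations.
import math

from scipy.stats import norm

def cochran(margin=0.05, confidence=0.95, p=0.5, population=None):
    z = norm.ppf(1 - (1 - confidence) / 2)  # ~1.96 for 95% confidence
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        # Finite-population correction shrinks the requirement.
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(cochran())                # large population: 385 respondents
print(cochran(population=500))  # population of 500: 218 respondents
```

Note how the correction matters only when the population is small relative to the sample: at 500 people the requirement drops to 218, but at 5 million it stays essentially at 385.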

Researchers also typically inflate their target sample by 10 to 15% to account for people who don’t respond or drop out, and another 10 to 20% if the analysis requires adjusting for confounding variables.
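The inflation step above is simple arithmetic, with one subtlety: dividing the target by the expected retention rate (rather than multiplying by the dropout rate) guarantees the completing sample still meets the target. A sketch:

```python
# Inflate a recruitment target so the sample that *completes* the
# study still meets the required size after expected dropout.
import math

def inflate_for_dropout(target_n, dropout_rate):
    return math.ceil(target_n / (1 - dropout_rate))

print(inflate_for_dropout(400, 0.15))  # recruit 471 to retain ~400
```

Multiplying 400 by 1.15 would suggest recruiting 460, but 15% attrition from 460 leaves only 391; dividing by 0.85 gives 471, which attrition reduces to roughly 400.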

Sample Sizes in Clinical Trials

Drug development follows a structured progression with escalating sample sizes at each phase. Phase 1 trials, which primarily test safety and dosing, typically enroll 20 to 100 participants (often healthy volunteers). Phase 2 trials expand to a few hundred people who have the condition being studied, generating early data on whether the treatment works. Phase 3 trials, which provide the definitive evidence regulators need, involve 300 to 3,000 participants.

These aren’t arbitrary numbers. Each phase balances the ethical obligation to expose as few people as possible to an unproven treatment against the statistical need for enough data to draw reliable conclusions. Phase 3 trials are large because they need to detect treatment benefits with enough certainty to justify approving a drug for widespread use.

Qualitative Research Has Different Rules

Not all research involves numbers. In qualitative studies (interviews, focus groups, case studies), the goal isn’t statistical significance but “saturation,” the point where new interviews stop producing new insights. A systematic review of empirical studies found that most qualitative research reaches saturation within 9 to 17 interviews or 4 to 8 focus groups, particularly when the study population is relatively similar and the research question is narrowly defined.

More diverse populations or broader research questions push the number higher. But the principle is different from quantitative work: you’re not calculating a number in advance so much as monitoring whether additional data is still teaching you something new.
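That monitoring process can be made concrete. A minimal sketch (the interview data here is hypothetical; in practice the theme sets would come from coding transcripts) flags the point at which several consecutive interviews stop producing new themes:

```python
def saturation_point(coded_interviews, window=3):
    """Return the 1-based index of the last interview that produced
    a new theme, once `window` consecutive interviews add none."""
    seen = set()
    run = 0  # consecutive interviews with no new themes
    for i, themes in enumerate(coded_interviews, start=1):
        new = set(themes) - seen
        seen.update(new)
        run = 0 if new else run + 1
        if run == window:
            return i - window
    return None  # saturation not yet reached

# Hypothetical coded themes from six interviews.
interviews = [
    {"cost", "trust"}, {"trust", "access"}, {"stigma"},
    {"cost"}, {"access"}, {"trust"},
]
print(saturation_point(interviews))  # new themes stopped after interview 3
```

The choice of `window` is a judgment call: a wider window is a stricter standard, which suits broad research questions or heterogeneous populations.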

Pilot Studies and Early-Stage Research

If you’re running a pilot study to test whether a larger study is feasible, the sample size expectations are smaller. Recommendations vary, with some guidelines suggesting at least 12 participants per group and others recommending 30 or more per group. The purpose of a pilot isn’t to produce definitive results. It’s to estimate how variable your data will be, test your procedures, and generate the numbers you’ll plug into a power analysis for the full study.

How to Choose Your Number

Start by identifying what type of research you’re doing. If you’re comparing groups or testing an intervention, run a power analysis using estimated effect sizes from previous research or a pilot study. If you’re surveying a population, use a sample size calculator with your desired confidence level and margin of error. If you’re conducting qualitative interviews, plan for 12 to 17 participants and assess saturation as you go.

When you genuinely have no prior data and no way to estimate effect sizes, 30 per group is a defensible starting point for quantitative work. It satisfies the central limit theorem’s requirements and provides enough statistical power to detect large effects. But it won’t be sufficient for detecting small or moderate effects, which is where most real-world research questions live. The more precise your question and the subtler the effect you’re chasing, the more participants you’ll need.