There is no single correct number of participants for every study. The right sample size depends on what kind of study you’re running, what you’re trying to measure, and how precise your results need to be. A pilot study might need as few as 24 people. A Phase III clinical trial might need 3,000. A survey aiming for a 3% margin of error needs about 1,000 respondents. The number always comes from a calculation, not a guess.
The Four Factors That Determine Sample Size
Every sample size calculation for a quantitative study balances the same four elements: your significance level (alpha), your statistical power, the effect size you’re trying to detect, and how much variability exists in what you’re measuring. Change any one of these and the required number of participants shifts, sometimes dramatically.
Alpha is the risk you’re willing to accept of finding a result that isn’t real, known as a false positive. Most researchers set this at 0.05, meaning a 5% chance of declaring something significant when it isn’t. Power is the flip side: the probability that your study will detect a real effect when one exists. A power of 0.80 (80%) is the most common target, though some studies aim for 0.90. Together, these two thresholds are the starting point for any calculation.
Effect size is where things get interesting. This describes how big the difference or relationship you expect to find actually is. If you’re testing a drug that dramatically lowers blood pressure, that’s a large effect, and you won’t need many participants to see it clearly. If the expected improvement is modest, you’ll need far more people to separate the signal from the noise. To illustrate: at a power of 0.80 and alpha of 0.05, detecting a large effect size requires roughly 8 participants, a medium effect needs about 34, and a small effect demands around 788. That hundred-fold range shows why effect size matters more than any other factor in determining your sample.
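To make that scaling concrete, here is a minimal Python sketch of the standard normal-approximation formula for a one-group, two-sided test. It is an approximation, not the exact t-based calculation tools like G*Power perform, so its answers run slightly low (e.g. 32 rather than 34 for a medium effect):

```python
from math import ceil
from statistics import NormalDist

def approx_sample_size(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size to detect effect size d (Cohen's d)
    in a one-group, two-sided test. Exact t-based tools give slightly
    larger numbers."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # ~0.84 for power = 0.80
    return ceil(((z_alpha + z_power) / d) ** 2)

print(approx_sample_size(1.0))  # large effect  -> 8
print(approx_sample_size(0.5))  # medium effect -> 32
print(approx_sample_size(0.1))  # small effect  -> 785
```

Note the shape of the formula: required n scales with 1/d², so halving the effect size quadruples the sample you need.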
Sample Sizes by Study Type
Surveys
For surveys, sample size is driven by how tight you want your margin of error to be. At a 95% confidence level, 50 respondents give you a margin of error around 14 percentage points, which is too wide for most purposes. Bumping up to 400 respondents narrows that to about 5%. At 1,000 respondents, you reach roughly 3%, and going beyond 1,500 offers diminishing returns, dropping only to about 2% at 2,000 respondents. One counterintuitive fact: the size of the overall population you’re studying barely matters. Whether your target population is 50,000 or 50 million, you need about the same sample size for the same margin of error, as long as the population is substantially larger than your sample.
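Those margins follow directly from the textbook formula MoE = z·√(p(1−p)/n), using the worst case p = 0.5. A short sketch, with an optional finite population correction that shows why the total population size barely matters:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(n, confidence=0.95, p=0.5, population=None):
    """Margin of error for a simple random sample of size n.
    p = 0.5 is the worst case; pass population to apply the
    finite population correction."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 at 95%
    moe = z * sqrt(p * (1 - p) / n)
    if population is not None:
        moe *= sqrt((population - n) / (population - 1))
    return moe

print(f"{margin_of_error(50):.3f}")    # 0.139 -> about 14 points
print(f"{margin_of_error(400):.3f}")   # 0.049 -> about 5 points
print(f"{margin_of_error(1000):.3f}")  # 0.031 -> about 3 points
# Population size is nearly irrelevant once it dwarfs the sample:
print(f"{margin_of_error(1000, population=50_000):.3f}")      # 0.031
print(f"{margin_of_error(1000, population=50_000_000):.3f}")  # 0.031
```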
Clinical Trials
Drug trials follow a structured progression where participant numbers increase at each phase. Phase I studies typically enroll 20 to 80 people and focus on safety and dosing. Phase II studies expand to a few hundred patients to get early signals about whether the treatment works. Phase III trials, which generate the evidence needed for regulatory approval, enroll 300 to 3,000 participants. These numbers reflect the increasingly demanding statistical requirements at each stage, moving from “is this safe?” to “does this reliably work better than the alternative?”
Clinical trials are also expensive. Per-patient costs in Phase III trials range from roughly $17,500 to $62,000 depending on the therapeutic area, complexity of monitoring, and length of follow-up. This creates real tension between statistical ideals and practical budgets.
Observational Studies
Case-control studies, where researchers compare people who have a condition to those who don’t, typically use smaller samples than cohort studies that follow people forward in time. In a large review comparing the two designs, case-control studies had a median sample size of 767, while cohort studies had a median of 4,700. Cohort studies need more participants because they’re tracking outcomes that may take years to develop, and only a fraction of participants will experience the event of interest.
Qualitative Research
Qualitative studies (interviews, focus groups, ethnographic work) don’t use power calculations at all. Instead, the guiding principle is saturation: the point at which new interviews stop producing new insights. In practice, researchers often hear the same themes emerging again and again, and collecting more data becomes redundant. There’s no fixed formula for when this happens, and sample sizes in qualitative work are commonly chosen in round numbers like 10, 20, or 30. That rounding itself hints at how much these decisions rely on convention and practical judgment rather than precise calculation.
Pilot Studies Have Their Own Rules
If you’re running a pilot study to test whether a larger study is feasible, you don’t need to power it the same way. Pilot studies aren’t designed to prove that one treatment beats another. They exist to test your procedures, estimate dropout rates, and refine your measurement tools. Published recommendations for pilot study sample sizes range from 24 to 70 total participants across treatment groups, depending on whose guidance you follow. A reasonable floor is about 10 participants per group, which allows at least some investigation of feasibility outcomes like recruitment rates and protocol adherence.
Why the “30 Participants” Rule Exists
You may have heard that you need at least 30 participants for a valid study. This comes from the central limit theorem, a foundational principle in statistics: once your sample hits about 30, the distribution of sample means approximates a normal (bell-shaped) curve for most underlying data distributions (heavily skewed data can require more). Below 30, the shape of your data matters much more, and you may need to use different statistical tests that don’t assume normality.
This rule is useful as a bare minimum threshold for certain types of analysis, but it should never be treated as a target. Thirty participants is almost always too few to detect anything but the largest effects. It’s better understood as a floor below which standard statistical methods start to break down, not as a recommendation for how many people your study actually needs.
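The theorem is easy to see in a quick simulation. This sketch draws from an exponential distribution, which is strongly right-skewed, and shows that means of 30-observation samples still cluster symmetrically around the true mean:

```python
import random
from statistics import mean, pstdev

random.seed(0)  # reproducible

def sample_means(sample_size, reps=2000):
    """Means of repeated samples from a right-skewed exponential
    distribution with true mean 1.0."""
    return [mean(random.expovariate(1.0) for _ in range(sample_size))
            for _ in range(reps)]

means = sample_means(30)
# Individual draws are heavily skewed, yet the 30-observation sample
# means land close to 1.0 with spread near 1/sqrt(30) ~ 0.18,
# as the central limit theorem predicts.
print(mean(means), pstdev(means))
```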
Plan for Dropouts
Whatever sample size you calculate, you’ll need to recruit more than that number because some participants will inevitably drop out. In a review of randomized trials published in major medical journals, 54% had some loss to follow-up, with a median dropout rate of 7%. Some studies lost nearly half their participants. Losses below 5% are generally considered minor, while losses above 20% raise serious concerns about whether the remaining data can be trusted.
The standard approach is to inflate your target sample size to account for expected attrition. If you need 100 participants to complete the study and you anticipate 15% dropout, you’d recruit about 118. The exact adjustment depends on the study length, the burden on participants, and the population you’re working with. Longer studies and more demanding protocols lose more people.
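The adjustment itself is a one-line calculation: divide the number of completers you need by the expected retention rate. A minimal sketch:

```python
from math import ceil

def recruit_target(n_completers, dropout_rate):
    """Number to recruit so that n_completers remain after expected attrition."""
    if not 0 <= dropout_rate < 1:
        raise ValueError("dropout_rate must be in [0, 1)")
    return ceil(n_completers / (1 - dropout_rate))

print(recruit_target(100, 0.15))  # -> 118
print(recruit_target(100, 0.07))  # median dropout from the trial review -> 108
```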
Both Too Few and Too Many Participants Are Problems
An underpowered study, one with too few participants, is more than just a statistical weakness. It’s an ethical concern. If a study exposes people to risk (taking an experimental drug, undergoing extra procedures, giving up their time) but doesn’t have enough participants to answer the research question, those risks were taken for nothing. Ethics review boards expect researchers to justify their sample size for exactly this reason: enrolling participants in a study that can’t produce a meaningful answer wastes their contribution.
Over-enrollment carries its own problems. Enrolling far more participants than needed exposes extra people to potential harms without scientific justification, increases costs, and delays completion. The goal is a sample size large enough to detect a clinically meaningful effect with adequate confidence, and no larger.
How to Calculate Your Sample Size
Start by identifying your study design, your primary outcome, and the smallest effect you’d consider meaningful. Then decide on your alpha level (usually 0.05) and your desired power (usually 0.80 or 0.90). With these inputs, you can use free tools like G*Power, online calculators from academic institutions, or the sample size functions built into statistical software like R or Stata. For surveys, simpler online calculators let you plug in your desired margin of error and confidence level to get a number directly.
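Putting the pieces together, here is a sketch of the whole workflow for a two-group comparison, again using the normal approximation (exact t-based tools like G*Power return slightly larger numbers, e.g. 64 rather than 63 per group for a medium effect):

```python
from math import ceil
from statistics import NormalDist

def two_group_sample_size(d, alpha=0.05, power=0.80, dropout=0.0):
    """Per-group recruitment target for a two-sided test comparing two
    independent groups at effect size d (Cohen's d), inflated for
    expected dropout. Normal approximation; exact t-based tools
    run slightly higher."""
    z = NormalDist()
    n_complete = 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / d) ** 2
    return ceil(n_complete / (1 - dropout))

print(two_group_sample_size(0.5))                # -> 63 per group
print(two_group_sample_size(0.5, dropout=0.15))  # -> 74 per group
```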
If you’re unsure what effect size to expect, look at published studies on similar topics. Their reported effect sizes give you a realistic starting point. When no prior data exist, researchers sometimes use standardized benchmarks (small, medium, or large effects defined by statistician Jacob Cohen), but these are rough guides. The more precisely you can estimate the expected effect, the more accurately you can size your study, and the less likely you are to waste resources or miss a real finding.