What Is the 10% Condition in Statistics?

The 10% condition in statistics states that your sample size should be no more than 10% of the total population when you’re sampling without replacement. When this condition is met, you can treat individual observations as roughly independent of each other and use simpler formulas for standard error. When it’s violated, your calculations need an extra adjustment to stay accurate.

This rule comes up constantly in introductory statistics courses, especially when working with confidence intervals and hypothesis tests for proportions or means. Understanding why it exists helps you know when your standard formulas work and when they don’t.

Why Independence Matters in Sampling

Most standard statistical formulas assume that each observation in your sample is independent of the others. When you sample with replacement (putting each item back before drawing the next), every draw is truly independent. But in practice, most real-world sampling happens without replacement. You survey a person once, test a product once, or measure a specimen once, then move on to the next.

Sampling without replacement means each draw slightly changes the makeup of the remaining population. If you pull a defective item from a batch, the proportion of defective items left in the batch shifts. Technically, every observation depends on which items were already removed. The 10% condition tells you when this dependence is so small that you can safely ignore it and still use the standard formulas that assume independence.

The Rule in Plain Terms

The condition is simple: if your sample size n is less than or equal to 10% of your population size N, you can proceed with the standard formulas. Written as an inequality, that’s n ≤ 0.10N. For example, if you’re sampling from a city of 50,000 people, your sample should be no larger than 5,000 for the condition to hold.
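The inequality is simple enough to express as a one-line check. As a sketch, using the numbers from the text (the function name is illustrative, not from any standard library):

```python
def ten_percent_condition(n, N):
    """Return True when the sample size n is at most 10% of population N."""
    return n <= 0.10 * N

# Examples from the text:
print(ten_percent_condition(5000, 50_000))  # True: exactly at the 10% limit
print(ten_percent_condition(30, 200))       # False: 15% of the class
```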

In most real scenarios, this condition is easily satisfied. Polling 1,200 adults from a country of millions, surveying 100 students from a university of 30,000, or testing 50 items from a production run of 10,000 all pass the test comfortably. Problems arise mainly with small populations: sampling 30 students from a class of 200, or testing 15 parts from a shipment of 80.

What Happens When the Condition Is Met

When your sample is a small fraction of the population, removing items barely changes the population’s composition. The standard formula for the standard error of the mean is simply σ/√n, and for a proportion it’s √[p(1−p)/n]. Both treat each observation as independent, and the slight dependence introduced by sampling without replacement is small enough to ignore.

Mathematically, the reason this works involves something called the finite population correction factor. This factor equals √[(N−n)/(N−1)], and it’s technically always part of the true standard error formula. When n is small relative to N, this factor is very close to 1. For instance, if you sample 100 from a population of 5,000, the correction factor is √[(5000−100)/(5000−1)] ≈ 0.990. Multiplying your standard error by 0.990 changes almost nothing, so you skip it.
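The calculation above can be checked in a few lines. This is a minimal sketch; the σ value is made up purely for illustration:

```python
import math

def fpc(n, N):
    """Finite population correction factor sqrt((N - n) / (N - 1))."""
    return math.sqrt((N - n) / (N - 1))

sigma, n, N = 12.0, 100, 5000      # sigma is hypothetical
se_plain = sigma / math.sqrt(n)    # standard formula, assumes independence
factor = fpc(n, N)                 # ~0.990 for this sample
se_corrected = se_plain * factor

print(round(factor, 3))        # 0.99
print(round(se_plain, 3))      # 1.2
print(round(se_corrected, 3))  # 1.188
```

Multiplying by the factor changes the standard error from 1.2 to about 1.188, which is why the correction is safely skipped here.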

What Happens When It’s Violated

If your sample exceeds 10% of the population, ignoring the correction factor starts to matter. The standard formulas will overestimate the true variability in your data. That means your confidence intervals will be wider than they should be, and your hypothesis tests will be more conservative than necessary. This error won’t inflate your false positive rate, but it does cost statistical power: real effects become harder to detect, and your estimates are less precise than your data actually supports.

The overestimation gets worse as the sample becomes a larger fraction of the population. If you sample half the population, the correction factor drops to about 0.707, meaning the true standard error is only about 71% of what the uncorrected formula gives you. At the extreme, if you measure the entire population, the correction factor drops to zero: there’s no sampling variability at all because you have the complete data.

To see the scale of the problem: sampling 5% of a population produces a correction factor of about 0.975, which barely matters. At the 10% threshold the factor is about 0.949, a roughly 5% reduction in the standard error. Sampling 20% gives about 0.894, which noticeably shrinks the standard error, and sampling 50% gives about 0.707. The 10% threshold is a practical cutoff: beyond it, the distortion becomes large enough to care about.
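A short sketch tabulating the factor at several sampling fractions (using an illustrative population of 10,000) reproduces these numbers:

```python
import math

def fpc(n, N):
    """Finite population correction factor sqrt((N - n) / (N - 1))."""
    return math.sqrt((N - n) / (N - 1))

N = 10_000  # illustrative population size
for fraction in (0.05, 0.10, 0.20, 0.50):
    n = int(fraction * N)
    print(f"{fraction:>4.0%} sampled -> factor {fpc(n, N):.3f}")
```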

Applying the Finite Population Correction

When the 10% condition is not met, you multiply your standard error by the finite population correction factor. The adjusted formula for the standard error of the mean becomes (σ/√n) × √[(N−n)/(N−1)]. For proportions, the variance formula becomes [(N−n)/(N−1)] × [p(1−p)/n], and you take the square root for the standard error.

Some textbooks write the correction as 1 − (n/N), where n/N is the sampling fraction. This version is an approximation to (N−n)/(N−1) that is very close when N is large, and it appears inside the variance formula rather than the standard error formula (you’d take its square root to get the standard error form). Both expressions capture the same idea: as your sample takes up more of the population, the remaining uncertainty shrinks.
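As a sketch, a single helper can cover both the uncorrected and corrected cases for a proportion. The defect rate below is hypothetical, paired with the 15-of-80 shipment example mentioned earlier:

```python
import math

def se_proportion(p, n, N=None):
    """Standard error of a sample proportion; applies the finite population
    correction inside the variance when the population size N is given."""
    variance = p * (1 - p) / n
    if N is not None:
        variance *= (N - n) / (N - 1)  # correction applied to the variance
    return math.sqrt(variance)

# Hypothetical defect proportion of 0.2, 15 parts from a shipment of 80:
p, n, N = 0.2, 15, 80
print(round(se_proportion(p, n), 4))     # 0.1033 (uncorrected)
print(round(se_proportion(p, n, N), 4))  # 0.0937 (corrected)
```

With nearly 19% of the shipment sampled, the corrected standard error is meaningfully smaller than the uncorrected one.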

Where You’ll Encounter This Condition

In AP Statistics and introductory college courses, the 10% condition is one of several conditions you check before constructing confidence intervals or running hypothesis tests. For a one-sample proportion test, for example, you typically verify three things: random sampling, the 10% condition, and a large enough sample for the normal approximation to work (the “large counts” condition). Skipping any of these checks means your results may not be reliable.
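The two numeric checks can be sketched in a few lines; randomness has to be judged from the study design, and the threshold of 10 expected successes and failures is the common textbook convention (the function name is illustrative):

```python
def check_one_prop_conditions(n, N, p0, threshold=10):
    """Check the 10% and large-counts conditions for a one-sample
    proportion test with null proportion p0. The random-sampling
    condition cannot be checked numerically."""
    ten_percent = n <= 0.10 * N
    large_counts = n * p0 >= threshold and n * (1 - p0) >= threshold
    return ten_percent, large_counts

# 100 students from a university of 30,000, testing p0 = 0.5:
print(check_one_prop_conditions(100, 30_000, 0.5))  # (True, True)
```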

In professional survey work and quality control, the finite population correction is applied directly rather than relying on the 10% rule of thumb. Auditors sampling financial records from a company with a few hundred transactions, or quality inspectors testing a limited batch of products, routinely use the adjusted formula. The 10% rule is essentially a shortcut that tells students when the adjustment is small enough to skip.

The key takeaway is practical: check whether your sample is small relative to the population. If it is, use the standard formulas with confidence. If it isn’t, apply the correction factor so your confidence intervals and test results reflect the actual precision of your data.