The large counts condition is a rule in statistics that checks whether your sample is large enough to approximate the sampling distribution of a sample proportion with a normal distribution. It requires that both the expected number of successes (np) and the expected number of failures (n(1−p)) in your sample are at least 10. If both values meet that threshold, normal-curve-based methods like confidence intervals and hypothesis tests for proportions will be reasonably accurate.
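The check itself is just two inequalities. As a minimal sketch in Python (the function name here is made up for illustration):

```python
def meets_large_counts(n: int, p: float) -> bool:
    """Large counts condition: expected successes n*p and
    expected failures n*(1-p) must both be at least 10."""
    return n * p >= 10 and n * (1 - p) >= 10

# n = 100, p = 0.5: 50 expected successes, 50 expected failures
print(meets_large_counts(100, 0.5))   # True
# n = 100, p = 0.05: only 5 expected successes
print(meets_large_counts(100, 0.05))  # False
```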
Why the Threshold Is 10
When you’re dealing with proportions, the underlying data follows a binomial distribution. Each observation is either a “success” or a “failure,” and you’re interested in the overall proportion of successes. The binomial distribution can be clunky to work with directly, especially as sample sizes grow, so statisticians rely on the fact that it starts to resemble a normal (bell-shaped) distribution once the sample gets large enough.
This convergence happens because of the Central Limit Theorem, which says that the sampling distribution of a proportion becomes approximately normal as n increases. But “large enough” depends on how lopsided p is. If p is close to 0.5, even a moderate sample produces a roughly symmetric, bell-shaped distribution. If p is close to 0 or 1, you need a much larger sample before the distribution stops being skewed. The np ≥ 10 and n(1−p) ≥ 10 rule is a practical cutoff that accounts for this: it ensures there are enough expected outcomes on both sides of the distribution to make the normal approximation reliable regardless of where p falls.
Other Names for the Same Rule
You’ll see this condition called different things depending on the textbook. The “success-failure condition” is the most common synonym, and it means exactly the same thing: at least 10 expected successes and at least 10 expected failures. Some instructors also refer to it as the “normality condition” for proportions. The name “large counts” simply emphasizes that both counts (successes and failures) need to be large enough, not just the overall sample size n.
How to Check It in Practice
The way you check the condition depends on whether you’re building a confidence interval or running a hypothesis test. The difference comes down to which value of p you plug in.
For a confidence interval, you don’t know the true population proportion p. Instead, you use your sample proportion (p-hat). You check that the number of successes in your sample is at least 10 and the number of failures is at least 10. If you surveyed 200 people and 18 said yes, your counts are 18 successes and 182 failures. Both are at least 10, so the condition is satisfied.
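For the survey numbers above, the check and the resulting large-sample (Wald) interval can be sketched in Python; z = 1.96 is assumed for 95% confidence:

```python
import math

n, successes = 200, 18
failures = n - successes
p_hat = successes / n  # 0.09

# For a confidence interval, check the observed counts directly
assert successes >= 10 and failures >= 10  # condition satisfied

# 95% Wald interval: p_hat +/- 1.96 * sqrt(p_hat * (1 - p_hat) / n)
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"({lower:.3f}, {upper:.3f})")  # (0.050, 0.130)
```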
For a hypothesis test, you use the hypothesized proportion from your null hypothesis (p₀), not your sample results. If your null hypothesis states that p = 0.03 and your sample size is 150, then np₀ = 4.5, which is less than 10. The condition fails, and you shouldn’t use the normal approximation for that test, even if your sample seems reasonably large.
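Running those numbers confirms the failure:

```python
n, p0 = 150, 0.03
expected_successes = n * p0          # 4.5 -- below 10
expected_failures = n * (1 - p0)     # 145.5
print(expected_successes >= 10 and expected_failures >= 10)  # False
```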
That second example highlights an important point: a large sample size alone doesn’t guarantee the condition is met. When the proportion you’re testing is very small or very large, you may need hundreds or thousands of observations before both counts clear 10.
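You can turn that observation around and ask how large n must be for a given p. Both inequalities hold once n is at least 10 divided by the smaller of p and 1−p, so a small sketch (hypothetical function name) is:

```python
import math

def min_n_for_large_counts(p: float) -> int:
    """Smallest n with both n*p >= 10 and n*(1-p) >= 10."""
    return math.ceil(10 / min(p, 1 - p))

print(min_n_for_large_counts(0.5))    # 20
print(min_n_for_large_counts(0.03))   # 334
print(min_n_for_large_counts(0.001))  # 10000
```

At p = 0.5 a sample of 20 suffices, but at p = 0.001 you need ten thousand observations before the expected success count reaches 10.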
What Happens When It’s Not Met
If the large counts condition fails, the normal approximation can give misleading results. Confidence intervals may not have the coverage they claim (a “95%” interval might actually capture the true proportion only 85% of the time), and p-values from hypothesis tests can be inaccurate enough to change your conclusions.
When your counts are too small, the standard approach is to use an exact binomial test instead. Rather than approximating the binomial distribution with a normal curve, this method works directly with the binomial probability calculations. It’s more computationally intensive, which is why the normal approximation exists as a shortcut in the first place, but statistical software handles exact tests easily. In R, for example, the function is simply binom.test().
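To show the idea behind an exact test (rather than R's built-in), here is a rough stdlib-Python sketch of a two-sided exact binomial p-value, using the common convention of summing the probabilities of all outcomes no more likely than the observed one under the null:

```python
from math import comb

def exact_binom_pvalue(k: int, n: int, p0: float) -> float:
    """Two-sided exact binomial p-value: sum P(X = x) over every x
    whose probability under H0 is <= that of the observed count k."""
    def pmf(x: int) -> float:
        return comb(n, x) * p0**x * (1 - p0)**(n - x)
    p_obs = pmf(k)
    # small tolerance guards against floating-point near-ties
    return sum(pmf(x) for x in range(n + 1)
               if pmf(x) <= p_obs * (1 + 1e-9))

# Test H0: p = 0.03 after observing 1 success in 150 trials
print(exact_binom_pvalue(1, 150, 0.03))
```

This works directly with binomial probabilities, so no large counts condition is needed; the trade-off is an O(n) sum instead of one normal-curve lookup.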
A Quick Example
Suppose you want to estimate the proportion of customers who return a product. You sample 80 customers and find that 12 returned their purchase. To check the large counts condition for a confidence interval, look at your observed counts: 12 successes and 68 failures. Both are at least 10, so you’re clear to use the normal approximation to build your interval.
Now suppose only 6 out of 80 returned the product. Your success count is 6, which is below 10. The condition is not met, and a standard confidence interval for proportions could be unreliable. You’d want to use an exact method or, if the problem is on an AP Statistics exam, simply state that the normal approximation is not appropriate and explain why.
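Both scenarios reduce to the same quick count check (sketch; the helper name is made up):

```python
def large_counts_ok(n: int, successes: int) -> bool:
    """For a confidence interval, check the observed counts directly."""
    return successes >= 10 and (n - successes) >= 10

print(large_counts_ok(80, 12))  # True: 12 successes, 68 failures
print(large_counts_ok(80, 6))   # False: only 6 successes
```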
Where This Fits Among Other Conditions
The large counts condition is one of several assumptions you typically check before doing inference on proportions. The others are independence (each observation doesn’t influence the next) and the 10% condition (your sample is no more than 10% of the population, which helps ensure independence when sampling without replacement). All three need to hold for the standard one-sample z-procedures for proportions to be valid. The large counts condition specifically addresses the shape of the sampling distribution, ensuring it’s close enough to normal that z-based calculations give accurate results.
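Of these, independence by design has to be judged rather than computed, but the other two are mechanical. A combined sketch (hypothetical helper; the population size of 5,000 is an assumed figure for the earlier customer example):

```python
def one_sample_z_conditions(n: int, successes: int,
                            population: int) -> dict:
    """Mechanical checks for one-sample z-procedures on a proportion.
    (Independence of observations must still be judged separately.)"""
    failures = n - successes
    return {
        "large_counts": successes >= 10 and failures >= 10,
        "ten_percent": n <= 0.10 * population,
    }

# 80 customers sampled from an assumed base of 5,000
print(one_sample_z_conditions(80, 12, 5000))
# {'large_counts': True, 'ten_percent': True}
```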