You can narrow a confidence interval in three main ways: increase your sample size, reduce the variability in your data, or lower your confidence level. Each approach shrinks the margin of error through a different part of the formula, and each comes with trade-offs worth understanding before you collect your data.
The core formula behind every confidence interval is: point estimate ± critical value × standard error. That “±” portion is your margin of error, and everything that follows the ± sign is fair game for narrowing. Let’s walk through each lever you can pull.
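That formula is easy to sketch in code. The helper names and the sample values below (mean 50, standard deviation 10, n = 100) are hypothetical, purely for illustration:

```python
import math

def margin_of_error(sd, n, z=1.96):
    """Margin of error for a sample mean: critical value x standard error."""
    return z * sd / math.sqrt(n)

def confidence_interval(mean, sd, n, z=1.96):
    """Point estimate +/- margin of error."""
    me = margin_of_error(sd, n, z)
    return (mean - me, mean + me)

# Hypothetical example: sample mean 50, standard deviation 10, n = 100
low, high = confidence_interval(50, 10, 100)
print(round(low, 2), round(high, 2))  # 48.04 51.96
```

Every lever discussed below changes one of the three inputs: n, sd, or z.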
Increase Your Sample Size
This is the most common and most reliable way to get a tighter interval. The standard error of a sample mean is calculated as the standard deviation divided by the square root of your sample size (n). Because n sits under a square root, there’s a diminishing-returns effect: doubling your sample size doesn’t cut the interval in half. It shrinks it by about 29%. To actually halve the width of your interval, you need to quadruple the sample size.
That square root relationship matters for planning. Going from 100 to 400 observations cuts your margin of error in half. Going from 400 to 1,600 cuts it in half again. Early gains are cheap; later gains get expensive fast. If you’re designing a study or survey, run the numbers beforehand so you know how many observations you actually need to reach your target precision rather than collecting data and hoping for the best.
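The square-root relationship is simple enough to check directly. Since interval width scales with 1/√n (all else equal), the ratio of widths for two sample sizes is √(n_old / n_new):

```python
import math

def relative_width(n_old, n_new):
    """Ratio of interval widths when n changes, all else equal (width ~ 1/sqrt(n))."""
    return math.sqrt(n_old / n_new)

print(relative_width(100, 200))  # ~0.707: doubling n shrinks the width by ~29%
print(relative_width(100, 400))  # 0.5: quadrupling n halves the width
```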
Reduce Variability in Your Data
The standard deviation is the other component sitting inside the standard error. If your data points are scattered widely around the mean, your interval will be wide no matter how large your sample is. Bringing that spread down directly tightens the interval.
In practice, you can reduce variability several ways:
- Use more precise measurement tools. A lower measurement error means a lower standard error, which translates directly into a narrower interval. If you’re timing a physical task, for instance, averaging multiple trials on different days reduces random noise in each person’s score.
- Standardize your procedures. Inconsistent data collection introduces variability that has nothing to do with what you’re measuring. Controlling environmental conditions, training observers, and following strict protocols all help.
- Restrict your population. Studying a more homogeneous group (say, adults aged 30 to 40 instead of 18 to 65) naturally reduces the spread in your measurements. The trade-off is that your results apply to a narrower group.
- Use a paired or within-subjects design. When each participant serves as their own control, you remove between-person variability from the equation, often dramatically shrinking the standard deviation of the differences you care about.
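The paired-design point is worth seeing with numbers. The before/after scores below are invented for illustration, but they show the typical pattern: individuals differ a lot from each other, yet each person's change is small and consistent, so the differences have a far smaller standard deviation than the raw scores:

```python
import statistics

# Hypothetical before/after scores for 8 participants (illustrative data only)
before = [72, 85, 60, 90, 78, 66, 81, 74]
after  = [70, 82, 59, 86, 75, 64, 78, 71]

# Between-person spread in the raw scores is large...
print(round(statistics.stdev(before), 1))  # 9.9

# ...but the paired differences are tightly clustered.
diffs = [b - a for b, a in zip(before, after)]
print(round(statistics.stdev(diffs), 1))   # 0.9
```

A confidence interval built on those differences will be roughly an order of magnitude narrower than one built on the raw scores with the same n.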
When your data is heavily skewed, the standard deviation can overstate the typical spread. Transforming skewed data with a logarithmic or square root transformation pulls extreme values closer to the center, reducing variability and often narrowing the interval. Just remember that the resulting interval lives on the transformed scale, so back-transform the endpoints before interpreting them.
Lower Your Confidence Level
The critical value (z-score) in the formula scales directly with how confident you want to be. The common confidence levels and their critical values are:
- 90% confidence: critical value of 1.645
- 95% confidence: critical value of 1.96
- 99% confidence: critical value of 2.576
Dropping from 95% to 90% confidence replaces 1.96 with 1.645, shrinking your margin of error by about 16% with no extra data collection at all. Moving from 99% down to 95% is an even bigger drop, roughly 24% narrower.
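You can verify those percentages straight from the critical values. The standard deviation and sample size below are hypothetical placeholders; only the ratios between critical values matter:

```python
import math

def margin_of_error(sd, n, z):
    """Margin of error: critical value x standard error."""
    return z * sd / math.sqrt(n)

critical = {"90%": 1.645, "95%": 1.96, "99%": 2.576}
sd, n = 10, 100  # hypothetical values; the ratios are what matter

for level, z in critical.items():
    print(level, round(margin_of_error(sd, n, z), 3))

print(round(1 - 1.645 / 1.96, 3))   # ~0.161: 95% -> 90% is ~16% narrower
print(round(1 - 1.96 / 2.576, 3))   # ~0.239: 99% -> 95% is ~24% narrower
```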
The catch is obvious: a lower confidence level means a higher chance your interval misses the true value. At 95%, you accept a 5% chance of being wrong. At 90%, that rises to 10%. For exploratory work or situations where a rough estimate is fine, 90% may be perfectly acceptable. For high-stakes decisions, giving up confidence to get a tighter interval is usually the wrong trade.
Apply a Finite Population Correction
If you’re sampling from a known, limited population (say, all 2,000 employees at a company), and your sample covers a meaningful chunk of that population, you can apply a finite population correction that genuinely narrows the interval. The adjustment multiplies the standard error by √((N − n)/(N − 1)), where n is your sample size and N is the total population; for large populations this factor is approximately √(1 − n/N).
This correction becomes meaningful once your sample exceeds roughly 5% to 10% of the total population. Below that threshold, the factor is so close to 1 that it barely moves the interval, and it’s usually omitted. But when you’re surveying, say, 500 out of 2,000 people (25% of the population), the correction meaningfully tightens your estimate because you’ve already observed a large fraction of the group you’re generalizing to.
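Here is the 500-out-of-2,000 example worked through, using the correction factor √((N − n)/(N − 1)) and a hypothetical standard deviation of 10:

```python
import math

def fpc_standard_error(sd, n, N):
    """Standard error of the mean with the finite population correction applied."""
    fpc = math.sqrt((N - n) / (N - 1))
    return (sd / math.sqrt(n)) * fpc

sd, n, N = 10, 500, 2000  # hypothetical sd; sampling 500 of 2,000 people

print(round(sd / math.sqrt(n), 3))          # uncorrected standard error
print(round(fpc_standard_error(sd, n, N), 3))  # corrected: noticeably smaller
```

With 25% of the population sampled, the correction factor is about 0.87, so the interval shrinks by roughly 13% at no cost in confidence.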
Choosing the Right Approach
In most real-world situations, increasing sample size is the safest lever because it doesn’t sacrifice confidence or limit who your results apply to. But it’s also the most expensive. If budget or logistics constrain how much data you can collect, reducing variability through better measurement and tighter study design gives you more precision per observation, effectively doing more with less.
Lowering the confidence level is the quickest fix on paper, but it’s really an accounting trick: you’re not gaining precision, you’re accepting more risk. It’s best reserved for situations where the consequences of being wrong are low, or where you need a preliminary estimate before committing to a larger study.
The most effective strategies often combine approaches. A well-designed study with standardized procedures (reducing variability), an appropriately large sample, and a confidence level matched to the stakes of the decision will produce the narrowest interval you can defend. The key insight is that all three levers live inside the same formula, and understanding where each one sits helps you decide which is worth pulling for your specific situation.

