The critical region (also called the rejection region) is the set of test statistic values that leads you to reject the null hypothesis. If your calculated test statistic lands inside the critical region, you conclude that the data provides enough evidence against the null hypothesis in favor of the alternative. If it falls outside, you fail to reject the null hypothesis.
This concept is central to hypothesis testing in statistics, and understanding how it works gives you a clear framework for making data-driven decisions.
How the Critical Region Works
Every hypothesis test starts with a null hypothesis, which is essentially the default assumption (for example, “this drug has no effect” or “these two groups have the same average”). The critical region defines how extreme your data needs to be before you’re willing to say that default assumption is wrong.
Here’s the process: you collect data, calculate a test statistic from it, and then check whether that statistic falls in the critical region. If it does, the result is “statistically significant” and you reject the null hypothesis. If it doesn’t, you lack sufficient evidence to reject it. The boundaries of the critical region are set by specific cutoff points called critical values.
The Role of the Significance Level
The size of the critical region is determined by the significance level, commonly written as alpha (α). This is the probability of rejecting the null hypothesis when it’s actually true, a mistake known as a Type I error. The most common choice is α = 0.05, meaning you accept a 5% chance of incorrectly rejecting a true null hypothesis.
A smaller alpha (like 0.01) shrinks the critical region, making it harder to reject the null hypothesis and reducing the risk of a false positive. A larger alpha (like 0.10) expands it, making rejection easier but increasing that risk. Choosing alpha is a judgment call you make before running the test, and it depends on how serious a false positive would be in your context.
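This relationship is easy to see numerically. Here is a short sketch using Python's standard-library `statistics.NormalDist`; the three alphas are just the illustrative choices mentioned above:

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, standard deviation 1

# A smaller alpha pushes the critical values further into the tails,
# shrinking the rejection region; a larger alpha does the opposite.
for alpha in (0.01, 0.05, 0.10):
    crit = std_normal.inv_cdf(1 - alpha / 2)  # two-tailed cutoff
    print(f"alpha = {alpha:.2f} -> reject H0 if |z| > {crit:.3f}")
```

This prints critical values of about 2.576, 1.960, and 1.645, confirming that tightening alpha from 0.10 to 0.01 moves the boundary of the critical region further out.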
One-Tailed vs. Two-Tailed Tests
Where the critical region sits on the distribution depends on the type of test you’re running.
In a two-tailed test, you’re looking for any significant difference in either direction. The alpha is split evenly between both ends of the distribution. At α = 0.05, that means 0.025 (2.5%) sits in the upper tail and 0.025 in the lower tail. Your result is significant if the test statistic falls in the top 2.5% or the bottom 2.5%.
In a one-tailed test, you’re only interested in one direction (for example, “is this mean greater than X?”). The entire alpha is concentrated in a single tail. At α = 0.05, all 5% goes into whichever tail matches your alternative hypothesis. This makes it easier to detect an effect in that specific direction, but you completely ignore effects in the opposite direction.
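The difference between the two layouts can be sketched with the stdlib normal distribution; α = 0.05 here is just the conventional choice, and the test statistic of 1.8 is an arbitrary example:

```python
from statistics import NormalDist

z = NormalDist()
alpha = 0.05

# Two-tailed: alpha is split across both tails (0.025 each).
two_tailed_crit = z.inv_cdf(1 - alpha / 2)   # ~1.960

# One-tailed (upper): all of alpha sits in a single tail.
one_tailed_crit = z.inv_cdf(1 - alpha)       # ~1.645

# A statistic of 1.8 is significant one-tailed but not two-tailed:
stat = 1.8
print("two-tailed reject:", abs(stat) > two_tailed_crit)  # False
print("one-tailed reject:", stat > one_tailed_crit)       # True
```

This illustrates the trade-off: the one-tailed test detects the effect at 1.8, but only because it gives up any ability to flag effects in the other direction.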
Finding Critical Values
The boundaries of the critical region are determined by critical values, which you look up in statistical tables or calculate with software. The specific table you use depends on the type of test.
When the population standard deviation is known (or the sample is large enough for the normal approximation to hold), you use z-critical values from the standard normal distribution. At α = 0.05 for a two-tailed test, the critical values are ±1.96, meaning the critical region includes any z-score above 1.96 or below -1.96. For a one-tailed test at the same alpha, the critical value is 1.645 in the direction of interest.
For smaller samples, you typically use t-critical values, which depend on degrees of freedom (roughly, your sample size minus one). With 10 degrees of freedom at α = 0.05 for a two-tailed test, the critical values are ±2.23, further out than the z-values because smaller samples carry more uncertainty. As degrees of freedom increase, t-critical values gradually approach the z-values; by 1,000 degrees of freedom, the two-tailed critical value at α = 0.05 rounds to 1.96, effectively identical to the z-value.
The Decision Rule
The logic for making a decision is straightforward. If your test statistic is more extreme than the critical value in the direction of the alternative hypothesis, you reject the null hypothesis. If it’s less extreme, you do not reject. “More extreme” simply means further into the tail of the distribution.
For example, if your critical value is 1.96 and your calculated z-score is 2.4, the test statistic is in the critical region. You reject the null hypothesis. If your z-score is 1.5, it’s outside the critical region, and you don’t reject.
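The decision rule reduces to a one-line comparison. In this sketch the function name is just illustrative, and the default critical value assumes a two-tailed z-test at α = 0.05:

```python
def in_critical_region(z_score, critical_value=1.96):
    """Two-tailed decision: the statistic is 'more extreme' when its
    absolute value exceeds the critical value."""
    return abs(z_score) > critical_value

print(in_critical_region(2.4))  # True  -> reject the null hypothesis
print(in_critical_region(1.5))  # False -> fail to reject
```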
Critical Region vs. P-Value Approach
There are two equivalent ways to carry out a hypothesis test: the critical region approach and the p-value approach. They always produce the same conclusion.
With the critical region approach, you compare your test statistic to the critical value. With the p-value approach, you calculate the probability of observing a result at least as extreme as yours, assuming the null hypothesis is true (the p-value), and compare it to alpha. Your test statistic lands in the critical region if and only if the p-value is less than alpha. Mathematically, these are the same check expressed differently.
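You can check this equivalence numerically with the stdlib normal distribution; the z-scores in the loop are arbitrary examples:

```python
from statistics import NormalDist

z_dist = NormalDist()
alpha = 0.05
z_crit = z_dist.inv_cdf(1 - alpha / 2)  # ~1.96

for z in (2.4, 1.5, 1.96, 3.2):
    in_region = abs(z) > z_crit
    p_value = 2 * (1 - z_dist.cdf(abs(z)))  # two-tailed p-value
    # The critical-region check and the p-value check always agree.
    assert in_region == (p_value < alpha)
    print(f"z = {z}: p = {p_value:.4f}, reject = {in_region}")
```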
The p-value approach has become more popular for a practical reason: reporting a p-value lets readers apply their own significance level. If you report p = 0.03, someone using α = 0.05 would reject the null hypothesis, while someone using α = 0.01 would not. The critical region approach locks you into a single alpha from the start. That said, the critical region framework is often easier to visualize, especially when you’re first learning hypothesis testing, because you can literally picture the shaded tails of a distribution curve and ask whether your result landed there.
A Quick Example
Suppose you want to test whether a new teaching method changes average test scores relative to the standard method, where students average 75 points. You set α = 0.05 and run a two-tailed test, since you want to detect a difference in either direction. Your critical values are ±1.96.
After collecting data, you calculate a z-score of 2.15. That falls above 1.96, placing it in the critical region. You reject the null hypothesis and conclude that the new method produces a statistically significant difference in scores. If the z-score had been 1.80, it would fall outside the critical region, and you wouldn’t have enough evidence to claim a meaningful difference.
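Putting the example end to end in Python: the sample mean, population standard deviation, and sample size below are hypothetical numbers chosen to reproduce a z-score of 2.15.

```python
import math
from statistics import NormalDist

# Null hypothesis: the mean score is 75 (the standard method's average).
mu_0 = 75.0
sample_mean = 77.15   # hypothetical sample average under the new method
sigma = 10.0          # assumed known population standard deviation
n = 100               # hypothetical sample size

z_score = (sample_mean - mu_0) / (sigma / math.sqrt(n))  # = 2.15

alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96

if abs(z_score) > z_crit:
    print(f"z = {z_score:.2f} is in the critical region: reject H0")
else:
    print(f"z = {z_score:.2f} is outside the critical region: fail to reject H0")
```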
Why the Critical Region Matters
The critical region gives hypothesis testing a concrete decision boundary. Without it, you’d have a test statistic but no principled way to decide what counts as “extreme enough” to matter. By fixing alpha before collecting data, you set the rules of the game in advance, which prevents the temptation of adjusting your threshold after seeing the results.
The size of the critical region also directly controls your Type I error rate. If you set α = 0.05, you know that across many tests where the null hypothesis is true, you’ll incorrectly reject it about 5% of the time. This makes statistical conclusions reproducible and comparable across different studies and fields.