A chi-square critical value is a threshold number on the chi-square distribution that determines whether your test result is statistically significant. If the test statistic you calculate from your data exceeds this critical value, you reject the null hypothesis. If it falls below, you don’t. Two things determine the critical value: your significance level (alpha) and your degrees of freedom.
How the Critical Value Works
Think of the chi-square distribution as a right-skewed curve that starts at zero and stretches out to the right. The critical value marks a point on that curve. Everything to the right of it is called the “rejection region,” and it represents outcomes so unlikely under the null hypothesis that you’d conclude something real is going on. Everything to the left is the “fail to reject” zone, where your data looks consistent with random chance.
The significance level, usually written as alpha, controls how much of the curve falls in that rejection region. When alpha is 0.05, you’re saying: “I’ll reject the null hypothesis if there’s less than a 5% chance of seeing results this extreme by luck alone.” A smaller alpha like 0.01 makes the critical value larger, meaning you need stronger evidence to reject the null hypothesis. A larger alpha like 0.10 lowers the bar.
The Two Inputs: Alpha and Degrees of Freedom
Every chi-square critical value depends on exactly two inputs.
Significance level (alpha) is the risk of a false positive you’re willing to accept. The most common choice is 0.05, but researchers also use 0.01 and 0.10 depending on the context. A lower alpha produces a higher critical value, making it harder to reject the null hypothesis.
Degrees of freedom (df) reflect the structure of your data. How you calculate them depends on which chi-square test you’re running. In a goodness-of-fit test, where you’re checking whether observed counts match expected proportions, degrees of freedom equal the number of categories minus one (k − 1). In a test of independence, where you’re checking whether two variables in a table are related, degrees of freedom equal (rows − 1) × (columns − 1). A 3-by-4 table, for example, gives you (3 − 1) × (4 − 1) = 6 degrees of freedom.
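The two rules above can be sketched in a few lines of Python (the function names here are illustrative, not from any library):

```python
# Sketch of the two degrees-of-freedom rules described above.

def df_goodness_of_fit(num_categories: int) -> int:
    """Goodness-of-fit test: df = number of categories minus 1."""
    return num_categories - 1

def df_independence(num_rows: int, num_cols: int) -> int:
    """Test of independence: df = (rows - 1) * (columns - 1)."""
    return (num_rows - 1) * (num_cols - 1)

print(df_goodness_of_fit(3))   # three categories -> 2
print(df_independence(3, 4))   # 3-by-4 table -> (3 - 1) * (4 - 1) = 6
```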
As degrees of freedom increase, the critical value increases too. This makes intuitive sense: more categories or table cells give chance deviations more places to accumulate, so the summed statistic has to be bigger before it counts as surprising. Note that degrees of freedom depend on the number of categories or cells, not on the sample size.
Common Critical Values at Alpha 0.05
Most chi-square tests in introductory coursework and basic research use an alpha of 0.05 with upper-tail rejection. Here are the critical values you’ll encounter most often:
- 1 degree of freedom: 3.841
- 2 degrees of freedom: 5.991
- 3 degrees of freedom: 7.815
- 5 degrees of freedom: 11.070
- 10 degrees of freedom: 18.307
So if you’re running a test of independence on a 2-by-2 table (1 degree of freedom) at alpha 0.05, your calculated test statistic needs to be greater than 3.841 to reject the null hypothesis. If you compute a chi-square statistic of 5.2, that clears the threshold, and you’d conclude the two variables are not independent. If you get 2.9, it doesn’t, and you’d fail to reject.
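The decision rule at alpha 0.05 can be encoded directly from the small table above (a minimal sketch; the dictionary just hardcodes the tabulated values):

```python
# Upper-tail decision rule at alpha = 0.05, using the critical
# values tabulated above, keyed by degrees of freedom.
CRITICAL_VALUES_05 = {1: 3.841, 2: 5.991, 3: 7.815, 5: 11.070, 10: 18.307}

def reject_null(test_statistic: float, df: int) -> bool:
    """Reject when the statistic exceeds the critical value."""
    return test_statistic > CRITICAL_VALUES_05[df]

print(reject_null(5.2, 1))  # True:  5.2 > 3.841, reject
print(reject_null(2.9, 1))  # False: 2.9 <= 3.841, fail to reject
```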
How to Find the Critical Value
You have three practical options. Chi-square distribution tables, found in the back of most statistics textbooks and published by NIST, list critical values organized by degrees of freedom (rows) and significance level (columns). You look up your row and column, and the number at the intersection is your critical value.
Online calculators and spreadsheet functions do the same thing without the table. In Excel or Google Sheets, the function CHISQ.INV.RT(alpha, df) returns the upper-tail critical value directly. Typing CHISQ.INV.RT(0.05, 3) returns 7.815.
Statistical software like R, Python, or SPSS also calculates critical values, though in practice these programs usually just give you a p-value and let you compare that to alpha instead of comparing a test statistic to a critical value. Both approaches reach the same conclusion.
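In Python, both decision routes look like this (a sketch assuming SciPy is installed; the test statistic of 9.2 is an illustrative value):

```python
# Both decision routes: critical value vs. p-value.
from scipy.stats import chi2

alpha, df = 0.05, 3
stat = 9.2  # illustrative test statistic

# Route 1: compare the statistic to the upper-tail critical value.
critical = chi2.isf(alpha, df)   # plays the role of CHISQ.INV.RT(0.05, 3)
print(round(critical, 3))        # 7.815

# Route 2: compare the p-value to alpha.
p_value = chi2.sf(stat, df)      # upper-tail probability of the statistic

# The two routes always reach the same conclusion.
assert (stat > critical) == (p_value < alpha)
```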
Making the Decision
The decision rule is straightforward for the most common scenario, an upper-tail test. You reject the null hypothesis when your calculated chi-square statistic is greater than the critical value. You fail to reject when it’s less than or equal to the critical value. Most chi-square applications, including tests of independence and goodness-of-fit tests, use this upper-tail approach because larger chi-square values indicate bigger discrepancies between what you observed and what you’d expect under the null hypothesis.
Two-sided tests exist but are less common with chi-square. In a two-sided test at alpha 0.05, you split the rejection region into both tails of the distribution: 2.5% in the upper tail and 2.5% in the lower tail. You’d reject if your test statistic is either above the upper critical value or below the lower critical value. This comes up mainly in tests about variance rather than in the categorical-data tests most people associate with chi-square.
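A two-sided setup can be sketched the same way (again assuming SciPy; df = 10 is an illustrative choice, such as a variance test on a sample of 11 observations):

```python
# Two-sided critical values at alpha = 0.05: 2.5% in each tail.
from scipy.stats import chi2

alpha, df = 0.05, 10
lower = chi2.ppf(alpha / 2, df)       # cuts off 2.5% in the lower tail
upper = chi2.ppf(1 - alpha / 2, df)   # cuts off 2.5% in the upper tail
print(round(lower, 3), round(upper, 3))  # roughly 3.247 and 20.483

def reject_two_sided(stat: float) -> bool:
    """Reject if the statistic falls in either tail."""
    return stat < lower or stat > upper
```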
A Quick Example
Suppose you survey 200 people about their preference for three brands of coffee and want to know if preferences are evenly split. You have three categories, so degrees of freedom = 3 – 1 = 2. You choose alpha = 0.05, which gives you a critical value of 5.991.
You compute your chi-square test statistic by comparing the observed number of people who picked each brand to the expected count (66.7 per brand if preferences were equal). Say the statistic comes out to 8.4. Because 8.4 is greater than 5.991, you reject the null hypothesis and conclude that coffee preferences are not evenly distributed. If the statistic had been 4.1 instead, you’d fail to reject, meaning you don’t have enough evidence to say preferences differ from an even split.
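The coffee example can be worked end to end in plain Python. The observed counts below are hypothetical (the scenario only states the final statistic), chosen so they sum to 200:

```python
# Coffee survey worked end to end; observed counts are hypothetical.
observed = [85, 65, 50]          # hypothetical brand counts, total 200
expected = [200 / 3] * 3         # even split: about 66.7 per brand

# Chi-square statistic: sum of (observed - expected)^2 / expected.
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

critical = 5.991                 # alpha = 0.05, df = 2
print(round(stat, 2), stat > critical)  # 9.25 True -> reject
```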
The critical value itself never changes for a given combination of alpha and degrees of freedom. What changes is the test statistic, which depends entirely on your data. The critical value is just the yardstick you measure it against.

