A chi-square table is a reference chart that tells you the critical value you need to decide whether your chi-square test result is statistically significant. You use it by finding the intersection of two things you already know: your degrees of freedom (the row) and your significance level (the column). The number at that intersection is your critical value, the threshold your calculated test statistic must exceed to count as a significant result.
What the Table Actually Shows
A standard chi-square table has degrees of freedom listed down the left side (1, 2, 3, and so on) and probability values across the top (commonly 0.10, 0.05, 0.025, 0.01, and 0.001). The body of the table contains critical values. Most chi-square tables show right-tail probabilities, meaning the values across the top represent the area to the right of the critical value. If you need a left-tail probability, subtract it from 1 before looking it up. So a left-tail area of 0.05 corresponds to 0.95 in a right-tail table.
The most commonly used column is 0.05, which corresponds to a 5% significance level. This is the standard threshold in most fields for declaring a result statistically significant. The 0.01 column sets a stricter bar, requiring stronger evidence before you reject the null hypothesis.
How To Calculate Degrees of Freedom
Before you can use the table, you need your degrees of freedom. The formula depends on the type of chi-square test you’re running.
For a test of independence (comparing two categorical variables in a contingency table), degrees of freedom equal the number of rows minus one, multiplied by the number of columns minus one: (R – 1) × (C – 1). Don’t count the totals row or column. A 3×2 table, for example, gives you (3 – 1) × (2 – 1) = 2 degrees of freedom.
For a goodness-of-fit test (checking whether observed data matches an expected distribution), degrees of freedom equal the number of categories minus one. If you’re testing whether a die is fair across its 6 faces, you have 5 degrees of freedom.
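The two degrees-of-freedom formulas can be sketched as small helper functions. This is an illustrative snippet; the function names are my own, not from any standard library:

```python
# Degrees-of-freedom helpers for the two chi-square test types.
# (Illustrative sketch; function names are hypothetical.)

def df_independence(n_rows: int, n_cols: int) -> int:
    """Test of independence: (R - 1) x (C - 1), excluding totals."""
    return (n_rows - 1) * (n_cols - 1)

def df_goodness_of_fit(n_categories: int) -> int:
    """Goodness-of-fit test: number of categories minus one."""
    return n_categories - 1

print(df_independence(3, 2))   # 3x2 contingency table -> 2
print(df_goodness_of_fit(6))   # fair-die test across 6 faces -> 5
```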
Looking Up a Critical Value Step by Step
Here’s the process, start to finish:
- Step 1: Calculate your chi-square test statistic from your data using the standard formula (sum of (observed – expected)² / expected for each cell).
- Step 2: Determine your degrees of freedom using the formulas above.
- Step 3: Choose your significance level. In most cases, this is 0.05.
- Step 4: Find the row in the table that matches your degrees of freedom.
- Step 5: Move across that row to the column matching your significance level.
- Step 6: The number at that intersection is your critical value.
For example, with 1 degree of freedom at the 0.05 significance level, the critical value is 3.841. With 4 degrees of freedom at the same level, it’s 9.488. These are the numbers your test statistic needs to beat.
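The six steps above can be sketched in a few lines of Python. The embedded table fragment uses the standard 0.05-column critical values; the observed die-roll counts are made up for illustration:

```python
# Minimal sketch of steps 1-6: compute the statistic, then look up
# the critical value. CRITICAL_05 holds the standard 0.05-column
# entries for 1 to 5 degrees of freedom.

def chi_square_statistic(observed, expected):
    """Step 1: sum of (observed - expected)^2 / expected per cell."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

CRITICAL_05 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488, 5: 11.070}

# Goodness-of-fit example: a die rolled 60 times, expecting 10 per face.
observed = [8, 12, 9, 11, 14, 6]     # hypothetical counts
expected = [10] * 6
stat = chi_square_statistic(observed, expected)
df = len(observed) - 1               # step 2: 6 categories -> 5 df
print(round(stat, 3), CRITICAL_05[df])  # steps 3-6: compare to 11.070
```

Here the statistic (4.2) falls well short of 11.070, so this die shows no significant departure from fairness at the 0.05 level.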
Comparing Your Test Statistic to the Critical Value
Once you have both your calculated test statistic and the critical value from the table, the decision rule is straightforward. If your test statistic is greater than the critical value, you reject the null hypothesis: the difference you observed in your data is unlikely to have occurred by chance alone. If your test statistic does not exceed the critical value, you fail to reject the null hypothesis, meaning you don’t have enough evidence to call the result significant.
Say you ran a test of independence on a 3×3 table and got a chi-square statistic of 7.147 with 4 degrees of freedom. Looking at the table’s 0.05 column for 4 degrees of freedom, the critical value is 9.488. Since 7.147 is less than 9.488, you would not reject the null hypothesis at the 0.05 level. Scanning across the row, 7.147 falls between the critical values in the 0.20 and 0.10 columns, so the p-value lies between 0.10 and 0.20, meaning the pattern in your data could reasonably be due to chance.
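The decision rule from this example can be written as a one-line comparison. A quick sketch, using the 9.488 critical value for 4 degrees of freedom at the 0.05 level:

```python
# Decision rule: compare the calculated statistic to the table's
# critical value and state the conclusion.

def decide(statistic: float, critical_value: float) -> str:
    """Reject the null hypothesis only if the statistic exceeds the
    critical value."""
    if statistic > critical_value:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

print(decide(7.147, 9.488))  # 7.147 < 9.488 -> fail to reject
```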
Estimating a P-Value From the Table
Chi-square tables can also give you a rough estimate of your p-value, even though they only list a handful of probability columns. Instead of picking one significance level, read across the entire row for your degrees of freedom and find which two critical values your test statistic falls between. The p-value lies between the probabilities at the top of those two columns.
For instance, with 1 degree of freedom and a test statistic of 3.418, you’d find that 3.418 sits between the critical values for 0.10 (2.706) and 0.05 (3.841). So the p-value is somewhere between 0.05 and 0.10. You know the result is close to significant at the 5% level but doesn’t quite reach it. For a precise p-value, you’d need a calculator or software, but the table gets you in the right ballpark.
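The bracketing procedure can be automated by scanning one row of the table. The row below is the degrees-of-freedom = 1 row for the right-tail probabilities commonly printed in tables:

```python
# Bracket a p-value by finding which two critical values the
# statistic falls between. Each entry is (right-tail probability,
# critical value) for df = 1, sorted by increasing critical value.

ROW_DF1 = [(0.10, 2.706), (0.05, 3.841), (0.025, 5.024),
           (0.01, 6.635), (0.001, 10.828)]

def bracket_p_value(statistic, row):
    """Return (upper, lower) bounds on the p-value from one table row."""
    upper = 1.0  # statistic below every listed critical value
    for prob, crit in row:
        if statistic < crit:
            return (upper, prob)  # p lies between these probabilities
        upper = prob
    return (upper, 0.0)           # statistic beyond the last column

print(bracket_p_value(3.418, ROW_DF1))  # -> (0.1, 0.05)
```

For the article's example, the statistic 3.418 clears the 0.10 critical value (2.706) but not the 0.05 one (3.841), so the p-value is bracketed between 0.05 and 0.10.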
One-Sided vs. Two-Sided Tests
Most chi-square tests for independence and goodness-of-fit are one-sided upper-tail tests. You’re only asking whether your test statistic is large enough to land in the right tail of the distribution. For these, you simply use the significance level column directly. A 0.05 significance level means you look up the 0.05 column.
Two-sided tests come up less often with chi-square (the test for a single population variance is the usual case), but when they do, you split your significance level in half. For a two-sided test at the 0.05 level, you’d check the 0.025 column for the upper tail and the 0.975 column for the lower tail. You reject the null hypothesis if your statistic is more extreme than either critical value. If your table only shows right-tail probabilities, remember that a left-tail area of 0.025 is the same as a right-tail area of 0.975.
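The two-sided decision rule can be sketched generically. The critical values themselves would come from the 0.975 and 0.025 columns of your table for your degrees of freedom; they are left as parameters here rather than hardcoded:

```python
# Two-sided decision rule: the significance level is split across the
# tails, and the statistic is rejected if it lands in either one.
# lower_critical comes from the 0.975 column, upper_critical from the
# 0.025 column of a right-tail chi-square table.

def two_sided_reject(statistic: float,
                     lower_critical: float,
                     upper_critical: float) -> bool:
    """True if the statistic falls in either rejection region."""
    return statistic < lower_critical or statistic > upper_critical
```

A statistic between the two critical values fails to reject; anything below the lower or above the upper value rejects.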
When the Chi-Square Table Doesn’t Apply
The chi-square table assumes your data meets certain conditions, and the most important one involves expected cell counts. If expected frequencies in your contingency table are very low, the chi-square approximation breaks down. The classic guideline is that all expected cell counts should be at least 5. When they’re not, Fisher’s exact test is the standard alternative.
For 2×2 tables with a total sample size under about 40, a correction called the Yates continuity correction can improve accuracy. This adjustment subtracts 0.5 from the absolute difference between each observed and expected value before squaring, producing a slightly smaller, more conservative test statistic. It only applies to tables with 1 degree of freedom. For larger tables or larger sample sizes, the correction is negligible and can be skipped.
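The correction is a one-line change to the statistic formula: subtract 0.5 from each absolute difference before squaring. A sketch with a made-up 2×2 table whose equal margins give an expected count of 15 in every cell:

```python
# Yates continuity correction for a 2x2 table (1 degree of freedom):
# subtract 0.5 from |observed - expected| before squaring, which
# shrinks the statistic slightly.

def yates_statistic(observed, expected):
    return sum((abs(o - e) - 0.5) ** 2 / e
               for o, e in zip(observed, expected))

observed = [10, 20, 20, 10]   # hypothetical 2x2 table, flattened row by row
expected = [15, 15, 15, 15]   # equal row/column totals -> all cells expect 15

plain = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
corrected = yates_statistic(observed, expected)
print(round(plain, 3), round(corrected, 3))  # corrected is smaller
```

Here the uncorrected statistic is about 6.667 and the corrected one is 5.4, showing the conservative shift the correction produces.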
Quick Reference for Common Critical Values
These are the critical values you’ll use most often, all at the 0.05 significance level for upper-tail tests:
- 1 degree of freedom: 3.841 (2×2 tables, or goodness-of-fit with 2 categories)
- 2 degrees of freedom: 5.991
- 3 degrees of freedom: 7.815
- 4 degrees of freedom: 9.488
- 5 degrees of freedom: 11.070
At the stricter 0.01 significance level, these values increase: 6.635 for 1 degree of freedom, 9.210 for 2, 11.345 for 3, 13.277 for 4, and 15.086 for 5. The higher the bar, the larger your test statistic needs to be to clear it.
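The quick-reference values above can be packed into a small lookup, keyed by degrees of freedom and significance level. A minimal sketch using only the numbers listed in this section:

```python
# Quick-reference critical values from the table above, keyed by
# (degrees of freedom, significance level) for upper-tail tests.

CRITICAL_VALUES = {
    (1, 0.05): 3.841, (2, 0.05): 5.991, (3, 0.05): 7.815,
    (4, 0.05): 9.488, (5, 0.05): 11.070,
    (1, 0.01): 6.635, (2, 0.01): 9.210, (3, 0.01): 11.345,
    (4, 0.01): 13.277, (5, 0.01): 15.086,
}

def is_significant(statistic: float, df: int, alpha: float = 0.05) -> bool:
    """Upper-tail test: significant if the statistic exceeds the
    critical value for (df, alpha)."""
    return statistic > CRITICAL_VALUES[(df, alpha)]

print(is_significant(7.147, 4))        # False: 7.147 < 9.488
print(is_significant(7.147, 2))        # True: 7.147 > 5.991
print(is_significant(7.147, 2, 0.01))  # False: 7.147 < 9.210
```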