How to Find the Critical Values of r Using a Table

The critical value of r is the minimum correlation coefficient your data must reach to be considered statistically significant. You find it using a critical value table, where you look up your degrees of freedom (sample size minus 2) and your chosen significance level. If your calculated r meets or exceeds the critical value, the correlation is unlikely to be due to chance alone.

What Critical Values of r Tell You

When you calculate a Pearson correlation coefficient, you get a number between -1 and +1 that describes how strongly two variables are related. But a correlation of, say, 0.45 doesn’t automatically mean anything meaningful is happening. With a small sample, random noise alone can produce correlations that look impressive. The critical value of r is the threshold that separates “this could easily be random” from “this probably reflects a real relationship.”

The underlying question is simple: if there were truly no relationship between these two variables in the population, how likely is it that you’d see a correlation this strong just by chance? If your calculated r is farther from zero than the critical value, you reject the null hypothesis (that the true correlation is zero) and conclude the relationship is statistically significant.

How to Calculate Degrees of Freedom

Before you can look up a critical value, you need your degrees of freedom. The formula is straightforward:

df = n – 2

Here, n is your sample size (the number of paired observations). If you measured height and weight for 25 people, your degrees of freedom would be 23. You subtract 2 because estimating the linear relationship between two variables uses up two degrees of freedom: fitting the best-fit line requires estimating two parameters, the slope and the intercept.

Using a Critical Value Table

The most common method is a printed or online table of critical r values. These tables have degrees of freedom listed down the left column and significance levels across the top. To use one:

  • Calculate your degrees of freedom (n – 2).
  • Choose your significance level. The most common choice is 0.05, meaning you’re willing to accept a 5% chance of a false positive.
  • Decide on a one-tailed or two-tailed test (more on this below).
  • Find the intersection of your df row and your significance column. That number is your critical value.
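
The lookup steps above can be sketched in a few lines of Python. This is an illustrative sketch, assuming SciPy is available; the `critical_r` helper name is mine, not a standard function:

```python
import math
from scipy import stats  # assumed available; any library with an inverse t CDF works

def critical_r(df, alpha=0.05, tails=2):
    """Critical value of Pearson's r, mirroring a table lookup.

    (Illustrative helper, not a standard library function.)
    """
    t_crit = stats.t.ppf(1 - alpha / tails, df)      # critical t-value
    return math.sqrt(t_crit**2 / (t_crit**2 + df))   # convert t back to r

print(f"{critical_r(20):.3f}")  # df = 20, two-tailed 0.05 -> 0.423
```

The t-to-r conversion in the last line is the same formula the printed tables are built from.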

For example, at the 0.05 significance level using a two-tailed test, here are some common critical values:

  • df = 10 (n = 12): critical r = 0.576
  • df = 20 (n = 22): critical r = 0.423
  • df = 30 (n = 32): critical r = 0.349

If your calculated r (ignoring the sign) is equal to or greater than the critical value, the correlation is statistically significant. If it falls below, you cannot conclude the relationship is real.

Why Sample Size Changes the Threshold

Notice how the critical value drops as sample size increases. With only 12 data points, you need a correlation of at least 0.576 to reach significance. With 32 data points, a correlation of 0.349 is enough. This makes intuitive sense: more data gives you more confidence, so a weaker correlation can still be meaningful.

This relationship cuts both ways, though. Very large samples tend to make even tiny correlations statistically significant, even when they have no practical importance. A study with thousands of participants might find a correlation of 0.05 that clears the significance threshold but represents an almost nonexistent real-world relationship. Statistical significance and practical significance are not the same thing.

One-Tailed vs. Two-Tailed Tests

Your choice of test type directly affects the critical value. A two-tailed test checks whether the correlation is significantly different from zero in either direction, positive or negative. It splits your significance level in half, putting 0.025 in each tail when you’re using alpha = 0.05. A one-tailed test puts all 0.05 in one direction, making it easier to reach significance, but only in the direction you predicted beforehand.

Use a two-tailed test when you don’t have a strong reason to predict the direction of the relationship. Use a one-tailed test only when you have a clear, pre-existing hypothesis about whether the correlation should be positive or negative. Choosing a one-tailed test after seeing your results just to make a borderline finding significant is not appropriate. Most critical value tables are labeled for two-tailed tests, so if you’re running a one-tailed test, use the column for twice your alpha (look at the 0.10 column for a one-tailed test at the 0.05 level).
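
Here is how the one- vs. two-tailed choice shifts the threshold for df = 20 (a sketch assuming SciPy is available; the helper function is illustrative):

```python
import math
from scipy import stats  # assumed available

def critical_r(df, alpha, tails):
    # Illustrative helper: a two-tailed test splits alpha across both tails,
    # a one-tailed test puts all of it in the predicted direction.
    t_crit = stats.t.ppf(1 - alpha / tails, df)
    return math.sqrt(t_crit**2 / (t_crit**2 + df))

two_tailed = critical_r(20, 0.05, tails=2)
one_tailed = critical_r(20, 0.05, tails=1)
print(f"two-tailed: {two_tailed:.3f}, one-tailed: {one_tailed:.3f}")
# two-tailed: 0.423, one-tailed: 0.360
```

The one-tailed bar is lower, which is exactly why choosing it after seeing the data is inappropriate.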

The Formula Behind the Table

Critical value tables are built from a conversion between r and a t-statistic. If you want to calculate significance directly rather than using a table, you can convert your correlation coefficient to a t-value with this formula:

t = r × √(n – 2) / √(1 – r²)

You then compare this t-value to the critical t-value for your degrees of freedom and significance level. For instance, if you have 15 data points and a correlation of 0.846, the calculation works out to a t-value of about 5.72. You’d compare that against the critical t-value for 13 degrees of freedom. Since 5.72 far exceeds the critical t-value at any common significance level, that correlation is highly significant.

This formula is what’s happening “under the hood” of the table. Most people don’t need to use it directly, but it’s useful when your degrees of freedom fall between values listed in a table, or when you want an exact p-value rather than a yes/no significance decision.
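
The worked example above can be reproduced directly (SciPy assumed, used here only for the exact p-value):

```python
import math
from scipy import stats  # assumed available; only needed for the p-value

n, r = 15, 0.846          # the worked example from the text
df = n - 2

t = r * math.sqrt(df) / math.sqrt(1 - r**2)   # t = r × √(n − 2) / √(1 − r²)
p = 2 * stats.t.sf(abs(t), df)                # exact two-tailed p-value
print(f"t = {t:.2f}")  # t = 5.72
```

The survival function `stats.t.sf` gives the upper-tail area; doubling it yields the two-tailed p-value.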

Finding Critical Values in Software

If you’re working in Excel or Google Sheets, you can skip the table entirely. The built-in function T.INV.2T returns the critical t-value for a two-tailed test, and T.INV (with 1 minus your alpha as the probability argument) returns it for a one-tailed test. The process works like this:

  • Step 1: Get the critical t-value using =T.INV.2T(0.05, n-2) for a two-tailed test at the 0.05 level.
  • Step 2: Convert that t-value back to a critical r using the formula: critical r = √(t² / (t² + df)).

If you want to test a specific correlation you’ve already calculated, you can convert your r to a t-value using the formula above and then use =T.DIST.2T(ABS(t), df) to get an exact p-value (T.DIST.2T requires a nonnegative t-value, hence the ABS). A p-value below your chosen significance level (typically 0.05) means your correlation is significant.

Statistical software like R, SPSS, and Python’s SciPy library calculate p-values for correlations automatically when you run a correlation test, so you rarely need to look up critical values manually in those environments.
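
As a sketch of what those packages do, here is SciPy’s built-in correlation test on synthetic data (made up purely for illustration), with the manual r-to-t conversion alongside to show the two agree:

```python
import numpy as np
from scipy import stats

# Synthetic data for illustration: 22 paired observations with a built-in link
rng = np.random.default_rng(42)
x = rng.normal(size=22)
y = 0.7 * x + rng.normal(size=22)

r, p = stats.pearsonr(x, y)   # correlation and two-tailed p-value in one call

# The reported p-value matches the manual r-to-t conversion from above
t = r * np.sqrt(len(x) - 2) / np.sqrt(1 - r**2)
p_manual = 2 * stats.t.sf(abs(t), len(x) - 2)
print(f"r = {r:.3f}, p = {p:.4f}")
```

Because the software hands you the p-value directly, the critical value table becomes an intermediate step you never see.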

Putting It All Together

Suppose you collected data from 22 participants and found a correlation of r = 0.48 between two variables. Your degrees of freedom are 20. Looking at a critical value table for a two-tailed test at the 0.05 significance level, the critical value is 0.423. Since 0.48 exceeds 0.423, you’d conclude the correlation is statistically significant. At the stricter 0.01 level, the critical value jumps to 0.537, and your result of 0.48 would fall short, meaning you couldn’t call it significant at that more demanding threshold.
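
The whole example can be checked in code (a sketch assuming SciPy is available, using the same t-to-r conversion as above):

```python
import math
from scipy import stats  # assumed available

n, r = 22, 0.48   # the example from the text
df = n - 2        # 20

thresholds = {}
for alpha in (0.05, 0.01):
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-tailed critical t-value
    thresholds[alpha] = math.sqrt(t_crit**2 / (t_crit**2 + df))
    verdict = "significant" if abs(r) >= thresholds[alpha] else "not significant"
    print(f"alpha = {alpha}: critical r = {thresholds[alpha]:.3f} -> {verdict}")
# alpha = 0.05: critical r = 0.423 -> significant
# alpha = 0.01: critical r = 0.537 -> not significant
```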

This is why reporting the actual r value and the p-value matters more than simply saying a result is “significant” or “not significant.” The same correlation can cross one threshold but not another, and the practical meaning of the relationship depends on context, not just whether it cleared a statistical bar.