A Pearson correlation table displays the relationships between multiple variables in a grid format, where each cell contains a number between -1 and +1. Reading one comes down to understanding three things: the sign (positive or negative), the size of the number, and whether the result is statistically significant. Once you know what to look for, these tables become straightforward to interpret.
How the Table Is Structured
A correlation table (also called a correlation matrix) lists the same set of variables along both the rows and the columns. Each cell shows the Pearson r value for the pair of variables where that row and column intersect. If you see “Age” on the left and “Income” across the top, the cell where they meet tells you how strongly those two variables are related.
The diagonal running from the top-left to the bottom-right corner always shows values of 1.00. That’s because each variable correlates perfectly with itself. The table is also symmetrical: the value where row A meets column B is identical to where row B meets column A. Because of this, many tables only display the lower triangle (below the diagonal), leaving the upper half blank to reduce clutter.
What the Sign Tells You
The positive or negative sign in front of the number tells you the direction of the relationship. A positive correlation means both variables move together: as one increases, the other increases too. Think of age and height in children. A negative correlation means they move in opposite directions: as one goes up, the other goes down. Hours of exercise and body weight, for instance, tend to have a negative correlation.
The sign says nothing about strength. A correlation of -0.45 is just as strong as +0.45; the negative sign only means the relationship runs in the opposite direction.
What the Number Tells You
The number itself, ignoring the sign, tells you how strong the relationship is. The closer it is to 1 (or -1), the stronger the linear relationship. The closer to 0, the weaker. A value of exactly 0 means no linear relationship at all.
A widely used framework from the statistician Jacob Cohen breaks the scale into rough categories:
- 0.10 to 0.29: Small correlation. There’s a relationship, but it’s weak and may not be obvious in everyday terms.
- 0.30 to 0.49: Medium correlation. A moderate, meaningful relationship.
- 0.50 and above: Large correlation. A strong relationship where knowing one variable gives you real predictive power over the other.
These thresholds apply to the absolute value, so they work for both positive and negative correlations. Keep in mind that Cohen’s benchmarks are general guidelines, not hard rules. In some fields, a correlation of 0.30 is considered impressively large. More recent research examining actual published studies has suggested that typical correlations in behavioral science are closer to 0.20, making Cohen’s “medium” benchmark of 0.30 already a relatively strong finding in practice.
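Cohen's benchmarks can be expressed as a small helper. This is a sketch of the cutoffs listed above, not a function from any statistics library, and the "negligible" label for values below 0.10 is my own wording:

```python
def cohen_label(r: float) -> str:
    """Rough Cohen (1988) size category for a Pearson correlation."""
    size = abs(r)  # the benchmarks apply to the absolute value
    if size >= 0.50:
        return "large"
    if size >= 0.30:
        return "medium"
    if size >= 0.10:
        return "small"
    return "negligible"  # below Cohen's smallest category

print(cohen_label(-0.45))  # medium: the sign is ignored
print(cohen_label(0.52))   # large
```

In practice you would temper these labels with your field's norms, as noted above.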
How to Read the Asterisks
Most correlation tables display small asterisks (*) next to certain values. These indicate statistical significance: if there were truly no relationship between the two variables, a correlation at least this strong would be unlikely to appear by chance. The standard convention follows a tiered system:
- * (one asterisk): p < .05, meaning that if no true relationship existed, a result this strong would occur less than 5% of the time.
- ** (two asterisks): p < .01, less than 1% of the time.
- *** (three asterisks): p < .001, less than 0.1% of the time.
Always check the footnote beneath the table to confirm what each asterisk means, since authors occasionally use different conventions. If a value has no asterisk, it’s not statistically significant. That doesn’t necessarily mean there’s no relationship; it may just mean the sample wasn’t large enough to detect one with confidence.
Some tables skip the asterisk system entirely and instead report exact p values in a separate row or column beneath each correlation. In that case, you’re looking for p values below .05 as the conventional cutoff for significance.
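The tiered convention above maps mechanically from p value to asterisks. A sketch of that mapping (always confirm it against the table's own footnote, since conventions vary):

```python
def stars(p: float) -> str:
    """Conventional significance markers for a given p value."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""  # not significant at the conventional .05 cutoff

print(f"r = .52{stars(0.004)}")  # r = .52**
```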
Why Sample Size Matters
The statistical significance of a correlation depends heavily on sample size. With a very large sample (say, 10,000 people), even a tiny correlation like 0.05 can reach statistical significance. With a small sample of 20 people, even a moderate correlation of 0.40 might not. This is why you should never rely on asterisks alone to judge whether a correlation is meaningful. Always look at the actual r value to assess practical importance.
When researchers test whether a correlation is significantly different from zero, they use degrees of freedom calculated as N minus 2, where N is the total number of observations. You may see this reported as “df” in a table or in the text accompanying it. The larger the degrees of freedom, the more statistical power the analysis had to detect real relationships.
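The interplay between r and sample size can be made concrete. The test described above converts r into a t statistic with df = N - 2; the same formula shows why a tiny r becomes significant with a huge sample while a moderate r may not with a small one. This sketch stops at the t statistic (deriving the exact p value requires the t distribution, which the standard library doesn't provide):

```python
import math

def t_statistic(r: float, n: int) -> float:
    """t statistic for testing whether a Pearson r differs from zero."""
    df = n - 2
    return r * math.sqrt(df) / math.sqrt(1 - r * r)

# Tiny correlation, huge sample: t is far beyond the ~1.96 critical value,
# so r = .05 comes out "significant" despite being practically negligible.
print(round(t_statistic(0.05, 10_000), 2))

# Moderate correlation, small sample: t falls short of the ~2.10 critical
# value for df = 18, so r = .40 fails to reach significance.
print(round(t_statistic(0.40, 20), 2))
```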
A Step-by-Step Example
Imagine you’re looking at a correlation table from a study on student performance. The variables are Study Hours, Test Score, and Social Media Use. Here’s how you’d read one cell:
The cell at the intersection of Study Hours (row) and Test Score (column) shows r = .52**. This tells you three things. First, the relationship is positive: students who study more tend to score higher. Second, the magnitude is large (above .50). Third, the double asterisk means this finding is statistically significant at the p < .01 level.
Now look at Study Hours and Social Media Use: r = -.31*. The negative sign means students who spend more time on social media tend to study fewer hours. The magnitude is medium. The single asterisk indicates significance at p < .05.
Finally, check Test Score and Social Media Use: r = -.12. No asterisk. This is a small negative correlation that didn’t reach statistical significance. You can’t confidently say social media use is related to test scores in this dataset.
Formatting Conventions You’ll See
If you’re reading published research, the numbers follow specific formatting rules from the American Psychological Association (APA). Correlation values are reported to two decimal places. There’s no leading zero before the decimal point because a correlation can never exceed 1.00, so you’ll see r = .35, not r = 0.35. Exact p values are reported to two or three decimals, except when they’re extremely small, in which case they appear as p < .001.
In the text of a paper, you might see a result written as r(48) = .35, p = .012. The number in parentheses is the degrees of freedom (sample size minus 2), so this study had 50 participants.
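These conventions are simple enough to capture in a hypothetical formatter. The function below is my own illustration of the APA rules just described, not part of any library; note the stripped leading zero and the p < .001 special case:

```python
def apa_r(r: float, df: int, p: float) -> str:
    """Format a correlation result in APA style, e.g. 'r(48) = .35, p = .012'."""
    r_txt = f"{r:.2f}".replace("0.", ".", 1)  # .35, not 0.35 (also handles -.35)
    if p < 0.001:
        p_txt = "p < .001"  # exact p is only reported down to three decimals
    else:
        p_txt = "p = " + f"{p:.3f}".replace("0.", ".", 1)
    return f"r({df}) = {r_txt}, {p_txt}"

print(apa_r(0.35, 48, 0.012))   # r(48) = .35, p = .012
print(apa_r(-0.31, 48, 0.0004)) # r(48) = -.31, p < .001
```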
Common Mistakes When Reading These Tables
The most frequent error is assuming that correlation means causation. A strong correlation between two variables does not mean one causes the other. A third, unmeasured variable could be driving both. Or the relationship could be coincidental. One estimate suggests that roughly 20% of published papers contain spurious correlations, relationships that appear strong but are statistical artifacts rather than meaningful connections.
Another mistake is ignoring the assumptions behind the Pearson coefficient. It only measures linear (straight-line) relationships. Two variables could have a strong curved relationship and still show a Pearson r near zero. The coefficient also assumes that the spread of data points is roughly even across the range of values (a property called homoscedasticity) and that both variables are approximately normally distributed. If these assumptions are violated, or if the data contains extreme outliers, the r value can be misleading. In those cases, a different measure called Spearman’s rank correlation is often more appropriate.
Finally, don’t confuse statistical significance with practical significance. A correlation of .08 with three asterisks simply means you had a huge sample. The relationship exists, technically, but it’s so small it probably doesn’t matter for real-world decisions. Always pair the asterisks with the actual r value to form a complete picture.

