Reporting chi-square results follows a specific format: the chi-square symbol, degrees of freedom, sample size, the test statistic rounded to two decimal places, and the p-value. The exact string looks like this: χ²(2, N = 170) = 14.14, p < .01. Getting each piece right matters for your paper, so here’s how to build that string and weave it into your writing.
The Basic Reporting Format
Every chi-square result contains the same core elements in the same order:
- The symbol: χ² (the Greek letter chi, squared). Because it’s a Greek letter, it stays in regular type, not italics.
- Degrees of freedom: placed inside parentheses immediately after the symbol, with no comma in the number even if it’s large.
- Sample size: follows the degrees of freedom inside the same parentheses, written as N = [number] after a comma.
- The test statistic: the chi-square value itself, rounded to two decimal places.
- The p-value: reported as an exact value to two or three decimal places (p = .03), unless it falls below .001, in which case you write p < .001.
Put together, the format is: χ²(df, N = sample size) = value, p = exact value. You don’t need to define χ², N, df, or p in your paper. These are standard statistical symbols that readers are expected to recognize.
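If you run the test in Python, the string can be assembled directly from scipy's output. Here is a minimal sketch, assuming scipy is installed; the contingency table is hypothetical example data, not the counts behind the numbers quoted elsewhere in this article:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table: rows = one categorical variable,
# columns = the other. Replace with your own counts.
observed = [[50, 25, 10],
            [15, 40, 30]]

chi2, p, df, expected = chi2_contingency(observed)
n = sum(sum(row) for row in observed)

# APA style: exact p to three decimals without the leading zero,
# or "p < .001" when it falls below that threshold.
p_text = "p < .001" if p < .001 else f"p = {p:.3f}".replace("0.", ".", 1)
report = f"χ²({df}, N = {n}) = {chi2:.2f}, {p_text}"
print(report)  # → χ²(2, N = 170) = 32.31, p < .001
```

The heavy lifting is the string formatting: `{chi2:.2f}` handles the two-decimal rounding, and the `p_text` branch handles the p-value convention described below.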
Goodness of Fit vs. Test of Independence
The reporting format is identical for both types of chi-square test. What changes is the sentence you wrap around the numbers. For a test of independence, you’re examining the relationship between two categorical variables. For a goodness-of-fit test, you’re checking whether observed frequencies match an expected distribution.
Here’s a test of independence:
“A chi-square test of independence was performed to examine the relation between religion and college interest. The relation between these variables was significant, χ²(2, N = 170) = 14.14, p < .01.”
And a goodness-of-fit test:
“A chi-square test of goodness-of-fit was performed to determine whether the three sodas were equally preferred. Preference for the three sodas was not equally distributed in the population, χ²(2, N = 55) = 6.53, p < .05.”
Notice both examples name the test, state what was being examined, describe the result in plain language, and then provide the statistical string. That’s the pattern to follow.
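A goodness-of-fit test uses `scipy.stats.chisquare` rather than `chi2_contingency`, since it compares one set of observed counts against an expected distribution. A quick sketch with hypothetical soda-preference counts:

```python
from scipy.stats import chisquare

observed = [28, 15, 12]        # hypothetical preference counts for three sodas
chi2, p = chisquare(observed)  # expected frequencies default to an even split

df = len(observed) - 1         # categories minus one
p_text = "p < .001" if p < .001 else f"p = {p:.3f}".replace("0.", ".", 1)
report = f"χ²({df}, N = {sum(observed)}) = {chi2:.2f}, {p_text}"
print(report)  # → χ²(2, N = 55) = 7.89, p = .019
```

Note that degrees of freedom for a goodness-of-fit test come from the number of categories, not from a table's dimensions, but the reporting string is built exactly the same way.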
P-Value Formatting Rules
P-values trip people up more than any other element. The rules vary slightly depending on where you’re publishing, but the APA convention used in most academic coursework is straightforward: report exact p-values to two or three decimal places. So p = .04, p = .003, or p = .72 are all correct. Never write “NS” or “not significant” without giving the actual number.
The one exception: when p drops below .001, just write p < .001. You don’t need to report p = .00004 or use scientific notation unless you’re working in a field like genetics where extremely small values carry specific meaning. Medical journals have slightly different conventions. The New England Journal of Medicine, for instance, asks for two decimal places above .01, three decimal places between .001 and .01, and p < .001 for anything smaller. For a class paper or social science journal, APA’s two-to-three decimal rule covers you.
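The APA rule above is mechanical enough to wrap in a small helper. This is an illustrative sketch (the function name is my own, not a standard API):

```python
def format_p(p: float, decimals: int = 3) -> str:
    """Format a p-value per the APA convention: exact value to two or
    three decimals with no leading zero, or 'p < .001' below that."""
    if p < .001:
        return "p < .001"
    return f"p = {p:.{decimals}f}".replace("0.", ".", 1)

print(format_p(0.04, 2))   # → p = .04
print(format_p(0.003))     # → p = .003
print(format_p(0.00004))   # → p < .001
```

A helper like this also prevents the common slip of writing “p = .000” when software prints a rounded zero.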
Include an Effect Size
A significant chi-square tells you that a relationship exists, but not how strong it is. That’s what effect size measures do, and APA guidelines explicitly call for including one. For chi-square tests, the two most common effect sizes are Phi (φ) for 2×2 tables and Cramér’s V for larger tables.
Both range from 0 to 1, and the interpretation thresholds from Rea and Parker are widely used:
- Below .10: negligible association
- .10 to .19: weak association
- .20 to .39: moderate association
- .40 to .59: relatively strong association
- .60 to .79: strong association
- .80 to 1.00: very strong association
In your write-up, add the effect size after the statistical string. For example: “The relation between gender and voting preference was significant, χ²(1, N = 200) = 8.91, p = .003, φ = .21, indicating a moderate association.” This gives your reader the full picture: yes, the relationship is real, and here’s how meaningful it is.
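Both effect sizes share one formula: the square root of χ² divided by N times (k − 1), where k is the smaller of the table's two dimensions. For a 2×2 table, k − 1 = 1 and the result is phi; otherwise it's Cramér's V. A sketch with hypothetical 2×2 counts:

```python
import math
from scipy.stats import chi2_contingency

observed = [[65, 35],   # hypothetical 2x2 counts,
            [45, 55]]   # e.g. gender by voting preference

# correction=False gives the plain Pearson chi-square for a 2x2 table
chi2, p, df, _ = chi2_contingency(observed, correction=False)
n = sum(sum(row) for row in observed)
k = min(len(observed), len(observed[0]))  # smaller table dimension

phi_or_v = math.sqrt(chi2 / (n * (k - 1)))  # phi here, Cramér's V for larger tables
print(f"φ = {phi_or_v:.2f}".replace("= 0.", "= ."))  # → φ = .20
```

Because phi and Cramér's V are bounded by 0 and 1, APA style drops their leading zero too, just like p-values.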
Explaining a Significant Result With Residuals
When your chi-square test has more than two categories in either variable, a significant result tells you that something in the table is driving the association, but not which specific cells. Adjusted standardized residuals solve this. These residuals follow a normal distribution, so any cell with an absolute value greater than 1.96 is significantly different from what you’d expect by chance (at the .05 level). If you’re testing multiple cells, apply a Bonferroni correction by dividing your alpha level by the number of comparisons.
In practice, you report this by describing the pattern in plain language after giving the overall result. A good example from Cornell’s statistical consulting unit: “In our sample of 592 individuals, a chi-square test showed a significant association between hair color and eye color, χ² = 138.29, df = 9, p < .001. Examination of adjusted residuals revealed that individuals with blue eyes and blond hair were observed significantly more often than expected, while individuals with brown eyes and blond hair were observed significantly less often than expected.” You don’t need to list every residual value in the text. Focus on the cells that tell the story.
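The adjusted standardized residual for a cell divides the raw residual (observed minus expected) by its estimated standard error, which accounts for the row and column proportions. A minimal sketch with a hypothetical 2×3 table:

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[50, 25, 10],   # hypothetical 2x3 counts
                     [15, 40, 30]], dtype=float)

chi2, p, df, expected = chi2_contingency(observed)
n = observed.sum()
row_prop = observed.sum(axis=1, keepdims=True) / n  # shape (2, 1)
col_prop = observed.sum(axis=0, keepdims=True) / n  # shape (1, 3)

# Adjusted standardized residual: (O - E) / sqrt(E * (1 - row prop) * (1 - col prop))
adj_resid = (observed - expected) / np.sqrt(
    expected * (1 - row_prop) * (1 - col_prop)
)
print(np.round(adj_resid, 2))

# Cells with |residual| > 1.96 differ from expectation at the .05 level
print(np.abs(adj_resid) > 1.96)
```

These are the values you scan for magnitudes above 1.96 (or above a Bonferroni-adjusted cutoff) before describing the standout cells in prose.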
When Chi-Square Assumptions Are Violated
Chi-square tests require that at least 80% of expected cell counts are 5 or greater, and no cell has an expected count below 1. If your data violates this, you need to note it and use an alternative test. For a 2×2 table with small expected counts, Fisher’s exact test is the standard replacement. For larger tables, a likelihood ratio chi-square test handles small samples better.
Report the alternative test the same way you’d report a standard chi-square, but name the test you actually used. Something like: “Because 4 of 6 cells had expected counts below 5, a likelihood ratio chi-square test was used instead of a Pearson chi-square.” This tells your reader you checked the assumptions and made an appropriate choice.
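Both fallbacks are available in scipy. `fisher_exact` handles the small 2×2 case directly, and `chi2_contingency` computes the likelihood ratio statistic (the G-test) when passed `lambda_="log-likelihood"`. A sketch with hypothetical small-sample counts:

```python
from scipy.stats import fisher_exact, chi2_contingency

# Small 2x2 table with expected counts below 5: use Fisher's exact test
small_2x2 = [[3, 7],
             [9, 2]]
odds_ratio, p_fisher = fisher_exact(small_2x2)

# Larger sparse table: likelihood ratio chi-square via lambda_="log-likelihood"
larger = [[8, 4, 3],
          [2, 9, 6]]
g2, p_g, df, _ = chi2_contingency(larger, lambda_="log-likelihood")
```

Fisher's exact test has no test statistic to report in the χ² format; you report the p-value (and typically the odds ratio), e.g. “Fisher's exact test, p = .03.” The likelihood ratio statistic is conventionally written G² but otherwise follows the same reporting string.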
Putting It All Together
A complete chi-square report in your results section has four parts: name the test and what it examined, state the result in words, provide the statistical string with all required components, and include the effect size. Here’s what a polished version looks like:
“A chi-square test of independence was performed to examine the relationship between department and employee turnover. The relationship was significant, χ²(3, N = 412) = 11.78, p = .008, Cramér’s V = .17, indicating a weak association. Adjusted residuals showed that the sales department had a significantly higher turnover rate than expected (z = 3.12, p = .002), while the engineering department had a significantly lower rate (z = −2.45, p = .014).”
That’s three sentences covering the test, the overall finding with effect size, and the specific cells driving the result. Your reader knows exactly what was tested, whether it mattered, how much it mattered, and where the differences actually were.
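The full statistical string, effect size included, can be generated in one pass. This sketch (the function name is my own) ties together the pieces shown throughout this article, assuming a hypothetical contingency table:

```python
import math
from scipy.stats import chi2_contingency

def chi_square_report(observed):
    """Build the APA-style string for a chi-square test of independence,
    with phi (2x2 tables) or Cramér's V (larger tables) appended."""
    chi2, p, df, _ = chi2_contingency(observed, correction=False)
    n = sum(sum(row) for row in observed)
    k = min(len(observed), len(observed[0]))
    v = math.sqrt(chi2 / (n * (k - 1)))

    p_text = "p < .001" if p < .001 else f"p = {p:.3f}".replace("0.", ".", 1)
    v_text = f"{v:.2f}".lstrip("0")  # drop the leading zero, APA style
    label = "φ" if len(observed) == 2 and len(observed[0]) == 2 else "Cramér's V"
    return f"χ²({df}, N = {n}) = {chi2:.2f}, {p_text}, {label} = {v_text}"

print(chi_square_report([[50, 25, 10],
                         [15, 40, 30]]))
# → χ²(2, N = 170) = 32.31, p < .001, Cramér's V = .44
```

The sentences around the string still have to come from you: name the test, state the finding in words, and describe the residual pattern where one exists.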

