How to Calculate Relative Risk: Formula and Examples

Relative risk (RR) is calculated by dividing the probability of an event in an exposed group by the probability of that same event in an unexposed group. The formula is straightforward once you organize your data into a simple table, and interpreting the result comes down to whether your number lands above, below, or right at 1.0.

The Formula

Relative risk uses a 2×2 contingency table where rows represent exposure status (exposed or not exposed) and columns represent outcome status (event occurred or didn’t occur). The four cells are labeled by convention:

  • A = exposed, event occurred
  • B = exposed, event did not occur
  • C = not exposed, event occurred
  • D = not exposed, event did not occur

The formula is:

RR = (A / (A + B)) / (C / (C + D))

The numerator, A / (A + B), is the risk of the event in the exposed group. The denominator, C / (C + D), is the risk of the event in the unexposed group. Each is simply the number of people who experienced the outcome divided by the total number of people in that group.
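The formula translates directly into a few lines of code. The sketch below is a minimal illustration (the function name and structure are my own, not from any statistics library):

```python
def relative_risk(a: int, b: int, c: int, d: int) -> float:
    """Relative risk from a 2x2 contingency table.

    a: exposed, event occurred       b: exposed, event did not occur
    c: not exposed, event occurred   d: not exposed, event did not occur
    """
    risk_exposed = a / (a + b)      # risk of the event in the exposed group
    risk_unexposed = c / (c + d)    # risk of the event in the unexposed group
    return risk_exposed / risk_unexposed
```

Keeping the four cell counts as separate arguments mirrors the A/B/C/D labeling convention, which makes it easy to check the call against the table.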

Step-by-Step Example

Suppose you’re studying whether a new workplace chemical is linked to developing a skin rash. You follow 200 workers exposed to the chemical and 300 workers who were not exposed. After one year, 30 of the exposed workers developed a rash, and 15 of the unexposed workers did.

First, fill in your table:

  • A = 30 (exposed, got rash)
  • B = 170 (exposed, no rash)
  • C = 15 (unexposed, got rash)
  • D = 285 (unexposed, no rash)

Now calculate the risk in each group. Risk in the exposed group: 30 / (30 + 170) = 30 / 200 = 0.15, or 15%. Risk in the unexposed group: 15 / (15 + 285) = 15 / 300 = 0.05, or 5%.

Divide the two: RR = 0.15 / 0.05 = 3.0. Workers exposed to the chemical were 3 times as likely to develop a skin rash compared to those who were not exposed.
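Plugging the example counts into the formula is plain arithmetic:

```python
a, b, c, d = 30, 170, 15, 285          # cell counts from the table above

risk_exposed = a / (a + b)             # 30 / 200 = 0.15
risk_unexposed = c / (c + d)           # 15 / 300 = 0.05
rr = risk_exposed / risk_unexposed     # 0.15 / 0.05 = 3.0
```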

How to Interpret the Result

The interpretation depends on where the value falls relative to 1.0:

  • RR = 1.0 means there is no difference in risk between the two groups. The exposure has no measurable effect on the outcome.
  • RR greater than 1.0 means the event is more likely in the exposed group. An RR of 2.5 means the exposed group is 2.5 times as likely to experience the outcome.
  • RR less than 1.0 means the event is less likely in the exposed group. This is what you’d hope to see when testing a protective treatment or intervention. An RR of 0.6 means the exposed group has 60% of the risk of the unexposed group.

When Relative Risk Works (and When It Doesn’t)

Relative risk is designed for cohort studies, where you start with groups defined by their exposure status and follow them forward in time to see who develops the outcome. This design gives you the actual number of people at risk in each group, which is what makes the calculation valid.

In case-control studies, you can’t calculate relative risk. These studies start by selecting people who already have the outcome (cases) and people who don’t (controls), then look backward at exposure. Because researchers choose how many cases and controls to include, the total number of exposed and unexposed people in the study doesn’t reflect the real population. The denominator in the RR formula becomes meaningless. For case-control studies, the odds ratio is used instead.

Relative Risk vs. Odds Ratio

The odds ratio (OR) and relative risk answer a similar question but calculate it differently. The OR compares the odds of exposure in the outcome group to the odds of exposure in the non-outcome group, while RR compares actual probabilities. When there is an association between exposure and outcome, the odds ratio exaggerates the relationship compared to relative risk. If the RR is above 1.0, the OR will be even higher. If the RR is below 1.0, the OR will be even lower.

The two measures converge when the outcome is rare, typically occurring in less than 10% of the study population. At low event rates, odds and risk are nearly identical, so the OR can serve as a reasonable stand-in for the RR. As event rates climb above that 10% threshold, the two values diverge and should not be used interchangeably.
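One way to see the rare-outcome convergence is to compute both measures side by side. The counts below are made up for illustration; both tables are constructed to have the same RR of 2.0, so the only difference is the event rate:

```python
def rr_and_or(a, b, c, d):
    """Return (relative risk, odds ratio) for a 2x2 table."""
    rr = (a / (a + b)) / (c / (c + d))
    odds_ratio = (a * d) / (b * c)     # odds ratio via the cross-product
    return rr, odds_ratio

# Rare outcome (1-2% event rates): the OR tracks the RR closely.
rr_rare, or_rare = rr_and_or(20, 980, 10, 990)        # RR = 2.0, OR ≈ 2.02

# Common outcome (30-60% event rates): the OR overshoots the same RR.
rr_common, or_common = rr_and_or(600, 400, 300, 700)  # RR = 2.0, OR = 3.5
```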

Adding a Confidence Interval

A single RR value calculated from a sample is a point estimate. To understand how precise that estimate is, you need a confidence interval, which gives a range of plausible values for the true population relative risk. A 95% confidence interval is standard.

The calculation works on the natural-log scale because the sampling distribution of RR itself is skewed, so a symmetric interval around RR would be wrong; ln(RR) is approximately normally distributed, which makes the standard interval formula valid. First, compute the variance of the natural log of RR:

Var(ln(RR)) = (1/A) − (1/(A+B)) + (1/C) − (1/(C+D))

Then build the interval on the log scale: ln(RR) ± 1.96 × √Var(ln(RR)). Finally, convert back by taking the exponential of both the lower and upper bounds.

Using the skin rash example: ln(3.0) = 1.099. The variance is (1/30) − (1/200) + (1/15) − (1/300) = 0.033 − 0.005 + 0.067 − 0.003 = 0.092. The square root of 0.092 is 0.303. The log-scale interval is 1.099 ± (1.96 × 0.303) = 1.099 ± 0.594, giving a range of 0.505 to 1.693. Converting back: exp(0.505) = 1.66 and exp(1.693) = 5.44. The 95% confidence interval is 1.66 to 5.44.
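The same arithmetic carried out in code, without intermediate rounding (full precision puts the upper bound at 5.43 rather than the hand-rounded 5.44), using only the standard library:

```python
import math

a, b, c, d = 30, 170, 15, 285

rr = (a / (a + b)) / (c / (c + d))          # 3.0
log_rr = math.log(rr)                       # ln(3.0) ≈ 1.099

# Variance of ln(RR) from the four cell counts
var = 1/a - 1/(a + b) + 1/c - 1/(c + d)     # ≈ 0.0917
se = math.sqrt(var)                         # ≈ 0.303

# 95% interval on the log scale, then back-transform with exp()
lower = math.exp(log_rr - 1.96 * se)        # ≈ 1.66
upper = math.exp(log_rr + 1.96 * se)        # ≈ 5.43
```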

Because this entire interval sits above 1.0, the increased risk is statistically significant at the conventional 0.05 level. If the interval had crossed 1.0 (for example, 0.8 to 4.2), you couldn’t rule out the possibility that the exposure has no effect.

Relative Risk Reduction vs. Absolute Risk Reduction

Relative risk is often reframed as relative risk reduction (RRR) when evaluating treatments. RRR tells you by how much a treatment reduced the risk of a bad outcome compared to a control group. If the RR is 0.6, the relative risk reduction is 1 − 0.6 = 0.4, or 40%. That sounds impressive, but it can be misleading without context.

Absolute risk reduction (ARR) gives you the actual difference in event rates between groups. If 20% of the control group had a bad outcome and 12% of the treatment group did, the ARR is 8 percentage points. The RRR is 40%, but the absolute change is only 8 percentage points. Both numbers are correct, but the ARR is generally more useful for decision-making because it reflects the real-world magnitude of the benefit. When reading study results, look for both measures reported together to get the full picture.
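The distinction between the two measures comes down to two subtractions. Using the hypothetical 20%/12% event rates from the paragraph above:

```python
control_rate = 0.20      # 20% of the control group had the bad outcome
treatment_rate = 0.12    # 12% of the treatment group did

rr = treatment_rate / control_rate     # 0.6
rrr = 1 - rr                           # 0.40 -> "40% relative reduction"
arr = control_rate - treatment_rate    # 0.08 -> 8 percentage points
```

The RRR divides away the baseline rate, which is exactly why it can sound impressive even when the absolute benefit is small.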

Reporting Relative Risk Correctly

When presenting RR results in a paper, report, or presentation, always include the point estimate, the confidence interval, and the sample sizes. Stating “RR = 3.0” alone tells the reader very little. Stating “RR = 3.0, 95% CI 1.66 to 5.44” communicates both the effect size and the precision of the estimate. If you include a p-value, treat it as a supplement to the confidence interval rather than a replacement. Effect sizes and confidence intervals carry more information than a p-value alone, because they tell you both the direction and the plausible range of the effect.