Which Situation Does Not Show Causation? Key Examples

A situation does not show causation when two things appear connected but one does not actually produce the other. The most common non-causal situations involve confounding variables, coincidence, or reversed cause and effect. If you’re working through a statistics or research methods question, the answer is almost always the scenario where a hidden third factor explains the link, where the timeline is backwards, or where the connection is pure coincidence.

What Causation Actually Requires

Causation means that changing one variable directly produces a change in another. For a relationship to be causal, three conditions need to hold: the cause must come before the effect, the two must be correlated, and no alternative explanation can account for the pattern. That last condition is where most examples fall apart. In the 1960s, epidemiologist Austin Bradford Hill laid out a set of criteria for evaluating causation, including the strength of the association, consistency across different studies, a clear time sequence, and a dose-response relationship (more of the cause leads to more of the effect). Failing to meet these criteria is a strong signal that you’re looking at something other than causation.

Confounding Variables: The Hidden Third Factor

The classic non-causal situation involves a confounding variable, sometimes called a lurking variable. This is a third factor that influences both of the things you’re measuring, creating the illusion that they’re connected to each other. Ice cream sales and drowning rates both rise in the summer. The pattern is real, but ice cream doesn’t cause drowning. Hot weather is the confounding variable: it drives people to buy ice cream and also drives people to swim, which increases drowning risk.
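The ice cream example can be made concrete with a small simulation. This is a toy model with made-up numbers (the temperatures, sales figures, and drowning counts are illustrative, not real data): temperature drives both outcomes, and neither affects the other, yet the two outcomes come out strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: hot weather (the confounder) drives both ice cream sales
# and drownings; neither outcome affects the other directly.
temperature = rng.normal(25, 8, size=365)                       # daily temp, °C
ice_cream = 50 + 3 * temperature + rng.normal(0, 10, size=365)  # daily sales
drownings = 1 + 0.2 * temperature + rng.normal(0, 1, size=365)  # daily count

# The two outcomes are strongly correlated...
r = np.corrcoef(ice_cream, drownings)[0, 1]
print(f"corr(ice cream, drownings) = {r:.2f}")

# ...but the correlation largely vanishes once temperature is held
# roughly constant, by looking only at days in a narrow temperature band.
band = (temperature > 24) & (temperature < 26)
r_band = np.corrcoef(ice_cream[band], drownings[band])[0, 1]
print(f"corr within 24-26 °C band  = {r_band:.2f}")
```

Holding the suspected confounder fixed and watching the correlation collapse is exactly the kind of check that separates a confounded association from a causal one.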

Another example from Penn State researchers: adults who prefer beer tend to weigh more than adults who prefer wine. That sounds like a causal relationship until you account for sex. Men in the sample were more likely to prefer beer, and men weighed more on average. Sex was the confounding variable linking beverage preference to weight. Any time you see a correlation between two variables and a plausible third factor connects them both, you’re looking at a situation that does not show causation.
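The beer-and-weight pattern can be reproduced with hypothetical numbers (the probabilities and weights below are invented for illustration, not the Penn State figures): preference has no effect on weight in this model, yet the aggregate comparison suggests it does, and stratifying by sex makes the gap disappear.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical setup: men are heavier on average and more likely to
# prefer beer; beverage preference itself has no effect on weight.
is_male = rng.random(n) < 0.5
prefers_beer = rng.random(n) < np.where(is_male, 0.7, 0.3)
weight = np.where(is_male, 85.0, 70.0) + rng.normal(0, 8, n)

# Aggregate comparison: beer drinkers look several kg heavier.
agg_gap = weight[prefers_beer].mean() - weight[~prefers_beer].mean()
print(f"aggregate gap: {agg_gap:.1f} kg")

# Stratified by the confounder (sex), the gap disappears.
gaps = {}
for male in (True, False):
    grp = is_male == male
    gaps[male] = (weight[grp & prefers_beer].mean()
                  - weight[grp & ~prefers_beer].mean())
    print(f"{'men' if male else 'women'}: gap = {gaps[male]:.1f} kg")
```

The aggregate gap comes entirely from the sex composition of the two preference groups, which is the signature of a confounded comparison.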

Reverse Causality: Getting the Direction Wrong

Sometimes two things genuinely are connected, but people get the direction backwards. This is reverse causality. A straightforward example: people who exercise less tend to be sicker. It’s tempting to conclude that inactivity causes illness, but in many cases, the illness came first and reduced a person’s ability to exercise.

This problem shows up frequently in medical research. Patients who take a certain skin medication for eczema appear to develop a rare skin cancer at higher rates. But the explanation isn’t that the medication causes cancer. The early symptoms of the cancer look almost identical to eczema, so patients with undiagnosed cancer were being prescribed the eczema medication before anyone realized what was really going on. The disease caused the treatment, not the other way around. Correlation coefficients can tell you how strongly two variables move together, but they cannot tell you which one is driving the other.
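One way to see why correlation is silent about direction: Pearson's r is symmetric in its two arguments. In the toy model below (invented numbers, with illness as the true cause of reduced exercise), the coefficient is identical whichever variable you treat as the cause.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model in which illness is the cause: sicker people exercise less.
illness = rng.normal(0, 1, 1000)
exercise = -0.8 * illness + rng.normal(0, 0.5, 1000)

# Pearson's r is symmetric, so it carries no directional information:
# corr(x, y) and corr(y, x) are exactly equal.
r_xy = np.corrcoef(illness, exercise)[0, 1]
r_yx = np.corrcoef(exercise, illness)[0, 1]
print(f"r = {r_xy:.2f}")
```

The data would look exactly the same if exercise caused illness; only outside knowledge (here, the time sequence) distinguishes the two stories.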

Pure Coincidence and Large Datasets

Some correlations are just random noise. If you measure enough variables at the same time, some of them will line up by sheer chance. When researchers analyze 100 different outcome variables at the conventional 0.05 significance level, roughly five will appear statistically significant purely by accident, because that threshold tolerates a 5% false-positive rate on each test. With modern datasets containing thousands of variables and software that can test every possible pairing, coincidental correlations become almost inevitable.
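This multiple-comparisons effect is easy to demonstrate. The sketch below generates 100 predictors that are pure random noise, correlates each with an equally random outcome, and counts how many clear an approximate 0.05 significance bar (under the null, r is roughly normal with standard deviation 1/√n, so |r| > 1.96/√n approximates a two-sided test at that level).

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_vars = 200, 100

# One outcome and 100 candidate predictors: all pure random noise,
# so every true correlation is zero.
outcome = rng.normal(size=n_samples)
predictors = rng.normal(size=(n_vars, n_samples))

# Approximate two-sided test at the 0.05 level: under the null,
# r is roughly N(0, 1/sqrt(n)), so reject when |r| > 1.96/sqrt(n).
threshold = 1.96 / np.sqrt(n_samples)
r = np.array([np.corrcoef(p, outcome)[0, 1] for p in predictors])
n_significant = int(np.sum(np.abs(r) > threshold))
print(f"{n_significant} of {n_vars} 'significant' correlations by chance")
```

The count lands near five on a typical run, despite every variable being noise; run it with different seeds and false positives appear almost every time.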

This is the engine behind famously absurd correlations, like the number of films Nicolas Cage appeared in tracking closely with the number of swimming pool drownings in a given year. No one would argue that one causes the other. The pattern exists only because someone searched through enough data to find two unrelated trends that happened to move in the same direction for a few years.

The Post Hoc Fallacy: Sequence Is Not Causation

One of the most common reasoning errors is assuming that because one event followed another, the first event caused the second. This is known as the post hoc fallacy, from the Latin phrase meaning “after this, therefore because of this.” Your computer crashes after you install new software, so you blame the software. You eat ice cream and get a stomachache the next day, so you blame the ice cream. In both cases, the timeline fits, but the timeline alone isn’t enough. Dozens of other things also happened before the crash or the stomachache, and without isolating the actual cause, you’re guessing based on sequence.

This fallacy is especially common in everyday reasoning because our brains are wired to find patterns. When two events occur close together in time, it feels causal even when it isn’t.

Why Only Experiments Can Confirm Causation

Observational studies, where researchers simply watch what happens without intervening, can identify correlations but cannot reliably establish causation. The reason is that observational data is vulnerable to all the problems above: confounders, reverse causality, coincidence, and selection bias. Randomized controlled trials solve this by randomly assigning people to different groups, which distributes confounding variables evenly across groups so that any difference in outcomes can be attributed to the intervention itself.

This distinction matters for identifying non-causal situations. If a study simply observed that people who drink green tea have lower rates of heart disease, that’s a correlation. Tea drinkers might also exercise more, eat better, or have higher incomes. The observation alone does not show causation. Only a controlled experiment, where otherwise similar people are randomly assigned to drink green tea or not, could begin to isolate a causal effect.
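The contrast between self-selection and random assignment can be sketched with invented numbers. In the model below, a hidden trait (call it health-consciousness) makes people more likely to drink tea; observational "tea" and "no tea" groups end up with very different shares of that trait, while a coin-flip assignment balances it almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Hypothetical hidden trait that would confound an observational study.
health_conscious = rng.random(n) < 0.4

# Observational: health-conscious people self-select into tea drinking,
# so the confounder is badly unbalanced between the two groups.
drinks_tea = rng.random(n) < np.where(health_conscious, 0.7, 0.2)
obs_gap = (health_conscious[drinks_tea].mean()
           - health_conscious[~drinks_tea].mean())
print(f"observational imbalance in trait: {obs_gap:.2f}")

# RCT: a coin flip decides assignment, ignoring the trait entirely,
# so it ends up evenly split between treatment and control.
assigned_tea = rng.random(n) < 0.5
rct_gap = (health_conscious[assigned_tea].mean()
           - health_conscious[~assigned_tea].mean())
print(f"randomized imbalance in trait:    {rct_gap:.2f}")
```

Randomization balances not just this one trait but every confounder, measured or unmeasured, which is why any remaining outcome difference can be credited to the intervention.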

How to Spot the Non-Causal Scenario

If you’re answering a test or homework question asking which situation does not show causation, look for these red flags:

  • A plausible third variable connects both factors (ice cream and drowning both tied to hot weather).
  • The cause and effect could be reversed (sick people exercise less, rather than inactivity making them sick).
  • The data is purely observational with no controlled experiment.
  • The connection is based only on timing (something happened after something else, with no mechanism linking them).
  • The correlation comes from mining a massive dataset with no hypothesis established beforehand.

Any scenario built around one of these patterns is describing correlation, not causation. The scenario that does show causation will typically involve a controlled experiment where one variable is deliberately changed and its direct effect on another is measured, with other factors held constant.