A situation demonstrates causation when a change in one variable directly produces a change in another, and you can rule out other explanations. The classic example: a randomized experiment where one group receives a treatment and a comparable control group does not. If the treatment group improves and the control group doesn't, you have strong evidence that the treatment caused the improvement. But causation shows up in other situations too, and understanding what separates a genuine cause-and-effect relationship from a mere pattern is one of the most important skills in science, statistics, and everyday thinking.
What Makes a Situation Causal
Three conditions must be met before you can call a relationship causal. First, the cause has to come before the effect in time. Second, there must be a real association between the two variables. Third, and most critically, no other explanation can account for the result. That third condition is where most claims fall apart. Two things can rise and fall together for years without one causing the other, simply because a hidden third factor drives both.
Researchers formalize this idea through what’s called counterfactual reasoning. The philosopher David Hume put it simply in the 18th century: “We may define a cause to be an object followed by another, where, if the first object had not been, the second never had existed.” In modern terms, you ask: if the supposed cause had not occurred, would the outcome still have happened? If the answer is yes, the cause isn’t really a cause.
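Hume's counterfactual test maps directly onto the potential-outcomes notation modern researchers use. A minimal sketch with made-up values (the dictionary keys and numbers are purely illustrative):

```python
# Potential-outcomes sketch of Hume's counterfactual test.
# All values are hypothetical; "1" means the headache went away.
person = {
    "y_if_treated": 1,    # outcome if the person takes the pill
    "y_if_untreated": 0,  # outcome if the person does not
}

# The counterfactual question: had the cause not occurred,
# would the outcome still have happened?
individual_effect = person["y_if_treated"] - person["y_if_untreated"]

# A nonzero effect means the outcome depends on the treatment.
verdict = "cause" if individual_effect != 0 else "not a cause"
print(verdict)  # cause
```

The practical catch, and the reason experiments matter, is that only one of the two potential outcomes is ever observed for any given person.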
The Clearest Example: A Controlled Experiment
The most straightforward situation that demonstrates causation is a randomized controlled experiment. Imagine a pharmaceutical trial where 500 people with headaches are randomly split into two groups. One group takes a new pain reliever; the other takes an identical-looking sugar pill. Neither the participants nor the researchers know who got what. After two hours, the medication group reports significantly less pain.
This situation demonstrates causation because randomization solves the biggest problem in causal reasoning: it eliminates the need to identify every possible alternative explanation. Because people were assigned to groups by chance, any other factors that affect headaches (sleep quality, stress, genetics, diet) are, on average, distributed equally between the two groups. The only systematic difference is the pill. So any difference in outcome can be attributed to the treatment itself. As one widely used political science textbook puts it, randomized trials are “a research strategy that does not require investigators to identify, let alone measure, all potential confounders.”
That said, any single trial can still have unlucky randomization where important characteristics end up unevenly distributed between groups. This is why larger sample sizes and replication matter. The logic of randomization works perfectly in theory and increasingly well in practice as sample size grows.
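Both points, the balancing power of randomization and its improvement with sample size, can be seen in a small simulation. This is an illustrative sketch, not data from any real trial; the confounder (sleep hours) and all parameters are invented:

```python
import random

random.seed(0)

def imbalance(n):
    """Randomly assign n people to two groups and return the absolute
    difference in mean sleep hours (an unmeasured confounder)."""
    sleep = [random.gauss(7.0, 1.5) for _ in range(n)]
    coin = [random.random() < 0.5 for _ in range(n)]
    treated = [s for s, c in zip(sleep, coin) if c] or [0.0]
    control = [s for s, c in zip(sleep, coin) if not c] or [0.0]
    return abs(sum(treated) / len(treated) - sum(control) / len(control))

def mean_imbalance(n, trials=200):
    """Average imbalance across many hypothetical trials of size n."""
    return sum(imbalance(n) for _ in range(trials)) / trials

print(round(mean_imbalance(20), 2))    # noticeable imbalance in tiny trials
print(round(mean_imbalance(2000), 2))  # near-zero imbalance as n grows
```

Nobody ever measured sleep here, yet random assignment drives the two groups toward the same average, and the residual imbalance shrinks as the trial grows.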
Situations That Look Causal but Aren’t
Correlation without causation is everywhere. Ice cream sales and drowning deaths both spike in summer, but ice cream doesn’t cause drowning. Temperature drives both. This is the “third variable” problem, also called confounding. Statistical analyses routinely find strong correlations (some with values as extreme as r = −0.96) between variables that have no logical causal connection. These spurious correlations arise from shared mathematical properties, overlapping trends, or hidden common causes.
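The ice cream example is easy to reproduce. The monthly figures below are fabricated so that temperature drives both series; neither variable touches the other, yet the correlation between them comes out essentially perfect:

```python
import math

# Fabricated monthly averages: temperature drives both series.
temp = [2, 4, 9, 14, 19, 24, 27, 26, 21, 15, 8, 3]   # degrees C
ice_cream = [10 * t + 50 for t in temp]              # sales, driven by temp
drownings = [0.4 * t + 1 for t in temp]              # deaths, driven by temp

def pearson_r(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Ice cream and drownings correlate perfectly here, but only because
# both are functions of the hidden third variable, temperature.
print(round(pearson_r(ice_cream, drownings), 3))  # 1.0
```

Real data would add noise on top of the shared driver, but the lesson holds: a correlation, even a near-perfect one, says nothing by itself about which variable (if either) is doing the causing.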
Here’s a quick test you can apply to any situation. Ask yourself: was one variable manipulated while everything else was held constant? If not, can you identify and rule out every plausible alternative explanation? If the answer to both questions is no, the situation demonstrates correlation, not causation.
Smoking and Lung Cancer: Causation Without an Experiment
Sometimes you can’t run an experiment. You can’t randomly assign people to smoke for 30 years. Yet the scientific community is confident that smoking causes lung cancer. How?
In the 1960s, the U.S. Surgeon General drew on a set of criteria later formalized by epidemiologist Austin Bradford Hill, whose 1965 framework lays out nine types of evidence that, taken together, can build a causal case even without a controlled experiment. The smoking evidence checked nearly every box:
- Strength of association: Heavy smokers had dramatically higher lung cancer rates than nonsmokers.
- Dose-response: More years of smoking and more cigarettes per day meant higher risk, a pattern that’s hard to explain by coincidence.
- Temporality: Smoking came first, cancer came later, never the reverse.
- Consistency: The same pattern appeared across dozens of studies in different countries and populations.
- Reversibility: People who quit smoking saw their lung cancer risk drop compared to those who kept smoking, though it remained elevated above never-smokers for years.
- Biological plausibility: Scientists identified specific chemicals in tobacco smoke that damage DNA in lung cells, providing a physical mechanism.
No single piece of this evidence proves causation on its own. Together, they form a case strong enough that no credible alternative explanation survives. This is how causation is established in fields like public health, where experiments would be unethical.
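Two of those criteria, strength of association and dose-response, are straightforward to quantify. A sketch with invented incidence rates (the numbers are illustrative, not figures from the Surgeon General's report):

```python
# Hypothetical lung cancer incidence per 100,000 person-years by exposure.
rates = {
    "never smoked": 10,
    "light smoker": 60,
    "moderate smoker": 130,
    "heavy smoker": 220,
}

baseline = rates["never smoked"]
relative_risk = {group: rate / baseline for group, rate in rates.items()}

# Strength of association: heavy smokers here face 22x the baseline risk.
print(relative_risk["heavy smoker"])  # 22.0

# Dose-response: risk rises monotonically with exposure level.
ordered = list(relative_risk.values())
print(all(a < b for a, b in zip(ordered, ordered[1:])))  # True
```

A confounder would have to be both enormous and tightly coupled to dose to mimic a gradient like this, which is why dose-response evidence carries so much weight.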
Quasi-Experiments and Natural Events
Between clean lab experiments and pure observation sits a middle ground: quasi-experimental situations. These occur when some event or policy splits people into groups in a way researchers didn’t control but can study. For example, a researcher investigating the health effects of a hurricane can compare people who lived in the storm’s path with similar people just outside it. Nobody was randomly assigned, but geography created a natural dividing line.
These situations can demonstrate causation if the groups being compared were truly similar before the event and if there’s no obvious alternative explanation for differences in outcomes afterward. They’re weaker than randomized experiments but far stronger than simple observational correlations, and they’re often the best option when randomization is impossible or unethical.
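One standard way to analyze such a situation is a difference-in-differences comparison: take each group's before-to-after change and subtract, so any trend shared by both groups cancels out. A sketch with invented health-score averages for the hurricane example:

```python
# Difference-in-differences sketch with hypothetical health scores.
# "in_path" residents were hit by the storm; "nearby" residents were not.
before = {"in_path": 72.0, "nearby": 71.0}   # average score pre-storm
after = {"in_path": 61.0, "nearby": 69.0}    # average score post-storm

change_in_path = after["in_path"] - before["in_path"]   # -11.0
change_nearby = after["nearby"] - before["nearby"]      # -2.0

# Subtracting the comparison group's change removes trends that
# affected everyone (a bad flu season, an economic downturn).
did_estimate = change_in_path - change_nearby
print(did_estimate)  # -9.0
```

The estimate is only as good as the assumption that the two groups would have trended in parallel absent the storm, which is exactly the "truly similar before the event" condition above.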
How to Identify Causation in Practice
If you’re looking at a scenario on a test, in a news article, or in your own work, run through these questions in order:
- Was there manipulation? Did someone deliberately change one variable while keeping others constant? If yes, you’re closer to causation.
- Was there random assignment? Were subjects placed into groups by chance? This is the gold standard for ruling out confounders.
- Does the cause precede the effect? If the timing is unclear or reversed, causation fails immediately.
- Are alternative explanations ruled out? Could a third variable be driving both? If you can’t eliminate that possibility, you’re looking at correlation.
A situation that answers yes to all four of these questions demonstrates causation. A situation that only shows two things moving together, no matter how strongly, demonstrates correlation. The difference isn’t about how impressive the pattern looks. It’s about whether you’ve eliminated every other explanation for why that pattern exists.
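The four questions amount to a literal checklist. As a toy helper (the function name and arguments are illustrative, not a formal method):

```python
def causal_verdict(manipulated, randomized, cause_first, alternatives_ruled_out):
    """Apply the four-question checklist: only yes on every
    question supports calling the relationship causal."""
    if all([manipulated, randomized, cause_first, alternatives_ruled_out]):
        return "causation"
    return "correlation at best"

# The pharmaceutical trial from earlier: yes to all four questions.
print(causal_verdict(True, True, True, True))     # causation
# Ice cream and drownings: no manipulation, confounder unaddressed.
print(causal_verdict(False, False, True, False))  # correlation at best
```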

