What Is a Cause and Effect Relationship in Science?

A cause and effect relationship exists when one event or condition directly changes the probability of another event occurring. The first event is the cause (sometimes called the exposure), and the second is the effect (the outcome). This sounds simple, but distinguishing a genuine causal link from a coincidence or indirect connection is one of the hardest problems in science, medicine, and everyday reasoning.

The Three Requirements for Causation

The philosopher John Stuart Mill laid out three conditions that must be met before you can call something a true cause and effect relationship. First, the cause must come before the effect in time. This is called temporal precedence, and it’s the one requirement that virtually every researcher agrees is non-negotiable. Second, the cause and the effect must covary: they must be related to each other in a measurable, statistical way. If two things never vary together, one can’t be driving the other. Third, you have to rule out alternative explanations. If some hidden third factor could account for the relationship, you haven’t proven causation.

That third requirement is where things get difficult. In the real world, countless variables overlap and interact, making it surprisingly easy to see a pattern that looks causal but isn’t.

Causation vs. Correlation

Two things can move in the same direction without one causing the other. This is the difference between correlation (a statistical relationship) and causation (a direct link). Correlation gets mistaken for causation in a few predictable ways.

  • Coincidence. Rising rates of breast cancer over time might correlate with rising rates of hip replacement surgery simply because both increased during the same decades. The two trends have nothing to do with each other.
  • Reverse causation. Studies have found that people who stop drinking alcohol before surgery actually have higher mortality rates than moderate drinkers. That doesn’t mean quitting alcohol is dangerous. People who quit often did so because they were already in poor health, so the arrow of causation points the opposite direction from what it appears.
  • Confounding. A hidden variable can make two unrelated things appear connected. If patients with prior spine disease are more likely to receive a specific type of hip implant and also more likely to experience complications, the implant may look ineffective when the real issue is the spine condition influencing both the treatment choice and the outcome.
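The confounding pattern in the last bullet is easy to reproduce with simulated data. In this sketch (entirely made up, with a hidden variable `z` standing in for something like prior spine disease severity), `x` and `y` have no causal connection at all, yet they correlate because `z` drives both. Holding `z` roughly constant makes the spurious association vanish:

```python
# Illustrative simulation: a hidden confounder z drives both x and y,
# so x and y correlate even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)        # hidden confounder (e.g. underlying health status)
x = z + rng.normal(size=n)    # "exposure": influenced by z, not by y
y = z + rng.normal(size=n)    # "outcome": influenced by z, not by x

overall = np.corrcoef(x, y)[0, 1]

# Condition on the confounder: within a narrow band of z values,
# the spurious association largely disappears.
band = np.abs(z) < 0.1
within = np.corrcoef(x[band], y[band])[0, 1]

print(f"overall correlation:       {overall:.2f}")   # roughly 0.5
print(f"correlation within z band: {within:.2f}")    # close to 0
```

This is also why the matching and statistical adjustment techniques described later work: they are different ways of holding the confounder fixed.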

There’s also a famous logical error called “post hoc ergo propter hoc,” Latin for “after this, therefore because of this.” Just because one event follows another doesn’t mean the first caused the second. The Greeks and Romans identified this fallacy thousands of years ago, yet it remains one of the most common sources of false conclusions in modern health reporting.

How Scientists Test for Cause and Effect

The gold standard for establishing a causal relationship is the randomized controlled trial. In this type of study, participants are randomly assigned to either receive an intervention or not. Randomization is powerful because it tends to balance all characteristics between the two groups, both the ones researchers can measure and the ones they can’t. Any difference in outcomes larger than chance alone would produce can then be attributed to the intervention itself rather than to some pre-existing difference between the groups.
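The balancing effect of randomization can be seen directly in a simulation. In this sketch (simulated subjects, with `frailty` standing in for a covariate researchers never measured), a simple coin-flip assignment leaves both the measured and the unmeasured characteristic nearly identical across groups:

```python
# Illustrative simulation: random assignment tends to balance covariates
# between groups, including ones that were never measured.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

age = rng.normal(55, 10, size=n)       # a measured covariate
frailty = rng.normal(0, 1, size=n)     # stand-in for an unmeasured covariate

treated = rng.random(n) < 0.5          # coin-flip assignment

age_diff = age[treated].mean() - age[~treated].mean()
frailty_diff = frailty[treated].mean() - frailty[~treated].mean()

print(f"age:     treated-vs-control mean difference = {age_diff:+.3f}")
print(f"frailty: treated-vs-control mean difference = {frailty_diff:+.3f}")
```

Both differences come out near zero, not because anyone matched on those variables, but because chance alone, with enough participants, spreads them evenly.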

To reduce bias even further, these trials are often blinded, meaning participants and sometimes even the researchers don’t know who is receiving the real treatment and who isn’t. Computer-generated randomization sequences handle the assignments so that no one involved in recruitment can steer a particular patient into a particular group. No single study can definitively prove causation on its own, but a well-conducted randomized trial comes closer than any other design.

When randomized trials aren’t possible (you can’t randomly assign people to smoke for 30 years, for example), researchers rely on observational studies and try to control for confounding variables through techniques like matching. Matching means selecting comparison groups that are as similar as possible on every measurable characteristic except the exposure being studied. Statistical adjustments can also account for known confounders, though unmeasured ones always remain a concern.

The Bradford Hill Criteria

In 1965, the epidemiologist Sir Austin Bradford Hill proposed nine “aspects of association” to help evaluate whether a correlation likely reflects a true cause and effect relationship. These criteria have been used for decades to assess links between environmental exposures and disease. None of them, except temporality, is considered absolutely required, but the more criteria a relationship satisfies, the stronger the case for causation.

  • Temporality. The cause must come before the effect. This is the only criterion widely considered essential.
  • Strength. A large effect size makes causation more plausible. Hill pointed out that the dramatic excess of lung cancer among heavy smokers would be hard to explain by any other environmental factor.
  • Dose-response. If more exposure leads to more of the effect, that strengthens the case. The lung cancer death rate rises linearly with the number of cigarettes smoked per day, which adds considerably to the simpler observation that smokers die at higher rates than nonsmokers.
  • Consistency. The same relationship shows up across different studies, populations, and methods. Hill valued seeing similar results reached “in quite different ways,” such as through both forward-looking (prospective) and backward-looking (retrospective) study designs.
  • Plausibility. A known biological mechanism that could explain the link makes the case stronger, though Hill cautioned that plausibility depends on “the biological knowledge of the day” and can’t be demanded as proof.
  • Coherence. The proposed relationship doesn’t contradict what’s already known about the disease or condition.
  • Specificity. When an exposure is linked to one particular outcome in a particular group, that argues for causation. Nickel refiners developing cancer at specific body sites, with no increase in other causes of death, was a strong signal.
  • Experiment. Removing or reducing the exposure should reduce the effect. If workplace dust is cleaned up and disease rates drop, that’s powerful evidence.
  • Analogy. If a similar cause and effect relationship is already established (for instance, one drug causing birth defects), researchers should be more willing to accept evidence that a related drug does the same.
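The nine aspects above lend themselves to a simple tally. This is a toy checklist of my own devising, not a formal scoring system (Hill himself warned against mechanical rules), but it captures the structure: temporality is a hard requirement, and beyond that, more satisfied aspects mean a stronger case:

```python
# Toy checklist sketch (not a formal scoring system): tally how many
# Bradford Hill aspects an association satisfies, treating temporality
# as the one hard requirement.

HILL_ASPECTS = [
    "temporality", "strength", "dose_response", "consistency",
    "plausibility", "coherence", "specificity", "experiment", "analogy",
]

def assess(satisfied):
    """Return (is_viable, count) for a set of satisfied aspect names."""
    if "temporality" not in satisfied:
        return False, 0   # an effect preceding its cause rules causation out
    return True, sum(a in satisfied for a in HILL_ASPECTS)

# Smoking and lung cancer satisfy most aspects:
viable, score = assess({"temporality", "strength", "dose_response",
                        "consistency", "plausibility", "coherence",
                        "experiment"})
print(viable, score)  # True 7
```

The point of the exercise is the asymmetry: failing temporality is disqualifying, while failing any other single aspect merely weakens the case.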

The Role of Biological Evidence

Statistical patterns alone don’t tell you why something happens. Biological research, including laboratory work and toxicology studies, contributes independent evidence that can support or challenge a causal claim. If epidemiological data show that a chemical exposure is associated with cancer, and lab studies reveal a specific mechanism by which that chemical damages DNA, the combined evidence is far more convincing than either piece alone.

That said, biological plausibility has limits as a gatekeeping tool. A study can be perfectly valid on its own methodological merits regardless of whether a biological explanation exists yet. The absence of a known mechanism doesn’t automatically mean the relationship is false; it may just mean the science hasn’t caught up. Conversely, a biologically plausible relationship can still be a statistical artifact if the study itself was flawed through confounding, measurement error, or selection bias. The real value of biological evidence is as an independent line of reasoning that strengthens or weakens the overall judgment, not as a pass/fail test applied to any single study.

The Counterfactual Way of Thinking

Modern causal reasoning in health sciences often relies on what’s called the counterfactual model. The core idea is simple: a cause truly has an effect if the outcome would have been different had the cause not occurred. If a patient took a medication and recovered, the causal question is whether that same patient would have recovered without the medication, all else being equal.

The obvious problem is that you can’t observe both scenarios for the same person at the same time. So researchers approximate the counterfactual by comparing a group that received the exposure to a group that didn’t, making the two groups as similar as possible in every other way. The closer those groups resemble each other, the more confidently you can attribute any difference in outcomes to the exposure itself. This is exactly why randomization is so valued: it’s the most reliable method for creating groups that are truly comparable, or “exchangeable,” as epidemiologists put it.
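The counterfactual logic above can be made concrete with simulated data, because in a simulation (unlike reality) we can write down both potential outcomes for every subject. In this sketch, the true treatment effect is built in as 0.5, each subject reveals only one of their two outcomes, and a randomized comparison recovers the true effect anyway:

```python
# Potential-outcomes sketch (simulated data): each subject has two outcomes,
# one with treatment and one without, but we only ever observe one of them.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

y_untreated = rng.normal(0, 1, size=n)   # outcome if NOT treated
y_treated = y_untreated + 0.5            # outcome if treated (true effect = 0.5)

treated = rng.random(n) < 0.5            # randomization makes groups exchangeable
observed = np.where(treated, y_treated, y_untreated)

true_effect = (y_treated - y_untreated).mean()   # unobservable in real life
estimated = observed[treated].mean() - observed[~treated].mean()

print(f"true average effect:       {true_effect:.2f}")  # 0.50
print(f"randomized-group estimate: {estimated:.2f}")    # close to 0.50
```

The line computing `true_effect` is exactly the quantity no real study can see; the randomized comparison works because exchangeable groups make the untreated group a valid stand-in for the treated group’s missing counterfactual.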

Everyday Cause and Effect Reasoning

You don’t need to run a clinical trial to use causal thinking in daily life. The same principles apply on a smaller scale. If you notice headaches every afternoon, you might suspect dehydration. To test it, you’d want to check temporality (do the headaches follow periods of low water intake?), look for a dose-response pattern (are the headaches worse on days you drink even less?), and rule out confounders (could screen time, caffeine withdrawal, or stress explain the pattern just as well?).

The most common mistake in everyday causal reasoning is the same one that trips up headlines: assuming that because two things happened together or in sequence, one caused the other. Training yourself to ask “what else could explain this?” is the single most useful habit for thinking clearly about cause and effect, whether you’re reading a health study or troubleshooting a problem in your own life.