What Does Causal Mean? Causation vs. Correlation

Causal means that one thing directly produces or brings about another. When scientists or journalists describe a relationship as causal, they’re saying that A actually makes B happen, not just that A and B tend to show up together. This distinction matters because much of what gets reported as a “link” between two things is really just a pattern in the data, not proof that one caused the other.

The Core Idea Behind Causality

At its simplest, calling something causal is a claim about cause and effect. Rain causes wet streets. A ball striking a window causes it to break. These feel obvious, but the concept gets slippery fast. The 18th-century philosopher David Hume put it this way: something is a cause when, if it had not occurred, the effect “never had existed.” That idea, called counterfactual reasoning, remains central to how researchers think about causality today.

Here’s what makes it tricky. When you say “the assassin’s shot caused the president’s death,” that feels straightforward. But strictly speaking, the shot alone wasn’t sufficient. The bullet’s trajectory, the lack of immediate medical intervention, the president’s physical condition: all of these were part of the full set of circumstances that produced the outcome. In everyday language, we pick out one factor and call it “the cause.” In science, researchers try to isolate that one factor rigorously while accounting for everything else.

Causal vs. Correlated

The most important distinction to understand is the one between causation and correlation. Two things are correlated when they tend to occur together or move in the same direction. Ice cream sales and drowning deaths both rise in summer, but ice cream doesn’t cause drowning. The shared cause is hot weather.

This kind of confusion shows up in real medical research. One clinical example: spinal measurements like Cobb angle and sagittal balance correlate with back pain severity, leading some to conclude that treatments should aim to fix those measurements. But correlation alone doesn’t prove that changing the measurement would reduce the pain. The measurement might just be a marker of some deeper problem.

In another case, a study of 22,000 people found a statistically significant reduction in heart attacks among aspirin users, which led to broad recommendations to take aspirin for prevention. The absolute reduction in risk, though, was less than 1 percentage point, small enough that the risks of taking aspirin (like internal bleeding) could outweigh the benefit. Statistical significance and causal importance are not the same thing.
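The arithmetic behind that gap is worth seeing once. The numbers below are illustrative stand-ins, not the actual trial's figures: with 11,000 people per arm, even a sub-1-point difference in heart attack rates produces a vanishingly small p-value in a standard two-proportion z-test.

```python
from math import sqrt, erf

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF: 2 * (1 - Phi(|z|))
    p_value = 1 - erf(abs(z) / sqrt(2))
    return p_a - p_b, p_value

# Illustrative numbers only: 1.7% vs 0.9% heart attack rates.
arr, p = two_proportion_z(189, 11000, 99, 11000)
print(f"absolute risk reduction: {arr:.3%}")  # under 1 percentage point
print(f"p-value: {p:.1e}")                    # yet highly 'significant'
```

A tiny effect in a huge sample is easy to detect; whether it matters clinically is a separate question.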

How Scientists Establish Causation

Proving that something is truly causal requires more than just noticing a pattern. In 1965, epidemiologist Sir Austin Bradford Hill laid out nine criteria while building the case that cigarette smoking causes lung cancer. These criteria are still widely used as a framework for evaluating whether a relationship is causal rather than coincidental.

The most universally accepted requirement is temporality: the cause must come before the effect. This sounds obvious, but in diseases that develop slowly, it can be genuinely hard to tell which came first. Beyond timing, Hill’s criteria include:

  • Strength of association: A bigger effect makes causation more likely. The cancer rate among heavy smokers wasn’t slightly elevated; it was dramatically higher.
  • Dose-response: If more of the cause produces more of the effect, that’s strong evidence. Lung cancer death rates rose in a straight line with the number of cigarettes smoked per day.
  • Consistency: The same result appears across different studies, in different populations, using different methods.
  • Plausibility: There’s a reasonable biological explanation for how the cause could produce the effect.
  • Experiment: When the suspected cause is removed, does the effect decrease? If people stop smoking and lung cancer rates drop, that’s powerful support.
  • Specificity: The exposure is linked to a particular outcome in a particular group, not to everything at once.
  • Coherence: The causal explanation doesn’t contradict what’s already known about the disease or condition.
  • Analogy: Similar causes are already known to produce similar effects.

No single criterion is enough on its own, and not all nine need to be met. Researchers treat them as a weight-of-evidence framework, not a checklist with a passing score.

The Gold Standard: Randomized Experiments

The most reliable way to establish a causal relationship is through a randomized controlled trial. Participants are randomly assigned to either receive the treatment or not, which means any differences in outcome are most likely caused by the treatment itself rather than by some other factor. Random assignment is the key feature because it makes the two groups comparable, on average, in every way except the one thing being tested.
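A small simulation makes the "comparable groups" point concrete. In this hypothetical sketch, each participant carries a hidden risk factor (age, here) that the experimenter never measures; a coin flip per person still balances it across the two arms.

```python
import random
import statistics

random.seed(1)

# Hypothetical participants with a hidden risk factor (age) that
# could confound the outcome if groups were self-selected.
ages = [random.gauss(50, 10) for _ in range(10_000)]

# Random assignment: an independent coin flip per participant.
assignment = [random.random() < 0.5 for _ in ages]
treated = [a for a, t in zip(ages, assignment) if t]
control = [a for a, t in zip(ages, assignment) if not t]

# The hidden factor ends up balanced between arms without ever
# being measured, so it cannot explain an outcome difference.
print(round(statistics.mean(treated), 1), round(statistics.mean(control), 1))
```

The same logic holds for every hidden factor at once, measured or not, which is what makes randomization so powerful.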

Of course, randomized trials aren’t always possible. You can’t randomly assign people to smoke for 30 years to see if it causes cancer. In those cases, researchers rely on observational data and use statistical tools to try to approximate the logic of an experiment. One modern approach uses diagrams called directed acyclic graphs, developed largely by computer scientist Judea Pearl, which map out assumed relationships between variables. These diagrams help researchers identify hidden sources of bias and figure out which factors need to be accounted for before they can make a causal claim from non-experimental data.
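At its simplest, such a diagram is just a set of arrows from causes to effects. The sketch below is a bare-bones illustration, not Pearl's full machinery: it encodes the ice-cream example as a graph and flags any shared ancestor of the two variables as a candidate confounder to adjust for.

```python
# A minimal directed acyclic graph for the ice-cream example:
# each entry lists a variable's direct causes (its parents).
parents = {
    "ice_cream": ["weather"],
    "drowning": ["weather"],
    "weather": [],
}

def ancestors(node):
    """All upstream causes of a node, found by walking parent edges."""
    seen = set()
    stack = list(parents[node])
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents[p])
    return seen

# A shared ancestor of both variables is a candidate confounder
# that must be accounted for before making a causal claim.
confounders = ancestors("ice_cream") & ancestors("drowning")
print(confounders)  # {'weather'}
```

Pearl's actual criteria for choosing adjustment variables are more subtle than "find common ancestors," but the graph representation is the same.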

Deterministic vs. Probabilistic Causes

Not all causes guarantee their effects. Some causal relationships are deterministic: dropping a glass on concrete will shatter it, every time. But most causes in health and social science are probabilistic. Smoking causes lung cancer, but not every smoker develops it. What the causal claim really means is that smoking substantially increases the probability of lung cancer compared to not smoking.
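A probabilistic cause can be pictured as a weighted coin. The rates below are made-up illustrations, not real epidemiological figures: most exposed people never develop the disease, yet the exposure multiplies the risk many times over.

```python
import random

random.seed(2)

# Hypothetical rates for illustration only: exposure raises the
# probability of disease without guaranteeing it.
P_EXPOSED, P_UNEXPOSED = 0.15, 0.01
n = 100_000

exposed_cases = sum(random.random() < P_EXPOSED for _ in range(n))
unexposed_cases = sum(random.random() < P_UNEXPOSED for _ in range(n))

print(f"exposed who develop the disease:   {exposed_cases / n:.1%}")
print(f"unexposed who develop the disease: {unexposed_cases / n:.1%}")
# Roughly a 15-fold relative risk, even though ~85% of the
# exposed group stays healthy.
print(f"relative risk: {exposed_cases / unexposed_cases:.0f}x")
```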

This probabilistic nature of causation is why you’ll often see careful language in health reporting. A causal relationship between obesity and heart disease doesn’t mean every person with obesity will develop heart disease. It means the exposure raises your risk in a way that isn’t explained by coincidence or other factors. Understanding this helps make sense of why two people can have the same exposure and different outcomes, without that difference undermining the causal claim.

Why the Word Matters in Everyday Life

When a headline says a food “is linked to” a health outcome, that language deliberately avoids the word causal. It’s signaling that a pattern was observed, but the evidence isn’t strong enough to say one thing actually caused the other. When a headline says something “causes” an outcome, it’s making a much stronger claim, one that should be backed by experimental evidence or a deep body of consistent, well-controlled research.

Knowing the difference helps you evaluate health news, policy arguments, and even everyday decisions. If someone tells you that a supplement reduces inflammation because users report less joint pain, that’s a correlation. If a randomized trial shows the supplement reduces specific markers of inflammation compared to a placebo, that’s moving toward a causal claim. The word “causal” is, at its core, a statement about confidence: confidence that the relationship is real, directional, and not just a statistical artifact.