“Causally related” means one thing directly produces or contributes to another. If A is causally related to B, then A’s occurrence is part of the reason B happened. This is the core idea behind “cause and effect,” and it shows up in medicine, law, science, and everyday reasoning. The phrase sounds simple, but proving that two things are causally related, rather than just happening to occur together, is one of the hardest problems in research and legal disputes alike.
Causal Relationships vs. Correlation
Two things can be related without one causing the other. A correlation is a statistical measure showing that two variables move together. Smoking rates and alcohol use, for example, are correlated: people who smoke are more likely to drink heavily. But smoking doesn’t cause alcoholism. Some third factor, like stress or social environment, may drive both behaviors up at the same time.
A causal relationship is stricter. Smoking is causally related to lung cancer because the act of smoking itself increases the biological risk of cancer developing. The key distinction: a correlation tells you two things are connected, but it cannot tell you whether one actually caused the other. A correlation coefficient, no matter how strong, says nothing about cause and effect on its own.
This difference matters in real life. If a study finds that people who eat breakfast weigh less, that’s a correlation. It doesn’t prove skipping breakfast causes weight gain. People who eat breakfast may also exercise more, sleep better, or have higher incomes, and any of those factors could be the real driver.
How Confounding Creates False Causal Links
A confounding variable is a hidden third factor that influences both the suspected cause and the outcome, making it look like one caused the other when it didn’t. The classic criteria for a confounder are that it must be associated with the exposure, it must independently affect the outcome, and it must not simply be a step along the causal path between them.
Unmeasured confounders are especially dangerous. In obstetric research, for instance, placental abruption can create a misleading statistical link between preeclampsia and cerebral palsy when researchers adjust for gestational age. The apparent causal connection is actually a statistical artifact created by the hidden variable. This kind of problem is common in observational studies, where researchers can’t control who gets exposed to what.
Selection bias can mimic causation too. When you accidentally filter your data on a variable that is a shared consequence of both the exposure and the outcome (what statisticians call a “collider”), you can manufacture a relationship that doesn’t exist in the real world. These pitfalls are why the phrase “correlation does not imply causation” is repeated so often in science.
The Chicken-or-Egg Problem
Even when two things are genuinely connected, the direction of causation can run opposite to what you’d expect. This is called reverse causality. A good example comes from research on health and social trust. Many studies reported that people with low social trust tend to have worse health, suggesting that distrust harms your wellbeing. But longitudinal research found the arrow also points the other way: people in poor health reported lower trust afterward, with an odds ratio of 1.38. The vulnerability and uncertainty of being sick made people less trusting, not the reverse.
Reverse causality is why timing matters so much. For two things to be causally related, the cause must come before the effect. This sounds obvious, but in diseases that develop slowly over years, figuring out which came first can be genuinely difficult.
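Odds ratios like the 1.38 figure above come from a simple 2×2 calculation. The sketch below uses made-up counts (not the actual study data): rows split people by baseline health, columns by whether they reported low trust at follow-up.

```python
# hypothetical follow-up counts (illustrative only)
#               low trust later, high trust later
poor_health = (300, 700)
good_health = (500, 1900)

a, b = poor_health
c, d = good_health

# odds of low trust in each group, and their ratio
odds_poor = a / b
odds_good = c / d
odds_ratio = odds_poor / odds_good

print(f"odds ratio = {odds_ratio:.2f}")   # > 1: poor health at baseline precedes lower trust
```

Because the health measurement comes before the trust measurement, a raised odds ratio here points the causal arrow from health to trust, not the other way around.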
How Scientists Establish Causation
The gold standard for proving a causal link is the randomized controlled trial. Participants are randomly assigned to either receive a treatment or not, which balances the two groups, on average, on every characteristic, both measured and unmeasured, except for the treatment itself. Any difference in outcomes can then be attributed to the treatment rather than to some lurking variable.
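A quick simulation shows why random assignment works. Here an unmeasured trait strongly influences the outcome, but because assignment is random, the trait ends up nearly identical across arms and the raw difference in outcomes recovers the true treatment effect (set to 2.0 in this sketch; all values are invented):

```python
import random
import statistics

random.seed(2)
n = 10000

# a characteristic researchers never measured (e.g. baseline fitness)
hidden = [random.gauss(0, 1) for _ in range(n)]
treated = [random.random() < 0.5 for _ in range(n)]   # coin-flip assignment

# the hidden trait affects the outcome; treatment adds a true effect of 2.0
outcome = [3.0 * h + (2.0 if t else 0.0) + random.gauss(0, 1)
           for h, t in zip(hidden, treated)]

mean = statistics.fmean
hid_gap = (mean([h for h, t in zip(hidden, treated) if t])
           - mean([h for h, t in zip(hidden, treated) if not t]))
out_gap = (mean([o for o, t in zip(outcome, treated) if t])
           - mean([o for o, t in zip(outcome, treated) if not t]))

print(f"hidden-trait gap between arms: {hid_gap:+.3f}")   # near zero
print(f"outcome gap between arms:      {out_gap:+.2f}")   # near the true effect, 2.0
```

Randomization never measured the hidden trait, yet it neutralized it anyway; that is the property no amount of statistical adjustment in an observational study can guarantee.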
When a randomized trial isn’t ethical or practical, researchers use other tools. One increasingly important method, known as Mendelian randomization, uses genetic variants as a kind of natural experiment. Because your genes are randomly assigned at conception (much like a coin flip in a clinical trial), researchers can use genetic differences that affect a specific risk factor, like cholesterol levels, to test whether that risk factor truly causes a health outcome like heart disease. This approach sidesteps many of the biases that plague observational studies.
Researchers also use causal diagrams, formally known as directed acyclic graphs (DAGs): flowcharts where arrows represent cause-and-effect relationships between variables. These diagrams help identify which variables need to be adjusted for in an analysis and which ones should be left alone, preventing the kind of statistical mistakes that create false causal claims.
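Even without special software, a causal diagram can be written down as a plain mapping from each variable to the things it points at. The toy graph below encodes the breakfast example from earlier; walking the arrows backward finds shared ancestors of the exposure and outcome, which are the candidate confounders to adjust for. (The variable names and the shared-ancestor heuristic are illustrative; real DAG analysis uses the more careful back-door criterion.)

```python
# arrows point cause -> effects
dag = {
    "income":         ["eats_breakfast", "exercise"],
    "exercise":       ["weight"],
    "eats_breakfast": [],
    "weight":         [],
}

def ancestors(dag, node):
    """All variables with a directed path into `node`."""
    found, stack = set(), [node]
    while stack:
        current = stack.pop()
        for parent, children in dag.items():
            if current in children and parent not in found:
                found.add(parent)
                stack.append(parent)
    return found

# shared ancestors of exposure and outcome = candidate confounders
shared = ancestors(dag, "eats_breakfast") & ancestors(dag, "weight")
print(f"adjust for: {shared}")   # income opens a back-door path between the two
```

The diagram makes the earlier warning precise: income must be adjusted for, while a shared consequence of both variables (a collider) must not be.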
The Bradford Hill Criteria in Medicine
In the 1960s, epidemiologist Austin Bradford Hill proposed nine viewpoints for evaluating whether an observed association is likely causal. These aren’t a checklist where every box must be ticked, but a framework for weighing evidence. The criteria include:
- Strength: A larger association is harder to explain away by confounding alone. The pronounced excess of lung cancer among heavy smokers, for instance, would require an implausibly strong hidden factor to explain without invoking causation.
- Consistency: The same finding appears across different populations, study designs, and time periods.
- Temporality: The exposure must precede the outcome. This is the one criterion considered non-negotiable.
- Dose-response: More exposure leads to more of the outcome. Lung cancer death rates rising linearly with the number of cigarettes smoked daily is a powerful example.
- Plausibility: A biological mechanism can explain how the cause produces the effect, though Hill himself cautioned that plausibility depends on current knowledge and shouldn’t be required.
- Experiment: When the suspected cause is removed, does the effect diminish? If workplace dust is reduced and disease rates drop, that strongly supports causation.
- Specificity: The exposure leads to a particular outcome rather than a wide range of unrelated ones.
- Coherence: The causal interpretation doesn’t conflict with what’s already known about the disease.
- Analogy: If a similar cause is already known to produce a similar effect, less evidence may be needed.
These criteria remain foundational in public health. They were central to establishing that smoking causes cancer, and they continue to guide decisions about whether environmental exposures, medications, or lifestyle factors truly cause health outcomes.
“Causally Related” in Legal Contexts
In law, “causally related” carries specific meaning tied to liability. Courts generally apply two tests. The first is the “but-for” test: but for the defendant’s action, would the harm have occurred? If the answer is no, the action is considered a cause. Washington state jury instructions define proximate cause as “a cause which in a direct sequence produces the injury complained of and without which such injury would not have happened.”
The second test is the “substantial factor” test. If the action was a substantial factor in producing the harm, it qualifies as a proximate cause, even if other factors also contributed. Remote or trivial factors, even if technically part of the chain of events, are not considered proximate causes. Courts also require foreseeability: the harm that occurred must have been a reasonably foreseeable consequence of the action. Multiple proximate causes can exist for a single injury.
You’ll encounter the phrase “causally related” in workers’ compensation claims, personal injury lawsuits, and insurance disputes, where the central question is whether an injury or condition was caused by a specific event or exposure.
“Causally Related” in Drug Safety
When regulators evaluate whether a drug caused a side effect, they use the standard of “reasonable possibility.” Under U.S. federal regulations, a suspected adverse reaction is any adverse event where there is evidence to suggest a causal relationship between the drug and the event. This is intentionally a lower bar than definitive proof.
Evidence that supports a causal link between a drug and a side effect includes: a single occurrence of a reaction that’s rare in the general population but well-known to be associated with drug exposure (like severe liver injury or a particular skin condition), one or more cases of an uncommon event in the exposed population, or a statistical analysis showing the event occurs more frequently in the treatment group than in a control group. Drug manufacturers are required to report serious and unexpected suspected adverse reactions to the FDA within 15 calendar days.
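That last kind of evidence, an imbalance in event rates between arms, is a straightforward calculation. The sketch below uses invented counts for a hypothetical trial and a simple two-proportion z-test; real safety analyses would use more careful methods, but the core comparison looks like this:

```python
import math

# hypothetical trial counts: liver-injury events in each arm (illustrative only)
events_treat, n_treat = 12, 2000
events_ctrl,  n_ctrl  = 3,  2000

p1, p2 = events_treat / n_treat, events_ctrl / n_ctrl
risk_ratio = p1 / p2

# two-proportion z-test under the pooled null of equal rates
p = (events_treat + events_ctrl) / (n_treat + n_ctrl)
se = math.sqrt(p * (1 - p) * (1 / n_treat + 1 / n_ctrl))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"risk ratio = {risk_ratio:.1f}, z = {z:.2f}, p = {p_value:.3f}")
```

Note how this maps onto the regulatory standard: an excess like this raises a "reasonable possibility" of a causal relationship and would trigger reporting and further investigation, well before causation is proven.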
This regulatory definition is deliberately broader than scientific proof of causation. The goal is safety: it’s better to flag a potential causal link early and investigate further than to wait for ironclad evidence while patients are harmed.

