Indirect evidence is any information that doesn’t prove something on its own but allows you to draw a logical conclusion that it’s true. Instead of observing a fact directly, you’re inferring it from other facts that point in the same direction. The concept shows up in law, medicine, and science, and while the details differ across fields, the core idea is the same: you’re connecting dots rather than seeing the picture firsthand.
Direct vs. Indirect Evidence
The easiest way to understand indirect evidence is to contrast it with direct evidence. Direct evidence proves a fact without any inference. A witness who saw a car run a red light is providing direct evidence. A security camera that recorded the event is direct evidence. There’s no logical gap between the evidence and the conclusion.
Indirect evidence requires an extra step. If no one saw the car run the red light, but a mechanic testifies that the car’s brakes were failing, and traffic data shows it was moving too fast to stop, those facts together let a jury reasonably conclude the driver ran the light. Each fact supports the inference, but none of them, on its own, settles the question.
Indirect Evidence in Law
In legal settings, indirect evidence is typically called circumstantial evidence. The two terms are functionally interchangeable. Cornell Law School’s Legal Information Institute defines circumstantial evidence as indirect evidence that does not, on its face, prove a fact in issue but gives rise to a logical inference that the fact exists.
This kind of evidence is everywhere in courtrooms. In a discrimination case, for example, no one may have said anything explicitly discriminatory. But suspicious timing, inconsistent treatment of employees, personal animus, and ambiguous statements can, taken together, allow a jury to reasonably infer intentional discrimination. None of those facts alone is a smoking gun, but the pattern builds a case.
A common misconception is that circumstantial evidence is weak or unreliable. Courts treat it as perfectly valid. Many criminal convictions rest entirely on circumstantial evidence, and in some cases it’s considered more reliable than eyewitness testimony, which is notoriously prone to error.
Indirect Evidence in Medicine
Medicine relies on indirect evidence constantly, especially when studying whether a treatment actually works. The clearest example is surrogate endpoints: measurable lab results or biomarkers that stand in for the health outcome you really care about.
Say researchers want to know if a new drug prevents heart attacks. Running a trial long enough to count actual heart attacks in thousands of patients takes years and enormous expense. Instead, they might measure whether the drug lowers LDL cholesterol, because high LDL is strongly linked to heart attacks. That cholesterol reduction is indirect evidence that the drug will reduce heart attacks. The FDA has approved drugs on this basis, using LDL cholesterol as a surrogate endpoint for cardiovascular disease.
Other surrogate endpoints the FDA recognizes include lung function measurements such as FEV1 for asthma drugs, blood levels of the growth factor IGF-1 for growth disorders, and reduction of amyloid plaques for Alzheimer’s treatments. In Duchenne muscular dystrophy, measuring dystrophin (the structural muscle protein affected patients lack) in muscle tissue serves as indirect evidence that a gene therapy is working, even before patients show visible improvement in movement.
The catch is that surrogate endpoints don’t always predict what matters to patients. A drug might improve a lab number without actually making people live longer or feel better. This is why medical guidelines generally treat surrogate-based evidence as lower quality than evidence from trials that measure real health outcomes directly.
How Medical Guidelines Rate Indirectness
The system most widely used to judge the quality of medical evidence is called GRADE, and indirectness is one of five reasons it gives for downgrading confidence in a finding. Evidence can be considered indirect for four specific reasons: the patients studied don’t match the patients you’re making decisions about, the treatment tested isn’t quite the same as the one in question, the comparison group differs from what’s relevant, or the outcome measured is a surrogate rather than the actual health result you care about.
When one of these problems is present, the certainty of the evidence drops by one level. When multiple forms of indirectness overlap, it can drop by two levels. For vaccines, this comes up frequently. Researchers sometimes have data on immune response (antibody levels) but not on whether the vaccine actually prevented disease. Unless there’s a well-established link between that immune response and real-world protection, the evidence gets downgraded for indirectness.
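The downgrading rule described above is simple enough to sketch in a few lines of code. This is an illustrative sketch of the rule as stated here, not part of any official GRADE software; all names are invented for the example.

```python
# Illustrative sketch of the GRADE indirectness downgrade rule described
# above. Names and structure are made up for this example.

CERTAINTY_LEVELS = ["very low", "low", "moderate", "high"]

INDIRECTNESS_DOMAINS = {
    "population",    # patients studied don't match the decision population
    "intervention",  # treatment tested isn't the one in question
    "comparator",    # comparison group differs from what's relevant
    "outcome",       # a surrogate was measured instead of the real outcome
}

def downgrade_for_indirectness(certainty, concerns):
    """Drop certainty one level for a single indirectness concern,
    and up to two levels when several concerns overlap."""
    unknown = set(concerns) - INDIRECTNESS_DOMAINS
    if unknown:
        raise ValueError(f"unrecognized domains: {unknown}")
    steps = min(len(set(concerns)), 2)
    idx = CERTAINTY_LEVELS.index(certainty)
    return CERTAINTY_LEVELS[max(idx - steps, 0)]

# A vaccine trial that measured antibody levels (a surrogate) rather than
# disease prevention starts at "high" and drops one level:
print(downgrade_for_indirectness("high", {"outcome"}))  # moderate
```

With both a surrogate outcome and a mismatched patient population, the same evidence would drop two levels, from "high" to "low".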
Comparing Treatments That Were Never Tested Together
One of the most practical uses of indirect evidence in medicine is comparing two treatments that have never been studied head to head. Imagine Drug A has been tested against a placebo, and Drug B has also been tested against a placebo, but no trial has ever compared Drug A directly to Drug B. Researchers can use the placebo results as a bridge: if Drug A beat placebo by a certain margin, and Drug B beat placebo by a different margin, the difference between those margins gives an indirect estimate of how A and B compare to each other.
This method, called an indirect treatment comparison, is used by drug regulators and health technology assessment agencies worldwide to inform coverage and pricing decisions. When the evidence network gets more complex, with many drugs each compared to different alternatives, researchers use a technique called network meta-analysis to estimate how every treatment stacks up against every other one simultaneously.
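The placebo-bridge arithmetic described above is straightforward: the placebo-relative effects subtract, and their variances add. The sketch below illustrates this with hypothetical numbers on the log odds ratio scale; the function name and the trial results are invented for the example.

```python
import math

def indirect_comparison(d_ap, se_ap, d_bp, se_bp):
    """Indirect estimate of A vs B through a shared placebo arm.

    d_ap, d_bp: treatment effects vs placebo (e.g. log odds ratios)
    se_ap, se_bp: their standard errors
    The effects subtract; the variances (squared SEs) add, which is why
    an indirect comparison is always less precise than either of the
    direct trials it is built from.
    """
    d_ab = d_ap - d_bp
    se_ab = math.sqrt(se_ap**2 + se_bp**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Hypothetical trial results (log odds ratios, lower is better):
# Drug A vs placebo: -0.60 (SE 0.15); Drug B vs placebo: -0.35 (SE 0.20)
d_ab, se_ab, ci = indirect_comparison(-0.60, 0.15, -0.35, 0.20)
# d_ab ≈ -0.25, se_ab ≈ 0.25: A looks better than B, but the 95%
# confidence interval (≈ -0.74 to 0.24) crosses zero.
```

Because the variances add, the indirect estimate’s confidence interval is wider than either trial’s, which is one reason these comparisons are treated more cautiously than head-to-head results.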
Regulators consistently flag the same weaknesses in these comparisons. The biggest concern is that the patients in different trials aren’t similar enough. If one trial enrolled younger, healthier patients and another enrolled older patients with more complications, the comparison is distorted. Other common criticisms include differences in study design, statistical imprecision (wide margins of uncertainty in the results), and failure to account for factors that could confound the comparison. These issues don’t make indirect comparisons useless, but they do mean the results carry more uncertainty than a direct head-to-head trial would.
Indirect Evidence in Physics and Astronomy
Some of the biggest discoveries in science rest on indirect evidence. Dark matter is the most striking example. No one has ever directly detected a dark matter particle. Everything scientists know about it comes from its gravitational effects: galaxies rotate faster than they should given their visible mass, light bends around seemingly empty regions of space, and the large-scale structure of the universe doesn’t match predictions unless you add an unseen source of gravity. All of this is indirect evidence that something massive is out there, even though no instrument has captured it directly.
Black holes followed a similar path. For decades, their existence was inferred from the behavior of nearby stars, from X-ray emissions produced by superheated material spiraling inward, and from gravitational wave signals. The first direct image of a black hole’s shadow didn’t arrive until 2019, long after indirect evidence had made their existence a near certainty.
Strengths and Limits
Indirect evidence is not lesser evidence by default. Its reliability depends entirely on how strong the logical link is between what you observed and what you’re concluding. When the chain of inference is short and well-supported, indirect evidence can be compelling. LDL cholesterol’s connection to heart disease, for instance, is backed by decades of data, making it a reasonably trustworthy surrogate. When the chain is long, speculative, or built on mismatched data, the conclusion weakens.
The key question to ask about any piece of indirect evidence is simple: how many assumptions do I need to make to get from this observation to that conclusion? The fewer the assumptions, and the better each one is supported, the more weight the evidence carries. In a courtroom, a medical review, or an astrophysics paper, that principle holds the same.