The base rate fallacy happens when you ignore the overall probability of something (the “base rate”) and focus instead on specific details that feel more relevant. The classic example: a medical test with 99% accuracy returns a positive result for a rare disease, and you assume you almost certainly have the disease. In reality, if only 1 in 100,000 people have it, the vast majority of positive results are false alarms. That gut reaction, the one that skips over how rare the disease is, is the base rate fallacy in action.
The Medical Test Example, Step by Step
This is the most widely cited example, and the math makes it vivid. Imagine a disease that affects 1 in 100,000 people. The test for it is excellent: it correctly identifies 99% of people who have the disease (sensitivity) and correctly clears 99% of people who don’t (specificity). You test positive. What are the odds you actually have the disease?
Most people guess somewhere around 99%. The real answer is roughly 0.1%. Out of every 100,000 people tested, about 1 person truly has the disease and gets a correct positive result. But about 1,000 healthy people also get false positives (1% of the remaining 99,999). So for every true positive, there are about 1,000 false positives. Your positive result puts you in a pool of around 1,001 people, and only 1 of them is actually sick. That means 99.9% of the positive results are wrong.
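If you want to check the arithmetic yourself, here is a minimal Python sketch of the same calculation, using exactly the numbers from the paragraph above:
```python
# Natural-frequency version of the rare disease example:
# 100,000 people tested, base rate 1 in 100,000, a "99% accurate" test.
population = 100_000
base_rate = 1 / 100_000
sensitivity = 0.99   # P(positive | disease)
specificity = 0.99   # P(negative | no disease)

sick = population * base_rate                  # ~1 person
healthy = population - sick                    # ~99,999 people
true_positives = sick * sensitivity            # ~1 correct positive
false_positives = healthy * (1 - specificity)  # ~1,000 false alarms

ppv = true_positives / (true_positives + false_positives)
print(f"P(disease | positive test) = {ppv:.4f}")  # ~0.0010, i.e. ~0.1%
```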
The base rate fallacy is the mistake of hearing “99% accurate” and assuming your positive result is 99% reliable, without factoring in how rare the disease is in the first place. The rarity of the condition completely overwhelms the test’s accuracy.
Mammography and Cancer Screening
This isn’t just a thought experiment. It plays out every day in real medical screening. Mammography in the United States has a sensitivity of about 82 to 92% and a specificity around 83 to 92% for subsequent screens. The actual rate of breast cancer detection is roughly 10 per 1,000 screens. Because the disease is relatively uncommon in any single round of screening, a significant number of positive mammograms turn out to be false alarms. In the U.S., where specificity is lower than in countries like Denmark (where it exceeds 98%), women are more likely to be called back for additional imaging that ultimately shows no cancer.
This doesn’t mean screening is useless. It means that a positive result on a screening test is the beginning of a diagnostic process, not a diagnosis. The base rate fallacy would be assuming that a positive mammogram means you probably have cancer, when the math often says the opposite.
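To see how the screening numbers play out, here is a rough Python sketch. The sensitivity and specificity are illustrative midpoints of the ranges cited above, and the detection rate stands in for the pre-test probability, so treat the output as a ballpark, not a clinical figure:
```python
# Illustrative mammography arithmetic per 1,000 screens.
prevalence = 10 / 1_000   # ~10 cancers per 1,000 screens (figure cited above)
sensitivity = 0.87        # assumed midpoint of the 82-92% range
specificity = 0.88        # assumed midpoint of the 83-92% range

with_cancer = 1_000 * prevalence                # 10 women
without_cancer = 1_000 - with_cancer            # 990 women
true_pos = with_cancer * sensitivity            # ~8.7 true positives
false_pos = without_cancer * (1 - specificity)  # ~118.8 false positives

ppv = true_pos / (true_pos + false_pos)
print(f"P(cancer | positive mammogram) = {ppv:.0%}")   # roughly 7%

# With Denmark-like specificity (>98%), false positives drop sharply:
false_pos_dk = without_cancer * 0.02            # ~19.8
ppv_dk = true_pos / (true_pos + false_pos_dk)
print(f"Same test, 98% specificity: {ppv_dk:.0%}")     # roughly 31%
```
The comparison at the end shows why the specificity gap matters: the same positive result is several times more informative under Denmark-like specificity than under the lower U.S. figures.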
The Eyewitness and the Blue Cab
A well-known example from psychology goes like this: a cab was involved in a hit-and-run at night. Two cab companies operate in the city. The Green company runs 85% of the cabs; the Blue company runs 15%. A witness identifies the cab as Blue, and testing shows this witness correctly identifies cab colors at night about 80% of the time.
Most people hear “80% accurate witness says Blue” and conclude the cab was probably Blue. But the base rate matters enormously. Only 15% of cabs in the city are Blue. When you combine the witness accuracy with the base rate, the probability the cab was actually Blue is only about 41%. The Green cab is still more likely, despite the eyewitness testimony. People commit the base rate fallacy here by latching onto the specific, vivid detail (the witness’s identification) and ignoring the background frequency (most cabs are Green).
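Here is the same Bayes' rule calculation in Python, using the problem's stated numbers:
```python
# The taxicab problem: 85% Green, 15% Blue, witness correct 80% of the time.
p_blue, p_green = 0.15, 0.85
p_says_blue_given_blue = 0.80    # witness correctly calls a Blue cab Blue
p_says_blue_given_green = 0.20   # witness mistakes a Green cab for Blue

# Total probability the witness says "Blue":
p_says_blue = (p_blue * p_says_blue_given_blue
               + p_green * p_says_blue_given_green)   # 0.12 + 0.17 = 0.29

p_blue_given_says_blue = (p_blue * p_says_blue_given_blue) / p_says_blue
print(f"P(cab was Blue | witness says Blue) = {p_blue_given_says_blue:.2f}")  # ~0.41
```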
DNA Evidence in Criminal Trials
The courtroom version of this fallacy is sometimes called the prosecutor’s fallacy. It works like this: DNA found at a crime scene matches the defendant, and the probability of a random person sharing that DNA profile is 1 in 10,000. A prosecutor then tells the jury there’s only a 1 in 10,000 chance the defendant is innocent.
That leap is the base rate fallacy. The random match probability tells you how common the DNA profile is in the general population. It does not tell you the probability of guilt. In a city of 10 million people, roughly 1,000 individuals could share that same DNA profile. The defendant is one of potentially 1,000 matches, which is very different from a 99.99% chance of guilt. The U.S. Supreme Court addressed this directly in McDaniel v. Brown (2010), explaining that confusing a random match probability with source probability, and then equating source probability with guilt, compounds the error. A defendant with a DNA match but a verified alibi (say, hospital records proving they were elsewhere) illustrates why the base rate of possible sources matters as much as the match itself.
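The expected-matches reasoning is easy to make concrete. This back-of-the-envelope sketch assumes, for simplicity, that every resident is equally likely a priori to be the source:
```python
# Random match probability vs. probability of guilt.
city_population = 10_000_000
random_match_probability = 1 / 10_000

expected_matches = city_population * random_match_probability
print(f"Expected matching individuals: {expected_matches:.0f}")   # ~1,000

# With no other evidence, the defendant is one of ~1,000 possible sources:
p_source = 1 / expected_matches
print(f"P(defendant is the source | match alone) = {p_source:.4f}")  # 0.0010, not 0.9999
```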
COVID Testing in Low-Prevalence Settings
During the pandemic, rapid antigen tests became a real-world lesson in base rates. These tests have lower sensitivity than lab-based methods, meaning they miss more true infections. But the base rate problem cuts in the other direction too: when community transmission is very low, a positive rapid test is more likely to be a false positive than a true case. The WHO specifically recommended against using rapid antigen tests in low-prevalence settings like airports or pre-surgical screening for this reason. The CDC similarly warned that negative results in high-risk situations should be treated as “presumptive” and confirmed with a more sensitive test.
The underlying principle is the same one from the rare disease example. When the thing you’re testing for is uncommon, even a good test produces misleading results more often than you’d expect.
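A short Python sketch makes the prevalence dependence concrete. The sensitivity and specificity below are illustrative placeholders, not figures for any specific test:
```python
# Positive predictive value of a hypothetical rapid test at varying prevalence.
def ppv(prevalence, sensitivity=0.80, specificity=0.97):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for prev in (0.20, 0.05, 0.01, 0.001):
    print(f"prevalence {prev:>6.1%}: P(infected | positive) = {ppv(prev):.1%}")
# prevalence  20.0%: ~87% -- a positive result is probably real
# prevalence   0.1%: ~2.6% -- a positive result is almost always a false alarm
```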
Why Your Brain Falls for It
The base rate fallacy isn’t a sign of poor intelligence. It reflects how human reasoning works. Your brain uses what psychologists call the representativeness heuristic: you judge how likely something is based on how well it matches a specific description or piece of evidence, rather than by calculating overall probabilities. When a witness says “Blue cab,” your brain processes that vivid, case-specific detail and treats it as the whole story. The dry statistical background (85% of cabs are Green) feels abstract and less compelling.
Research consistently shows that people are more sensitive to specific predictor information than to base rates, even when the predictor’s actual reliability is questionable. In experiments on probabilistic learning, participants gravitated toward case-specific cues and underweighted the overall frequency of outcomes. This tendency is robust across many contexts: the vivid, case-specific cue (a witness saying “Blue”) gets weighted as if it told the whole story, even when the background frequencies point the other way.
How to Avoid the Fallacy
The corrective tool is straightforward in principle: always ask “how common is this in the first place?” before interpreting a specific piece of evidence. In medicine, this is formalized through a concept called pre-test probability. Before interpreting any test result, a doctor considers how likely the condition is based on the patient’s risk factors and the prevalence in the population. A positive test in a high-risk patient means something very different from the same positive test in a low-risk patient, even though the test itself hasn’t changed.
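In code, this update is standard: convert the pre-test probability to odds, multiply by the test's likelihood ratio, and convert back. The function name and the example probabilities here are mine, chosen only to illustrate the high-risk versus low-risk contrast:
```python
# Likelihood-ratio form of Bayes' rule: post_odds = pre_odds * LR+.
def post_test_probability(pre_test_prob, sensitivity, specificity):
    lr_positive = sensitivity / (1 - specificity)   # LR+ for a positive result
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr_positive
    return post_odds / (1 + post_odds)

# The same "99% accurate" test, in two different patients:
print(post_test_probability(0.00001, 0.99, 0.99))  # low-risk patient:  ~0.001
print(post_test_probability(0.30, 0.99, 0.99))     # high-risk patient: ~0.977
```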
Using natural frequencies instead of percentages also helps. Rather than thinking “the test is 99% accurate,” imagine 10,000 people being tested. How many would actually have the disease? How many healthy people would get false positives? Laying out the numbers this way makes the base rate impossible to ignore, because you can see the two groups side by side. Studies on medical education have found that deliberately switching from quick pattern recognition to this kind of step-by-step analytical thinking reduces diagnostic errors driven by cognitive bias.
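A minimal sketch of that natural frequency layout, assuming an illustrative base rate of 1 in 1,000:
```python
# Natural frequencies for 10,000 people tested with a 99%/99% test.
n = 10_000
base_rate = 1 / 1_000
sensitivity, specificity = 0.99, 0.99

sick = round(n * base_rate)                     # 10 people
healthy = n - sick                              # 9,990 people
true_pos = round(sick * sensitivity)            # 10 correct positives
false_pos = round(healthy * (1 - specificity))  # ~100 false alarms

print(f"Of {n:,} people: {true_pos} true positives vs {false_pos} false positives")
print(f"A positive result is right {true_pos / (true_pos + false_pos):.0%} of the time")  # ~9%
```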
The core lesson across all these examples is the same: a single piece of evidence, no matter how compelling, changes meaning dramatically depending on how common the thing you’re looking for actually is. Ignoring that background frequency is the base rate fallacy, and it leads to wrong conclusions in medicine, law, public health, and everyday life.