The base rate fallacy is a thinking error where you focus on specific, descriptive information and ignore the underlying statistical odds of something being true. It’s one of the most well-documented cognitive biases in psychology, and it affects decisions in medicine, finance, law, and everyday life. The core problem: when you hear a vivid detail or a compelling piece of evidence, your brain tends to treat it as the whole story, even when the background statistics tell a very different one.
How the Fallacy Works
A “base rate” is simply the overall frequency of something in a population. If 1% of women over 40 who get routine mammograms have breast cancer, that 1% is the base rate. If 5% of startups succeed past year five, that 5% is the base rate. These numbers set the statistical backdrop for any individual case.
The fallacy kicks in when new information arrives, like a test result, a witness statement, or a persuasive story, and you give that information too much weight while forgetting the backdrop entirely. Psychologists call this “base rate neglect.” The result is that your estimate of how likely something is can be wildly off, sometimes by an order of magnitude.
The Famous Taxi Cab Problem
The classic demonstration comes from psychologists Daniel Kahneman and Amos Tversky. In their scenario, a taxi is involved in a hit-and-run at night. A city has two cab companies: Green cabs make up 85% of the fleet, and Blue cabs make up 15%. A witness says the cab was Blue, and testing shows the witness correctly identifies cab colors 80% of the time.
Most people in the study simply gave the witness’s accuracy (80%) as their answer for the probability the cab was Blue. They ignored the base rate entirely. But when you factor in that Blue cabs are only 15% of taxis on the road, the actual probability the cab was Blue drops to around 41%. The base rate dramatically changes the math, yet participants skipped right over it.
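For the curious, here is that calculation written out. It is just Bayes’ rule applied to the numbers in the scenario; the variable names are illustrative.

```python
# Bayes' rule applied to the taxi cab scenario above.
p_blue = 0.15             # base rate: 15% of cabs are Blue
p_green = 0.85            # 85% of cabs are Green
witness_accuracy = 0.80   # witness identifies colors correctly 80% of the time

# Two ways the witness can say "Blue": the cab is Blue and the witness is right,
# or the cab is Green and the witness is wrong.
p_says_blue = p_blue * witness_accuracy + p_green * (1 - witness_accuracy)

# Probability the cab really was Blue, given the witness said "Blue"
p_blue_given_report = (p_blue * witness_accuracy) / p_says_blue
print(f"P(Blue | witness says Blue) = {p_blue_given_report:.0%}")  # ≈ 41%
```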
Why Your Brain Does This
The dominant explanation in cognitive psychology involves two modes of thinking. Fast, intuitive processing handles vivid, specific information with ease. A witness report, a personality description, a test result: these feel concrete and meaningful. Base rates, by contrast, are abstract and statistical. Processing them requires slower, more effortful reasoning. When both types of information compete for your attention, the vivid details tend to win.
This isn’t just laziness. Your brain evolved to respond to immediate, specific cues rather than population-level statistics. If a friend tells you a restaurant gave them food poisoning, that single story carries more psychological weight than knowing that 99.7% of meals at that restaurant are fine. The specific case feels relevant in a way that percentages don’t.
The Medical Test Trap
Nowhere is the base rate fallacy more consequential than in medical testing. Consider this textbook example: breast cancer affects about 1% of women over 40 who get routine screening mammograms. If a woman has breast cancer, the mammogram will detect it 80% of the time. But if she doesn’t have cancer, there’s still a 9.6% chance the mammogram will come back positive anyway (a false positive).
So a woman gets a positive result. What’s the probability she actually has cancer? Most people, including many doctors, guess somewhere around 80%. The real answer is roughly 7.8%. The reason: for every 1,000 women screened, about 10 have cancer and 8 of those get correctly flagged. But of the 990 healthy women, about 95 also get flagged. So out of 103 total positive results, only 8 are true cases.
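Here is that count-based reasoning as a short sketch, using the screening figures quoted above; the cohort of 1,000 is just a convenient round number.

```python
# Counting through 1,000 screened women with the mammogram figures above.
cohort = 1000
prevalence = 0.01             # 1% have breast cancer
sensitivity = 0.80            # detected 80% of the time when present
false_positive_rate = 0.096   # 9.6% of healthy women still test positive

with_cancer = cohort * prevalence                               # 10 women
true_positives = with_cancer * sensitivity                      # 8 correctly flagged
false_positives = (cohort - with_cancer) * false_positive_rate  # ~95 flagged anyway

p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(f"P(cancer | positive mammogram) = {p_cancer_given_positive:.1%}")  # ≈ 7.8%
```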
This pattern repeats with other rare conditions. Prenatal testing for trisomy 21 (Down syndrome) has detection rates up to 99% and false positive rates as low as 0.1%. That sounds nearly perfect. But because the condition occurs in roughly 1 out of every 800 births, a positive result only means about a 55% chance the fetus is actually affected. For rarer conditions like trisomy 13, a positive test result carries just a 6% probability of being correct.
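The same Bayes arithmetic generalizes to any test. A minimal helper like the one below (an illustrative sketch, not a clinical tool) reproduces the roughly 55% figure for trisomy 21 from the prevalence and error rates cited above.

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Chance the condition is really present given a positive result (Bayes' rule)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Trisomy 21: ~1 in 800 prevalence, 99% detection, 0.1% false positive rate
print(f"{positive_predictive_value(1/800, 0.99, 0.001):.0%}")  # ≈ 55%
```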
Physicians Aren’t Immune
You might assume medical professionals would handle these numbers better. They often don’t. In one study, physicians were asked to estimate the probability of metastasized lung cancer in a patient with a history of the disease. Given the clinical information, the correct probability was above 99.5%; the physicians’ average estimate was 48%. In another scenario from the same study, doctors were asked for the probability of a brain tumor given a negative CT scan. The correct answer was 93%, yet physicians who had been given the patient’s history at the start of the case estimated it at just 11%. Even trained professionals anchor on the specific test result and lose sight of how common or rare the condition is to begin with.
How It Affects Financial Decisions
In investing, the base rate fallacy shows up whenever a compelling story overrides the statistical reality. A startup founder gives a dazzling pitch, and an investor focuses on the specific qualities of the company while ignoring that roughly 90% of startups fail. A stock picks up media attention for a strong quarter, and buyers pile in without considering that most individual stocks underperform index funds over time. The vivid narrative (a charismatic CEO, a revolutionary product) displaces the dull but critical background rate of success or failure.
Insurance is another area where neglecting base rates leads to poor choices. People tend to overinsure against dramatic but rare events (plane crashes, terrorism) while underinsuring against common ones (car accidents, home water damage). The emotional weight of specific, vivid scenarios distorts the perception of actual risk.
Thinking in Counts, Not Percentages
The most effective way to counteract this fallacy doesn’t require learning statistics. It requires changing how numbers are presented. Research by Gerd Gigerenzer and colleagues demonstrated that when information is framed as “natural frequencies,” the proportion of people who reason correctly about it nearly triples, from 16% to 46% in their experiments.
Natural frequencies are simple counts rather than percentages. Instead of saying “the test has a sensitivity of 80% and a false positive rate of 9.6%,” you say: “Out of 1,000 women, 10 have cancer. Of those 10, 8 will test positive. Of the remaining 990 healthy women, about 95 will also test positive. So of roughly 103 women who test positive, about 8 actually have cancer.” The math is identical, but the format lets your brain track the actual numbers of people at each step instead of juggling abstract probabilities.
The reason percentages are so hard to work with is that they strip away the base rate information. When you hear “80% detection rate,” that number has already been separated from the prevalence of the disease. Your brain has to mentally recombine them, and it consistently fails to do so. Natural frequencies keep the base rate baked into the numbers, so you don’t have to reassemble the picture yourself.
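As a small illustration of that point (the helper below is a sketch, not something from the cited research), the same three percentages can be converted into counts automatically, and the base rate never leaves the picture.

```python
def as_natural_frequencies(prevalence, sensitivity, false_positive_rate, cohort=1000):
    """Restate test statistics as counts out of a cohort so the base rate stays visible."""
    affected = round(cohort * prevalence)
    true_pos = round(affected * sensitivity)
    false_pos = round((cohort - affected) * false_positive_rate)
    return (f"Out of {cohort} people, {affected} have the condition and {true_pos} of them "
            f"test positive. Of the {cohort - affected} who don't, {false_pos} also test "
            f"positive. So about {true_pos} of the {true_pos + false_pos} positives are real.")

print(as_natural_frequencies(0.01, 0.80, 0.096))  # the mammogram numbers from above
```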
Spotting It in Everyday Life
Once you know about base rate neglect, you’ll notice it everywhere. A friend says their neighborhood is dangerous because they saw a crime report on the news, without considering that thousands of people live there uneventfully. A hiring manager rejects a candidate from a school with a “low reputation,” ignoring that plenty of successful employees came from that same background. A jury convicts partly because a forensic test “matches” the defendant, without weighing how many other people in the city would also match.
The practical correction is straightforward: before reacting to any specific piece of evidence, ask yourself how common the thing is in the first place. If a disease is rare, a positive test is less meaningful than it sounds. If business failures are common, a good pitch is less predictive than it feels. If a type of crime is infrequent, a single anecdote is less informative than it seems. The base rate doesn’t make the specific evidence worthless. It just tells you how much that evidence should actually move the needle.

