What Is Representative Bias? Definition and Examples

Representative bias (more formally called the representativeness heuristic) is a mental shortcut where you judge how likely something is based on how closely it matches a mental prototype, rather than by looking at actual statistics or logic. If someone “looks like” a librarian or a stock feels like a winner, your brain treats that resemblance as evidence, even when the numbers tell a different story. Psychologists Daniel Kahneman and Amos Tversky first described this pattern in 1972, and a 2024 pre-registered replication study successfully reproduced eight of the nine original experiments, confirming the effect holds up decades later.

How Representative Bias Works

Your brain constantly sorts new information into categories. When you meet someone, hear a medical symptom, or evaluate an investment, you instinctively compare it to a template you already carry: what a “typical” engineer sounds like, what a “classic” heart attack looks like, what a “good” stock behaves like. Representative bias kicks in when that pattern-matching overrides the actual probability of something being true.

Kahneman and Tversky defined the heuristic as judging probability by two things: how similar something is to the group it supposedly belongs to, and how well it reflects the process that generated it. In plain terms, your brain asks “does this fit the stereotype?” instead of “what are the odds?” The answers to those two questions can be wildly different.

The Linda Problem: A Classic Demonstration

The most famous illustration is the Linda problem. Participants read a description of a fictional woman named Linda: she’s 31, single, outspoken, bright, and deeply concerned with social justice. Then they’re asked which is more probable: that Linda is a bank teller, or that Linda is a bank teller who is also active in the feminist movement.

Logically, there is no way that “bank teller AND feminist” can be more likely than just “bank teller,” because the second category is a subset of the first. It’s like saying there are more red apples than apples. Yet in Kahneman and Tversky’s original experiment, 85% of participants rated the combined description as more probable. In some versions the error rate reached 87%. People chose the answer that fit the story of Linda over the answer that obeyed basic logic.
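
The conjunction rule behind this is easy to check numerically. The Python sketch below uses invented probabilities (the 5% and 30% figures are illustrative assumptions, not values from the study); whatever numbers you substitute, the combined category can never come out more probable than the broader one.

    # Conjunction rule: P(A and B) can never exceed P(A), for any events A and B.
    # The probabilities below are invented for illustration, not data from the study.
    p_teller = 0.05                  # assumed P(Linda is a bank teller)
    p_feminist_given_teller = 0.30   # assumed P(feminist | bank teller)

    # The joint probability is the product, so it is at most p_teller.
    p_teller_and_feminist = p_teller * p_feminist_given_teller

    print(f"P(teller)              = {p_teller:.3f}")               # 0.050
    print(f"P(teller and feminist) = {p_teller_and_feminist:.3f}")  # 0.015
    assert p_teller_and_feminist <= p_teller   # holds for any valid probabilities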

Base Rate Neglect: Ignoring the Numbers

One of the most damaging consequences of representative bias is base rate neglect, where you ignore background statistics in favor of a vivid description or a single piece of testimony. A well-known example from Kahneman and Tversky’s research involves a taxi cab accident.

Participants are told a city has two cab companies: 85% of the cabs are green, 15% are blue. A witness to a hit-and-run identifies the cab as blue, and testing shows this witness correctly distinguishes cab colors 80% of the time. Most people estimate an 80% chance the cab was blue, matching the witness’s accuracy rate. The actual probability is only 41%. Because green cabs vastly outnumber blue ones, a large chunk of the witness’s “blue” identifications are really misidentified green cabs. Out of every 29 cabs the witness would call blue, only 12 would actually be blue; the other 17 would be green cabs the witness got wrong. People latch onto the witness’s testimony because it’s concrete and vivid, while the dry statistic (85% of cabs are green) fades into the background.
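
The 41% figure falls straight out of Bayes’ theorem. Here is the arithmetic as a short Python sketch, using only the numbers given in the problem:

    # Bayes' theorem applied to the taxi cab problem, using the numbers
    # stated in the problem itself.
    p_blue = 0.15      # base rate: 15% of cabs are blue
    p_green = 0.85     # base rate: 85% of cabs are green
    accuracy = 0.80    # witness identifies colors correctly 80% of the time

    # P(witness says "blue") = correct blue IDs + green cabs misread as blue
    p_says_blue = p_blue * accuracy + p_green * (1 - accuracy)  # 0.12 + 0.17 = 0.29

    # P(cab is actually blue | witness says "blue")
    p_blue_given_says_blue = (p_blue * accuracy) / p_says_blue

    print(f"P(blue | witness says blue) = {p_blue_given_says_blue:.1%}")  # 41.4%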

Representative Bias in Investing

Financial decisions are especially vulnerable. Investors routinely treat a stock’s recent performance as representative of its future trajectory. If a company has posted strong returns for several quarters, it “looks like” a winner, and people pile in. This is the engine behind momentum investing, where people buy whatever has been going up and sell whatever has been going down, expecting past trends to continue.

Research summarized by the SEC and Library of Congress found that investors consistently overestimate the ability of high-growth “glamour” stocks to maintain above-average performance, while underestimating the ability of beaten-down “value” stocks to rebound. Over time, growth stocks fail to meet those optimistic expectations and value stocks exceed pessimistic ones. Investors also focus heavily on annualized past returns when choosing funds, even though those returns do not predict future performance. The pattern looks representative of quality, so the brain treats it as proof of quality.
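
One way to feel why a streak is such weak evidence is to simulate funds that have no skill at all. The sketch below is an illustration under a deliberately crude assumption (quarterly returns as independent coin flips, which real markets are not); even so, it shows how easily skill-free funds produce streaks long enough to “look like” winners.

    # How often does a "winning streak" appear with zero skill involved?
    # Deliberately crude assumption: each quarter is an independent 50/50
    # up-or-down draw -- not a model of real markets.
    import random

    random.seed(1)
    n_funds, n_quarters, streak = 1000, 12, 6

    lucky = 0
    for _ in range(n_funds):
        run = best = 0
        for _ in range(n_quarters):
            run = run + 1 if random.random() < 0.5 else 0
            best = max(best, run)
        if best >= streak:
            lucky += 1

    print(f"{lucky} of {n_funds} skill-free funds posted "
          f"a {streak}-quarter winning streak")

Under this toy model, roughly 6% of funds can be expected to post a six-quarter run by luck alone, which means dozens of apparent “winners” in a universe of a thousand funds that have no skill whatsoever.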

How It Affects Medical Diagnosis

Doctors are not immune. When a physician has recently seen or studied a particular disease, patients who vaguely resemble that condition can get misdiagnosed. In a randomized controlled trial, one group of physicians was given information about dengue fever before reviewing patient cases. The other group received no priming. Among the primed doctors, 71% to 100% of their misdiagnoses were cases incorrectly labeled as dengue fever, even when the actual condition was measles, scarlet fever, or typhoid. Their diagnostic accuracy dropped significantly compared to the control group. The disease they’d just been thinking about became their mental prototype, and symptoms that loosely matched it got funneled into that category.

This effect matters because the consequences are concrete: wrong diagnoses lead to wrong treatments and delayed care. It’s one reason medical training increasingly emphasizes structured checklists and deliberate reflection before settling on a diagnosis.

Representative Bias in Hiring

Hiring decisions run on prototypes. Interviewers carry a mental image of what a “good fit” looks like for a given role, and candidates get evaluated against that image rather than against objective criteria. Research from Lehigh University describes how stereotypes function as cognitive shortcuts during hiring. Assumptions about a candidate’s commitment, capability, or suitability get made based on identity rather than qualifications.

The bias is strongest when selection criteria are vague or ambiguous. If a hiring committee hasn’t clearly defined what “qualified” means before reviewing applications, interviewers default to pattern-matching: does this person seem like the kind of person who does well here? That opens the door to speculation about resume gaps, personal circumstances, or other factors that have nothing to do with job performance. One practical countermeasure is making evaluation criteria explicit and identical for every applicant before any resumes are reviewed.
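
One way to make that countermeasure concrete is a scoring rubric fixed in advance and applied identically to every candidate. A minimal Python sketch of the idea (the criteria, weights, and 1–5 scale are invented for illustration, not drawn from the cited research):

    # A fixed rubric: every candidate is scored on the same criteria,
    # defined before any applications are reviewed. The criteria and
    # weights here are invented for illustration.
    RUBRIC = {
        "relevant_experience":  0.4,
        "technical_assessment": 0.4,
        "communication":        0.2,
    }

    def score(ratings: dict) -> float:
        """Weighted score from per-criterion ratings on a 1-5 scale.
        Looking up every rubric key means a candidate cannot be judged
        on a partial or ad hoc set of criteria."""
        return sum(weight * ratings[criterion]
                   for criterion, weight in RUBRIC.items())

    print(f"{score({'relevant_experience': 4,
                    'technical_assessment': 3,
                    'communication': 5}):.1f}")   # 3.8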

Reducing Representative Bias

You can’t eliminate this bias entirely because it’s baked into how your brain processes information quickly. But you can weaken its grip. Research published in BMJ Quality & Safety outlines several strategies that have been tested in clinical and experimental settings, and they apply well beyond medicine.

  • Consider the opposite. Before committing to a judgment, deliberately look for evidence that supports the alternative. In experiments, this “consider-the-opposite” procedure reduced biased assessments of personality traits and other snap judgments.
  • Learn about base rates. People trained in basic statistical reasoning commit fewer base rate errors. Even a rough awareness that background probabilities matter can shift your thinking from “this matches my expectation” to “but how common is this actually?” A reusable sketch of this calculation appears after this list.
  • Use checklists and structured processes. Forcing yourself to collect data systematically, rather than going with a gut-level “spot diagnosis,” ensures you don’t overlook information that doesn’t fit your initial impression. This principle has reduced errors in fields from surgery to aviation.
  • Pause and reflect. Deliberately disengaging from your first intuitive judgment and running through an analytical check has been shown to improve accuracy in difficult decisions. The goal isn’t to distrust your instincts entirely, but to treat them as a hypothesis rather than a conclusion.
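
The base-rate habit can be turned into a mechanical step. Below is a minimal, generic version of the taxi cab arithmetic (the function name and signature are illustrative, not from the cited research); the useful part is that it cannot run until you have supplied the base rate.

    def posterior(base_rate, hit_rate, false_alarm_rate):
        """P(hypothesis | evidence) via Bayes' theorem.

        base_rate:        P(hypothesis) before the evidence arrives
        hit_rate:         P(evidence | hypothesis is true)
        false_alarm_rate: P(evidence | hypothesis is false)
        """
        numerator = base_rate * hit_rate
        return numerator / (numerator + (1 - base_rate) * false_alarm_rate)

    # The taxi cab numbers recover the 41% figure from earlier:
    print(f"{posterior(base_rate=0.15, hit_rate=0.80, false_alarm_rate=0.20):.0%}")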

The core insight behind representative bias is simple but hard to internalize: resemblance is not probability. Something can look exactly like what you’d expect and still be statistically unlikely. Training yourself to notice the gap between “this fits my mental picture” and “this is actually probable” is one of the most useful thinking skills you can develop.