An illusory correlation is the perception of a relationship between two things that either aren’t connected at all, are connected less strongly than you believe, or are actually connected in the opposite direction from what you perceive. The term was coined by psychologist Loren Chapman in 1967, and it describes one of the most persistent ways human thinking goes wrong. It affects everything from how you judge other people to how you interpret medical symptoms, financial patterns, and everyday coincidences.
How Illusory Correlations Form
There are two main routes to seeing a connection that isn’t there, and they work through different mental processes.
The first is expectancy-based. You already believe two things are related, so you notice and remember evidence that confirms the link while overlooking evidence that contradicts it. If you think rainy weather makes your joints ache, you’ll remember the days it rained and you felt stiff, but forget the many rainy days you felt fine or the sunny days your joints hurt. Your prior belief acts as a filter on your memory.
The second is distinctiveness-based, and it’s subtler. When two unusual things happen at the same time, your brain encodes that pairing more deeply than ordinary combinations of events. Because that pairing is easier to recall later, you overestimate how often it actually occurred. Your brain mistakes memorability for frequency.
Why Unusual Events Stick Together
The distinctiveness-based route was mapped out in a landmark 1976 study by David Hamilton and Robert Gifford. Their explanation centers on how memory handles rare events. When two infrequent things co-occur, their shared rarity makes the combination stand out. That distinctive pairing gets encoded more strongly than common, everyday pairings. Later, when you try to estimate how often those two things appeared together, you’re drawing on what’s available in memory. The rare combination is disproportionately available, so you overestimate its frequency.
This is closely related to what psychologists call the availability heuristic: the mental shortcut where you judge how common something is based on how easily examples come to mind. Vivid, unusual, or emotionally charged events spring to mind faster, so they feel more frequent than they are. Illusory correlation is what happens when that shortcut gets applied to pairs of events rather than single ones.
Where This Goes Wrong With Groups
The most studied real-world consequence of illusory correlation is its role in forming stereotypes. The mechanism is surprisingly simple. In a typical experiment, participants read descriptions of behaviors performed by members of two groups: a majority group (more members, more total behaviors described) and a minority group (fewer members, fewer behaviors). The ratio of positive to negative behaviors is kept exactly the same for both groups. There is no actual difference in how the groups behave.
Despite this, participants consistently rate the minority group more negatively. They overestimate how many negative behaviors minority group members performed and judge those members as less likable. The reason traces back to distinctiveness: the minority group is already less common, and negative behaviors are less common (most described behaviors are positive). When a minority group member does something negative, both elements are infrequent, so the combination stands out. That pairing gets locked into memory more firmly, and at judgment time, it’s easier to recall.
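The arithmetic behind this design can be sketched in a few lines. The tallies below use the behavior counts usually reported for this paradigm (18 positive and 8 negative behaviors for the majority group, 9 and 4 for the minority, so both ratios are 2.25:1); the recall-weighting model at the end is purely illustrative, an assumption about how distinctiveness might skew memory, not something measured in the original study.

```python
# Behavior counts in the style of the Hamilton & Gifford (1976) design.
# The positive:negative ratio is identical for both groups (2.25 to 1).
counts = {
    ("majority", "positive"): 18,
    ("majority", "negative"): 8,
    ("minority", "positive"): 9,
    ("minority", "negative"): 4,
}

def negative_rate(group):
    neg = counts[(group, "negative")]
    return neg / (neg + counts[(group, "positive")])

# Objectively, the two groups behave identically.
assert abs(negative_rate("majority") - negative_rate("minority")) < 1e-9

# A toy memory model (an assumption, not part of the study): the doubly
# rare pairing -- minority group AND negative behavior -- gets an extra
# encoding weight, so remembered frequencies are skewed.
DISTINCTIVENESS_BOOST = 1.5  # hypothetical recall advantage

def remembered(group, valence):
    distinctive = (group, valence) == ("minority", "negative")
    return counts[(group, valence)] * (DISTINCTIVENESS_BOOST if distinctive else 1.0)

def remembered_negative_rate(group):
    neg = remembered(group, "negative")
    return neg / (neg + remembered(group, "positive"))

print(f"actual negative rate, both groups: {negative_rate('minority'):.2f}")
print(f"remembered minority negative rate: {remembered_negative_rate('minority'):.2f}")
```

Even a modest encoding advantage for the distinctive pairing is enough to make the minority group's remembered negative rate drift upward while the underlying behavior stays identical.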
This finding has serious implications. It suggests that negative stereotypes about social minorities can form even without any real behavioral differences, simply because the brain processes rare-plus-rare combinations differently. No prejudice or malice is required at the input stage. The bias is baked into how memory works.
How the Brain Misjudges Patterns
Research on how people evaluate relationships between events (using what scientists call contingency tables, essentially tallies of how often two things do or don’t co-occur) reveals specific patterns in where judgment breaks down. People tend to overestimate the connection between two things when both occur frequently. If you take a supplement most days and you feel good most days, you’re likely to conclude the supplement is helping, even if the data shows no real relationship. The sheer volume of “both present” instances overwhelms your ability to weigh the cases where you felt good without the supplement or took it and still felt lousy.
These dual-frequent cases are the hardest for people to evaluate correctly: when both the potential cause and the potential outcome are common, their overlap is large just by chance, and people consistently mistake that coincidental overlap for a meaningful connection. This is why so many folk remedies and superstitions persist: the “treatment” is common, the “recovery” is common, and the frequent co-occurrence feels like proof.
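The chance-overlap point can be made concrete with a small contingency table and the phi coefficient (the standard correlation measure for a 2x2 table). The tallies below are hypothetical: a "treatment" and a "recovery" that each occur 80% of the time, co-occurring on 64 of 100 days, exactly what independence predicts.

```python
import math

def phi(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table.

    a = both present, b = first only, c = second only, d = neither.
    Ranges from -1 to 1; 0 means no association.
    """
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Hypothetical tallies: treatment on 80/100 days, recovery on 80/100 days.
# They co-occur 64 times -- a big, memorable overlap, but exactly what
# chance predicts for two independent events (0.8 * 0.8 = 0.64).
print(phi(64, 16, 16, 4))  # 0.0 -- no association at all
```

Sixty-four vivid "took the remedy and got better" days feel like overwhelming evidence, yet the full table shows zero association, which is precisely the judgment people fail to make from memory alone.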
Everyday Examples
Illusory correlations show up in contexts you might not expect. A manager who believes younger employees are less committed may remember every time a twenty-something left early but not notice when older employees did the same thing. An investor who had two bad trades on Mondays might start avoiding Monday trades entirely, even though the day of the week has no relationship to market outcomes. A nurse might believe that emergency rooms get busier during full moons because the chaotic full-moon shifts are memorable, while the quiet ones aren’t.
In medicine, both patients and clinicians can fall into this trap. If a particular food and a symptom flare-up coincide a few memorable times, you may become convinced of a link that controlled testing wouldn’t support. The emotional weight of the symptom makes those co-occurrences vivid, and vividness drives availability in memory.
What Makes It So Hard to Correct
Illusory correlations are stubborn because they’re self-reinforcing. Once you believe a connection exists, your expectations bias what you notice and remember going forward, turning the distinctiveness-based error into an expectancy-based one. The initial false pattern becomes a lens through which you interpret new information, and disconfirming evidence gets filtered out.
Even showing people the raw data often isn’t enough. Studies on contingency judgment find that when participants are walked through the numbers, their subjective estimates improve, but they rarely fully align with the actual statistics. The pull of memorable, distinctive co-occurrences remains strong even in the face of contradictory evidence. This is part of why stereotypes, superstitions, and pseudoscientific beliefs are so resistant to correction: they feel true because memory makes them feel frequent.
The most effective countermeasure is structured record-keeping. When you track all four possible combinations (both present, first present but second absent, first absent but second present, both absent) rather than relying on memory, the illusion loses its power. It’s the difference between “I feel like these two things go together” and “here’s what actually happened across 50 instances.” The gap between those two is often striking.
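A minimal sketch of that record-keeping, under assumed data: log each instance as a pair of booleans, tally all four cells, and compare the outcome rate with and without the suspected cause. The 50-instance log below is invented for illustration.

```python
from collections import Counter

def association(log):
    """log: list of (cause_present, outcome_present) boolean pairs.

    Returns the outcome rate when the cause was present vs. absent.
    """
    cells = Counter(log)
    a = cells[(True, True)]    # both present
    b = cells[(True, False)]   # cause only
    c = cells[(False, True)]   # outcome only
    d = cells[(False, False)]  # neither
    p_given_cause = a / (a + b) if a + b else 0.0
    p_given_no_cause = c / (c + d) if c + d else 0.0
    return p_given_cause, p_given_no_cause

# 50 hypothetical logged days. Memory would fixate on the 16 "both present"
# days; the tally shows the outcome is just as likely without the cause.
log = ([(True, True)] * 16 + [(True, False)] * 4
       + [(False, True)] * 24 + [(False, False)] * 6)
with_cause, without_cause = association(log)
print(with_cause, without_cause)  # 0.8 0.8
```

When the two conditional rates match, the "pattern" was an illusion; the log does the comparison that memory, biased toward the distinctive cell, cannot.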