Overconfidence is a cognitive bias in which people’s subjective certainty about their knowledge, abilities, or predictions exceeds their actual accuracy. It’s one of the most robust and well-documented findings in behavioral psychology, showing up in everything from trivia questions to medical diagnoses to financial trading. Psychologists generally break overconfidence into three distinct varieties, each with different triggers and consequences.
The Three Types of Overconfidence
A widely used framework identifies three ways overconfidence shows up in human thinking: overestimation, overplacement, and overprecision.
Overestimation is the simplest form. It’s when you think you did better than you actually did. A student walks out of an exam sure they scored a 90 and gets back a 72. A project manager budgets three months for a task that takes seven. People tend to overestimate most on difficult tasks and, interestingly, sometimes underestimate themselves on easy ones.
Overplacement is the “better-than-average” effect. This is the tendency to rank yourself above others on traits like intelligence, driving skill, or leadership ability. Research consistently finds that people rate themselves significantly above the midpoint on most abilities, especially ones perceived as easy or common. Getting along with others, spoken communication, and leadership show particularly large effects. The logic seems to be: if most people can do something, the “average” person who comes to mind is someone doing it poorly, making your own performance look good by comparison. The effect is strongest for easy, familiar tasks and weakest (or even reversed) for genuinely hard skills like advanced math or athletics.
Overprecision is the least intuitive but possibly the most pervasive type. It’s an unjustified certainty about the accuracy of your beliefs. In a classic test, people are asked to give a range they’re 90% sure contains the correct answer to a factual question (like the length of the Nile). If people were well calibrated, the true answer would fall inside their range 90% of the time. In practice, hit rates often land around 50%, meaning people set their ranges far too narrow. In one set of experiments, 90% confidence intervals contained the correct answer only about 54% of the time. People think they know more precisely than they do, leaving too little room for surprise.
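To make the scoring concrete, here is a minimal sketch in Python of how such a calibration test might be graded; the quiz items and the respondent’s intervals are made up for illustration, and real studies use many more questions.

```python
# Minimal sketch of how a calibration test is scored. The quiz items and
# the respondent's 90% intervals below are invented for illustration;
# real calibration studies use many more questions.

# (true_answer, lower_bound, upper_bound) for each quiz item
responses = [
    (6650, 5000, 6000),    # length of the Nile in km: interval misses
    (8849, 8000, 9000),    # height of Everest in m: interval hits
    (1969, 1965, 1975),    # year of the Moon landing: interval hits
    (331, 500, 2000),      # speed of sound in m/s at 0 C: interval misses
    (1912, 1900, 1920),    # year the Titanic sank: interval hits
]

hits = sum(1 for truth, low, high in responses if low <= truth <= high)
hit_rate = hits / len(responses)

# Well-calibrated 90% intervals should contain the truth ~90% of the time.
print(f"Hit rate: {hit_rate:.0%} across {len(responses)} questions")
```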
Why the Brain Defaults to Overconfidence
Several psychological mechanisms keep overconfidence in place. The most prominent is confirmation bias: the tendency to seek, notice, and remember information that supports what you already believe. Neuroscience research has identified selective integration of choice-consistent information as a key mechanism. Once you form an initial judgment, your brain preferentially processes evidence that confirms it and discounts evidence against it.
This filtering gets stronger the more confident you are. After high-confidence decisions, people show a pronounced confirmation bias, making them especially resistant to corrective information. After low-confidence decisions, the system opens up and becomes more receptive to new data. In a sense, confidence acts as an internal control signal: high confidence locks in your position, low confidence keeps you flexible.
This isn’t entirely a flaw. Modeling studies have found that when people have good self-awareness about when they’re right and wrong (strong metacognition), selectively weighting confirming evidence can actually improve performance compared to treating all evidence equally. The problem is that most people’s metacognition isn’t that precise, so the filtering tends to entrench errors rather than protect correct judgments.
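As a rough illustration of why metacognitive quality matters, here is a toy Python simulation; the gating rule and all parameters are invented, and real modeling work uses richer Bayesian observers. An agent makes an initial judgment, then down-weights disconfirming evidence in proportion to its confidence, where confidence either tracks the strength of the initial evidence ("good" metacognition) or is random noise ("poor"):

```python
import random

# Toy simulation of confidence-gated selective evidence integration.
# The parameters and the gating rule are invented for illustration; the
# modeling literature uses richer Bayesian observers. "none" applies no
# filtering (equal weighting), "good" ties confidence to initial evidence
# strength, and "poor" draws confidence at random.

random.seed(0)
TRIALS, CUES, NOISE, DISCOUNT = 20000, 6, 1.5, 0.8

def run(metacognition):
    correct = 0
    for _ in range(TRIALS):
        s = random.choice([-1, 1])            # true state of the world
        x0 = s + random.gauss(0, NOISE)       # initial noisy evidence
        choice = 1 if x0 > 0 else -1          # initial judgment
        if metacognition == "good":
            conf = min(abs(x0) / 2, 1.0)      # tracks evidence strength
        elif metacognition == "poor":
            conf = random.random()            # unrelated to accuracy
        else:
            conf = 0.0                        # no filtering at all
        total = x0
        for _ in range(CUES):
            x = s + random.gauss(0, NOISE)    # new piece of evidence
            weight = 1.0
            if (x > 0) != (choice > 0):       # disconfirming cue
                weight = 1.0 - DISCOUNT * conf  # discount it when confident
            total += weight * x
        correct += (total > 0) == (s > 0)
    return correct / TRIALS

for mode in ("none", "good", "poor"):
    print(f"metacognition={mode:<5} accuracy={run(mode):.3f}")
```

In this simplified setup equal weighting is statistically optimal, so the filter can only cost accuracy; the instructive part is that the cost is larger when confidence carries no information about whether the initial choice was right, because random gating suppresses exactly the corrective evidence that wrong choices need.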
Motivation plays a role too. People want to see themselves favorably, and that desire subtly shapes how they interpret ambiguous feedback. A vague compliment becomes evidence of exceptional talent; a poor outcome gets attributed to bad luck.
The Dunning-Kruger Effect
The most famous illustration of overconfidence comes from psychologists David Dunning and Justin Kruger. Across four studies, participants who scored in the bottom quartile on tests of humor, grammar, and logic estimated themselves to be around the 62nd percentile, despite actually performing at the 12th percentile. That’s a gap of 50 percentage points between perceived and actual ability.
The pattern held consistently: the bottom two quartiles overestimated their skills, with the least competent group overestimating the most. Meanwhile, top-quartile performers slightly underestimated themselves. The explanation is that the skills needed to do well on these tasks are the same skills needed to recognize what “doing well” looks like. Without that metacognitive ability, poor performers lack the tools to notice their own mistakes.
Experts Aren’t Immune
You might expect that expertise would cure overconfidence. It doesn’t, at least not reliably. Research on calibration (how well confidence matches accuracy) finds that both laypeople and experts are systematically overconfident about the precision of their judgments. Overconfidence is most extreme on difficult tasks, and experts are typically consulted precisely for the hardest, most uncertain questions, the ones where other data sources aren’t available. This creates a paradox: the situations where expert judgment matters most are the same situations where it’s most likely to be overconfident.
The Bias Blind Spot
Overconfidence is reinforced by a related phenomenon called the bias blind spot. People readily spot cognitive biases in others while rating themselves as less susceptible to those same biases. This is partly a metacognitive error, but research suggests it may also stem from how people evaluate themselves versus others. When assessing your own thinking, you rely heavily on introspection, reviewing your internal reasoning and finding it sound. When assessing someone else, you rely on observable behavior, which makes their biases more visible. The result is that you walk away convinced that your judgment is clearer than most people’s, which is itself a form of overplacement.
Real-World Consequences
Overconfidence has measurable effects in high-stakes settings. In medicine, an estimated 75% of diagnostic errors in internal medicine are attributed to cognitive biases, and overconfidence is considered one of the most significant contributors. The specific mechanism is premature diagnostic closure: a physician settles on a diagnosis too early, feels confident in it, and stops considering alternatives. Overconfident clinicians may not fully appreciate the range of findings a patient presents because they believe their initial interpretation is correct.
In finance, overconfident investors trade more frequently and incur higher transaction costs, with both effects increasing as overconfidence levels rise. The theoretical prediction is straightforward: excessive trading racks up fees that eat into returns. Interestingly, the empirical picture is more complicated than expected. Some recent research has found that overconfident investors don’t always underperform, and in some cases their returns actually increase with higher confidence levels, possibly because confidence drives them to take on risk that happens to be rewarded in rising markets. Still, the higher transaction costs are consistent and well-documented, and the increased risk exposure cuts both ways.
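A back-of-the-envelope sketch makes the cost mechanism concrete. Every number below (gross return, cost per unit of turnover, turnover levels) is an invented placeholder, not an estimate from the research literature:

```python
# Back-of-the-envelope illustration of fee drag from excess trading.
# Every number here (gross return, cost per unit of turnover, turnover
# levels) is an invented placeholder, not an estimate from the literature.

gross_annual_return = 0.08   # hypothetical market return before costs
cost_per_turnover = 0.01     # hypothetical all-in cost per 100% turnover
years = 30
start = 100_000

for turnover in (0.2, 1.0, 3.0):   # calm, active, and overconfident trader
    net = gross_annual_return - cost_per_turnover * turnover
    final = start * (1 + net) ** years
    print(f"turnover {turnover:.0%}/yr -> net return {net:.1%}/yr -> "
          f"${final:,.0f} after {years} years")
```

Even with these placeholder numbers, modest differences in annual drag compound into six-figure gaps over three decades, which is why the transaction-cost finding is considered so robust.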
Strategies That Reduce Overconfidence
Overconfidence is stubborn, but a few techniques have shown real effects. One of the most practical is the pre-mortem. Instead of asking a team “how will this succeed?”, a pre-mortem starts with the premise that the project has already failed. The prompt is simple: “It’s 2035. This project failed. Why?” Team members brainstorm causes of failure, discuss how those problems might develop, group them into categories, and then work backward to the present to prioritize risks and identify actions to take now. By making failure the default scenario, the exercise bypasses the confirmation bias that normally filters out negative possibilities.
Another effective approach targets overprecision directly. Traditional confidence intervals (where you set upper and lower bounds) consistently produce overconfident ranges. An alternative technique called SPIES (Subjective Probability Interval Estimates), which breaks the probability space into multiple bins rather than asking for a single range, dramatically improves calibration. In experiments, the standard 90% confidence interval method captured the correct answer only about 54% of the time, while SPIES achieved hit rates around 74% to 77%, much closer to the intended 90%.
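Here is a minimal sketch of the binning idea in Python. The question, bins, and probabilities are invented, and trimming 5% from each tail is just one simple way to recover a 90% interval from the binned distribution, not necessarily the exact procedure used in the published studies:

```python
# Sketch of the binning idea behind SPIES. The respondent spreads 100%
# of their probability over bins covering the whole answer space, and an
# interval is derived from that distribution. The question, bins, and
# probabilities are invented; trimming 5% from each tail is one simple
# way to recover a 90% interval, not necessarily the published procedure.

# Question: length of the Nile in km. Each entry is (bin_low, bin_high, prob).
bins = [
    (0, 2000, 0.05),
    (2000, 4000, 0.15),
    (4000, 6000, 0.35),
    (6000, 8000, 0.30),
    (8000, 10000, 0.15),
]
assert abs(sum(p for _, _, p in bins) - 1.0) < 1e-9

def quantile(q):
    """Linearly interpolate the q-th quantile within the binned distribution."""
    cumulative = 0.0
    for low, high, p in bins:
        if cumulative + p >= q:
            fraction = (q - cumulative) / p   # how far into this bin
            return low + fraction * (high - low)
        cumulative += p
    return bins[-1][1]

low, high = quantile(0.05), quantile(0.95)
print(f"Derived 90% interval: {low:,.0f} to {high:,.0f} km")
```

Because the respondent must place probability everywhere, including the tails, the derived interval tends to come out wider than one produced by naming two bounds directly, which is the intuition behind the technique’s calibration advantage.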
On an individual level, actively seeking out disconfirming evidence is the most straightforward countermeasure. This means deliberately asking “what would change my mind?” before committing to a judgment. The challenge is that high confidence suppresses exactly this kind of thinking, so building the habit when stakes are low is key to deploying it when stakes are high.

