Omission bias is the tendency to judge harmful actions as worse than equally harmful inactions. If doing nothing and doing something both lead to the same bad outcome, most people will view the person who acted as more blameworthy than the person who stood by. This pattern shapes decisions in medicine, public policy, personal finance, and everyday moral reasoning, often in ways people don’t recognize.
How Omission Bias Works
The core of omission bias is a gap in how people assign cause and blame. When someone actively does something that leads to harm, observers draw a straight line from action to outcome. But when someone fails to act, and that failure causes the same harm, the connection feels weaker. The outcome of an omission appears less intended than the outcome of an action, even when the person fully understood what would happen.
In foundational experiments by psychologists Mark Spranca, Elisa Minsk, and Jonathan Baron, participants read scenarios where two people faced identical choices with identical consequences. One person caused harm by acting, the other by not acting. Intentions, motives, and outcomes were held constant. Participants consistently rated the person who acted as more immoral. This wasn’t simply an exaggerated reaction to action: when the outcomes were positive rather than harmful, the reverse effect didn’t appear. A few participants were even willing to accept greater overall harm just to avoid being the one who acted.
A 2021 meta-analysis found a moderate but consistent effect (g = 0.45), with the bias showing up more strongly in moral judgments and blame than in actual decisions. In other words, people are somewhat better at overriding the bias when making their own choices than when evaluating someone else’s behavior.
Why the Brain Favors Inaction
Several psychological forces feed omission bias. The most important is perceived causality. When you act and something goes wrong, you clearly caused it. When you don’t act and the same thing goes wrong, other causes fill the gap. The outcome would have happened anyway if you hadn’t been there at all, or if you simply hadn’t known about it. That makes inaction feel less like a decision and more like the natural course of events.
Anticipated regret plays a major role too. People are motivated to avoid negative emotions, and the regret from a bad outcome you caused by acting feels sharper and more personal than regret from a bad outcome you allowed by doing nothing. This asymmetry in expected emotional pain pushes people toward inaction even when the math favors taking a risk.
From an evolutionary perspective, there may be a practical reason omissions are judged less harshly: they leave little material evidence. A harmful action creates witnesses, physical traces, and a clear narrative of blame. A harmful omission is harder to prove. If third parties consistently punish actions more than omissions, then choosing inaction becomes a rational strategy for avoiding social consequences. The bias in moral judgment, in this view, reflects a real difference in how easy it is to hold someone accountable.
Omission Bias vs. Status Quo Bias
Omission bias is often confused with status quo bias, and they do frequently overlap, but they’re distinct. Status quo bias is a preference for the current state of affairs over any change. Omission bias is a preference for inaction over action, regardless of whether inaction preserves the status quo. In most real-world situations, doing nothing and maintaining the status quo are the same thing, so both biases push in the same direction. But they can be separated. When all available options involve change (so there’s no status quo to preserve), omission bias can still appear as a preference for whichever option feels more passive.
Research illustrates this split neatly: people prefer improving the environment from a polluted state over preventing an equal decline from a clean state. Both options involve taking action, so this isn’t omission bias. It’s status quo bias, a pull toward restoring what feels like the natural baseline. Understanding the distinction matters because the two biases respond to different interventions.
Vaccination and Medical Decisions
Omission bias has its most visible real-world impact in vaccine hesitancy. A vaccine carries a small risk of side effects. Not vaccinating carries a risk of disease. When both risks are weighed rationally, vaccination is almost always the safer bet. But omission bias warps the calculation. If your child has a bad reaction to a vaccine you chose to give, that feels like something you caused. If your child catches a preventable disease because you skipped the vaccine, that feels like something that happened to them. The harm is equal, but the sense of personal responsibility is not.
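To make the asymmetry concrete, here is a minimal sketch of the expected-harm comparison the bias distorts. All of the probabilities and the harm score are purely illustrative placeholders, not real vaccine statistics; the point is the structure of the calculation, not the numbers.

```python
# Illustrative expected-harm comparison for a vaccinate / don't-vaccinate
# decision. Every number below is hypothetical, chosen only to show the
# shape of the calculation -- these are NOT real vaccine statistics.

p_side_effect = 0.001         # chance of a serious vaccine side effect (assumed)
p_disease = 0.05              # chance of catching the disease if unvaccinated (assumed)
p_harm_given_disease = 0.10   # chance the disease causes serious harm (assumed)

harm = 1.0  # treat both harms as equally severe, as in the classic scenarios

expected_harm_act = p_side_effect * harm
expected_harm_omit = p_disease * p_harm_given_disease * harm

print(f"Expected harm if you vaccinate (act):   {expected_harm_act:.4f}")
print(f"Expected harm if you do nothing (omit): {expected_harm_omit:.4f}")
# With these made-up numbers, inaction carries five times the expected harm,
# yet omission bias makes the "caused" 0.001 loom larger than the "allowed" 0.005.
```

The bias, in effect, multiplies the first number by a sense of personal responsibility and discounts the second, so the comparison feels closer than it is.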
This pattern is reinforced by what researchers call “protected values,” deeply held beliefs that certain principles (like a parent’s right to refuse medical interventions) shouldn’t be compromised regardless of consequences. Protected values make people more willing to accept worse outcomes from inaction because the decision not to act feels like a principled stand rather than a choice with trade-offs.
Omission Bias in Healthcare Systems
Beyond individual medical choices, omission bias shapes how healthcare professionals report errors. Many physicians and nurses don’t consider near misses or medication omissions to be reportable events. In a survey across six South Australian hospitals, more than 40% of staff had never filed an incident report despite 98.3% knowing the reporting system existed. Among 338 internal medicine physicians and residents in another study, 84.3% believed reporting errors improved quality, yet their actual reporting rate was only 16.9% for minor errors and 3.8% for major ones.
The pattern is consistent across countries and hospital systems. Errors of omission, things that should have been done but weren’t, are systematically underreported compared to errors of commission, things that were done incorrectly. A medication that was never administered feels less like an “error” than a medication given at the wrong dose, even when the consequences are identical. This reporting gap means healthcare systems have an incomplete picture of where failures actually occur.
Recognizing It in Your Own Thinking
Omission bias tends to be strongest when decisions involve uncertainty, when outcomes are emotional rather than financial, and when you’re evaluating someone else’s choices rather than making your own. You’re most vulnerable to it when a decision involves potential harm either way and one option feels like “not doing anything.”
The most effective way to counteract omission bias is to reframe inaction as a choice with consequences. Instead of asking “should I do X?” try asking “what happens if I do X versus what happens if I don’t?” Listing the expected outcomes of both action and inaction side by side forces the comparison that omission bias obscures. It also helps to focus on outcomes rather than on who or what caused them. When you catch yourself thinking “at least I didn’t cause it,” that’s the bias talking.
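One way to force that side-by-side comparison is to write both branches down explicitly and score them the same way. The sketch below shows the idea; the outcomes, probabilities, and values are placeholders you would replace for your own decision.

```python
# Side-by-side framing: enumerate the outcomes of BOTH branches of a decision,
# including the "do nothing" branch, so inaction is scored like any other choice.
# All outcomes, probabilities, and values here are illustrative placeholders.

def expected_value(outcomes):
    """Sum of probability-weighted values over (label, probability, value) triples."""
    return sum(p * v for _, p, v in outcomes)

act = [
    ("works as intended", 0.90, +10.0),
    ("backfires",         0.10, -20.0),
]
do_nothing = [
    ("problem resolves on its own", 0.30,   0.0),
    ("problem gets worse",          0.70, -15.0),
]

for name, branch in [("act", act), ("do nothing", do_nothing)]:
    print(f"{name:>10}: expected value = {expected_value(branch):+.1f}")
    for label, p, v in branch:
        print(f"{'':12}{label}: p={p:.2f}, value={v:+.1f}")
```

Writing both branches in the same format is the whole trick: once “do nothing” has its own list of outcomes and probabilities, it stops feeling like the natural course of events and starts looking like what it is, one option among others.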
Omission bias is sometimes described as an overgeneralization of a useful rule. In many everyday situations, not interfering is genuinely the safer choice, because you have incomplete information and your actions could make things worse. The bias becomes a problem when it’s applied to situations where you do have enough information, where the costs of inaction are clear, and where doing nothing is itself a decision with real consequences.