Bounded rationality is the idea that people don’t make perfectly logical decisions because they face real limits: not enough time, not enough information, and not enough mental processing power. The term was coined by economist and cognitive scientist Herbert Simon in 1957 as a direct challenge to the prevailing assumption in economics that humans act as perfectly rational agents who always maximize their self-interest.
The concept reframes how we think about decision-making. Rather than treating imperfect decisions as failures of logic, bounded rationality treats them as reasonable responses to the constraints every human brain operates under.
The Problem With “Perfect” Rationality
Classical economics rests on a model sometimes called “homo economicus,” a hypothetical person who has access to all relevant information, can process it instantly, and always chooses the option that maximizes their benefit. This model is useful for building economic theories, but it describes no actual human being.
Simon argued that real decision-makers face three fundamental constraints. First, the information available to them is almost always incomplete. You rarely know every option on the table, let alone the consequences of each one. Second, time is limited. Decisions often need to happen in minutes or seconds, not the infinite time horizon that perfect rationality assumes. Third, and perhaps most importantly, the human brain itself has processing limits. Working memory, the mental workspace where you hold and manipulate information in real time, can handle roughly three to four novel items at once. That’s a hard ceiling on how much complexity you can juggle when weighing a decision.
Simon’s insight was that these aren’t bugs in human cognition. They’re the basic operating conditions. Any useful theory of decision-making needs to account for them.
Satisficing: Choosing “Good Enough”
If people can’t optimize every decision, what do they actually do? Simon coined another term for it: satisficing, a blend of “satisfy” and “suffice.” Instead of exhaustively comparing every possible option to find the absolute best one, people tend to search through options until they find one that meets their minimum criteria, then stop.
Think about choosing a restaurant for dinner. A perfectly rational agent would review every restaurant in the city, compare menus, prices, reviews, travel time, and wait times, then select the mathematically optimal choice. Nobody does this. You think of a few places, maybe check a couple of reviews, and pick one that sounds good enough. That’s satisficing.
The alternative, maximizing, is the attempt to select the objectively best option. Research comparing the two styles reveals a paradox: maximizers tend to make objectively better choices on measurable criteria, but satisficers feel subjectively better about the choices they make. Simon’s framework suggests why: in most real-world situations, maximizing is maladaptive and satisficing is adaptive, because spending three hours researching the perfect restaurant doesn’t improve your evening enough to justify the time and mental energy it costs.
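The difference between the two strategies comes through clearly as a simple search procedure. Here’s a minimal Python sketch, with invented restaurant data and an arbitrary quality threshold standing in for “good enough”; none of the numbers are meant to be realistic.

```python
import random

# Hypothetical data: 200 restaurants with invented quality scores (0 to 100).
restaurants = [(f"Restaurant {i}", random.uniform(0, 100)) for i in range(200)]

def satisfice(options, threshold=70.0):
    """Search options in order; stop at the first one that clears the bar."""
    for searched, (name, score) in enumerate(options, start=1):
        if score >= threshold:
            return name, score, searched
    # If nothing clears the bar, settle for the best option seen.
    name, score = max(options, key=lambda o: o[1])
    return name, score, len(options)

def maximize(options):
    """Examine every option, then pick the single best one."""
    name, score = max(options, key=lambda o: o[1])
    return name, score, len(options)

for strategy in (satisfice, maximize):
    name, score, searched = strategy(restaurants)
    print(f"{strategy.__name__}: {name} (score {score:.0f}) "
          f"after checking {searched} of {len(restaurants)} options")
```

On a typical run, the satisficer settles after checking only a handful of options and gives up a few points of quality in exchange; the maximizer inspects all 200 every time. That trade, slightly worse outcomes for dramatically less search, is satisficing’s whole bargain.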
Heuristics: Mental Shortcuts That Usually Work
Bounded rationality doesn’t mean people stumble through decisions randomly. Instead, they rely on heuristics, simple rules of thumb that skip the heavy computation and still land on a reasonable answer most of the time. These shortcuts conserve scarce cognitive resources and are easy to apply under pressure.
Some heuristics are so ingrained you don’t notice them. “If a brand name is familiar, it’s probably reliable” is one. “If my doctor friend recommends something, I’ll trust it” is another. These aren’t logically airtight, but they work well enough in most everyday situations that they save enormous amounts of mental effort.
Daniel Kahneman and Amos Tversky built on Simon’s framework starting in the 1970s, cataloging specific heuristics and the predictable biases they produce. Their work mapped what Kahneman later popularized as two modes of thinking, System 1 and System 2: effortless intuition (fast, automatic, heuristic-driven) and deliberate reasoning (slow, effortful, more logical). Most decisions run on intuition. The biases that result, like overweighting recent experiences or being swayed by how a question is framed, aren’t signs of stupidity. They’re the natural side effects of a system that is, as Kahneman put it, “generally skilled and successful” but not infallible.
When Shortcuts Match the Situation
Psychologist Gerd Gigerenzer pushed the idea further with a concept called ecological rationality. The core argument: a heuristic isn’t inherently good or bad. Its quality depends on whether it fits the environment where it’s being used. A shortcut is ecologically rational to the degree that it matches the structure of the situation you’re in.
For example, the heuristic “go with the option you recognize” works surprisingly well when choosing between products in a competitive market, because widespread recognition often correlates with quality or reliability in that context. The same heuristic would be useless, or even harmful, if you were evaluating unfamiliar medical treatments where name recognition has nothing to do with effectiveness. The environment determines whether a given shortcut leads you astray or quietly delivers a solid answer.
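You can make that dependence on the environment concrete with a small simulation. The sketch below is illustrative only: it invents items whose chance of being recognized is either tied to their true quality or pure noise, and the coupling parameter is a made-up stand-in for how tightly recognition tracks quality in a given environment.

```python
import random

def make_environment(n_items=100, coupling=1.0):
    """Invent items with a true quality and a recognition flag.

    coupling controls how tightly recognition tracks quality:
    1.0 = recognition mirrors quality (a competitive consumer market),
    0.0 = recognition is pure noise (unfamiliar treatment names).
    """
    items = []
    for _ in range(n_items):
        quality = random.random()
        p_recognized = coupling * quality + (1 - coupling) * 0.5
        items.append((quality, random.random() < p_recognized))
    return items

def heuristic_accuracy(items, trials=10_000):
    """How often 'pick the recognized option' chooses the better item."""
    correct = total = 0
    for _ in range(trials):
        a, b = random.sample(items, 2)
        if a[1] == b[1]:
            continue  # the heuristic applies only when exactly one is recognized
        recognized, other = (a, b) if a[1] else (b, a)
        total += 1
        correct += recognized[0] > other[0]
    return correct / total if total else 0.0

for coupling in (1.0, 0.5, 0.0):
    accuracy = heuristic_accuracy(make_environment(coupling=coupling))
    print(f"coupling {coupling:.1f}: picks the better option {accuracy:.0%} of the time")
```

With strong coupling, “pick the recognized one” beats chance comfortably; with no coupling, it collapses to a coin flip. Same heuristic, different environments, opposite verdicts, which is Gigerenzer’s point in miniature.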
This perspective reframes heuristics not as cognitive errors to be corrected but as tools in a mental toolkit. Some fit the job, some don’t. The interesting question isn’t “why do people use shortcuts?” but “which shortcuts work best in which situations?”
Bounded Rationality in Medical Decisions
One of the clearest illustrations of bounded rationality in action comes from clinical medicine. Doctors face enormous information loads, time pressure, and high stakes, exactly the conditions where bounded rationality predicts people will rely on heuristics rather than exhaustive analysis.
A study examining how physicians actually reason during patient appointments found that their process looks nothing like the complex decision trees in clinical practice guidelines. Instead, doctors used simple, robust heuristics. They relied heavily on their social environment, reading how patients behaved and what they expected. They tested a diagnostic hypothesis by looking for confirming evidence while checking for obvious contradictions. And they reached a “saturation point,” a moment where they had enough information to act and stopped gathering more.
This approach carries real tradeoffs. A doctor who has recently seen several cases of a particular illness may overdiagnose that condition in the next patient, a well-known bias called the availability heuristic. Some physicians collect more data than necessary because of a conservative bias toward certainty. But the study’s broader finding was that these fast-and-frugal strategies generally worked. In clinical environments, the cost of trying to make a theoretically optimal decision (more tests, longer deliberation, delayed treatment) often exceeds the cost of making a satisfactory one quickly.
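The saturation point works like a stopping rule, and stopping rules are easy to sketch. The toy model below is not drawn from the study; the cue names, weights, prior, and threshold are all invented. It only shows the shape of the strategy: each piece of evidence pushes confidence toward or away from a working hypothesis, and the search halts the moment confidence crosses a threshold in either direction, instead of running every available test.

```python
import random

# Hypothetical diagnostic cues: (name, weight of evidence). Names and
# weights are invented for illustration; they come from no real guideline.
CUES = [("history", 0.30), ("exam finding", 0.25),
        ("bedside test", 0.20), ("lab panel", 0.15), ("imaging", 0.10)]

SATURATION = 0.6  # arbitrary "enough information to act" threshold

def workup(patient_has_condition, prior=0.2):
    """Consult cues in order; stop as soon as confidence saturates either way."""
    confidence = prior
    consulted = []
    for name, weight in CUES:
        # Each cue probabilistically confirms or contradicts the hypothesis.
        positive = random.random() < (0.9 if patient_has_condition else 0.2)
        confidence += weight if positive else -weight
        consulted.append(name)
        if confidence >= SATURATION:
            return "act on the diagnosis", consulted
        if confidence <= 0.0:
            return "rule it out", consulted
    return "keep investigating", consulted  # inconclusive after all cues

decision, used = workup(patient_has_condition=True)
print(f"{decision} after consulting: {', '.join(used)}")
```

In this framing, the biases the study describes correspond roughly to miscalibrated settings rather than a different kind of reasoning: the availability heuristic looks like a prior inflated by recent cases, and the conservative bias toward certainty looks like a threshold set too high.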
Why It Matters Beyond Economics
Bounded rationality has reshaped how researchers and practitioners think about decisions in fields far beyond its origins in economics. Policy designers use it to structure choices so that people’s natural shortcuts lead to better outcomes, an approach sometimes called “nudging.” Financial planners account for it when designing retirement savings plans, knowing that most people won’t calculate optimal contribution rates but will stick with a reasonable default.
On a personal level, understanding bounded rationality can change how you evaluate your own decisions. The feeling that you “should have” researched more, compared more options, or thought longer often assumes a standard of perfect rationality that no human brain can meet. In most situations, the decision that felt good enough probably was good enough. The mental energy you saved was available for something else. That’s not a flaw in your thinking. It’s your brain doing exactly what it evolved to do: making workable decisions with limited resources in a complex world.