What Is the Cobra Effect? When Incentives Backfire

The cobra effect describes what happens when a well-intentioned solution backfires and makes the original problem worse. The term comes from a colonial-era bounty program in Delhi, India, where a plan to reduce the cobra population actually increased it. It’s one of the most famous examples of a perverse incentive, and versions of it show up constantly in government policy, corporate management, healthcare, and environmental regulation.

The Original Cobra Bounty in Delhi

British colonial authorities in Delhi faced a serious cobra problem. Their solution was straightforward: pay a bounty to anyone who brought in a dead cobra. At first, it worked. People killed cobras and collected their reward.

Then locals realized it was far easier to breed cobras than to hunt wild ones hiding in cracks in walls and shadowy corners. Cobra farms sprang up. Breeders raised snakes, killed them, and turned in the bodies for bounty money. The authorities kept paying, and the breeders kept producing. When officials finally caught on and canceled the program, the breeders had no reason to keep their now-worthless cobras. They released them into the streets. Delhi ended up with more cobras than before the program started.

The story illustrates a pattern that repeats across centuries and contexts: the incentive targeted the wrong thing. It rewarded dead cobras (the metric) rather than actually reducing the wild cobra population (the goal). That gap between metric and goal is where the cobra effect lives.

The Hanoi Rat Massacre

A strikingly similar case played out in French colonial Hanoi in 1902. Authorities offered a bounty on rats infesting the city’s new sewer system. To make collection easy, they required only proof of a kill: a severed rat tail. Citizens quickly figured out they could cut off tails and release the rats alive to keep breeding. Others began farming rats outright. The campaign killed over 20,000 rats in June 1902 alone, yet the rat population kept growing. The bounty had turned rats from a nuisance into a commodity.

How It Shows Up in Modern Policy

The cobra effect isn’t just a historical curiosity. It operates anywhere a reward or penalty is attached to a measurable target that people can game.

One of the clearest modern examples involves carbon credits. The United Nations created a program to pay companies for destroying a potent greenhouse gas called HFC-23, a waste byproduct of manufacturing the refrigerant HCFC-22. The payments were so lucrative that companies in China and India ramped up production of the refrigerant specifically to generate more of the byproduct they’d be paid to destroy. Between 2004 and 2009, global production of the refrigerant nearly doubled, from 15 million to 28 million tons, tracking almost perfectly with the growth of the offset program. A policy designed to reduce harmful emissions had created a financial incentive to produce more of them.

In U.S. healthcare, the Hospital Readmissions Reduction Program penalized hospitals financially when patients returned within 30 days of discharge. Readmission rates did drop. But a retrospective analysis of more than 8 million Medicare hospitalizations found that 30-day mortality after discharge rose for patients with heart failure or pneumonia following the program’s announcement. The penalty gave hospitals a reason to avoid readmitting sick patients, and some of those patients may have died at home instead.

The Wells Fargo Case

Corporate incentive structures are fertile ground for the cobra effect. At Wells Fargo, branch managers were assigned daily quotas for the number of financial products sold to customers. If a branch missed its target, the shortfall rolled over and was added to the next day’s goals. Personal bankers could earn bonuses of 15 to 20 percent of their salary for hitting cross-selling targets, while tellers could earn up to 3 percent.
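The rollover mechanism compounds quickly. A minimal sketch, using invented numbers (the function name and figures below are illustrative, not Wells Fargo's actual quotas), shows how a branch that consistently sells below its base quota faces an ever-growing daily target:

```python
# Sketch with hypothetical numbers: a daily sales quota where any
# shortfall is carried over and added to the next day's target.
def rolling_quota(base_quota, daily_sales, days):
    """Return each day's target when unmet quota rolls forward."""
    carried = 0
    targets = []
    for _ in range(days):
        target = base_quota + carried
        targets.append(target)
        carried = max(0, target - daily_sales)  # unmet portion rolls over
    return targets

# A branch expected to sell 20 products a day but managing only 15:
print(rolling_quota(20, 15, 5))  # → [20, 25, 30, 35, 40]
```

Under this scheme the gap never resets: a steady 5-unit shortfall raises the target by 5 every day, so honest underperformance becomes mathematically unrecoverable and fabrication becomes the only way to hit the number.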

The pressure was relentless, and employees responded by opening millions of accounts that customers never asked for and didn’t know about. The incentive was supposed to deepen customer relationships. Instead, it produced mass fraud. As one senator put it during hearings, if a teller had simply stolen cash from the drawer, they’d face criminal charges. But the system squeezed employees until cheating customers became the path of least resistance.

Why Metrics Become Traps

The cobra effect is closely related to a principle known as Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. The idea is simple. Once people know they’ll be rewarded for hitting a number or punished for missing it, they optimize their behavior to hit that number, even if doing so undermines the original purpose.

Cobra bounties measured dead snakes, not snake population. Carbon credits measured gas destruction, not emissions reduction. Hospital penalties measured readmission rates, not patient survival. Wells Fargo measured accounts opened, not customer satisfaction. In every case, the metric was a reasonable proxy for the real goal, until people started optimizing for the metric itself. The proxy then diverged from reality, and decision-makers, still watching the metric, believed things were improving while the underlying problem got worse.
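The divergence between metric and goal can be made concrete with a toy model. The payoffs below are invented for illustration: assume effort split between real work (which moves both the metric and the goal) and gaming (which moves only the metric, three times as cheaply):

```python
# Sketch with invented payoffs: how a proxy metric diverges from the
# real goal once people are rewarded on the metric alone (Goodhart's Law).
def outcomes(gaming_fraction, effort=100):
    """Split a fixed effort budget between real work and gaming.

    Assumptions: real work advances the goal and the metric one-for-one;
    gaming advances only the metric, at an (assumed) 3x multiplier.
    """
    real = effort * (1 - gaming_fraction)
    gamed = effort * gaming_fraction
    metric = real + 3 * gamed   # what the incentive pays on
    goal = real                 # what the designer actually wanted
    return metric, goal

for f in (0.0, 0.5, 1.0):
    metric, goal = outcomes(f)
    print(f"gaming={f:.0%}  metric={metric:.0f}  goal={goal:.0f}")
```

With no gaming, metric and goal agree (100 and 100), so the metric looks like a trustworthy proxy. At full gaming the metric triples to 300 while the goal falls to zero: the dashboard shows record success precisely when the underlying problem is at its worst.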

This is what makes the cobra effect so persistent. The people designing incentives are usually thinking logically. Pay for dead cobras, get fewer cobras. Penalize readmissions, get better care. The logic holds only if you assume people will pursue the goal the way you intended. In practice, people find the cheapest path to the reward, and that path often has nothing to do with the goal.

Designing Incentives That Don’t Backfire

The cobra effect isn’t inevitable. It tends to emerge when policies are designed in isolation, targeting a single metric without considering how the broader system will respond. Systems thinking, an approach that maps how different parts of a problem interact, offers a useful corrective. The core principle is to bring the whole system into the discussion from the beginning, not just the piece you’re trying to fix. Changes made in one part of a system can adversely affect parts that weren’t initially considered.

In practical terms, this means asking a few questions before attaching rewards or penalties to any target. First, can the metric be achieved without achieving the goal? If yes, someone will find that shortcut. Second, what happens to the people or systems adjacent to the incentive? The hospital readmission penalty reduced readmissions but didn’t account for what happened to patients who weren’t readmitted. Third, what new market or behavior does the incentive create? The cobra bounty created a cobra-breeding market that didn’t exist before. If a policy makes a problem profitable, the problem will grow.

The most robust incentive systems measure outcomes from multiple angles, making it harder to game any single metric. They also build in monitoring for second-order effects and adjust quickly when unintended patterns emerge. The cobra breeders operated for some time before authorities noticed. Faster feedback loops can catch a cobra effect before it compounds.