When Did Healthcare Become an Issue in the US?

Healthcare has been a contested political issue in the United States for over a century, but it didn’t arrive as a single crisis. It emerged in waves, each one driven by a different problem: first access, then cost, then the question of who deserves coverage at all. The debate traces back to 1912, and every decade since has added new urgency without producing a lasting resolution.

The First Push: 1912 and the Progressive Era

The story starts in 1912, when Theodore Roosevelt and the Progressive Party endorsed social insurance, including health insurance, as part of their platform. That same year, the National Convention of Insurance Commissioners developed the first model state law for regulating health insurance. By 1915, the American Association for Labor Legislation had drafted a model bill to require health insurance for workers. It never passed. The United States entered World War I in 1917, and the proposal quietly died.

At this point, healthcare wasn’t the sprawling industry it is today. Most medical care was relatively simple, and most people paid doctors directly. The idea of insurance felt radical, more aligned with European welfare states than American individualism. But the seed was planted: if workers could be insured against workplace injury, why not against illness?

Truman’s Failed National Plan in 1945

The issue resurfaced with force after World War II. In November 1945, just seven months into his presidency, Harry Truman proposed a universal national health insurance program. He laid out five goals: increasing hospital capacity, expanding public health services (especially for mothers and children), boosting federal funding for medical research, reducing high medical costs for working people and the poor, and replacing wages lost during serious illness.

Truman’s plan asked every wage-earning American to pay monthly fees or taxes to cover medical expenses and lost income from illness or injury. He framed it as a matter of basic opportunity, telling Congress that “millions of our citizens do not now have a full measure of opportunity to achieve and enjoy good health.”

The American Medical Association killed it. Almost immediately, the AMA attacked the proposal as “socialized medicine” promoted by “followers of the Moscow party line,” capitalizing on Cold War anxieties that were just beginning to grip the country. The label stuck. National health insurance became politically toxic for nearly two decades.

Medicare, Medicaid, and the 1965 Breakthrough

What changed the equation was a specific, visible population: elderly Americans. Through the 1950s, employer-based health insurance became common for working people. But retirees lost their ties to employers and, with relatively high rates of illness, represented a terrible risk for private insurance companies. They were the group the market couldn’t solve for.

Reformers pivoted. Instead of insuring everyone, they focused on extending coverage to Social Security beneficiaries, who were overwhelmingly elderly. Social Security itself had grown enormously popular, with Congress raising benefit levels in 1950, 1952, 1954, 1956, and 1958. Attaching health insurance to that system made political sense.

In 1960, Congress created the Kerr-Mills program, which offered federal grants to states for medical services for the elderly poor. But participation was uneven, and federal officials in the Department of Health, Education, and Welfare pushed for expansion, particularly to cover children on welfare, who made up the single largest category of welfare beneficiaries. The result, enacted in the Social Security Amendments of 1965, was two programs at once: Medicare for Americans 65 and older, and Medicaid for low-income individuals and families. For the first time, the federal government was directly financing healthcare for tens of millions of people.

The 1970s: When Cost Became the Problem

Medicare and Medicaid solved the access problem for two large groups, but they also introduced a new one. With the federal government now paying medical bills on a massive scale, healthcare spending started climbing fast. In 1960, national health expenditures amounted to roughly 5% of GDP. By the early 1970s, that share had climbed past 7%, and costs were rising so steeply that policymakers began looking for ways to restructure how care was delivered.

The answer, at least in theory, was the Health Maintenance Organization. The HMO Act of 1973 offered federal funds to develop HMOs, which bundled care under a single organization that was responsible for keeping patients healthy rather than billing for each service. The hope was to improve care quality, emphasize prevention, and slow spending growth. HMOs grew rapidly in the decades that followed, but they didn’t solve the cost problem. They did, however, mark a permanent shift in the debate. From the 1970s forward, healthcare politics would always be about two things at once: who gets covered and how much it all costs.

The Clinton Plan Collapse of 1993

By the early 1990s, roughly 37 million Americans lacked health insurance, and costs were still climbing. Bill Clinton made reform a centerpiece of his presidency, appointing Hillary Clinton to lead a task force that produced the Health Security Act in 1993. The plan aimed to guarantee coverage for every American through a system of managed competition among insurance plans.

It failed spectacularly. The insurance industry launched a devastating ad campaign (the famous “Harry and Louise” television spots), and the bill’s complexity made it easy to attack and hard to defend. But the deeper problem, according to analysts who studied the collapse, was a lack of political will to confront the major players in medical care funding, particularly insurance companies and large employers who benefited from the existing system. The bill never even came to a vote in Congress.

The failure had a chilling effect. No president would attempt comprehensive reform for another 16 years.

The Affordable Care Act and Its Aftermath

When Barack Obama signed the Affordable Care Act in 2010, 48 million nonelderly Americans were uninsured, representing 18.2% of the nonelderly population. The law expanded Medicaid eligibility, created insurance marketplaces with subsidized premiums, and required most Americans to carry coverage.

The results were significant. By 2016, the number of uninsured nonelderly Americans had dropped to 28.2 million, a decline of nearly 20 million people. The uninsured rate fell from 18.2% to 10.4%, a reduction of more than 40%. The sharpest drop came in 2014, when the law’s major provisions took effect. But the ACA also became one of the most polarizing laws in modern American history, surviving repeated repeal attempts and a Supreme Court challenge.

The number of uninsured crept back up after 2016, reaching roughly 30 million by early 2020, before the pandemic triggered another round of emergency coverage expansions.

Why the Debate Never Ends

Healthcare spending in the United States reached $5.3 trillion in 2024, or about $15,474 per person, consuming 18% of the entire economy. That figure has grown almost every year for decades, dwarfing what other wealthy nations spend. The sheer scale of the industry means that any reform threatens someone’s revenue, which is why organized opposition has defeated or weakened every major proposal since 1915.

The question “when did healthcare become an issue” doesn’t have a single answer because the issue itself keeps shapeshifting. In 1912, it was about whether workers deserved insurance at all. In 1945, it was about whether the government should provide it. In 1965, it was about the elderly and the poor. In the 1970s, it became about runaway costs. In the 1990s and 2010s, it was about uninsured Americans falling through the cracks of an employer-based system designed for a different era. Each generation has inherited the unfinished business of the one before it.