When Did Health Insurance Start in the US?

Health insurance in the United States traces back to 1798, when President John Adams signed the Act for the Relief of Sick and Disabled Seamen. But the kind of health insurance most Americans would recognize today, where you pay a monthly premium in exchange for hospital coverage, began in 1929. The system that grew from that point was shaped less by deliberate planning than by wars, tax policy, and a series of laws that each tried to patch gaps left by the last.

The 1798 Law That Started It All

On July 16, 1798, President John Adams signed legislation creating a fund for the medical care of merchant sailors. Twenty cents was deducted from each seaman’s monthly wages, and the federal government used that money to build or rent hospitals and pay for treatment. The logic was straightforward: a healthy merchant marine was essential for trade and national defense. This program eventually evolved into what is now the U.S. Public Health Service.

It wasn’t insurance in the modern sense. Sailors didn’t choose a plan or pick a provider. But the basic structure, a mandatory payroll deduction funding a pool of medical services, made it the earliest ancestor of American health coverage.

The 1929 Plan That Created Modern Health Insurance

The first recognizable health insurance plan launched in 1929, when Baylor University Hospital in Dallas agreed to cover 1,500 local schoolteachers. For $6 per person per year (roughly $110 today), each teacher received up to 21 days of hospital care annually. The idea spread quickly, and similar hospital-based plans merged into what became Blue Cross. Blue Shield plans, covering physician services, followed in 1939.

These early plans emerged because hospitals needed a reliable revenue stream during the Great Depression, and patients needed a way to afford care without saving for a catastrophic bill. The concept of spreading risk across a group of people, rather than paying out of pocket, proved enormously popular. By the early 1940s, enrollment in group hospital plans had grown from a few thousand to millions.

How World War II Shaped Employer-Based Coverage

The single biggest reason most Americans get health insurance through their jobs is a wartime policy from the 1940s. In 1942, Congress passed the Stabilization Act, which capped wages to control inflation during World War II. Employers couldn’t offer higher pay to attract scarce workers, but the law permitted them to offer insurance benefits instead. Health coverage became a recruiting tool almost overnight, and unions began negotiating for it in collective bargaining agreements.

A tax ruling in 1953, later codified in the Internal Revenue Code of 1954, cemented the arrangement. The IRS determined that employer contributions toward health insurance premiums were not taxable income for employees. This made employer-sponsored coverage far cheaper than buying insurance on your own, since every dollar your employer spent on your premiums was a dollar you never paid income tax on. That tax advantage still exists today, and it remains the foundation of the American system. More than half of all Americans get their coverage through an employer.

Medicare, Medicaid, and the 1965 Expansion

For the first few decades of modern health insurance, people who didn’t have a job with benefits, particularly the elderly and the poor, were largely left out. By the early 1960s, roughly half of Americans over 65 had no health insurance at all.

On July 30, 1965, President Lyndon B. Johnson signed the Social Security Amendments into law, creating two new programs. Medicare provided health insurance for Americans aged 65 and older who were entitled to Social Security benefits, with hospital coverage (Part A) beginning in July 1966. Medicaid, funded jointly by federal and state governments, covered people with limited income. Together, these programs brought tens of millions of previously uninsured Americans into the health coverage system for the first time.

HMOs, COBRA, and HIPAA

Through the 1970s, 1980s, and 1990s, Congress passed a series of laws that reshaped how insurance worked in practice. The Health Maintenance Organization Act of 1973 required employers who offered health benefits to also give workers the option of joining an HMO. Federal grants of up to $50,000 helped fund the development of new HMOs across the country. This ushered in the era of managed care, where insurers played a more active role in controlling costs by limiting which doctors and hospitals patients could use.

In 1985, Congress passed the Consolidated Omnibus Budget Reconciliation Act (COBRA), which gave workers who lost or changed jobs the right to temporarily continue their employer-sponsored coverage by paying the full premium themselves. It was expensive, but it closed a gap that had left millions of people uninsured during job transitions. Then in 1996, the Health Insurance Portability and Accountability Act (HIPAA) went further, improving the ability of workers to maintain continuous coverage when moving between jobs and restricting insurers from denying group coverage based on health status.

The Affordable Care Act and Pre-Existing Conditions

Despite decades of incremental reforms, a core problem persisted: people who didn't get insurance through work, Medicare, or Medicaid often couldn't get affordable coverage at all, especially if they had a pre-existing condition like diabetes or a history of cancer. Insurers could deny coverage outright, charge dramatically higher premiums, or exclude coverage for the very conditions a person most needed treated.

The Affordable Care Act, signed by President Barack Obama in 2010, was the largest overhaul of the system since Medicare. It prohibited insurers from denying coverage or charging higher premiums because of pre-existing conditions. It created online marketplaces where individuals could compare and purchase plans, often with income-based subsidies to reduce costs. And it originally required most Americans to carry health insurance or pay a tax penalty, a provision known as the individual mandate. That penalty was reduced to $0 beginning in 2019, after Congress effectively eliminated it through the Tax Cuts and Jobs Act of 2017.

Why the System Looks the Way It Does

The American health insurance system wasn’t designed from scratch. It accumulated in layers: a payroll deduction for sailors in 1798, a hospital’s deal with schoolteachers in 1929, a wartime workaround in the 1940s, a tax break in the 1950s, government programs for the elderly and poor in the 1960s, managed care in the 1970s, job-loss protections in the 1980s and 1990s, and market reforms in 2010. Each layer addressed a specific problem of its era but left other gaps intact.

That layered history explains many of the features Americans find confusing or frustrating: why coverage is tied to employment, why losing a job can mean losing insurance, why Medicare covers people over 65 but not younger uninsured adults, and why the rules vary so much from state to state. The system reflects not a single vision but more than two centuries of incremental fixes, each one building on what came before.