When Did Big Pharma Start? From the 1800s to Today

The pharmaceutical industry as we know it today didn’t appear all at once. It evolved over roughly 150 years, from small apothecaries and chemical dye shops in the 1800s into the trillion-dollar global force that earned the nickname “Big Pharma” in the late 20th century. The transformation happened in distinct waves: early chemistry experiments, wartime manufacturing breakthroughs, new regulations that raised the cost of doing business, a merger frenzy that created mega-corporations, and a loosening of advertising rules that turned prescription drugs into consumer products.

The 1800s: Dye Makers and Druggists

Most of today’s largest pharmaceutical companies trace their origins to the mid-1800s, and almost none of them started out making medicine. Bayer, for example, was founded on August 1, 1863, by a dye salesman and a master dyer who used two kitchen stoves to figure out how to produce the dye fuchsine. Between 1881 and 1914, Bayer expanded into a full chemical company, but dyestuffs remained its largest division for decades. Pfizer, founded in 1849 in Brooklyn, started as a fine chemicals business. Merck began as a German apothecary in 1668 and didn’t establish an American presence until the late 1800s. Eli Lilly opened a small laboratory in Indianapolis in 1876.

These companies were tiny by modern standards, often family-run, producing a handful of chemical products. The idea of a powerful, consolidated “pharmaceutical industry” simply didn’t exist yet. Drug quality was wildly inconsistent, government oversight was minimal, and most medicines were still plant-based concoctions mixed by local pharmacists.

World War II: The Factory Floor Scales Up

The single biggest catalyst for industrial-scale drug manufacturing was penicillin production during World War II. After British scientists demonstrated that penicillin could treat bacterial infections, researcher Howard Florey traveled to the United States to convince drug companies and the federal government to invest in mass production. A key breakthrough came from a USDA lab in Peoria, Illinois, where researchers discovered that corn steep liquor, a cheap waste product from cornstarch manufacturing, dramatically boosted mold growth.

After the U.S. entered the war, the War Production Board took control of all penicillin production. Private drug companies developed deep-tank fermentation, a technique that replaced shallow dishes with large aerated tanks capable of growing enormous quantities of the mold. This was a turning point. Companies that had been modest chemical operations suddenly had factory infrastructure, government contracts, and the technical expertise to produce drugs at massive scale. The war ended, but that manufacturing capacity didn’t disappear. It became the backbone of the modern pharmaceutical industry.

The 1960s: Regulation Raises the Stakes

Before 1962, getting a drug to market in the United States was relatively simple. Companies had to show their product was safe, but they didn’t have to prove it actually worked. That changed after the thalidomide disaster, in which a sedative prescribed to pregnant women caused severe birth defects in thousands of children across Europe. Although thalidomide was largely kept off the American market, the crisis created political momentum for stricter oversight.

In October 1962, Congress passed the Kefauver-Harris Drug Amendments. For the first time, companies had to provide substantial evidence of effectiveness through adequate and well-controlled studies before the FDA would approve a drug. The amendments also required the FDA to specifically approve a marketing application before any drug could be sold, formalized manufacturing quality standards, mandated reporting of adverse events, and required informed consent from study participants. The FDA also took over regulation of prescription drug advertising from the Federal Trade Commission.

These requirements dramatically increased the cost of bringing a drug to market. Clinical trials became longer, more complex, and more expensive. The companies best positioned to absorb those costs were the ones with the deepest pockets, which began tilting the industry toward consolidation.

The 1984 Patent Deal That Shaped the Industry

In 1984, Congress passed the Hatch-Waxman Act (formally the Drug Price Competition and Patent Term Restoration Act), a law designed to balance two competing interests. Generic drug makers gained a streamlined path to FDA approval: they could develop and test copies of brand-name drugs without facing patent infringement lawsuits during the development process, and they could seek approval before the original patents expired. In exchange, brand-name manufacturers received patent term extensions to compensate for time lost during FDA review, along with guaranteed periods of market exclusivity before any generic competitor could enter.

This framework created the business model that still defines the industry. Brand-name companies invest heavily in research and development, then race to recoup those costs (and generate profit) during their exclusivity window. The financial incentive to maximize revenue before patent expiration drives aggressive pricing and marketing. It also makes acquiring competitors with promising drug pipelines more attractive than building from scratch, which set the stage for the merger wave that followed.

The 1990s Merger Frenzy

Between roughly 1990 and 2005, the pharmaceutical industry consolidated at a breathtaking pace. The mergers didn’t just make companies bigger. They created entirely new corporate entities that dominated global markets.

  • Novartis was formed in 1996 when Ciba-Geigy merged with Sandoz.
  • Glaxo Wellcome emerged in 1995 from the merger of Glaxo and Wellcome, then merged again with SmithKline Beecham in 2000 to create GlaxoSmithKline, with $30 billion in annual sales by 2002.
  • Pfizer absorbed Warner-Lambert in 2000, then Pharmacia (which had itself already merged with Upjohn and Monsanto) in a deal struck in 2002 and completed in 2003, with combined 2002 sales of $48 billion.
  • Aventis was created in 1999 as the culmination of a chain of 1990s mergers involving Rhône-Poulenc, Rorer, Fisons, and Hoechst.
  • Roche acquired a majority stake in biotech pioneer Genentech in 1990, a relationship that culminated in full ownership in 2009.

By 2002, the top handful of companies each commanded tens of billions in annual revenue. The industry’s trade group, the Pharmaceutical Research and Manufacturers of America (PhRMA), had existed since 1958 under an earlier name, but it was during this consolidation era that it became one of the most powerful lobbying forces in Washington. A small number of corporations now controlled a vast share of the world’s drug supply, research pipelines, and political influence.

1997: Drug Ads Hit Television

Until 1997, pharmaceutical companies that wanted to advertise prescription drugs on television had to include the same exhaustive “brief summary” of risk information required of print ads, covering contraindications, warnings, and precautions. That requirement made broadcast ads impractical because reading the safety information would eat up most of the airtime. In 1997, the FDA loosened the rules, allowing companies to recite a short “major statement” of the most significant risks and direct viewers to a website, toll-free number, print ad, or their doctor for the full details.

The result was an explosion of direct-to-consumer advertising. Suddenly, prescription drugs were marketed like cars or cereal. Patients began asking their doctors for specific brand-name medications they’d seen on TV. The U.S. Department of Health and Human Services has noted that this shift contributed to public confusion about drug risks, inappropriate demand for medications, and misallocation of healthcare resources. The United States and New Zealand remain the only two developed countries that allow direct-to-consumer prescription drug advertising.

The Financial Scale Today

The numbers tell the story of how far the industry has come. U.S. prescription drug sales alone grew from $582 billion in 2017 to $716 billion in 2022, a 23 percent increase in just five years; the rest of the world saw only 2 percent growth over the same period. The U.S. accounts for roughly 50 percent of worldwide pharmaceutical sales while representing only about 13 percent of total prescription volume, and the gap between those two figures means Americans pay far more per prescription than patients in the rest of the world.

The term “Big Pharma” itself gained widespread use in the 1990s and early 2000s, as the merger wave, aggressive marketing, and high-profile controversies (most notably Purdue Pharma’s promotion of OxyContin) crystallized public skepticism about the industry’s priorities. The phrase captures something specific: not just that pharmaceutical companies are large, but that a relatively small number of corporations wield outsized influence over drug pricing, medical research priorities, regulatory policy, and even how people think about health itself.

So when did Big Pharma start? The companies themselves date to the 1800s. The manufacturing infrastructure was built during World War II. The regulatory and patent frameworks that shape the modern business model came in 1962 and 1984. But “Big Pharma” as a concentrated, politically powerful, consumer-facing industry really took its current form in the 1990s, when mergers, patent protections, and television advertising converged to create something the founders of those original dye shops and apothecaries would never have recognized.