When Did Pollution Start? From Prehistoric Fire to Plastic

Pollution started far earlier than most people assume. The earliest evidence dates back roughly 50,000 years, to Neanderthal hearths inside Gorham’s Cave in Gibraltar, where heavy metals from wood fires accumulated in the enclosed space. That places the origin of pollution tens of thousands of years before agriculture, metallurgy, or cities. But the scale, chemistry, and consequences of pollution have changed dramatically across human history, from trace metals in cave soot to synthetic chemicals now detectable in human blood worldwide.

Prehistoric Fires and the First Pollutants

When early humans and Neanderthals began burning wood inside caves, they created the first enclosed environments contaminated by human activity. Combustion released copper, zinc, and manganese into the air, and the people breathing that air absorbed those metals over a lifetime. Research on sediment layers from Gorham’s Cave shows measurable heavy metal enrichment tied directly to ancient hearths, making it one of the earliest known milestones of human-caused pollution.

This wasn’t harmless. Cave fires produced fine particulate matter and toxic fumes in spaces with little or no ventilation. Over tens of thousands of years, repeated exposure to these pollutants may even have shaped human tolerance to environmental contaminants, a factor researchers have only recently begun to consider.

Ancient Greece, Rome, and Hemispheric Lead

Pollution jumped from a local nuisance to a regional problem once civilizations began mining and smelting metals at scale. Greek and Roman operations to extract lead and silver, sustained for roughly eight centuries and peaking around 2,000 years ago, released enough lead into the atmosphere to contaminate the entire Northern Hemisphere. We know this because Greenland ice cores preserve a chemical record of what was falling from the sky in any given century. The lead deposited in Greenland during those eight centuries of Greek and Roman activity reached about 15 percent of the total lead fallout caused by leaded gasoline in the 20th century. That’s a striking figure for pre-industrial civilizations working with furnaces and manual labor.

London’s River of Sewage

Water pollution has its own long history, but perhaps no event captured it more vividly than London’s Great Stink of 1858. By the 1850s, the River Thames was an open sewer carrying human waste, dead animals, discarded food, and chemical runoff from riverside factories. During an unusually hot summer, with temperatures holding above 30°C (86°F) for weeks, the river level dropped and left banks of exposed sewage baking in the heat. The stench was unbearable, but the real danger was invisible: cholera and typhoid spread rapidly through the city. The crisis finally forced Parliament to fund a modern sewer system, one of the first times a government invested heavily in pollution infrastructure.

The Industrial Revolution Changed Everything

Coal-powered factories in 18th and 19th century Britain transformed pollution from a problem of specific rivers or mining towns into a defining feature of urban life. Smoke from coal burning hung over cities in thick, persistent haze, and the health effects were severe even by the standards of an era with high baseline mortality. Research analyzing British data from 1851 to 1860 found that substantial increases in local coal use raised infant mortality by 6 to 8 percent. Industrial coal burning explained roughly one-third of the excess death rate observed in cities compared to rural areas during that period. Air pollution was, in effect, one of the largest killers in industrial towns, though it went largely unmeasured at the time.

Leaded Gasoline and the Automobile Age

In 1923, the United States began adding tetraethyl lead to gasoline as an anti-knock agent to improve engine performance. The practice spread globally and continued for decades, pumping lead directly into the air of every city with car traffic. Atmospheric lead levels climbed steadily, and the metal accumulated in soil, water, and human bones. Lead is a potent neurotoxin, particularly dangerous to children, and the decision to put it in fuel is now widely considered one of the most damaging public health choices of the 20th century. Most countries didn’t begin phasing out leaded gasoline until the 1970s and 1980s, and the fuel was not eliminated worldwide until 2021, when the last country stopped selling it.

The 1948 Donora Disaster

The consequences of unchecked industrial air pollution became impossible to ignore in late October 1948, when a temperature inversion trapped emissions from steel and zinc plants over Donora, Pennsylvania. For five days, a toxic fog of hydrogen fluoride, sulfur compounds, nitrogen dioxide, carbon monoxide, and heavy metals in fine particulate matter blanketed the town. Twenty people died. Another 5,900, representing 43 percent of Donora’s population, fell ill, with roughly 1,440 experiencing serious symptoms. The event was a turning point in American environmental policy and helped inspire what eventually became the Clean Air Act.

Nuclear Fallout and the Anthropocene Marker

On July 16, 1945, the United States detonated the first nuclear weapon in the Trinity test in New Mexico. Plutonium from that single explosion traveled on wind currents and reached as far as Crawford Lake in Ontario, Canada, arriving just four days later. Scientists have proposed the presence of that plutonium in lake sediments as a geological marker for the beginning of the Anthropocene, the proposed epoch defined by humanity’s measurable impact on Earth’s systems. The hundreds of nuclear tests that followed through the 1950s and 1960s spread radioactive isotopes across the globe, creating a contamination signature that will remain detectable in soil and ice for thousands of years.

Synthetic Chemicals and Plastic

The mid-20th century introduced entirely new categories of pollution that had never existed in nature. Industrial production of PFAS, a family of synthetic chemicals used in nonstick coatings, waterproof fabrics, and firefighting foam, began in the 1940s. By the 1970s, independent scientists discovered that several types of PFAS were already widespread in human blood and in the outdoor environment. These chemicals are extraordinarily persistent, earning the nickname “forever chemicals” because they do not break down naturally.

Plastic followed a similar trajectory. Industrial plastic production began in the 1930s, and sediment core samples show microplastics appearing in ocean floor layers from that same decade. When mass production ramped up in the 1950s, microplastic deposits increased sharply. From the 1930s to the 2020s, the rate of microplastic burial in ocean sediments has grown exponentially, tracking almost perfectly with global plastic production figures. Today, microplastics have been found in deep ocean trenches, Arctic ice, and human lung tissue.

Pollution as a Cumulative Record

The history of pollution isn’t a single starting point but a series of escalations. Neanderthal cave fires contaminated a room. Roman smelting contaminated a hemisphere. Industrial coal contaminated cities badly enough to kill infants at measurably higher rates. Leaded gasoline, nuclear fallout, and synthetic chemicals contaminated the entire planet with substances that persist for decades or centuries. Each new technology brought new pollutants, often recognized as dangerous only after widespread exposure had already occurred. The ice cores, sediment layers, and archaeological sites that record this history make one thing clear: humans have been altering their environment with pollution for at least 50,000 years, but the scale of the last 200 years is unlike anything before it.