When Was Chemical Warfare First Used in History?

Chemical warfare has roots stretching back thousands of years, long before the gas attacks of World War I that most people associate with it. The earliest documented use of toxic substances as weapons dates to ancient Greece in the fifth century BCE, when Spartan forces burned sulfur and pitch to create poisonous fumes during a siege. From those crude beginnings, the deliberate use of chemicals to harm enemies evolved slowly over millennia before exploding into industrial-scale horror on the battlefields of 1915.

Ancient Greece and the Siege of Plataea

The oldest well-documented case of chemical warfare took place during the Peloponnesian War between Athens and Sparta. In 429 BCE, a Spartan army besieging the Athenian-allied city of Plataea placed a burning mixture of sulfur, pitch, and wood beneath the city walls. The goal was straightforward: the toxic smoke would incapacitate the defenders so they couldn’t resist the assault. Sulfur combustion produces sulfur dioxide, a choking gas that burns the eyes, throat, and lungs. Whether the attack achieved its intended effect at Plataea is debated, but the tactic itself was clearly deliberate.
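Why that smoke burns comes down to simple chemistry: the sulfur dioxide given off by the fire dissolves in the moisture of the eyes and airways and forms sulfurous acid, which attacks the tissue directly. In rough terms:

S + O₂ → SO₂, then SO₂ + H₂O → H₂SO₃ (sulfurous acid)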

This wasn’t an isolated experiment. Ancient armies across multiple civilizations understood that burning certain materials could produce noxious or incapacitating fumes. The concept reappeared in various forms for centuries, from toxic smoke pots in Chinese warfare to poisoned arrows tipped with plant-derived toxins across cultures worldwide.

Poison Gas in an Ancient Siege Tunnel

One of the most striking pieces of archaeological evidence for ancient chemical warfare comes from Dura-Europos, a Roman garrison city in modern-day Syria. Around 256 CE, Sasanian Persian forces besieged the city and dug tunnels beneath its walls to undermine the fortifications. Roman defenders dug counter-tunnels to intercept them, and a deadly underground confrontation followed near Tower 19 on the city’s western wall.

When archaeologists first excavated the tunnels in the 1930s, they found the remains of at least 19 Roman soldiers and one Sasanian fighter. The original excavator assumed they had died by sword or fire. But a reanalysis of the excavation records, published in the American Journal of Archaeology, revealed something far more disturbing: the Roman soldiers appeared to have been deliberately gassed. The Sasanian attackers likely burned sulfur and bitumen in the confined tunnel space, flooding the Roman counter-mine with toxic fumes. The soldiers were found stacked near the tunnel entrance, consistent with people collapsing while trying to flee a cloud of poison gas rather than dying in hand-to-hand combat.

This finding provided the oldest known physical evidence of chemical warfare victims, nearly 1,700 years before the gas attacks of World War I, and it demonstrated that ancient armies understood not just that toxic fumes were deadly but how to weaponize enclosed spaces to maximize their effect.

The 1800s: Modern Chemistry Meets Warfare

As chemistry advanced during the Industrial Revolution, proposals to use toxic chemicals in battle became more sophisticated. In 1854, during the Crimean War, British chemist Lyon Playfair proposed filling artillery shells with a cyanide-based compound and firing them at Russian ships during the siege of Sevastopol. Admiral Thomas Cochrane, Earl of Dundonald, pressed a separate scheme to drive Sevastopol’s defenders from their positions with clouds of burning sulfur. Neither plan was adopted: the British Ordnance Department rejected Playfair’s proposal, calling it “as bad a mode of warfare as poisoning the wells of the enemy.”

That rejection reflected a widespread attitude in 19th-century military culture: poison was considered dishonorable, a coward’s weapon unworthy of a professional army. International law eventually codified this sentiment. The Hague Declaration of 1899 prohibited projectiles whose “sole object” was the diffusion of asphyxiating or deleterious gases, and the 1907 Hague Convention banned the use of poison or poisoned weapons. These agreements, however, carried no enforcement mechanism and contained enough ambiguity that military planners would later argue their way around them; Germany would claim, for instance, that releasing gas from ground-based cylinders rather than projectiles fell outside the letter of the 1899 declaration.

World War I: Chemical Warfare on an Industrial Scale

Everything changed on April 22, 1915, near the Belgian city of Ypres. German forces opened thousands of cylinders along a four-mile front and released roughly 168 tons of chlorine gas, letting the wind carry a greenish-yellow cloud toward French and Algerian troops in the trenches. The attack killed more than 1,100 soldiers and injured thousands more. Survivors described the gas as producing violent coughing, choking, and a sensation of drowning as the chlorine reacted with moisture in their lungs.
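That drowning sensation has a straightforward chemical explanation: chlorine hydrolyzes on contact with the moist lining of the airways, producing hydrochloric and hypochlorous acids that burn lung tissue and cause the lungs to fill with fluid. In simplified form:

Cl₂ + H₂O → HCl + HOCl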

This was not technically the first use of chemical agents in World War I. Both sides had experimented with tear gas and other irritants earlier in the conflict. The Germans had fired tear gas shells on the Eastern Front in January 1915, though the winter cold kept the agent from vaporizing and rendered the attack largely ineffective. But Ypres was the moment chemical warfare became a defining feature of the war. The sheer scale of casualties and the visible horror of the gas cloud forced every army to take the threat seriously.

Within months, both sides were developing and deploying their own chemical weapons. The British carried out their first large-scale chlorine attack at the Battle of Loos in September 1915, and the arms race escalated rapidly from there. Phosgene, which was harder to detect and more lethal than chlorine, entered use by late 1915. Mustard gas followed in 1917, introduced again by the Germans at Ypres. Unlike chlorine and phosgene, mustard gas was a blistering agent that burned exposed skin and could contaminate an area for days, making it as much a denial weapon as a direct killer.

By the war’s end, chemical weapons had caused an estimated 1.3 million casualties, including roughly 90,000 deaths. Every major combatant had used them.

The 1925 Geneva Protocol

The widespread revulsion at chemical warfare during World War I led to the Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare, signed in Geneva on June 17, 1925, and entering into force on February 8, 1928. Drawn up under the auspices of the League of Nations, the protocol banned the use of chemical and biological weapons in war.

The protocol had a significant limitation: it prohibited use but not development, production, or stockpiling. Many signatory nations maintained large chemical arsenals throughout the 20th century, and several used them in conflicts where enforcement was absent. Italy deployed mustard gas in Ethiopia in the 1930s. Japan used chemical agents in China during World War II. Iraq used them extensively against Iran and against its own Kurdish population in the 1980s. The more comprehensive Chemical Weapons Convention, which banned production and stockpiling as well, didn’t open for signature until 1993.

The gap between the ancient use of sulfur smoke beneath city walls and the chlorine clouds at Ypres spans roughly 2,300 years. The underlying logic never changed: force an enemy to breathe something that incapacitates or kills them. What changed was the scale. Industrial chemistry turned a crude siege tactic into a weapon capable of blanketing miles of battlefield in minutes.