Weather forecasting began around 650 B.C., when Babylonians predicted short-term weather changes by reading cloud formations and optical phenomena like halos around the sun and moon. But forecasting as a scientific practice, with instruments and data, didn’t emerge until the 1600s. The story stretches from ancient sky-watching to satellites orbiting the planet, and each leap forward came from a specific breakthrough in technology or understanding.
Ancient Roots: Clouds, Philosophy, and Pattern Recognition
The earliest known attempts at weather prediction come from Babylon. Around 650 B.C., Babylonian observers cataloged cloud types and atmospheric optical effects, using them to anticipate rain, wind, and storms in the near term. These weren’t random guesses. They were pattern-based systems refined over generations, connecting what the sky looked like today with what typically happened tomorrow.
Two centuries later, Aristotle wrote Meteorologica around 340 B.C., the first systematic attempt to explain weather through natural causes rather than the actions of gods. He proposed that the atmosphere was driven by two types of evaporation: a moist one that produced rain and a dry one that generated wind, thunder, and lightning. He even argued that earthquakes came from dry exhalation trapped underground. His framework was wrong in many specifics, but it introduced a crucial idea: weather follows physical rules that can be studied and, eventually, predicted. That text dominated Western thinking about the atmosphere for nearly 2,000 years.
Instruments Change Everything
For millennia, forecasting relied entirely on human senses and folk wisdom. That changed in the 1600s with two inventions. Galileo contributed early work on measuring the weight of air (published in 1638), and in 1643, Evangelista Torricelli built the first mercury barometer. Torricelli filled a glass tube with mercury, inverted it, and watched the column settle at a height supported by the pressure of the surrounding air. His famous insight: “We live submerged at the bottom of an ocean of the element air, which by unquestioned experiments is known to have weight.”
This was transformative. For the first time, people could measure an invisible atmospheric property, air pressure, and track its changes. Just four years later, in 1648, Blaise Pascal demonstrated that barometric pressure drops with altitude, confirming that the barometer was genuinely measuring the atmosphere and not some quirk of the apparatus. With instruments like the barometer and thermometer, weather observation shifted from subjective description to quantitative measurement. You could now record conditions as numbers, compare them across locations, and start building a real science.
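Pascal’s demonstration is easy to replay on paper. Here is a minimal sketch in Python using the standard isothermal barometric formula with textbook constants; the function name and the isothermal assumption are illustrative, not anything Pascal himself computed:

```python
import math

def pressure_at_altitude(h_m, p0_hpa=1013.25, temp_k=288.15):
    """Estimate air pressure (hPa) at altitude h_m meters.

    Isothermal barometric formula: p(h) = p0 * exp(-M*g*h / (R*T)).
    Real atmospheres vary with temperature and moisture, so this is
    a first approximation, not a forecasting tool.
    """
    M = 0.0289644  # molar mass of dry air, kg/mol
    g = 9.80665    # standard gravity, m/s^2
    R = 8.31446    # universal gas constant, J/(mol*K)
    return p0_hpa * math.exp(-M * g * h_m / (R * temp_k))

# Pascal's 1648 experiment in miniature: the summit of the Puy de Dome
# (~1,465 m) sits under measurably less air than the town below.
print(f"sea level: {pressure_at_altitude(0):.1f} hPa")      # ~1013.2 hPa
print(f"1,465 m:   {pressure_at_altitude(1465):.1f} hPa")   # ~851.7 hPa
```

Even this crude model reproduces the qualitative result Pascal confirmed: climb a mountain and the mercury column falls.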
The Telegraph and the First Forecasts
Instruments could measure the weather, but they couldn’t communicate it fast enough to matter. A barometer reading from a city 200 miles away was useless if it arrived by horse three days later. The electric telegraph solved this. In 1849, the Smithsonian Institution began collecting wind and weather observations via telegraph from stations across the United States, compiling the data into weather maps. For the first time, forecasters could see a snapshot of conditions across a wide region in near-real time.
This capability made public forecasting possible. In Britain, Vice-Admiral Robert FitzRoy (best known as captain of HMS Beagle during Darwin’s voyage) wrote the first public weather forecast, published in The Times on August 1, 1861. Daily forecasts soon became routine through the department that would become the Met Office. FitzRoy’s forecasts were based on telegraph reports from coastal stations and his own understanding of storm patterns. They were crude by modern standards, but they represented a genuine turning point: weather prediction was no longer a private curiosity. It was a public service.
Forecasting by Mathematics
By the early 1900s, scientists understood that the atmosphere obeyed the laws of physics and could, in theory, be predicted by solving equations describing the motion of air, heat, and moisture. In 1922, English meteorologist Lewis Fry Richardson published Weather Prediction by Numerical Process, laying out a method for computing future weather states using mathematical equations developed by Norwegian physicist Vilhelm Bjerknes. Richardson actually attempted a forecast by hand, calculating pressure changes for a single point in central Europe. The results were wildly wrong, predicting a pressure change roughly 100 times larger than what actually occurred.
The failure wasn’t in the concept but in the execution. The equations required enormous numbers of calculations performed simultaneously across a grid of points, and doing them by hand introduced errors and took far too long. The approach was shelved for over two decades. It wasn’t until 1950 that a team including meteorologist Jule Charney used ENIAC, one of the first electronic computers, to produce a computer-generated weather forecast. The machine could do in hours what would have taken Richardson’s imagined army of human calculators weeks. Numerical weather prediction, the foundation of every modern forecast, was born.
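The core move of numerical prediction, discretizing the atmosphere onto a grid and marching the equations forward in time, is easier to see in a toy than in the real coupled three-dimensional system. Here is a minimal sketch in Python that carries a warm bump along a one-dimensional grid with an upwind finite-difference scheme; it is a stand-in for the idea, not Richardson’s or Charney’s actual equations, and all values are illustrative:

```python
import math

# Toy "forecast": carry a warm bump downwind along a 1-D grid using an
# upwind finite-difference scheme. Real numerical weather prediction steps
# coupled 3-D equations for wind, pressure, heat, and moisture, but the
# discretize-then-march structure is the same.
nx, dx = 100, 1.0   # number of grid points, grid spacing (arbitrary units)
u, dt = 1.0, 0.5    # wind speed, time step (CFL number u*dt/dx = 0.5, stable)

# Initial state: a 15-degree background with a warm bump centered at x = 20.
temp = [15.0 + 10.0 * math.exp(-((i - 20) / 5.0) ** 2) for i in range(nx)]

for step in range(80):                  # march 80 time steps into the "future"
    prev = temp[:]
    for i in range(1, nx):
        # Upwind difference: with u > 0, information arrives from the left.
        temp[i] = prev[i] - u * dt / dx * (prev[i] - prev[i - 1])

peak = max(range(nx), key=lambda i: temp[i])
print(f"warm bump started near x=20, forecast puts it near x={peak}")  # ~60
```

Scaling this from one variable on 100 points to many interacting variables on millions of points is, in essence, the computational burden that made Richardson’s hand approach impossible and ENIAC’s electronics necessary.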
Satellites and the View From Above
Ground stations and weather balloons could only sample the atmosphere at scattered points. Vast stretches of ocean, desert, and polar ice had almost no observations at all. On April 1, 1960, NASA launched TIROS-1, the world’s first weather satellite. Designed to test whether Earth’s cloud cover could be studied from space, it carried television cameras that sent back images of cloud patterns across entire continents. The satellite proved the concept immediately, and a new era of global weather observation began.
Satellites filled the enormous gaps in coverage that had limited forecasters for a century. Tropical cyclones that once struck coastlines with little warning could now be tracked from formation to landfall. Before satellite tracking, hurricanes were identified by their latitude and longitude coordinates, a cumbersome system prone to confusion when multiple storms were active. A systematic naming convention was established in 1950, and by 1953 the United States began assigning female names to Atlantic storms (male names were added in 1979). The naming system itself reflects how forecasting had matured: you only need distinctive names for storms when you can actually see and track them in real time.
How Accurate Forecasts Are Now
The improvement in forecast skill over the past several decades is striking. Three-day forecasts have been fairly accurate since the 1980s, and they’ve continued to improve: today they are accurate about 97% of the time. Five-day forecasts reached a level considered “highly accurate” by the early 2000s. Hurricane track forecasts tell a similar story: the average error in a 72-hour hurricane position forecast was over 400 nautical miles in the 1960s and 1970s. Today it’s under 80 nautical miles.
These gains come from three converging advances: better observations (more satellites, ocean buoys, and weather stations), faster computers running higher-resolution atmospheric models, and improved mathematical methods for combining observational data with model predictions, a process called data assimilation (sketched below). A five-day forecast today is roughly as accurate as a one-day forecast was in the 1980s. The same physics that Aristotle tried to reason through, that Richardson tried to compute by hand, now runs on supercomputers processing billions of calculations per second, turning raw atmospheric data into the forecasts on your phone.
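That third ingredient deserves a concrete picture. Here is a minimal sketch of the scalar inverse-variance update at the heart of data assimilation methods such as optimal interpolation and Kalman filtering; the function name and all numbers are hypothetical:

```python
def assimilate(forecast, obs, var_forecast, var_obs):
    """Blend a model forecast with an observation by inverse-variance weighting.

    The gain K leans toward whichever source is more trustworthy
    (lower variance). This scalar update is the conceptual core of
    optimal interpolation and the Kalman filter.
    """
    K = var_forecast / (var_forecast + var_obs)   # 0 = trust model, 1 = trust obs
    analysis = forecast + K * (obs - forecast)
    var_analysis = (1 - K) * var_forecast         # blend is more certain than either input
    return analysis, var_analysis

# Hypothetical numbers: the model predicts 22.0 C (error variance 4.0);
# a weather station reports 20.0 C (error variance 1.0).
analysis, var_analysis = assimilate(22.0, 20.0, 4.0, 1.0)
print(f"analysis: {analysis:.2f} C (variance {var_analysis:.2f})")
# -> analysis: 20.40 C (variance 0.80)
```

Operational systems apply this logic to millions of observations at once across three-dimensional model states, but the principle is the same: the analysis lands closer to whichever input is more reliable, and ends up more certain than either input alone.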

