A weather model is a computer program that simulates the atmosphere using math and physics to predict future weather conditions. It divides the atmosphere into a three-dimensional grid of cells, applies equations governing how air, moisture, and energy move, then steps forward in time to produce a forecast. Every forecast you check on your phone traces back to one or more of these models running on some of the most powerful supercomputers on the planet.
How Weather Models Build a Forecast
The process starts with data. Satellites, ground-based weather stations, weather balloons (called radiosondes), ocean buoys, aircraft sensors, and remote sensing instruments all feed measurements into the model. Temperature, humidity, wind speed, air pressure, and dozens of other variables get collected from millions of observation points around the globe. A step called data assimilation blends all of these observations into a single, coherent snapshot of the current atmosphere. That snapshot is the model’s starting point.
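Operational data assimilation systems (such as ECMWF's 4D-Var) solve an enormous optimization problem over millions of variables, but the core idea can be illustrated with a toy scalar example: blend the model's prior "background" value with an observation, weighting each by how much you trust it. The function name and numbers below are purely illustrative.

```python
# Toy illustration of data assimilation: blend a model "background" value
# with one observation, weighting each by the inverse of its error variance.
# Real systems do this jointly for millions of variables, but the core idea
# is the same trust-weighted average.

def assimilate(background, obs, bg_var, obs_var):
    """Return the analysis value and its error variance."""
    # Gain: how much to trust the observation relative to the background
    gain = bg_var / (bg_var + obs_var)
    analysis = background + gain * (obs - background)
    analysis_var = (1.0 - gain) * bg_var
    return analysis, analysis_var

# Model first guess: 21.0 C (variance 4.0); station reading: 19.0 C (variance 1.0)
analysis, var = assimilate(21.0, 19.0, 4.0, 1.0)
print(analysis, var)  # 19.4 0.8 -- closer to the more trusted observation
```

Note that the analysis is both closer to the low-error observation and more certain (variance 0.8) than either input alone, which is why assimilating many observations yields a better starting snapshot than any single source.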
From there, the model applies the fundamental equations of fluid dynamics and thermodynamics to every grid cell in its domain. These equations describe how air masses interact, how heat transfers, how moisture condenses into clouds, and how pressure differences drive wind. The model calculates what happens in the next few minutes, updates every grid cell, then repeats. Step by step, it builds a forecast hours or days into the future.
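The update-every-cell-then-repeat loop can be sketched in miniature. The toy below carries a temperature anomaly along a one-dimensional ring of grid cells with a simple upwind advection scheme; a real model does the same time-marching in three dimensions with far richer physics, so every number here is illustrative only.

```python
# Toy sketch of model time stepping: carry a temperature anomaly along a
# 1-D ring of grid cells using a simple upwind finite-difference scheme.
# This only illustrates the step-update-repeat loop, not real model physics.

def step(field, wind, dx, dt):
    """Advance the field one time step (upwind advection, periodic domain)."""
    c = wind * dt / dx            # Courant number; must stay below 1 for stability
    # Each cell is nudged toward its upwind neighbour; Python's negative
    # indexing (field[-1]) wraps the left edge around to the right.
    return [field[i] - c * (field[i] - field[i - 1]) for i in range(len(field))]

dx = 9_000.0                      # 9 km grid spacing, in metres
dt = 60.0                         # 60-second time step
wind = 10.0                       # 10 m/s wind blowing along the ring

temp = [0.0] * 100
for i in range(45, 55):
    temp[i] = 5.0                 # a 90 km-wide warm anomaly

for _ in range(360):              # march forward 6 hours, one minute at a time
    temp = step(temp, wind, dx, dt)

# The anomaly drifts ~216 km (10 m/s x 6 h), i.e. about 24 cells downwind.
peak = max(range(100), key=lambda i: temp[i])
print(peak)
```

The scheme also smears the anomaly out slightly as it travels, a numerical side effect real models spend enormous effort controlling.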
Spatial resolution describes the distance between grid cells. A model with 9-kilometer resolution, for example, treats the atmosphere as a mosaic of 9 km by 9 km columns, each stacked vertically in layers from the surface up through the stratosphere. Smaller grid spacing means the model can capture local effects like mountain-driven winds, sea breezes, and variations caused by different land surfaces. Larger grid spacing covers more territory but smooths over those local details.
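A back-of-envelope calculation shows what those grid spacings imply globally. The arithmetic below is illustrative only; real models use more sophisticated layouts than a uniform square grid (ECMWF, for instance, uses an octahedral reduced Gaussian grid).

```python
# Back-of-envelope count of surface grid columns for a global model,
# assuming (for illustration only) a uniform square grid.

earth_surface_km2 = 510_000_000      # Earth's surface area, ~510 million km^2

def columns(spacing_km):
    """Approximate number of surface columns at a given grid spacing."""
    return earth_surface_km2 / spacing_km**2

print(f"{columns(9):,.0f}")    # ~6.3 million columns at 9 km
print(f"{columns(13):,.0f}")   # ~3.0 million columns at 13 km
```

Multiply the 9 km figure by the IFS's 137 vertical levels and you approach a billion grid cells, every one of which must be updated at every time step.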
The Major Global Models
Two global models dominate weather forecasting: the Integrated Forecasting System (IFS), run by the European Centre for Medium-Range Weather Forecasts (ECMWF), and the Global Forecast System (GFS), developed by the U.S. National Centers for Environmental Prediction. Together they are the most widely used forecast products on Earth.
The ECMWF model currently runs at 9-kilometer horizontal resolution. As of a June 2023 upgrade, both its high-resolution forecast and its ensemble forecasts operate at this same 9 km spacing. The GFS runs at a coarser resolution, roughly 13 kilometers for its deterministic forecast. Both models produce global forecasts extending out to about 10 to 16 days.
In head-to-head comparisons, the European model consistently outperforms the GFS. Studies show it delivers roughly 3 to 4 percent lower error rates across variables like temperature, wind speed, and precipitation, with that advantage growing at longer forecast lead times. One study found it reduced wind power forecast errors by 8 percent compared to GFS across Austrian wind farms. Another confirmed its edge in wind predictions across 262 Chinese wind farms spanning a range of climates. This is why meteorologists often refer to the European model as the gold standard, though the GFS remains a critical tool and sometimes outperforms on specific events.
Regional and Rapid-Update Models
Global models give you the big picture, but regional models zoom in. In the United States, the High-Resolution Rapid Refresh (HRRR) model covers the continental U.S. at roughly 3-kilometer resolution and updates every hour. That tight grid spacing lets it resolve individual thunderstorm cells, something a global model simply cannot do. The North American Mesoscale (NAM) model runs at both 12 km and 4 km resolution, providing another layer of detail for short-range forecasts over North America.
These regional models are especially valuable for short-term forecasts: the next 1 to 18 hours. If you’re checking whether a storm will hit your neighborhood this afternoon, the answer likely comes from a model like the HRRR rather than a global model designed to forecast continental-scale weather patterns days in advance.
Ensemble Forecasting and Uncertainty
A single model run gives you one answer: here’s what the atmosphere will look like in 72 hours. But the atmosphere is chaotic. Tiny errors in the starting data can grow into large forecast differences over time. Ensemble forecasting tackles this problem by running the same model dozens of times, each with slightly different starting conditions and small variations in how the physics are calculated.
The ECMWF ensemble system runs 50 members. The U.S. Global Ensemble Forecast System (GEFS) runs 30. Canada’s Global Ensemble Prediction System runs 20. Researchers have even combined all three into a “super ensemble” of 100 members to push forecast reliability further. When most members agree on an outcome, forecasters have high confidence. When members diverge wildly, that tells you the atmosphere is in a state where small differences matter and the forecast is genuinely uncertain.
This is the difference between a deterministic forecast (“it will rain Tuesday”) and a probabilistic one (“there’s a 70 percent chance of rain Tuesday”). Ensemble systems are the reason your weather app can show you probability percentages at all.
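The ensemble idea can be demonstrated with a toy chaotic system standing in for the atmosphere. The sketch below runs 50 copies of a logistic map (a classic chaotic iteration) from slightly perturbed starting points, then reads a probability off member agreement. The member count echoes ECMWF's 50, but the map, threshold, and perturbation size are purely illustrative.

```python
import random

# Toy ensemble forecast: run a chaotic toy model (a logistic map standing in
# for the atmosphere) from 50 slightly perturbed initial states, then turn
# member agreement into a probability. All parameters are illustrative.

random.seed(42)

def run_member(x, steps=30):
    """Iterate the logistic map in its chaotic regime (r = 3.9)."""
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x

members = 50
analysis = 0.512                  # best-guess starting condition
outcomes = [run_member(analysis + random.uniform(-1e-4, 1e-4))
            for _ in range(members)]

# Probabilistic forecast: fraction of members ending above a chosen threshold
rain_threshold = 0.5
p_rain = sum(x > rain_threshold for x in outcomes) / members
print(f"chance of 'rain': {p_rain:.0%}")
```

Even though every member starts within 0.0001 of the same state, chaos spreads the outcomes widely after 30 steps, which is exactly why a single deterministic run can be misleading and why member agreement is the right measure of confidence.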
The Computing Power Behind It All
Running these simulations requires staggering computational muscle. NOAA operates twin supercomputers, named Dogwood and Cactus, located in Virginia and Arizona. Each runs at 14.5 petaflops, meaning the pair can process 29 quadrillion calculations per second. Combined with NOAA’s research supercomputers in West Virginia, Tennessee, Mississippi, and Colorado, the total capacity supporting U.S. weather forecasting and research reaches 49 petaflops.
That power is necessary because every increase in resolution multiplies the workload. Halving the grid spacing from 18 km to 9 km doesn't just double the number of cells. It roughly increases the computation by a factor of eight or more, because you have twice as many cells in each horizontal direction, more vertical layers to resolve, and smaller time steps to keep the simulation stable.
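That factor of eight falls straight out of the arithmetic. The helper below is a hypothetical illustration (the extra cost of added vertical levels is left as an optional multiplier):

```python
# Rough cost scaling when grid spacing shrinks (illustrative arithmetic).
# Halving the spacing doubles the cell count in each horizontal direction
# and, to keep the scheme stable (the CFL condition), roughly halves the
# time step -- a factor of eight before any extra vertical levels.

def cost_factor(old_km, new_km, extra_vertical=1.0):
    ratio = old_km / new_km
    horizontal = ratio ** 2      # cells in x times cells in y
    time_steps = ratio           # smaller cells need shorter time steps
    return horizontal * time_steps * extra_vertical

print(cost_factor(18, 9))        # 8.0
```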
AI Models: A New Approach
A newer class of weather models skips the traditional physics equations entirely. Instead of solving fluid dynamics step by step, AI-based models like GraphCast, FuXi, FengWu, and Pangu-Weather learn patterns directly from decades of historical weather data. GraphCast, for instance, uses a graph neural network that maps atmospheric data onto a multi-resolution mesh of the globe and predicts how conditions evolve over time. It matches or beats the traditional ECMWF model on about 90 percent of atmospheric variables, and it does so in minutes on a single machine rather than hours on a supercomputer.
The speed advantage is dramatic: several orders of magnitude faster than traditional numerical models. But there are real limitations. AI models tend to produce smoother forecasts, which means they underestimate extreme events. When researchers tested multiple AI models against Storm Ciarán in 2023, all of them failed to capture the full intensity of the storm's winds. Tropical cyclone intensity prediction remains a particular weak spot. These models also show increasing bias at longer forecast lead times.
For now, AI models work best as a complement to traditional physics-based systems rather than a replacement. They excel at rapid, large-scale pattern prediction while traditional models remain more reliable for extreme weather and fine-scale details.
Why Different Models Disagree
If you’ve ever compared forecasts from multiple sources and noticed they don’t match, this is why. Each model uses different grid resolutions, different mathematical approaches to processes too small to simulate directly (like individual cloud formation), different data assimilation methods, and different starting data. They’re all solving the same fundamental equations, but the choices made in building each model lead to different strengths and weaknesses.
Forecasters at the National Weather Service and private weather companies don’t rely on any single model. They compare outputs from multiple models, weigh ensemble spread, factor in local knowledge, and apply their own judgment. The forecast you see is almost always a blend of model guidance and human expertise, not the raw output of one computer run.

