Earthquakes remain essentially unpredictable because the process that triggers them is invisible, silent, and indistinguishable from the constant background stress adjustments happening deep in Earth's crust at all times. Despite decades of research, thousands of monitoring stations, and increasingly powerful computing, scientists cannot say where or when an earthquake will strike, or how large it will be, before it happens. The reasons come down to what's physically happening underground, the failure of every proposed warning sign, and the sheer complexity of fault systems.
What Happens Underground Before a Quake
Earth’s tectonic plates are always moving, grinding past and pushing against each other. This builds stress in the rock along fault lines over years, decades, or centuries. At some point, the stress exceeds what the rock can hold, and a section of the fault slips suddenly. That slip is an earthquake.
The critical problem is what scientists call the nucleation phase: the brief period when a fault begins transitioning from locked to slipping. Laboratory experiments have shown that this nucleation process is almost entirely aseismic, meaning it produces virtually no detectable seismic waves. It starts as a slow, quiet process deep underground and only evolves into a rapid, violent rupture at the very last moment. By the time instruments can detect anything, the earthquake is already underway. There’s no distinctive “wind-up” that separates the start of a major earthquake from the millions of tiny stress adjustments happening along faults every day.
Every Proposed Warning Sign Has Failed
Scientists have spent decades searching for reliable precursors, measurable changes that consistently happen before an earthquake. The list of candidates is long: spikes in radon gas seeping from the ground, changes in groundwater levels, unusual animal behavior, shifts in electrical signals in rock, and patterns of small foreshocks. None has proven reliable enough to base a prediction on.
Radon is one of the most studied candidates. It's a radioactive gas that can seep out of rock as it cracks and shifts underground, and some studies have detected radon anomalies before significant earthquakes. But radon levels also fluctuate with rainfall, humidity, temperature, tidal forces, and seasonal atmospheric changes. In cave monitoring studies, researchers found that radon readings correlated strongly with outside humidity and temperature differences; one team concluded that radon alone could not distinguish a genuine pre-earthquake anomaly from an ordinary summer peak driven by atmospheric conditions.
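The confounding problem is easy to demonstrate. The toy model below uses entirely synthetic numbers, not data from any real monitoring study: it generates a year of daily radon readings driven mostly by seasonal humidity and shows how strongly the two series end up correlated, leaving little room for a distinct earthquake signal.

```python
import math
import random
import statistics

random.seed(0)

# Toy model: a year of daily radon readings driven mostly by humidity,
# plus noise. Purely illustrative -- not data from any real study.
days = range(365)
humidity = [60 + 25 * math.sin(2 * math.pi * d / 365) for d in days]
radon = [0.8 * h + random.gauss(0, 5) for h in humidity]

# statistics.correlation requires Python 3.10+
r = statistics.correlation(humidity, radon)
print(f"radon-humidity correlation: r = {r:.2f}")  # ~0.9

# With r this close to 1, a simple threshold on radon fires every humid
# season; a genuine precursor would have to stand out above that swing.
```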
Foreshocks, smaller earthquakes that sometimes precede a larger one, seem like they should be useful. But most small earthquakes are not followed by a bigger one, and most large earthquakes are not preceded by identifiable foreshocks. The statistical pattern of earthquake sizes, known as the Gutenberg-Richter law, shows that small earthquakes vastly outnumber large ones in a smooth, predictable distribution. This means any individual small earthquake looks statistically identical to every other small earthquake. There’s no reliable way to look at a magnitude 3 event and determine whether it’s a standalone tremor or the opening act of a magnitude 7.
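The Gutenberg-Richter relation itself fits in a few lines. The sketch below uses illustrative parameters, a b-value of 1.0 (the commonly cited global figure) and an arbitrary regional rate a = 5.0, to show how steeply event counts climb as magnitude drops.

```python
# Gutenberg-Richter law: log10(N) = a - b*M, where N is the expected
# number of earthquakes of magnitude M or larger. Illustrative values:
# b ~ 1.0 is typical worldwide; a = 5.0 is an arbitrary regional rate.
a, b = 5.0, 1.0

def expected_count(magnitude: float) -> float:
    """Expected number of quakes at or above `magnitude` per unit time."""
    return 10 ** (a - b * magnitude)

for m in (3, 5, 7):
    print(f"M >= {m}: ~{expected_count(m):g} events")
# M >= 3: ~100, M >= 5: ~1, M >= 7: ~0.01 -- with b = 1, every unit
# drop in magnitude means roughly ten times as many earthquakes, so no
# individual magnitude-3 event carries a predictive signature.
```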
The Parkfield Experiment
The most famous attempt at earthquake prediction played out along the San Andreas Fault in central California. Six magnitude-6 earthquakes had struck the Parkfield section between 1857 and 1966, at seemingly regular intervals of roughly 22 years. In 1985, USGS scientists used this pattern to predict that the next one would most likely strike around January 1988, with a 95 percent confidence window extending to 1993. They installed an extensive network of instruments to catch the buildup in real time.
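The arithmetic behind that forecast is easy to reconstruct in spirit, though the sketch below is a back-of-the-envelope simplification, not the actual Bakun-Lindh analysis. Averaging the gaps between the historical events points squarely at 1988; the scatter around that average is the part the headline date glossed over.

```python
import statistics

# M~6 Parkfield earthquakes on the San Andreas Fault, 1857-1966.
event_years = [1857, 1881, 1901, 1922, 1934, 1966]

intervals = [y2 - y1 for y1, y2 in zip(event_years, event_years[1:])]
mean_gap = statistics.mean(intervals)    # ~21.8 years
spread = statistics.stdev(intervals)     # ~7.2 years

print(f"intervals:        {intervals}")  # [24, 20, 21, 12, 32]
print(f"mean recurrence:  {mean_gap:.1f} +/- {spread:.1f} years")
print(f"naive next event: {event_years[-1] + mean_gap:.0f}")  # ~1988
# The mean points neatly at 1988 -- and hides how irregular the
# individual intervals (12 to 32 years) actually were.
```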
The earthquake didn’t arrive until 2004, more than a decade late. The Parkfield experiment became a landmark case study in why even the most data-rich, pattern-based predictions fall short. Faults don’t operate on schedules. The same fault segment, under broadly similar conditions, can behave differently each time because of countless variables in rock composition, fluid pressure, temperature, and the stress transferred from neighboring fault sections.
Why Artificial Intelligence Hasn’t Solved It
Machine learning has transformed many fields, and researchers have applied it aggressively to earthquake science. AI can process seismic data far faster than humans and detect subtle patterns in enormous datasets. But the results so far highlight the same fundamental obstacles in a new way.
When machine learning algorithms are turned loose on seismic records, they tend to add large numbers of poorly constrained, small-magnitude events to earthquake catalogs while sometimes missing larger, clearly felt earthquakes. In studies across Alaska and the Pacific Northwest, 30 to 40 percent of the additional events flagged by machine learning turned out to be noise, glacial quakes, or duplicate detections of the same earthquake. In glaciated regions, algorithms trained on global datasets couldn't reliably distinguish actual earthquakes from vibrations produced by glaciers, because their training data included few examples from those environments.
There are also architectural limitations. One common machine learning approach analyzes seismic signals in 60-second windows, which can cause it to misinterpret waves from a single distant earthquake as two separate local events. The training datasets themselves cap out at about 350 kilometers from the source, meaning the algorithms struggle with anything farther away. These aren’t small technical glitches. They reflect a deeper issue: earthquake signals are embedded in a noisy, variable, geologically diverse planet, and no algorithm can extract a prediction signal that may not physically exist in the data.
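To see why fixed windows matter, consider the arrival times alone. The toy calculation below uses assumed textbook wave speeds, not any real detector's logic: it shows how the P and S waves from a single distant quake can land in different 60-second windows and get logged as two events.

```python
# Toy timing calculation -- not any real picker's code. Rough crustal
# wave speeds are assumed: P ~ 6 km/s, S ~ 3.5 km/s.
P_KM_S, S_KM_S = 6.0, 3.5
WINDOW_S = 60.0  # the fixed analysis window described above

def naive_event_count(distance_km: float) -> int:
    """Events a per-window detector reports for one distant quake."""
    p = distance_km / P_KM_S   # P-wave arrival time, seconds
    s = distance_km / S_KM_S   # S-wave arrival time, seconds
    # Each arrival lights up the window it lands in; arrivals in
    # different windows get logged as separate events.
    return len({int(p // WINDOW_S), int(s // WINDOW_S)})

for d in (100, 300, 600):
    print(f"{d:>3} km away: {naive_event_count(d)} reported event(s)")
# 100 km: 1 event. At 300 km and beyond, the P and S arrivals fall in
# different windows and one quake becomes "two" -- right around the
# ~350 km edge of the training data.
```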
Forecasting Is Not Predicting
Scientists draw a sharp line between earthquake prediction and earthquake forecasting. Prediction means specifying the location, time, and magnitude of a future earthquake with enough precision to act on. Forecasting means estimating the probability that an earthquake of a certain size will occur in a given region over a span of years or decades. Forecasting works reasonably well. The USGS can say, for instance, that there is a high probability of a major earthquake along the southern San Andreas Fault in the next 30 years. That information shapes building codes, insurance rates, and emergency planning.
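The probabilities in such forecasts typically come from simple occurrence models. A common textbook choice treats large quakes on a fault segment as a Poisson process; the sketch below uses a hypothetical 150-year recurrence rate, not actual USGS hazard numbers, to show how a useful 30-year probability collapses into a useless short-term one.

```python
import math

# Textbook Poisson occurrence model: if a fault segment produces large
# quakes at an average rate of one per `recurrence` years, then
#   P(at least one in the next T years) = 1 - exp(-T / recurrence).
# The 150-year recurrence below is hypothetical, for illustration only.
def chance_of_quake(recurrence_years: float, window_years: float) -> float:
    return 1.0 - math.exp(-window_years / recurrence_years)

print(f"30-year window: {chance_of_quake(150, 30):.0%}")  # ~18%
print(f" 1-year window: {chance_of_quake(150, 1):.1%}")   # ~0.7%
# A solid basis for building codes and insurance pricing -- and no help
# at all in deciding whether tomorrow is the day.
```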
But a 30-year window doesn’t help you decide whether to go to work tomorrow. Short-term deterministic prediction, the kind people actually want, remains out of reach and may be fundamentally impossible. The system is what physicists call chaotic: tiny, unmeasurable differences in starting conditions can lead to completely different outcomes. Even if you could map every square centimeter of a fault and measure every variable, the math of rupture propagation might still not allow a precise prediction.
Early Warning Is Not the Same as Prediction
Systems like ShakeAlert in the western United States can detect an earthquake within seconds of it starting and send alerts to people farther from the epicenter before the shaking reaches them. Warning times typically range from a few seconds to tens of seconds, with the longest possible alerts of 50 to 80 seconds available in the Pacific Northwest for users far from the source. These systems save lives by giving people time to drop and take cover, and they can trigger automated responses like slowing trains and opening firehouse doors.
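The arithmetic behind those warning times is a race between wave speeds and network latency. The sketch below uses rough assumed values, an S-wave speed of about 3.5 km/s and a five-second lump for detection and alert delivery; it is illustrative, not ShakeAlert's actual latency budget.

```python
# Back-of-the-envelope warning time: the alert travels at network speed
# (effectively instant) while damaging S waves travel at ~3.5 km/s.
# Assumed values -- not ShakeAlert's actual latency budget.
S_KM_S = 3.5            # rough crustal S-wave speed
ALERT_LATENCY_S = 5.0   # assumed: P waves reach nearby stations, the
                        # quake is located, and the alert goes out

def warning_seconds(distance_km: float) -> float:
    """Seconds between alert arrival and damaging shaking at a user."""
    return max(0.0, distance_km / S_KM_S - ALERT_LATENCY_S)

for d in (20, 100, 300):
    print(f"{d:>3} km from epicenter: ~{warning_seconds(d):.0f} s of warning")
# 20 km: ~1 s (close-in users get little or no warning); 300 km: ~81 s,
# consistent with the longest alert times quoted above.
```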
But early warning is fundamentally reactive. It detects shaking that has already begun and races the slower, more damaging waves to your location. It cannot tell you an earthquake is coming before it starts. The USGS states this plainly: ShakeAlert cannot predict earthquakes.
The stakes of getting this wrong are real. In late 2025, a false ShakeAlert notification was sent to cellphones hundreds of miles from a supposed epicenter in Nevada, prompting public alarm and a congressional inquiry. A similar false alarm two years earlier jolted users awake at 3:19 a.m., seven hours before a publicized test was scheduled to go out. These incidents erode public trust in the one earthquake system that actually works, and they illustrate why scientists are cautious about issuing any alert without high confidence behind it.
The Core Problem
Earthquake prediction isn’t hard in the way that, say, building a faster computer chip is hard. It may be hard in the way that predicting exactly which snowflake will trigger an avalanche is hard: the answer might not exist at the level of precision we want. Faults are buried kilometers underground, inaccessible to direct observation. The forces involved operate over areas of hundreds or thousands of square kilometers. The transition from a locked fault to a rupturing one happens in a process that is nearly silent until the moment it isn’t. And the statistical behavior of earthquakes means that every small tremor looks like every other small tremor until, rarely and without clear warning, one of them grows into something catastrophic.
What science can offer is a clear picture of which regions are at risk, how strong the shaking could be, and how to build structures and systems that survive it. Prediction may never come. Preparedness already works.