Stationarity means a time series has statistical properties that don’t change over time. Specifically, a stationary series has a constant average value, a constant spread (variance), and a consistent relationship between observations at different time points. This concept matters because most standard forecasting models assume the data they’re working with is stationary. If your data violates that assumption, predictions become unreliable.
The Three Conditions for Stationarity
A time series is considered stationary (in the practical, “weak” sense most analysts care about) when it meets three conditions simultaneously:
- Constant mean. The average value doesn’t drift upward or downward over time. A stock price that trends upward for years violates this condition. Temperature readings that hover around the same average year after year satisfy it.
- Constant variance. The amount the data fluctuates stays roughly the same throughout the series. If price swings are tiny in January but enormous in December, variance isn’t constant.
- Autocovariance depends only on lag, not on time. The statistical relationship between any two observations depends only on how far apart they are in time (the “lag”), not on when they occur. The correlation between Monday and Wednesday should look similar whether you’re measuring it in January or July.
When all three hold, you can take a snapshot of the series at any point and it looks, statistically speaking, like any other snapshot. There’s no drift, no expanding volatility, no seasonal pattern reshaping the data.
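A quick way to build intuition for the first two conditions is to compare summary statistics across different stretches of a series. Here is a minimal sketch using NumPy and simulated data (the split-in-half comparison is an informal illustration, not a formal test):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# A series that satisfies the conditions: white noise around a fixed level.
stationary = rng.standard_normal(n)

# A series that violates the constant-mean condition: noise plus a trend.
trending = 0.01 * np.arange(n) + rng.standard_normal(n)

def half_means(x):
    """Mean of the first and second half of the series."""
    mid = len(x) // 2
    return x[:mid].mean(), x[mid:].mean()

print(half_means(stationary))  # the two halves agree closely
print(half_means(trending))    # the second half is clearly higher
```

The same split can be applied to the standard deviation to probe the constant-variance condition.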
Weak vs. Strict Stationarity
The three conditions above define what statisticians call weak stationarity (also called covariance stationarity or second-order stationarity). It only constrains the mean, variance, and covariance structure. This is the version you’ll encounter in virtually all applied work.
Strict stationarity is a stronger requirement: the entire probability distribution of any set of observations must be identical when you shift it forward or backward in time. Not just the mean and variance, but every statistical property, including skewness, the shape of the tails, and every higher-order moment. Strict stationarity implies weak stationarity (as long as variance is finite), but the reverse isn’t true. In practice, strict stationarity is nearly impossible to verify with real data, so analysts work with the weak version.
Why Stationarity Matters for Forecasting
Classical time series models like ARIMA assume the data is stationary. The reason is straightforward: if the statistical properties of your data are shifting over time, any pattern you learn from the past may not apply to the future. A model trained on a rising trend will keep predicting a rise, even if the underlying dynamics have changed.
Stationarity also affects how you interpret the autocorrelation function (ACF), which measures how strongly each observation correlates with past observations. For a stationary series, the ACF drops to zero relatively quickly. For non-stationary data, it decays slowly, indicating that observations remain correlated over very long lags. This slow decay is a signal the series needs to be transformed before modeling.
Random walk processes, common in finance, are a classic example of non-stationarity. A random walk’s forecast for tomorrow is simply today’s value, because future movements are equally likely to go up or down. You can’t extract a useful pattern from a random walk, which is why differencing it (see below) is often the first step.
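The contrast in ACF decay is easy to reproduce. Below is a small NumPy sketch; the `sample_acf` helper is a hand-rolled illustration, and in practice you would use a library routine such as statsmodels' `acf`:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

noise = rng.standard_normal(n)   # stationary: white noise
random_walk = np.cumsum(noise)   # non-stationary: accumulated shocks

def sample_acf(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

print(sample_acf(noise, 10))        # near zero: correlation dies out fast
print(sample_acf(random_walk, 10))  # still high: the slow decay described above
```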
Spotting Non-Stationarity Visually
Before running any formal test, you can often spot non-stationarity by looking at a plot. A series with a visible upward or downward trend is non-stationary because the mean is changing. A series with a repeating seasonal pattern (higher every summer, lower every winter) is non-stationary because the value at any point depends on the time of year. A series where the fluctuations get larger over time has non-constant variance.
A stationary series, by contrast, looks like it’s fluctuating around a fixed horizontal level with roughly consistent spread. Think of it as visual “sameness” across the timeline. White noise, a sequence of uncorrelated random values with constant mean and variance, is the simplest example of a stationary process.
Testing for Stationarity
Two statistical tests are widely used, and they approach the question from opposite directions.
The Augmented Dickey-Fuller (ADF) test starts with the assumption that the series is non-stationary (it has a “unit root,” meaning shocks to the series persist forever rather than fading). If the test’s p-value falls below 0.05, you reject that assumption and conclude the series is stationary. The ADF test is the most common starting point in applied work and is available in every major statistics package.
The KPSS test flips the logic. Its starting assumption is that the series is stationary. A low p-value here means you reject stationarity. Running both tests together gives you more confidence. If the ADF says “stationary” and the KPSS agrees, you’re on solid ground. If they contradict each other, the series may be borderline and need closer inspection or transformation.
How to Make a Series Stationary
Differencing
The most common transformation is differencing: subtracting the previous observation from the current one. If your value at time t is Y(t), the differenced series is simply Y(t) minus Y(t-1). This removes trends by converting the raw values into period-to-period changes.
A classic example: a stock price that climbs steadily is non-stationary, but the daily price changes (today’s close minus yesterday’s close) often are stationary. The Google stock price illustrates this well. The raw price series trends upward and is clearly non-stationary, while the series of daily changes fluctuates around zero with no visible trend.
One round of differencing (called first-order differencing) handles a linear trend. If the series has a more complex, curving trend, you may need to difference twice (second-order differencing). In seasonal data, you can also take a “seasonal difference,” subtracting the value from the same season in the prior year, to strip out repeating patterns.
If the first difference of a series is stationary and completely random (no autocorrelation), the original series is described by a random walk model. If the first difference is stationary but still shows autocorrelation, that’s where ARIMA modeling picks up, capturing the remaining structure.
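In code, differencing is a one-liner. A sketch with NumPy showing first, second, and seasonal differences on a purely linear series (the lag of 12 assumes monthly data and is illustrative):

```python
import numpy as np

# A series with a linear trend: clearly non-stationary in levels.
t = np.arange(100, dtype=float)
y = 2.0 * t + 5.0

first_diff = np.diff(y)        # Y(t) - Y(t-1): removes the linear trend
second_diff = np.diff(y, n=2)  # the difference of the differences

# Seasonal difference at lag 12 (e.g. monthly data): Y(t) - Y(t-12)
seasonal_diff = y[12:] - y[:-12]

print(first_diff[:3])   # constant: the trend is gone
print(second_diff[:3])  # all zeros for a purely linear trend
```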
Log Transformations
When the issue is non-constant variance rather than a shifting mean, taking the logarithm of each value can help. This is common with financial and economic data where larger values tend to come with larger fluctuations. A log transformation compresses the scale so that multiplicative (percentage) changes become additive, stabilizing the variance. In many real-world cases, you’ll apply a log transformation first to stabilize variance, then difference the result to remove the trend.
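A sketch of that two-step recipe on a simulated exponentially growing series (the growth rate and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(500)

# Exponential growth with multiplicative noise: swings scale with the level.
y = 100.0 * np.exp(0.01 * t + 0.02 * rng.standard_normal(500))

raw_diff = np.diff(y)          # fluctuations grow as the level rises
log_diff = np.diff(np.log(y))  # roughly constant spread throughout

mid = len(raw_diff) // 2
print(raw_diff[:mid].std(), raw_diff[mid:].std())  # second half much larger
print(log_diff[:mid].std(), log_diff[mid:].std())  # roughly equal
```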
Detrending
Another option is to fit a trend line (linear or polynomial) to the data and subtract it, leaving only the residuals. This works when the trend is deterministic, meaning it follows a predictable path. Differencing is generally preferred when the trend is stochastic (driven by accumulating random shocks), which is more common in practice. Choosing the wrong method can leave hidden non-stationarity in your data, so the distinction matters.
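Deterministic detrending amounts to an ordinary least-squares fit. A minimal sketch using NumPy's `polyfit` on a simulated series with a known linear trend:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200, dtype=float)

# Deterministic linear trend plus stationary noise.
y = 0.5 * t + 10.0 + rng.standard_normal(200)

# Fit a straight line and subtract it, keeping only the residuals.
slope, intercept = np.polyfit(t, y, deg=1)
residuals = y - (slope * t + intercept)

print(slope)             # close to the true slope of 0.5
print(residuals.mean())  # essentially zero by construction
```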
Stationarity in Practice
Most real-world time series are not stationary in their raw form. Sales figures grow over time. Temperatures follow seasonal cycles. Website traffic spikes around product launches. The practical workflow is to identify what makes the series non-stationary, apply the appropriate transformation, verify stationarity with a test like the ADF, and then fit your model to the transformed data.
The “I” in ARIMA stands for “integrated,” which refers to how many times the series needs to be differenced to become stationary. An ARIMA(1,1,1) model, for instance, applies one round of differencing before fitting. The model handles this automatically once you specify the differencing order, but understanding what’s happening under the hood helps you diagnose problems when forecasts go wrong.
Stationarity isn’t just a checkbox to get past before modeling. It reflects a deeper question: does this data have stable, learnable patterns? If the answer is no, no amount of model complexity will produce reliable forecasts. Making a series stationary is how you isolate those stable patterns from the noise of trends and seasonal shifts.

