Interpolation is estimating a value between known data points. Extrapolation is estimating a value beyond them. Both techniques use existing data to fill in unknowns, but they differ in one critical way: interpolation works within the range of what you’ve already observed, while extrapolation projects outside that range into uncharted territory. That distinction matters because it directly affects how much you can trust the result.
How Interpolation Works
Imagine you have temperature readings at 8 a.m. and 10 a.m., and you want to estimate what the temperature was at 9 a.m. You’re filling in a gap between two known values. That’s interpolation. The core assumption is straightforward: if you know what happened on either side of a missing point, the value in between probably follows the same pattern.
The simplest version is linear interpolation, which draws a straight line between two neighboring data points and reads off the value you need. If a plant was 3 cm tall at week one and 7 cm tall at week three, linear interpolation estimates 5 cm at week two. It’s fast, intuitive, and works well when data changes at a fairly steady rate.
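In code, linear interpolation is a one-liner. The sketch below (plain Python) reads the week-two estimate off the straight line between the two plant measurements from the example:

```python
# Linear interpolation between two known points (x0, y0) and (x1, y1).
def lerp(x0, y0, x1, y1, x):
    t = (x - x0) / (x1 - x0)  # fractional position of x between x0 and x1
    return y0 + t * (y1 - y0)

# Plant height: 3 cm at week 1, 7 cm at week 3 -> estimate for week 2.
print(lerp(1, 3, 3, 7, 2))  # 5.0
```

The same function covers the temperature example: an 8 a.m. and a 10 a.m. reading interpolated at 9 a.m. lands halfway between them.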
When data curves rather than following a straight line, you need more flexible tools. Polynomial interpolation fits a single curved equation through all your data points at once. This works well for small data sets, but it has a well-known weakness: as the number of points increases, the curve can develop wild oscillations between them. Mathematicians call this Runge’s phenomenon. The classic demonstration uses just 11 evenly spaced points on a simple bell-shaped curve, f(x) = 1/(1 + 25x²). The resulting polynomial swings so dramatically near the edges that it becomes useless as an estimate, even though it passes perfectly through every known point.
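The demonstration is easy to reproduce. The sketch below uses Lagrange’s form of the interpolating polynomial in plain Python; the 11-point grid on [-1, 1] and the fine evaluation grid are the standard illustrative choices:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial through every (x, y) pair at the point x."""
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

# Runge's function, sampled at 11 evenly spaced points on [-1, 1].
f = lambda x: 1.0 / (1.0 + 25.0 * x * x)
xs = [-1.0 + 0.2 * i for i in range(11)]
ys = [f(x) for x in xs]

# The polynomial hits every sample exactly, yet swings wildly between the
# outermost points: the worst-case gap dwarfs the function's own 0-to-1 range.
worst = max(abs(f(x) - lagrange_eval(xs, ys, x))
            for x in [i / 500.0 - 1.0 for i in range(1001)])
print(worst)
```

The function never leaves the interval [0, 1], yet the interpolant misses it by more than the function’s entire range near the edges.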
Cubic spline interpolation solves this problem by taking a divide-and-conquer approach. Instead of forcing one equation through all the data, it fits a separate smooth curve between each pair of neighboring points, then stitches them together so there are no abrupt jumps or kinks at the connections. The result looks and behaves like a flexible beam pinned at each data point, which is actually the physical analogy that inspired the technique. Splines are a reliable general-purpose choice, and many graphing and engineering packages use them as the default interpolation method.
How Extrapolation Works
Extrapolation takes a pattern you’ve observed and extends it forward (or backward) past your last known data point. If you’ve tracked a company’s revenue for the past five years and want to predict next year’s number, you’re extrapolating. The process typically involves fitting a trend line or curve to existing data and then projecting that line into the future.
Linear extrapolation is the simplest form: draw a straight line through your data and extend it. This works reasonably well over short distances when trends are stable. Polynomial extrapolation uses curved equations, which can capture acceleration or deceleration in a trend but become unreliable quickly because curves can shoot off in unexpected directions once they leave the range of known data.
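A short sketch of linear extrapolation: fit a least-squares line to the history, then evaluate it one step past the data. The revenue figures below are made up for illustration:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for a straight-line trend."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical five years of revenue (in $M); project year 6.
years = [1, 2, 3, 4, 5]
revenue = [10.0, 12.0, 13.5, 15.5, 17.0]
m, b = fit_line(years, revenue)
print(m * 6 + b)  # next-year projection: ~18.85
```

Note that the projection is only as good as the assumption that the trend holds for one more year.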
For longer-term forecasting, analysts often use growth curves that model how real-world processes behave. S-shaped curves, for instance, capture patterns where growth starts slowly, accelerates, then levels off, like the adoption of a new technology or the spread of an infectious disease. Exponential growth models capture processes that compound over time. The choice of curve shape encodes an assumption about what the future looks like, which is both the power and the danger of extrapolation.
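The two shapes can be compared directly. In the sketch below every parameter (starting value, growth rate, carrying capacity) is invented for illustration; the point is how the curves agree early on and diverge once you project far enough out:

```python
import math

# Illustrative growth models; all parameters here are made up.
def exponential(t, y0=100.0, rate=0.4):
    """Compounding growth: y0 * e^(rate * t)."""
    return y0 * math.exp(rate * t)

def logistic(t, cap=10000.0, y0=100.0, rate=0.4):
    """S-curve: starts like exponential growth, then levels off at `cap`."""
    return cap / (1.0 + (cap / y0 - 1.0) * math.exp(-rate * t))

# Early on the two nearly agree; far out they diverge wildly.
for t in (0, 5, 20):
    print(t, round(exponential(t)), round(logistic(t)))
```

Choosing between these two fits is exactly the assumption about the future described above: the exponential says growth compounds forever, the S-curve says it saturates.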
Why Extrapolation Is Riskier
The fundamental problem with extrapolation is that you have no data to anchor your estimate. With interpolation, known points sit on both sides of the gap, bracketing your answer. With extrapolation, you’re relying entirely on the assumption that the pattern you observed will continue unchanged.
Consider a regression model built from observed data with a strong correlation. Within the range of that data, predictions carry reasonable confidence because the model has been validated against real measurements. But once you push the input values beyond the observed range, there’s no particular indication the model still holds. A relationship that looks linear over a small window might curve, flatten, or reverse outside it. Housing prices don’t rise forever. Population growth doesn’t stay exponential. A patient’s response to a medication at double the tested dose isn’t necessarily twice the response at the tested dose.
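A small numerical illustration of that failure mode. The underlying relationship here is invented (nearly linear for small x, bending back later); a straight-line fit over the observed window predicts well inside it and fails badly far outside:

```python
# A relationship that looks linear over a narrow window but bends outside it.
def true_f(x):
    return x - 0.01 * x * x  # made-up example: peaks at x = 50, then reverses

xs = list(range(11))             # observed range: x = 0..10
ys = [true_f(x) for x in xs]

# Ordinary least-squares straight-line fit.
n, mx, my = len(xs), sum(xs) / len(xs), sum(ys) / len(xs)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Inside the observed range the fit is trustworthy...
print(true_f(5), slope * 5 + intercept)      # truth 4.75, fit ~4.65
# ...far outside it, the same equation is badly wrong.
print(true_f(100), slope * 100 + intercept)  # truth 0.0, fit ~90.15
```

The regression is the same equation in both calls; only the distance from the observed data changes, and with it the size of the error.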
This is why extrapolation demands more caution and more domain knowledge. The math doesn’t know when the pattern will break. You have to.
Interpolation vs. Extrapolation on a Graph
On a scatter plot, the difference is visual. Plot your known data points, then draw or fit a line through them. Any estimate you read off the line between the first and last data point is interpolation. Any estimate you read off the line past either end is extrapolation. Some graphs use a solid line for the interpolated region and a dashed line for the extrapolated portion, signaling the shift from observed territory to projection.
If your data covers months one through four and you estimate a value at month two, you’re interpolating. If you estimate a value at month five, you’re extrapolating, because month five falls outside the range of months you actually measured.
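The rule is mechanical enough to write down. A tiny helper, assuming a one-dimensional measured range:

```python
def estimate_kind(known_xs, x):
    """Label an estimate at x relative to the measured range."""
    return "interpolation" if min(known_xs) <= x <= max(known_xs) else "extrapolation"

months = [1, 2, 3, 4]
print(estimate_kind(months, 2))  # interpolation
print(estimate_kind(months, 5))  # extrapolation
```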
Where Each Method Shows Up
Interpolation is everywhere in computing, often invisibly. When you zoom into a digital photo and the software fills in pixels between the ones that were actually captured, that’s interpolation. When a GPS calculates your position between satellite readings, that’s interpolation. Weather maps that show smooth temperature gradients across a region are built by interpolating between scattered weather station measurements. Audio resampling, 3D animation, and scientific instrument calibration all rely on it constantly.
Extrapolation drives forecasting and planning. Economic projections, climate models, and technology roadmaps all extrapolate from historical data. In medicine, researchers use extrapolation to estimate how clinical trial results would apply to patients who weren’t included in the original study. For example, if a heart medication was tested on younger patients, statistical methods can project what the outcomes might look like for older patients or those with different health histories. These techniques have been applied to estimate the real-world effectiveness of insulin delivery methods, blood pressure medications, and blood thinners in populations that differ from the original trial participants.
Regression analysis, one of the most common tools in statistics, uses both. Predictions within the range of your original data are interpolation. Predictions outside that range are extrapolation. The regression equation is the same either way, but your confidence in the result should not be.
Choosing the Right Approach
When you need a value between known measurements, interpolation is your tool. The main decision is how complex your method should be. For data that changes gradually, linear interpolation is fast and sufficient. For data with curves or waves, cubic splines handle the shape without the instability of high-degree polynomials. If you have only a handful of points and the data doesn’t oscillate much, polynomial interpolation is a reasonable middle ground.
When you need a value beyond your data, you’re extrapolating whether you want to or not. Keep projections as short-range as possible, because error grows with distance from your last known point. Use a curve shape that reflects how the underlying process actually behaves, not just the shape that fits the historical data most tightly. A model that perfectly matches the past can still be wildly wrong about the future if it’s capturing noise rather than the real trend. And always treat extrapolated values as estimates with increasing uncertainty, not as facts with the same weight as your measured data.

