What Is a Sensitivity Analysis and How Does It Work?

Sensitivity analysis is a method for testing how changes in the inputs of a model affect its output. If you’ve built a financial forecast, designed a clinical trial, or created any model that relies on assumptions, sensitivity analysis tells you which assumptions matter most and how much your results would shift if those assumptions turned out to be wrong. It’s used across finance, healthcare, engineering, and policy making to stress-test decisions before committing to them.

The Core Idea

Every model depends on inputs: estimated costs, predicted growth rates, survival probabilities, interest rates. These inputs are rarely known with certainty. Sensitivity analysis systematically changes one or more of those inputs to see how the final result responds. The output is a set of sensitivity measures for each input, telling you which variables have the biggest influence on your conclusion and which ones barely matter at all.

This serves two purposes. First, it identifies where to focus your effort. If your business case collapses when a single cost estimate shifts by 5%, that estimate deserves more research and tighter data. Second, it reveals how robust your conclusion is. A decision that holds up across a wide range of assumptions is far more trustworthy than one that only works under a narrow set of conditions.

Local vs. Global Methods

Sensitivity analysis comes in two broad flavors, and the distinction matters for understanding how thorough the analysis actually is.

Local sensitivity analysis changes one input at a time while holding everything else fixed. It’s simple to run and easy to interpret, which is why it remains the most common approach. But it assumes the inputs behave in a roughly linear way, and it can produce unreliable rankings when the model is complex or when inputs interact with each other. Think of it as checking one dial at a time on a mixing board: you learn what each dial does individually, but you miss how combinations of changes sound together.

Global sensitivity analysis varies all inputs simultaneously across their full plausible ranges. It captures interactions between variables and handles nonlinear relationships, giving a more complete picture of what drives uncertainty in the output. The tradeoff is computational cost. Global methods take significantly more processing time, which is why they’re typically reserved for high-stakes models in engineering, climate science, and large-scale health economic evaluations.
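To make the contrast concrete, here is a minimal sketch of a global, variance-based approach: a "pick-freeze" Monte Carlo estimate of first-order Sobol indices, which measure each input's share of the output variance while all inputs vary at once. The toy model and sample sizes are illustrative assumptions, not from the original; production work would use a dedicated library with more efficient estimators.

```python
# Global sensitivity via a "pick-freeze" Monte Carlo estimate of
# first-order Sobol indices: all inputs are varied simultaneously,
# and each index measures that input's share of the output variance.
# The toy model Y = X1 + 3*X2 (inputs uniform on [0, 1]) is illustrative.
import random

def model(x1, x2):
    return x1 + 3 * x2

def first_order_sobol(n=50_000, seed=0):
    rng = random.Random(seed)
    a = [(rng.random(), rng.random()) for _ in range(n)]  # sample A
    b = [(rng.random(), rng.random()) for _ in range(n)]  # sample B
    y = [model(*p) for p in a]
    mean_y = sum(y) / n
    var_y = sum((v - mean_y) ** 2 for v in y) / n
    indices = []
    for i in range(2):
        # "Freeze" input i from sample A, redraw the other input from B.
        y_i = [model(*(a[k][j] if j == i else b[k][j] for j in range(2)))
               for k in range(n)]
        cov = sum(y[k] * y_i[k] for k in range(n)) / n - mean_y * (sum(y_i) / n)
        indices.append(cov / var_y)
    return indices

s1, s2 = first_order_sobol()
# Analytically, S1 = 0.1 and S2 = 0.9 for this model, since X2's
# coefficient of 3 contributes nine times the variance of X1's.
print(f"S1 ~ {s1:.2f}, S2 ~ {s2:.2f}")
```

Note that the estimate for X2 comes out roughly nine times larger than for X1 even though both inputs span the same range; that is exactly the kind of variance decomposition a one-at-a-time sweep cannot give you.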

How It Works in Practice

The simplest version is a one-way sensitivity analysis. You pick a single input, vary it across a reasonable range, and record how the output changes. For example, if you’re modeling a retail business that sells a product at $20 per unit and expects 500 sales per season, you might ask: what happens to revenue if customer traffic increases by 10%, 20%, or 40%? If a 10% traffic increase produces a 7% jump in sales, and the relationship stays roughly linear, a 40% increase would lift sales, and therefore revenue, by roughly 28%. This gives you a concrete sense of how sensitive your revenue forecast is to foot traffic.
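The sweep above can be sketched in a few lines. The revenue model and the 0.7 sales-to-traffic ratio come from the worked example; treating that ratio as constant across the whole range is the linearity assumption a one-way analysis typically makes.

```python
# One-way sensitivity analysis: vary a single input (foot traffic)
# while holding everything else fixed, and record how revenue responds.
# The 0.7 ratio encodes the example's "10% traffic -> 7% sales" figure.

PRICE = 20.0        # dollars per unit
BASE_SALES = 500    # units per season
ELASTICITY = 0.7    # % change in sales per % change in traffic (assumed constant)

def revenue(traffic_change_pct):
    """Projected seasonal revenue for a given % change in foot traffic."""
    sales = BASE_SALES * (1 + ELASTICITY * traffic_change_pct / 100)
    return PRICE * sales

base = revenue(0)
for change in (10, 20, 40):
    shift = 100 * (revenue(change) - base) / base
    print(f"traffic {change:+d}% -> revenue {shift:+.1f}%")
```

Running this reproduces the figures in the example: +7%, +14%, and +28% revenue for the three traffic scenarios.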

A scenario analysis (sometimes called multi-way sensitivity analysis) takes this further by changing several inputs at once. You might simultaneously adjust customer traffic, product price, and supplier costs to model a best-case, worst-case, and most-likely scenario. This is more realistic since real-world conditions rarely shift one variable in isolation, but it quickly becomes complex. With just five inputs, each tested at three levels, you’d have 243 possible combinations.
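A sketch of that combinatorial growth, assuming a simple profit model and illustrative input names and ranges (none of these figures are from the original): five inputs at three levels each yields exactly 3^5 = 243 scenarios.

```python
# Multi-way (scenario) analysis: evaluate every combination of several
# inputs, each at a low / base / high level. Five inputs at three levels
# produce 3**5 = 243 scenarios, which is why this approach scales badly.
from itertools import product

levels = {
    "traffic_mult": (0.9, 1.0, 1.2),    # customer traffic multiplier
    "price":        (18.0, 20.0, 22.0),
    "unit_cost":    (8.0, 10.0, 12.0),
    "return_rate":  (0.02, 0.05, 0.08),
    "marketing":    (500.0, 1000.0, 2000.0),
}

def profit(traffic_mult, price, unit_cost, return_rate, marketing):
    units = 500 * traffic_mult * (1 - return_rate)
    return units * (price - unit_cost) - marketing

scenarios = [dict(zip(levels, combo)) for combo in product(*levels.values())]
outcomes = sorted(profit(**s) for s in scenarios)
print(f"{len(scenarios)} scenarios: "
      f"worst {outcomes[0]:,.0f}, best {outcomes[-1]:,.0f}")
```

The worst and best entries of the sorted outcomes correspond to the worst-case and best-case scenarios; everything in between shows how wide the plausible range really is.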

Probabilistic Sensitivity Analysis

The most rigorous approach assigns a probability distribution to each uncertain input rather than just testing a few fixed values. Instead of saying “interest rates could be 3%, 5%, or 7%,” you define a distribution reflecting that rates are most likely around 5% but could plausibly range from 2% to 8%. A computer then runs thousands of simulations (a technique called Monte Carlo simulation), each time randomly drawing a value for every input from its distribution and calculating the result.

After thousands of runs, you get a distribution of possible outcomes rather than a single number. You can report, for example, that your project has a 75% chance of being cost-effective or that the expected return falls between $2 million and $8 million in 95% of simulations. This is far more informative than a single point estimate. In health economics, probabilistic sensitivity analysis has become the gold standard. Over 70% of published health economic evaluations historically relied on simpler one-way methods, but the field has increasingly recognized that probabilistic approaches produce more realistic estimates of uncertainty.
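A minimal sketch of such a simulation, using the interest-rate range from the text (most likely around 5%, plausibly 2% to 8%). The project cash flows, horizon, and cost figures are illustrative assumptions, as is the choice of a clipped normal distribution for each input.

```python
# Probabilistic sensitivity analysis via Monte Carlo simulation: draw each
# uncertain input from a distribution, run the model thousands of times,
# and summarize the resulting distribution of outcomes.
import random

def simulate(n=20_000, seed=1):
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        # Interest rate: most likely ~5%, clipped to the 2%-8% range.
        rate = min(max(rng.gauss(0.05, 0.01), 0.02), 0.08)
        growth = rng.gauss(0.03, 0.02)          # annual cash-flow growth
        cashflow = 1_000_000 * (1 + growth)     # illustrative project figures
        npv = sum(cashflow / (1 + rate) ** t for t in range(1, 11)) - 8_000_000
        outcomes.append(npv)
    return sorted(outcomes)

npvs = simulate()
p_positive = sum(v > 0 for v in npvs) / len(npvs)
lo, hi = npvs[int(0.025 * len(npvs))], npvs[int(0.975 * len(npvs))]
print(f"P(NPV > 0) = {p_positive:.0%}; 95% of runs fall in "
      f"[{lo:,.0f}, {hi:,.0f}]")
```

The two summary lines at the end are exactly the kind of probabilistic statement described above: a probability of success and an interval covering 95% of simulated outcomes, instead of a single point estimate.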

Reading a Tornado Diagram

The most common way to visualize sensitivity analysis results is a tornado diagram. Each input variable gets a horizontal bar representing how much the output changes when that variable moves across its plausible range. The bars are stacked vertically, with the widest bar (the most influential variable) at the top and the narrowest at the bottom. The resulting shape looks like a tornado.

Reading one is straightforward. If the top bar stretches far in both directions, that variable is the primary driver of your result, and getting better data on it should be a priority. If a bar near the bottom is barely visible, that input has little practical impact on the outcome and isn’t worth worrying about. Tornado diagrams are especially useful for presentations because they immediately communicate which assumptions deserve the most scrutiny.
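The data behind a tornado diagram can be computed directly: evaluate the model at the low and high end of each input's plausible range while holding the others at their base values, then sort the inputs by the width of the resulting output swing. The model and ranges below are illustrative assumptions.

```python
# Building tornado-diagram data: one output swing per input, widest first.
# Widths come from moving each input across its range while the others
# stay at their base values (a series of one-way analyses).

base = {"price": 20.0, "units": 500.0, "unit_cost": 10.0}
ranges = {
    "price":     (18.0, 22.0),
    "units":     (400.0, 650.0),
    "unit_cost": (7.0, 13.0),
}

def profit(price, units, unit_cost):
    return units * (price - unit_cost)

bars = []
for name, (low, high) in ranges.items():
    lo_out = profit(**{**base, name: low})
    hi_out = profit(**{**base, name: high})
    bars.append((name, min(lo_out, hi_out), max(lo_out, hi_out)))

# Sort widest bar first: that input belongs at the top of the tornado.
bars.sort(key=lambda b: b[2] - b[1], reverse=True)
for name, lo_out, hi_out in bars:
    print(f"{name:10s} {lo_out:8,.0f} .. {hi_out:8,.0f} "
          f"(width {hi_out - lo_out:,.0f})")
```

With these (assumed) ranges, unit cost tops the diagram: its bar is the widest, so it is the input where better data would pay off most.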

Sensitivity Analysis in Clinical Trials

In clinical research, sensitivity analysis plays a different but equally important role. Rather than testing financial assumptions, it tests whether a trial’s conclusions hold up under different analytical choices. For example, a drug trial might show that patients improved by a certain amount, but some participants dropped out before the study ended. The primary analysis handles those missing data one way. A sensitivity analysis then re-runs the analysis under different assumptions about what might have happened to the dropouts.

A valid sensitivity analysis in this context needs to meet a key criterion: it must be designed so that it could, in principle, produce a different conclusion than the primary analysis. If the alternative assumptions always lead to the same answer, the exercise hasn’t actually tested anything. When sensitivity analyses do confirm the primary result across a range of plausible assumptions, researchers can be confident that their conclusions are robust. When results diverge, it signals that the findings depend heavily on specific analytical choices, and that uncertainty should be acknowledged.

Where Sensitivity Analysis Is Used

In finance, sensitivity analysis is a standard part of investment modeling. Bond analysts use it to study how bond prices respond to changes in interest rates. Corporate finance teams run it on revenue forecasts, project valuations, and capital budgeting decisions. The scale of the change matters for the approach: studying the effect of a 1-point move in interest rates requires a different model structure than studying a 20-point move, because the relationship between rates and prices isn’t linear across large ranges.
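That nonlinearity is easy to demonstrate with a plain bond-pricing function. The 10-year, 5%-coupon bond below is an illustrative assumption; the point is that extrapolating a small rate move linearly badly overstates the price impact of a large one.

```python
# Bond-price sensitivity to yield: roughly linear for small rate moves,
# visibly nonlinear (convex) for large ones, which is why a 1-point and a
# 20-point analysis call for different model structures.

def bond_price(ytm, coupon=5.0, face=100.0, years=10):
    """Price of an annual-coupon bond at a given yield to maturity."""
    return (sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
            + face / (1 + ytm) ** years)

p0 = bond_price(0.05)            # coupon equals yield, so priced at par
small = bond_price(0.06) - p0    # +1 point
big = bond_price(0.25) - p0      # +20 points
print(f"par {p0:.2f}, +1pt move {small:+.2f}, +20pt move {big:+.2f}")
print(f"linear extrapolation of the 1pt move: {20 * small:+.2f}")
```

Scaling the 1-point price drop by twenty predicts a far larger loss than the model actually produces at a 20-point move, because the price-yield curve flattens as rates rise.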

In health economics, regulatory bodies and insurers often require sensitivity analysis before approving coverage for new treatments. The CHEERS 2022 reporting standards, which guide how health economic evaluations are published, specifically require authors to describe their methods for characterizing uncertainty and to report how that uncertainty affects their findings. This ensures that decision makers aren’t presented with a single cost-effectiveness number without understanding how fragile or stable that number actually is.

In engineering, sensitivity analysis helps identify which design parameters most affect system performance or safety. In environmental science, it’s used to understand which factors drive uncertainty in climate projections or pollution models. The underlying logic is identical across all these fields: figure out what matters most, quantify how much the answer could change, and make better decisions with that knowledge.

Tools for Running Sensitivity Analysis

For simple one-way or scenario analyses, a spreadsheet is often sufficient. You can build data tables in Excel or Google Sheets that automatically recalculate outputs as you vary inputs. For probabilistic and global methods, dedicated software or programming libraries are typically necessary. Platforms like Tableau and Power BI have added features for building sensitivity analysis dashboards with automated visualization. Specialized decision-analysis software like TreeAge is widely used in health economics for running Monte Carlo simulations and generating tornado diagrams. For more custom work, Python and R both have well-supported libraries, such as SALib for Python and the sensitivity package for R, for running sensitivity analysis on virtually any type of model.