Sensitivity analysis is a method for testing how much your results change when you adjust the assumptions behind them. Any model, study, or decision framework relies on inputs that carry some degree of uncertainty. Sensitivity analysis systematically varies those inputs to see which ones actually matter and whether the overall conclusion holds up or falls apart.
The technique is used across medicine, economics, engineering, finance, and environmental science. At its core, the question is always the same: if I’m wrong about one of my assumptions, does it change my answer?
Why It Matters
Every model is built on estimates. A cost-effectiveness study for a new drug relies on assumptions about how well the drug works, how much side effects cost to treat, and how patients value different health states. A climate model depends on assumptions about emissions rates, ocean absorption, and feedback loops. None of these numbers are known with perfect certainty.
Sensitivity analysis examines the robustness of results by running the same analysis under a range of plausible assumptions that differ from the original ones. If the conclusion stays the same no matter how you adjust those inputs, you can place considerably more confidence in it. If changing one assumption by a small amount flips the result, that’s a red flag: the conclusion depends heavily on something you’re not sure about. The National Academy of Sciences has recommended that sensitivity analyses be a mandatory part of reporting findings from clinical trials, precisely because so much rides on untested assumptions.
One-Way vs. Multi-Way Analysis
The simplest form is one-way sensitivity analysis. You pick a single input, change it across a reasonable range, and hold everything else constant. For example, a health economics model might test what happens when a drug’s effectiveness drops from 80% to 60% while keeping all other numbers fixed. This tells you how sensitive the outcome is to that one variable.
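To make the mechanics concrete, here is a minimal one-way sweep in Python. The toy cost-effectiveness model, its parameter names, and every number in it are invented for illustration; the point is simply that one input moves while everything else stays at its base-case value.

```python
# Minimal one-way sensitivity analysis: sweep a single input while holding
# all other inputs at their base-case values.
# The model and all numbers are illustrative, not from any real study.

def icer(effectiveness, drug_cost=50_000, comparator_cost=10_000,
         qaly_gain_per_effect=2.0, comparator_qalys=1.0):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    incremental_cost = drug_cost - comparator_cost
    incremental_qalys = effectiveness * qaly_gain_per_effect - comparator_qalys
    return incremental_cost / incremental_qalys

print(f"Base case ICER: {icer(effectiveness=0.80):,.0f} per QALY")

# Vary effectiveness from 60% to 80%, everything else fixed.
for eff in [0.60, 0.65, 0.70, 0.75, 0.80]:
    print(f"effectiveness={eff:.2f} -> ICER={icer(eff):,.0f}")
```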
Multi-way analysis changes two or more inputs simultaneously. This captures something one-way analysis misses: interactions between variables. A drug might look cost-effective when you lower its effectiveness alone, and still look cost-effective when you raise its price alone, but changing both at once could push it past the threshold. The tradeoff is that multi-way analysis quickly becomes harder to interpret, especially when three or more variables change at the same time.
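A two-way version of the same idea is sketched below. The toy model and the willingness-to-pay threshold are illustrative assumptions, not values from any real evaluation; the interesting part is that a combination of changes can flip the verdict even when each change on its own does not.

```python
# Two-way sensitivity analysis: vary two inputs at once and check whether the
# conclusion (cost-effective at a 100,000-per-QALY threshold) survives.
# The model and all numbers are illustrative assumptions.

def icer(effectiveness, drug_cost):
    incremental_cost = drug_cost - 10_000          # vs. comparator cost
    incremental_qalys = effectiveness * 2.0 - 1.0  # vs. comparator QALYs
    return incremental_cost / incremental_qalys

THRESHOLD = 100_000  # willingness to pay per QALY (assumed for this example)

for eff in (0.60, 0.70, 0.80):
    for cost in (40_000, 50_000, 60_000):
        verdict = "yes" if icer(eff, cost) <= THRESHOLD else "no"
        print(f"effectiveness={eff:.2f}, drug_cost={cost:,}: cost-effective? {verdict}")
```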
Scenario analysis is a related approach. Instead of methodically sliding variables up and down, you define specific “what if” situations: a best-case scenario, a worst-case scenario, or a scenario with particular policy relevance. Each scenario might involve changing several parameters at once to reflect a coherent alternative picture of reality.
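In code, scenario analysis often amounts to little more than a table of named parameter sets. The sketch below reuses the same kind of toy cost-effectiveness model as above; the scenario names and numbers are invented for illustration.

```python
# Scenario analysis: instead of sweeping inputs one at a time, define a few
# coherent "what if" pictures and run the model once per scenario.
# Scenario definitions and numbers are illustrative assumptions.

def icer(effectiveness, drug_cost, comparator_cost):
    return (drug_cost - comparator_cost) / (effectiveness * 2.0 - 1.0)

scenarios = {
    "base case":  dict(effectiveness=0.80, drug_cost=50_000, comparator_cost=10_000),
    "best case":  dict(effectiveness=0.85, drug_cost=45_000, comparator_cost=12_000),
    "worst case": dict(effectiveness=0.65, drug_cost=60_000, comparator_cost=8_000),
}

for name, params in scenarios.items():
    print(f"{name}: ICER = {icer(**params):,.0f} per QALY")
```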
Probabilistic Sensitivity Analysis
Deterministic methods (one-way, multi-way, scenario) pick specific values and test them. Probabilistic sensitivity analysis takes a fundamentally different approach: it assigns each uncertain input a probability distribution, then runs the model thousands of times using randomly sampled values from those distributions.
This is typically done through Monte Carlo simulation. Each run of the model draws a random value for every uncertain input, calculates the result, and records it. After thousands of iterations, you get a distribution of possible outcomes rather than a single number. The results show three useful things: the average outcome and how much it varies, how often each option comes out as the best choice, and the range of plausible results you should prepare for.
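A bare-bones Monte Carlo sketch is shown below. The choice of distributions (a beta distribution for effectiveness, a normal distribution for cost), the parameters, and the net-monetary-benefit model are all illustrative assumptions; a real probabilistic analysis would choose distributions that match the available evidence.

```python
# Probabilistic sensitivity analysis via Monte Carlo simulation: each uncertain
# input gets a probability distribution, the model is run many times on random
# draws, and the result is a distribution of outcomes rather than a point value.
# Distributions and parameters here are illustrative assumptions only.

import random

def net_monetary_benefit(effectiveness, drug_cost, wtp=100_000):
    incremental_cost = drug_cost - 10_000
    incremental_qalys = effectiveness * 2.0 - 1.0
    return wtp * incremental_qalys - incremental_cost

random.seed(42)
results = []
for _ in range(10_000):
    eff = random.betavariate(80, 20)    # effectiveness ~ Beta(80, 20), mean 0.80
    cost = random.gauss(50_000, 5_000)  # drug cost ~ Normal(50,000, 5,000)
    results.append(net_monetary_benefit(eff, cost))

mean_nmb = sum(results) / len(results)
prob_cost_effective = sum(r > 0 for r in results) / len(results)
print(f"Mean net monetary benefit: {mean_nmb:,.0f}")
print(f"Probability the drug is cost-effective: {prob_cost_effective:.1%}")
```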
Probabilistic analysis is considered the gold standard in fields like health technology assessment because it captures the full landscape of uncertainty rather than just testing a few hand-picked scenarios. Professional guidelines from organizations like ISPOR (the International Society for Pharmacoeconomics and Outcomes Research) recommend probabilistic methods alongside deterministic ones when evaluating whether a treatment is worth its cost.
Local vs. Global Methods
Another way to categorize sensitivity analysis is by scope. Local methods examine how small changes to a single input affect the output while everything else stays fixed. They’re fast, computationally cheap, and easy to interpret. But they assume the model behaves in a roughly linear way, which isn’t always true for complex systems.
Global methods explore the entire range of all inputs simultaneously, mapping how uncertainty in the inputs contributes to uncertainty in the output across the full parameter space. They’re more thorough and can capture nonlinear effects and interactions that local methods miss entirely. Research comparing the two approaches has found that the ranking of which inputs matter most can differ between local and global methods, meaning a local analysis might mislead you about which variables deserve the most attention.
The cost of global methods is computation time. Variance-based techniques, for instance, require decomposing the model output into contributions from each input and every combination of inputs. For simple models this is straightforward, but for complex simulations with many parameters, the computational demands can be significant. That practical constraint is why local methods remain common despite their limitations.
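For readers who want to see what a variance-based global analysis looks like in practice, here is a sketch using the open-source SALib package (assuming it is installed, e.g. via pip install SALib) with a toy nonlinear model and invented input ranges. The Sobol indices it prints estimate how much of the output variance each input accounts for, on its own (first-order) and including interactions (total).

```python
# Global, variance-based sensitivity analysis sketch using the third-party
# SALib package. Sobol indices estimate how much of the output variance each
# input (and its interactions) is responsible for.
# The toy model and input ranges are illustrative assumptions.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["effectiveness", "drug_cost", "utility"],
    "bounds": [[0.6, 0.9], [40_000, 60_000], [0.6, 0.8]],
}

def model(x):
    effectiveness, drug_cost, utility = x
    # Nonlinear toy model: the benefit depends on effectiveness * utility.
    return 100_000 * effectiveness * utility - drug_cost

params = saltelli.sample(problem, 1024)        # structured quasi-random sample
outputs = np.array([model(row) for row in params])
indices = sobol.analyze(problem, outputs)

for name, s1, st in zip(problem["names"], indices["S1"], indices["ST"]):
    print(f"{name}: first-order={s1:.2f}, total={st:.2f}")
```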
Tornado Diagrams: Visualizing Results
One of the most widely used tools for presenting sensitivity analysis results is the tornado diagram. It displays horizontal bars for each variable tested, arranged from the most influential at the top to the least influential at the bottom. The width of each bar shows how much the outcome changes when that variable moves from its low value to its high value.
The result looks like a funnel, or a tornado, with the widest bars at the top. At a glance, you can see which two or three inputs are driving the result and which ones barely matter. In a pharmaceutical cost-effectiveness study, for example, a tornado diagram might show that the drug’s effectiveness and the discount rate dominate the result, while the cost of managing side effects has almost no impact.
Some newer variations add intermediate steps within each bar, showing not just the extremes but how the outcome changes at points in between. This gives a richer picture, especially when the relationship between an input and the outcome isn’t a straight line.
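The data behind a tornado diagram are just the results of a series of one-way analyses, sorted by the size of the swing. The sketch below uses a toy model and invented ranges; plotting the sorted swings as horizontal bars would produce the tornado shape described above.

```python
# Building the data behind a tornado diagram: run the model at each input's
# low and high value (others held at base case), record the outcome swing,
# and sort inputs from widest swing to narrowest. All values are illustrative.

def icer(effectiveness=0.80, drug_cost=50_000, adverse_event_cost=2_000,
         discount_rate=0.035):
    incremental_cost = (drug_cost + adverse_event_cost - 10_000) * (1 + discount_rate)
    incremental_qalys = effectiveness * 2.0 - 1.0
    return incremental_cost / incremental_qalys

ranges = {
    "effectiveness":      (0.60, 0.90),
    "drug_cost":          (35_000, 65_000),
    "adverse_event_cost": (1_000, 4_000),
    "discount_rate":      (0.02, 0.05),
}

bars = []
for name, (low, high) in ranges.items():
    lo_val = icer(**{name: low})
    hi_val = icer(**{name: high})
    bars.append((abs(hi_val - lo_val), name, lo_val, hi_val))

# Widest bar first: these are the inputs driving the result.
for width, name, lo_val, hi_val in sorted(bars, reverse=True):
    print(f"{name:20s} swing={width:12,.0f}  ({lo_val:,.0f} to {hi_val:,.0f})")
```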
Common Uses in Practice
In clinical trials, sensitivity analysis is essential for handling missing data. When patients drop out of a study, the researchers have to make assumptions about what would have happened to them. Did they drop out because the treatment wasn’t working? Because of side effects? At random? Each assumption leads to a different result. Sensitivity analysis tests multiple plausible assumptions to see whether the trial’s conclusion is robust or depends on a particular guess about why data is missing.
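A stripped-down version of that idea, using invented numbers: reanalyze a trial’s response rate under different assumptions about the patients who dropped out, and see how far the estimate moves.

```python
# Missing-data sensitivity sketch: recompute a trial's response rate under
# different assumptions about the dropouts. All numbers are illustrative.

observed_responders = 6
observed_total = 8
dropouts = 4

# Different assumptions about how many of the 4 dropouts would have responded.
assumptions = {
    "all dropouts responded": dropouts,
    "no dropouts responded": 0,
    "dropouts responded at the observed rate": dropouts * observed_responders / observed_total,
}

for label, extra_responders in assumptions.items():
    rate = (observed_responders + extra_responders) / (observed_total + dropouts)
    print(f"{label}: estimated response rate = {rate:.0%}")
```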
In systematic reviews and meta-analyses, Cochrane guidelines recommend sensitivity analysis to check whether the overall findings change depending on decisions like which studies to include, which statistical method to use, or how to categorize the outcomes. If removing one low-quality study from a meta-analysis reverses the conclusion, that’s important to know.
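One common variant is a leave-one-out analysis: recompute the pooled estimate with each study removed in turn. The sketch below uses a simple fixed-effect inverse-variance pool and made-up study results, purely to show the mechanics.

```python
# Leave-one-out sensitivity analysis for a meta-analysis: drop each study in
# turn and recompute the pooled estimate to see whether any single study is
# driving the conclusion. Study effects and standard errors are made up.

studies = {
    "Study A": (0.30, 0.10),   # (effect estimate, standard error)
    "Study B": (0.10, 0.08),
    "Study C": (0.45, 0.20),
    "Study D": (0.05, 0.06),
}

def pooled(data):
    """Fixed-effect inverse-variance weighted average of the effect estimates."""
    weights = {name: 1 / se**2 for name, (_, se) in data.items()}
    total = sum(weights.values())
    return sum(weights[name] * eff for name, (eff, _) in data.items()) / total

print(f"All studies: pooled effect = {pooled(studies):.3f}")
for left_out in studies:
    subset = {k: v for k, v in studies.items() if k != left_out}
    print(f"Without {left_out}: pooled effect = {pooled(subset):.3f}")
```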
In health economics, typical parameters tested include discount rates (commonly varied between 2% and 5%), treatment costs (often varied within plus or minus 30% of the estimated mean), utility values reflecting quality of life (varied within plus or minus 10%), and survival estimates drawn from clinical trial data. Each of these carries real uncertainty, and testing them reveals which uncertainties actually affect the decision about whether a treatment is worth funding.
How to Interpret the Results
The goal isn’t to find one “right” answer. It’s to understand the conditions under which your answer changes. A sensitivity analysis that shows consistent results across a wide range of assumptions is strong evidence that the conclusion is reliable. A sensitivity analysis that shows the result flipping back and forth depending on small changes to key inputs tells you the evidence isn’t as clear-cut as the primary result might suggest.
When reading a study that includes sensitivity analysis, look for whether the authors tested the assumptions you’d question yourself. If a cost-effectiveness study assumes a drug works for 10 years but only has 2 years of trial data, you’d want to see what happens when that duration is cut to 5 years. The value of sensitivity analysis lies not just in the technique but in the thoughtfulness of the assumptions being tested.

