How to Measure Risk: Key Methods and Formulas

Risk is measured by combining two factors: how likely something is to happen and how severe the consequences would be if it did. The core formula is simple: Likelihood × Impact = Risk Score. But the specific tools and scales you use depend on whether you’re assessing workplace safety, project threats, health outcomes, cybersecurity vulnerabilities, or financial exposure. This guide walks through the major approaches, from basic scoring matrices to advanced statistical models.

The Core Formula

Nearly every risk measurement method builds on the same foundation. You estimate the probability of an event occurring, estimate the damage it would cause, and multiply the two together. A risk that’s very likely but causes minimal harm might score the same as one that’s rare but catastrophic. This multiplication gives you a single number you can use to compare risks against each other and decide where to focus your resources.
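The multiplication can be sketched in a few lines. This is a minimal illustration of the Likelihood × Impact formula, assuming the common 1–5 scales for each input; the function name and scales are illustrative, not from any particular standard.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Multiply likelihood (1-5) by impact (1-5) to get a 1-25 risk score."""
    return likelihood * impact

# A frequent-but-minor risk and a rare-but-catastrophic one can score the same:
frequent_minor = risk_score(likelihood=5, impact=1)  # 5
rare_severe = risk_score(likelihood=1, impact=5)     # 5
worst_case = risk_score(likelihood=5, impact=5)      # 25
```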

What changes from field to field is how you define and score those two inputs. In workplace safety, “impact” might mean the difference between a minor scrape and a fatality. In cybersecurity, it could mean the loss of sensitive data, system downtime, or legal penalties. In medicine, it’s the probability of developing a disease or experiencing a side effect. The formula stays the same; the scales adapt.

Qualitative Risk: The 5×5 Matrix

The most widely used tool for measuring risk without hard data is a risk matrix, typically a 5×5 grid. One axis represents probability, the other represents impact, and each axis has five levels. You plot each risk on the grid and the resulting position tells you whether it’s low, medium, or high priority.

The standard probability levels are:

  • Rare: could happen only in exceptional circumstances
  • Unlikely: could happen at some point, but is not expected
  • Moderate: might happen occasionally
  • Likely: will probably happen in most circumstances
  • Almost certain: expected to happen regularly

Impact levels follow a similar scale:

  • Insignificant: no serious injuries, illnesses, or losses
  • Minor: mild injuries or small losses
  • Significant: injuries or losses requiring some intervention
  • Major: irreversible harm or sustained disruption
  • Severe: fatality or total loss

A risk rated “Likely” for probability and “Major” for impact lands in the red zone of the matrix and demands immediate attention. One rated “Rare” and “Minor” sits in the green zone and can be monitored without urgent action. The strength of this approach is speed: you can score dozens of risks in a single workshop without needing detailed statistical data. The weakness is subjectivity. Two people can rate the same risk differently, so it helps to define each level with concrete examples specific to your organization.
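The zoning logic described above can be made explicit in code. The score cutoffs below (green up to 4, amber up to 12, red above) are illustrative assumptions; organizations tune these thresholds to their own risk tolerance.

```python
def matrix_zone(likelihood: int, impact: int) -> str:
    """Map a 1-5 likelihood and 1-5 impact to a traffic-light zone.
    Thresholds (<=4 green, <=12 amber, else red) are illustrative."""
    score = likelihood * impact
    if score <= 4:
        return "green"  # monitor, no urgent action
    if score <= 12:
        return "amber"  # plan mitigation
    return "red"        # act immediately

likely_major = matrix_zone(4, 4)  # Likely x Major -> "red"
rare_minor = matrix_zone(1, 2)    # Rare x Minor -> "green"
```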

Semi-Quantitative and Quantitative Scoring

When qualitative labels feel too vague, you can assign numerical scales. The U.S. National Institute of Standards and Technology (NIST) outlines three tiers of risk assessment. Qualitative assessments use categories like “Very Low” through “Very High.” Semi-quantitative assessments map those categories onto number ranges, such as 0 to 4 for Very Low, 5 to 20 for Low, 21 to 79 for Moderate, 80 to 95 for High, and 96 to 100 for Very High. Fully quantitative assessments use actual data: measured probabilities, dollar values of losses, and historical incident rates.

Semi-quantitative scoring is a practical middle ground. It lets you do math with your risk scores (ranking, comparing, aggregating across departments) without pretending you have more precision than you actually do. NIST recommends applying these scales to each risk factor separately: the likelihood of a threat occurring, the likelihood it would actually cause damage, and the severity of that damage. Multiplying those scores together gives you an overall risk level.
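The band boundaries NIST describes translate directly into a lookup. This sketch uses the 0–100 ranges quoted above; the function and constant names are my own.

```python
# NIST-style semi-quantitative bands: (inclusive upper bound, qualitative label).
BANDS = [(4, "Very Low"), (20, "Low"), (79, "Moderate"),
         (95, "High"), (100, "Very High")]

def band(score: int) -> str:
    """Translate a 0-100 semi-quantitative score into its qualitative label."""
    for upper, label in BANDS:
        if score <= upper:
            return label
    raise ValueError("score must be between 0 and 100")
```

A score of 50 lands in "Moderate", while 96 crosses into "Very High"; because the bands are ordered, the first matching upper bound wins.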

Measuring Risk in Health and Medicine

Medical risk measurement uses a different vocabulary but the same underlying logic. The key metrics are absolute risk, relative risk, and hazard ratios.

Absolute risk is the straightforward probability of something happening. If 20 out of 100 people in a study develop a complication, the absolute risk is 20%. Absolute risk reduction tells you the actual difference a treatment makes. If 20% of untreated patients have a bad outcome versus 12% of treated patients, the absolute risk reduction is 8 percentage points. That’s the number most useful for making personal health decisions because it tells you the real-world difference the treatment would make for someone like you.

Relative risk compares two groups as a ratio. In that same example, dividing 12% by 20% gives a relative risk of 0.6, meaning treated patients had 60% of the risk that untreated patients faced. The relative risk reduction is 40%. This number sounds more impressive than the 8% absolute reduction, which is why drug advertisements tend to favor it. When you see a health claim expressed as a percentage reduction, always ask: reduction from what baseline? A 50% relative risk reduction means very different things depending on whether the starting risk was 40% or 0.4%.
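The three metrics from the example above can be computed together. This is a minimal sketch using the 20% untreated and 12% treated risks from the text; the function name is illustrative.

```python
def risk_metrics(control_risk: float, treated_risk: float) -> dict:
    """Compute absolute and relative risk measures from two group risks
    (each expressed as a probability between 0 and 1)."""
    return {
        "absolute_risk_reduction": control_risk - treated_risk,
        "relative_risk": treated_risk / control_risk,
        "relative_risk_reduction": 1 - treated_risk / control_risk,
    }

m = risk_metrics(control_risk=0.20, treated_risk=0.12)
# absolute_risk_reduction: 0.08 (8 percentage points)
# relative_risk: 0.6, relative_risk_reduction: 0.4 (40%)
```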

Hazard ratios appear in studies that track outcomes over time. A hazard ratio of 2 for a treatment means that at any given point during the study, a treated patient who hasn’t yet recovered has twice the chance of recovering in the next time interval compared to someone in the control group. A hazard ratio of 2 corresponds to roughly a 67% chance that the treated patient heals first; a hazard ratio of 3 corresponds to about 75%. Importantly, a hazard ratio does not tell you how much faster healing occurs in absolute terms. It only tells you about the relative odds at each moment. This distinction is easy to miss, and hazard ratios are frequently misinterpreted in both clinical practice and media reporting.
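The 67% and 75% figures follow from the relationship HR / (HR + 1): under a constant hazard ratio, that is the probability the treated patient experiences the event (here, healing) before the control patient.

```python
def p_treated_first(hazard_ratio: float) -> float:
    """Probability that the treated patient experiences the event before
    the control patient, assuming a constant hazard ratio: HR / (HR + 1)."""
    return hazard_ratio / (hazard_ratio + 1)

p2 = p_treated_first(2)  # ~0.667, i.e. roughly a 67% chance
p3 = p_treated_first(3)  # 0.75
```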

Monte Carlo Simulation

Traditional risk analysis often relies on best-case, worst-case, and most-likely scenarios. The problem is that three scenarios can’t capture the full range of what might actually happen. Monte Carlo simulation solves this by running thousands or even millions of scenarios, each time randomly varying the inputs based on their known probability distributions.

The output is a frequency distribution: a bell-curve-style graph showing how likely each possible outcome is. Instead of a single risk estimate, you get the full range of possible outcomes and the probability of each one. The U.S. Environmental Protection Agency has noted that this approach is “clearly superior to the qualitative procedures currently used to analyze uncertainty and variability” because it shows decision-makers not just one number but the complete picture of what could happen and how likely each scenario is.

You don’t need to be a statistician to read the results. The output graph is intuitive: the tallest part of the curve shows the most likely outcome, while the tails show the extreme possibilities. This lets you answer questions like “What’s the probability that losses exceed $1 million?” or “What’s the 95th-percentile worst case?” rather than relying on a single point estimate that hides the uncertainty behind it.
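A toy simulation shows the mechanics. Each trial draws a number of incidents and a cost per incident from assumed distributions; the incident rate, cost parameters, and trial count here are all illustrative, not drawn from real data.

```python
import random

def simulate_losses(n_trials: int = 20_000, seed: int = 42) -> list:
    """Toy Monte Carlo: each trial draws incident counts and per-incident
    costs from assumed distributions and records the total annual loss."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        # Each of 50 exposure points has an assumed 10% chance of an incident.
        incidents = sum(rng.random() < 0.1 for _ in range(50))
        # Per-incident cost drawn from an assumed lognormal distribution.
        cost = sum(rng.lognormvariate(10, 1) for _ in range(incidents))
        losses.append(cost)
    return losses

losses = sorted(simulate_losses())
p95 = losses[int(0.95 * len(losses))]  # 95th-percentile worst case
p_over_1m = sum(l > 1_000_000 for l in losses) / len(losses)
```

Sorting the trial outcomes turns the raw simulation into exactly the questions the text mentions: a percentile read straight off the sorted list, and an exceedance probability computed by counting.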

Cybersecurity and Information Risk

Cybersecurity risk measurement follows the NIST framework, which defines risk as a function of adverse impact and likelihood of occurrence. The key risk factors are threat (who or what could cause harm), vulnerability (the weakness that could be exploited), impact (what you’d lose), likelihood (how probable the event is), and predisposing conditions (existing factors that increase or decrease exposure).

Each factor is scored on the Very Low to Very High scale described earlier. For adversarial threats, like a hacker targeting your systems, you assess both the likelihood that someone would attempt the attack and the likelihood they’d succeed given your current defenses. For non-adversarial threats, like a power outage or hardware failure, you assess how often the event typically occurs. The scores combine into an overall risk level that determines whether you accept, mitigate, transfer, or avoid the risk.

Impact in this context spans multiple dimensions: loss of confidentiality (sensitive data exposed), loss of integrity (data or systems altered without authorization), and loss of availability (systems going offline). A risk might score low on confidentiality impact but high on availability impact, or vice versa. Scoring each dimension separately prevents you from overlooking a risk that’s catastrophic in one area but harmless in another.
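Scoring the dimensions separately might look like the sketch below. The per-dimension scores are hypothetical, and taking the maximum across dimensions is one common aggregation choice (it keeps a single catastrophic dimension from being averaged away), not a rule mandated by NIST.

```python
# Hypothetical per-dimension impact scores on the 0-100 semi-quantitative scale.
impacts = {"confidentiality": 15, "integrity": 40, "availability": 90}

# Max aggregation: the overall impact is driven by the worst dimension,
# so a risk that is harmless in two areas but catastrophic in one still
# surfaces as high-impact.
overall_impact = max(impacts.values())
worst_dimension = max(impacts, key=impacts.get)
```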

Climate and Environmental Risk

Climate risk measurement splits into two categories. Physical risk covers direct damage from climate events: floods, droughts, heat waves, and rising sea levels. Transition risk covers the financial impact of shifting to a low-carbon economy: new regulations, changing consumer preferences, and stranded assets like fossil fuel reserves that lose value.

The Task Force on Climate-Related Financial Disclosures (TCFD) recommends that organizations measure and report greenhouse gas emissions across three scopes: direct emissions from owned operations (Scope 1), emissions from purchased energy (Scope 2), and emissions from the broader supply chain (Scope 3). These metrics serve as proxies for transition risk exposure. Organizations with high emissions face greater regulatory and market risk as carbon pricing and environmental regulations expand.

For physical risk, the TCFD recommends scenario analysis: modeling how your operations and finances would hold up under different climate futures, including a scenario where global warming stays below 2°C. This is conceptually similar to Monte Carlo simulation but applied to decades-long time horizons and global climate models.

Choosing the Right Approach

The method you use depends on what data you have and what decisions you need to make. A 5×5 matrix works well for initial screening when you need to quickly prioritize a long list of risks without detailed data. Semi-quantitative scoring adds rigor when you need to compare risks across teams or departments and want consistent, defensible numbers. Full quantitative analysis, including Monte Carlo simulation, makes sense when the stakes are high enough to justify the effort and you have enough historical data to build reliable probability distributions.

ISO 31000, the international standard for risk management, doesn’t prescribe a single method. It provides a framework: identify risks, analyze them (using whatever scoring approach fits), evaluate them against your tolerance thresholds, treat the ones that exceed those thresholds, and continuously monitor. The measurement technique is one step in that cycle, not the whole process. Whatever method you choose, the value comes from making the measurement consistently and revisiting it as conditions change.