Negative predictive value (NPV) is calculated by dividing the number of true negatives by the total number of negative test results. The formula is: NPV = true negatives / (true negatives + false negatives). The result tells you the probability that a person with a negative test result actually doesn’t have the condition being tested for.
The NPV Formula
NPV uses just two numbers from your data:
- True negatives (TN): people correctly identified as not having the condition
- False negatives (FN): people incorrectly identified as healthy when they actually have the condition
The formula is:
NPV = TN / (TN + FN)
The denominator (TN + FN) represents everyone who tested negative, whether that result was correct or not. The numerator captures only those whose negative result was right. Multiply the result by 100 to express it as a percentage. An NPV of 0.95, or 95%, means that 95 out of every 100 people who test negative truly don’t have the disease.
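As a quick sanity check, the formula translates directly into code. This is an illustrative sketch (the function name `npv` is my own, not from any statistics library):

```python
def npv(true_negatives: int, false_negatives: int) -> float:
    """Negative predictive value: TN / (TN + FN)."""
    total_negatives = true_negatives + false_negatives
    if total_negatives == 0:
        raise ValueError("No negative test results to evaluate.")
    return true_negatives / total_negatives

# 95 correct negatives out of 100 total negative results
print(npv(95, 5))  # 0.95, i.e. 95%
```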
Step-by-Step Calculation Using a 2×2 Table
Most textbooks and research papers organize diagnostic test data into a 2×2 contingency table. Understanding where NPV sits in this table makes the calculation straightforward.
Here’s the standard layout. The rows represent the test result (positive or negative), and the columns represent the true disease status (has the disease or doesn’t):
- Cell A: True positives (test positive, disease present)
- Cell B: False positives (test positive, disease absent)
- Cell C: False negatives (test negative, disease present)
- Cell D: True negatives (test negative, disease absent)
NPV uses the bottom row of the table: D / (C + D). That bottom row contains every person who received a negative test result. You’re simply asking: of all those negatives, what fraction were correct?
A Worked Example
Suppose a new screening test is evaluated in 1,000 people. Of those, 100 have the disease and 900 don’t. The test correctly identifies 90 of the 100 sick people (true positives) and correctly identifies 850 of the 900 healthy people (true negatives). That leaves 10 false negatives (sick people the test missed) and 50 false positives (healthy people the test flagged).
Your bottom row is: 10 false negatives + 850 true negatives = 860 total negative results.
NPV = 850 / 860 = 0.988, or 98.8%
This means if you took this test and got a negative result, there’s a 98.8% chance you genuinely don’t have the disease.
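In code, the bottom-row arithmetic from this example looks like this (the numbers mirror the scenario above):

```python
# 1,000 people screened: 100 diseased, 900 healthy
true_negatives = 850   # healthy people correctly cleared
false_negatives = 10   # sick people the test missed

npv = true_negatives / (true_negatives + false_negatives)
print(f"NPV = {npv:.3f} ({npv:.1%})")  # NPV = 0.988 (98.8%)
```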
Why Prevalence Changes Your NPV
Here’s where NPV gets tricky. Unlike sensitivity and specificity, which are generally stable properties of a test itself, NPV shifts depending on how common the disease is in the population being tested. This is one of the most important things to understand about predictive values.
The core relationship works like this: the rarer the disease, the higher the NPV. When very few people in a population actually have the condition, most negative results will be correct simply because most people are healthy to begin with. Conversely, as a disease becomes more common, NPV drops. More sick people in the pool means more chances for the test to miss someone, increasing the false negative count in your denominator.
This means the same test, with identical sensitivity and specificity, can produce very different NPVs depending on who you’re testing. A screening test used in the general population (low prevalence) will have a higher NPV than the same test used in a high-risk clinic where the disease is much more common. If you’re calculating NPV for a specific context, the prevalence of the condition in that particular group matters enormously.
The Connection Between Sensitivity and NPV
A test’s sensitivity, its ability to correctly catch people who have the disease, has a direct relationship with NPV. A highly sensitive test produces very few false negatives. Since false negatives are the thing that drags NPV down, high sensitivity naturally pushes NPV up.
There’s a clinical shorthand for this: SnNout — a highly Sensitive test, when Negative, rules OUT disease. If a test catches 99% of people who are sick, the 1% it misses barely dents the NPV calculation, especially when prevalence is low.
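To see this logic numerically, here is a small sketch that holds prevalence and specificity fixed while varying sensitivity (it uses the prevalence-based NPV formula covered later in this article; the numbers are illustrative):

```python
# Hold prevalence and specificity fixed; vary sensitivity.
prevalence, specificity = 0.05, 0.90

for sensitivity in (0.80, 0.95, 0.99):
    p_healthy_and_negative = specificity * (1 - prevalence)
    p_sick_and_negative = (1 - sensitivity) * prevalence  # the false negatives
    npv = p_healthy_and_negative / (p_healthy_and_negative + p_sick_and_negative)
    print(f"sensitivity {sensitivity:.0%} -> NPV {npv:.2%}")
```

At these settings, NPV climbs from roughly 98.8% at 80% sensitivity to roughly 99.9% at 99% sensitivity: shrinking the false negatives is what pushes NPV toward 1.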
A real-world example: the D-dimer blood test used to screen for pulmonary embolism (blood clots in the lungs). In a study published through the American Society of Hematology, the D-dimer test had an NPV of 99.3% at the standard threshold. That means when the test comes back negative, there’s less than a 1% chance a clot was missed. This makes it useful as a “rule-out” tool, where the clinical value lies almost entirely in the negative result.
NPV Compared to Positive Predictive Value
NPV and positive predictive value (PPV) are two sides of the same coin. NPV tells you how much to trust a negative result. PPV tells you how much to trust a positive one. NPV answers the question “If my test is negative, am I really disease-free?” PPV answers “If my test is positive, do I really have the disease?”
They move in opposite directions as prevalence changes. When a disease is rare, NPV is high (negative results are very reliable) but PPV is low (positive results include a lot of false alarms). When a disease is common, PPV climbs while NPV falls. This tradeoff is why screening programs designed for low-prevalence populations often require a second, confirmatory test after a positive result, but can confidently clear people with a single negative.
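The opposite movement of the two values is easy to demonstrate with a prevalence sweep. This sketch uses the standard prevalence-based formulas (the NPV version is derived in the next section); the test characteristics are illustrative:

```python
sens, spec = 0.95, 0.90  # a reasonably good hypothetical test

for prev in (0.01, 0.10, 0.50):
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

At 1% prevalence, PPV is under 10% while NPV exceeds 99.9%; at 50% prevalence, PPV climbs past 90% while NPV falls below 95%.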
Calculating NPV From Sensitivity and Specificity
You won’t always have raw data in a 2×2 table. Sometimes you’ll know a test’s sensitivity, specificity, and the disease prevalence, and need to calculate NPV from those. This approach is rooted in Bayes’ theorem and uses the following formula:
NPV = (specificity × (1 − prevalence)) / ((specificity × (1 − prevalence)) + ((1 − sensitivity) × prevalence))
Breaking that down: the numerator is the probability of being healthy and testing negative. The denominator adds in the probability of being sick but testing negative anyway (the false negatives). All values should be expressed as proportions between 0 and 1.
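Here is that formula as a small function, with names of my own choosing; all inputs are proportions between 0 and 1:

```python
def npv_from_rates(sensitivity: float, specificity: float, prevalence: float) -> float:
    """NPV via Bayes' theorem from test characteristics and prevalence."""
    healthy_and_negative = specificity * (1 - prevalence)   # true-negative mass
    sick_and_negative = (1 - sensitivity) * prevalence      # false-negative mass
    return healthy_and_negative / (healthy_and_negative + sick_and_negative)

print(round(npv_from_rates(0.95, 0.90, 0.05), 3))  # 0.997
```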
A Worked Example
Say a test has 95% sensitivity (0.95), 90% specificity (0.90), and the disease prevalence is 5% (0.05).
Numerator: specificity × (1 − prevalence) = 0.90 × 0.95 = 0.855 (note that 0.95 here is 1 − prevalence, which only coincidentally matches the sensitivity)
Denominator: 0.855 + ((1 − sensitivity) × prevalence) = 0.855 + (0.05 × 0.05) = 0.855 + 0.0025 = 0.8575
NPV = 0.855 / 0.8575 = 0.997, or 99.7%
Now watch what happens if prevalence rises to 30% (0.30) with the same test:
Numerator: specificity × (1 − prevalence) = 0.90 × 0.70 = 0.63
Denominator: 0.63 + ((1 − sensitivity) × prevalence) = 0.63 + (0.05 × 0.30) = 0.63 + 0.015 = 0.645
NPV = 0.63 / 0.645 = 0.977, or 97.7%
The NPV dropped by two percentage points just from changing prevalence, even though the test itself didn’t change at all. That may sound small, but in a cohort of 100,000 people tested, it’s the difference between roughly 250 and 1,500 sick people being falsely reassured by a negative result. This is why reporting NPV without specifying prevalence is incomplete.
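Counting the missed cases directly makes the stakes concrete. This sketch applies the same hypothetical test to a cohort of 100,000 people at each prevalence:

```python
cohort = 100_000
sens, spec = 0.95, 0.90

for prev in (0.05, 0.30):
    sick = prev * cohort
    false_negatives = (1 - sens) * sick            # sick people wrongly cleared
    true_negatives = spec * (1 - prev) * cohort    # healthy people correctly cleared
    npv = true_negatives / (true_negatives + false_negatives)
    print(f"prevalence {prev:.0%}: {false_negatives:,.0f} falsely reassured, NPV {npv:.1%}")
```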