The false negative rate is the proportion of truly positive cases that a test incorrectly labels as negative. If 100 people actually have a disease and a screening test misses 10 of them, the false negative rate is 10%. It’s one of the most important metrics for evaluating any test, from medical diagnostics to spam filters, because it tells you how often the test fails to catch what it’s supposed to find.
How the False Negative Rate Is Calculated
The formula is straightforward: divide the number of false negatives by the total number of actual positives. If a test examines 200 people who truly have a condition and correctly identifies 170 but misses 30, the false negative rate is 30 ÷ 200, or 15%.
The false negative rate has a direct, inverse relationship with sensitivity (also called the true positive rate). Sensitivity measures how well a test catches positive cases, so the false negative rate is simply 1 minus the sensitivity. A test with 95% sensitivity has a 5% false negative rate. A test with 30% sensitivity has a 70% false negative rate. If you know one number, you automatically know the other.
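Both calculations above can be sketched in a few lines of Python. The counts are the illustrative ones from the 200-person example, not data from any real test:

```python
# Illustrative counts: 200 people truly have the condition;
# the test catches 170 and misses 30.
false_negatives = 30
true_positives = 170
actual_positives = true_positives + false_negatives  # 200

fnr = false_negatives / actual_positives        # 30 / 200
sensitivity = true_positives / actual_positives  # 170 / 200

print(f"False negative rate: {fnr:.0%}")          # 15%
print(f"Sensitivity:         {sensitivity:.0%}")  # 85%
# The two always sum to 1: knowing one gives you the other.
```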
This is different from a related metric called the false omission rate, which looks at the problem from the opposite direction. The false omission rate asks: of all the people who received a negative result, how many were actually positive? The false negative rate asks: of all the people who are actually positive, how many got a negative result? Both matter, but they answer different questions.
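A small sketch makes the distinction concrete. The 2×2 counts below are invented for illustration; notice how far the two rates can diverge when negative results vastly outnumber positive cases:

```python
# Invented confusion-matrix counts for illustration.
tp, fn = 90, 10    # actually positive: caught vs. missed
fp, tn = 45, 855   # actually negative: flagged vs. cleared

# False negative rate: of all actual positives, how many tested negative?
fnr = fn / (tp + fn)              # 10 / 100 = 10%

# False omission rate: of all negative results, how many were actually positive?
false_omission = fn / (fn + tn)   # 10 / 865 ≈ 1.2%

print(f"False negative rate: {fnr:.1%}")
print(f"False omission rate: {false_omission:.1%}")
```

The same ten missed cases look very different depending on the denominator: they are 10% of the sick population but barely 1% of everyone sent home with a negative result.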
The Connection to Type II Errors
In statistics, a false negative is called a Type II error. When researchers run experiments, they test whether an effect is real by trying to reject a “null hypothesis” (the assumption that nothing is happening). A Type II error occurs when the researcher fails to reject this assumption even though a real effect exists. The probability of making this mistake is represented by the Greek letter beta (β), which is the statistical equivalent of the false negative rate.
Type I errors go the other direction: concluding something is real when it isn’t. Most people have heard of the p-value threshold (typically 0.05) that controls for Type I errors. The false negative rate, or beta, is its counterpart, controlling for the risk of missing real effects. In study design, researchers choose an acceptable beta (often 0.20, meaning a 20% chance of missing a true effect) and then calculate how many participants they need to keep the false negative rate at or below that level.
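A quick simulation shows what beta means in practice. The study parameters below (a true effect of 0.3 standard deviations, 50 participants, a one-sided z-test at alpha = 0.05) are illustrative assumptions, not drawn from any real study:

```python
import math
import random

random.seed(42)

# Assumed setup: a real effect of 0.3 SD exists; each simulated study
# samples n = 50 observations and runs a one-sided z-test at alpha = 0.05.
n, effect = 50, 0.3
critical_z = 1.645  # one-sided 5% critical value

misses = 0
trials = 20_000
for _ in range(trials):
    sample_mean = sum(random.gauss(effect, 1) for _ in range(n)) / n
    z = sample_mean * math.sqrt(n)  # standard error is 1 / sqrt(n)
    if z < critical_z:              # fail to reject despite a real effect
        misses += 1

beta = misses / trials  # estimated Type II error rate
print(f"Estimated beta (false negative rate): {beta:.2f}")
print(f"Estimated power: {1 - beta:.2f}")
```

With these assumed numbers, the simulated studies miss the real effect roughly a third of the time, which is exactly why researchers run this kind of calculation in reverse: fix an acceptable beta first, then solve for the sample size that achieves it.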
Why False Negatives Are Dangerous
A false negative gives you something worse than no information: it gives you false reassurance. A person who never takes a test knows they don’t have an answer. A person who gets a negative result believes they’re in the clear, and may skip follow-up testing, delay treatment, or unknowingly spread an infection.
Systematic reviews of screening programs have found that false negatives can delay detection of breast and cervical cancer, sometimes by years. In cervical screening specifically, missed cases have led to legal action and compensation payouts in both the UK and US health systems. Beyond individual harm, high-profile false negatives erode public confidence in screening programs overall, making people less likely to participate in the future.
Real-World Examples
COVID-19 Rapid Tests
Rapid antigen tests for COVID-19 illustrate how dramatically false negative rates can shift with circumstances. During omicron outbreaks, two widely used rapid tests (BinaxNOW and QuickVue) showed 0% sensitivity within the first 48 hours of a person testing positive by PCR, a false negative rate of 100% during the earliest, often infectious, stage of illness. Between 48 and 72 hours after PCR positivity, sensitivity improved to only 29%.
Among asymptomatic people, the numbers were similarly poor. A Stanford University study found BinaxNOW had just 39.2% sensitivity in asymptomatic athletes, meaning roughly 6 out of 10 infected people got a negative result. A Dutch study of a different brand found even lower sensitivity, 27.5%, among asymptomatic individuals. Even in the best-case scenario, where a symptomatic person had a 50% pretest probability of infection, about 18% of negative results were false negatives.
Pregnancy Tests
Home pregnancy tests are another familiar example. A study of women using home test kits found that those who tested less than nine days after their missed period had a false negative rate of 33%, compared to 21% for those who waited longer than nine days. Timing matters because the hormone these tests detect rises gradually after implantation, and testing too early means levels may sit below the test’s detection threshold.
What Makes False Negative Rates Higher
Several factors push false negative rates up. Timing is the most common culprit. Testing too early in an infection or pregnancy means the thing being measured (viral particles, hormones, antibodies) hasn’t reached detectable levels yet. The COVID-19 rapid test data makes this strikingly clear: the same test that misses nearly every case at 48 hours catches most symptomatic cases a few days later.
Sample collection quality also plays a role. A poorly collected nasal swab, a urine sample that’s too dilute, or a blood draw with technical issues can all produce false negatives regardless of the test’s inherent accuracy. Patient characteristics matter too. Research on fecal screening tests for colorectal cancer found that older age, smoking, and being female were all associated with a higher risk of false negative results, likely due to biological differences in how the condition presents.
The Trade-Off With False Positives
Every diagnostic test balances two types of errors, and adjusting for one always affects the other. A test works by setting a threshold: results above the line are called positive, results below are called negative. If you lower the threshold to catch more true positives (reducing false negatives), you inevitably start flagging more healthy people as positive (increasing false positives). Raise the threshold to reduce false alarms, and you’ll miss more real cases.
This is why sensitivity and specificity always move in opposite directions when you adjust a test’s cutoff. There’s no free lunch. The right balance depends entirely on the stakes. For a screening test meant to catch a deadly cancer, you’d tolerate more false positives (and the unnecessary follow-up biopsies they trigger) to minimize false negatives. For a test where a false positive leads to serious harm, like unnecessary surgery, you’d accept a higher false negative rate in exchange for fewer incorrect positive results.
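A toy simulation makes the trade-off visible. The biomarker distributions below are invented (healthy people scoring around 50, sick people around 65); the point is only that moving the cutoff trades one error type for the other:

```python
import random

random.seed(0)

# Invented biomarker scores: higher values suggest disease,
# and the test calls "positive" above a chosen cutoff.
healthy = [random.gauss(50, 8) for _ in range(1000)]
sick = [random.gauss(65, 8) for _ in range(1000)]

for cutoff in (52, 57, 62):
    sensitivity = sum(s > cutoff for s in sick) / len(sick)
    specificity = sum(h <= cutoff for h in healthy) / len(healthy)
    print(f"cutoff {cutoff}: sensitivity {sensitivity:.0%}, "
          f"specificity {specificity:.0%}, FNR {1 - sensitivity:.0%}")
```

Running this shows the pattern described above: the lowest cutoff catches nearly every sick person but misclassifies many healthy ones, and each step up reverses that balance.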
In practice, this means no single false negative rate is universally “acceptable.” A 5% false negative rate might be perfectly fine for a routine wellness screening but unacceptable for a test used to clear blood donations of HIV. The context, specifically what happens when a case is missed, determines how aggressively the false negative rate needs to be minimized.

