What Does Low Sensitivity Mean in a Medical Test?

Low sensitivity means a medical test frequently misses people who actually have the condition it’s designed to detect. A test with low sensitivity produces a high number of false negatives, meaning it tells you “no disease found” when the disease is actually present. This matters because a missed diagnosis can delay treatment, spread infectious disease, or let a condition progress unchecked.

How Test Sensitivity Works

Sensitivity measures how reliably a test catches true cases of a condition. Specifically, it’s the proportion of people who genuinely have a disease and correctly receive a positive test result. If a test has 95% sensitivity, it will correctly identify 95 out of every 100 people who are sick. The remaining 5 people get a negative result even though they have the disease.

A test with low sensitivity, say 60%, would miss 40 out of every 100 people who are actually positive. Those 40 people walk away with a false sense of reassurance, believing they’re healthy when they’re not.
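The arithmetic above is just a ratio of counts, which a few lines of code can make explicit. The numbers below are the illustrative ones from the text, not data from any real test:

```python
# Sensitivity = true positives / (true positives + false negatives),
# i.e. the fraction of genuinely sick people the test correctly flags.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of people with the disease who test positive."""
    return true_positives / (true_positives + false_negatives)

# A 95%-sensitive test: 95 of 100 sick people caught, 5 missed.
print(sensitivity(95, 5))   # 0.95

# A 60%-sensitive test: 60 caught, 40 false negatives.
print(sensitivity(60, 40))  # 0.6
```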

Why Some Tests Have Low Sensitivity

Every diagnostic test uses a threshold, a cutoff point, to decide what counts as a positive result. Shifting that threshold in one direction improves sensitivity (catching more true cases) but comes at a cost: the test also starts flagging healthy people as positive; in other words, specificity falls. This trade-off between sensitivity and specificity is baked into the design of nearly every test in medicine.
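The trade-off can be seen in a small simulation. The scores below are invented for illustration: measurements from sick and healthy people overlap, so any cutoff misclassifies someone, and sliding it simply moves the errors from one group to the other:

```python
# Hypothetical, made-up test scores: sick and healthy distributions overlap.
sick_scores    = [3.1, 4.2, 4.8, 5.5, 6.0, 6.7, 7.3, 8.1]   # truly diseased
healthy_scores = [1.0, 1.8, 2.5, 3.0, 3.6, 4.1, 4.5, 5.2]   # truly healthy

def sens_spec(threshold: float) -> tuple[float, float]:
    """A result is 'positive' when the score is at or above the cutoff."""
    sens = sum(s >= threshold for s in sick_scores) / len(sick_scores)
    spec = sum(h < threshold for h in healthy_scores) / len(healthy_scores)
    return sens, spec

for cutoff in (3.0, 4.0, 5.0):
    sens, spec = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.3f}, specificity {spec:.3f}")
```

Raising the cutoff from 3.0 to 5.0 drops sensitivity from 1.000 to 0.625 while specificity climbs from 0.375 to 0.875: the same seesaw the PSA example later in this article describes.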

Some tests intentionally give up sensitivity in exchange for other advantages. A rapid test you can do in a doctor’s office in minutes, for instance, often sacrifices some sensitivity for speed and convenience. The assumption is that if a quick test comes back negative but suspicion remains high, a slower, more sensitive test can follow.

Real-World Examples

Rapid strep tests are a familiar example. These quick throat swabs have a sensitivity of roughly 65%, meaning they miss about a third of true strep infections. Their specificity is excellent at around 97%, so a positive result is almost certainly correct. But a negative result doesn’t rule strep out, which is why doctors sometimes send a follow-up throat culture when symptoms are convincing.

Prostate cancer screening tells an even more striking story. The standard PSA blood test, using the traditional cutoff of 4 ng/mL, has a sensitivity of only about 20.5%. That means nearly 80% of prostate cancers go undetected at that threshold. Lowering the cutoff catches more cancers at an earlier stage, but it also flags many biologically harmless growths that would never cause symptoms, leading to unnecessary biopsies and anxiety.

For rapid influenza tests, the FDA recommends that devices detecting influenza A achieve at least 60% sensitivity, and those detecting influenza B at least 55%. More advanced molecular tests are held to a higher bar of at least 90% sensitivity. The gap between those two standards illustrates how much sensitivity can vary depending on the technology behind the test.

What False Negatives Mean for You

The practical consequence of low sensitivity is straightforward: you can test negative and still be sick. This is especially concerning in a few scenarios. For infectious diseases, a false negative might mean you unknowingly spread an illness to others. For conditions like cancer, a missed result can delay treatment during a window when the disease is more treatable. For heart-related emergencies, a single negative result on a less sensitive test might not be enough to safely rule out a problem.

This is why clinicians don’t always take a single negative result at face value. The decision to trust a negative result depends on two things: the sensitivity of the test and how likely you were to have the condition in the first place. If you walk into a clinic with a textbook set of symptoms, a negative result from a low-sensitivity test carries much less weight than it would for someone with vague or unlikely symptoms.
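That weighing of a negative result can be made concrete with Bayes’ rule. The sketch below is illustrative only: the sensitivity and specificity figures echo the rapid-strep numbers above, and the two pre-test probabilities are invented for the example.

```python
# Post-test probability of disease AFTER a negative result, via Bayes' rule.
# It depends on both the test's performance and the pre-test probability.

def prob_disease_given_negative(pretest: float,
                                sensitivity: float,
                                specificity: float) -> float:
    false_neg = pretest * (1 - sensitivity)        # sick, but tests negative
    true_neg = (1 - pretest) * specificity         # healthy, tests negative
    return false_neg / (false_neg + true_neg)

# A 60%-sensitive, 97%-specific test (rapid-strep-like numbers):
# vague symptoms -> assumed pre-test probability of 5%
print(prob_disease_given_negative(0.05, 0.60, 0.97))  # ~0.021, nearly ruled out
# textbook symptoms -> assumed pre-test probability of 70%
print(prob_disease_given_negative(0.70, 0.60, 0.97))  # ~0.49, still a coin flip
```

With textbook symptoms, a negative result still leaves roughly even odds of disease, which is exactly why a clinician would order a follow-up test rather than stop there.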

How Clinicians Work Around Low Sensitivity

The most common strategy is repeat testing. Running the same test again, or running a different, more sensitive test, reduces the chance that a true case slips through. Hospitals evaluating chest pain, for example, have increasingly adopted serial testing protocols. One study found that a protocol scheduling a second blood draw roughly two hours after the first increased serial testing rates by 48 percentage points, helping catch cases that a single draw would have missed.
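The intuition behind repeat testing is that the chance of every test missing a true case shrinks with each attempt. The sketch below assumes the misses are independent, which is an idealization: repeat results on the same patient are often correlated, so the real-world benefit is smaller.

```python
# Probability that a truly sick person gets a false negative on EVERY test,
# assuming (idealistically) that each test misses independently.

def miss_probability(sensitivity: float, n_tests: int) -> float:
    """Chance that all n tests return false negatives."""
    return (1 - sensitivity) ** n_tests

print(miss_probability(0.60, 1))  # 0.4 -> one 60%-sensitive test
print(miss_probability(0.60, 2))  # ~0.16 -> two independent tests
print(miss_probability(0.60, 3))  # ~0.064 -> three independent tests
```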

Another approach is confirmatory testing with a different method. The rapid strep example captures this well: a quick antigen test gives an answer in minutes, and if it’s negative, a more sensitive throat culture can be performed as a backup. The first test is a useful screen, not the final word.

In some cases, clinicians simply choose a more sensitive test from the start. Molecular tests that analyze genetic material, for example, routinely achieve 90% or higher sensitivity compared to rapid antigen tests that hover around 60 to 70%. The trade-off is usually time, cost, or the need for specialized lab equipment.

Sensitivity Is Not Accuracy

A common misconception is that a test with low sensitivity is a “bad” test. Sensitivity only describes one dimension of performance: how well the test catches true positives. A test can have low sensitivity but very high specificity, meaning that when it does return a positive result, you can trust it. Rapid strep tests are exactly this kind of tool. They miss some cases, but they rarely tell you that you have strep when you don’t.

The reverse also matters. A highly sensitive test that lacks specificity will catch nearly every true case but also generate many false alarms. Neither sensitivity nor specificity alone tells you whether a test is good or bad. What matters is how those two properties fit the clinical situation. For screening a deadly condition where missing a case is catastrophic, high sensitivity is the priority. For confirming a diagnosis before starting aggressive treatment, high specificity matters more.
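A hypothetical screening scenario shows how a highly sensitive but less specific test plays out in practice. Every number here (population size, prevalence, sensitivity, specificity) is invented for illustration:

```python
# Screening 100,000 people for a rare condition (1 in 1,000) with a
# hypothetical 99%-sensitive, 90%-specific test.
population = 100_000
prevalence = 0.001
sensitivity, specificity = 0.99, 0.90

sick = round(population * prevalence)            # 100 true cases
true_pos = round(sick * sensitivity)             # 99 caught
false_neg = sick - true_pos                      # 1 missed
healthy = population - sick                      # 99,900 without the disease
false_pos = round(healthy * (1 - specificity))   # 9,990 false alarms

print(f"caught {true_pos}, missed {false_neg}, false alarms {false_pos}")
```

The test misses only 1 of 100 true cases, which is what you want when a miss is catastrophic, but it also sends 9,990 healthy people for follow-up. That is why a high-specificity confirmatory test usually comes next before any aggressive treatment.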