What Is True Positive Rate and Why It Matters

The true positive rate measures how well a test catches the thing it’s looking for. If 100 people truly have a condition and the test correctly identifies 90 of them, the true positive rate is 90%. It’s one of the most important numbers for evaluating any test, whether in medicine, machine learning, or quality control.

How the True Positive Rate Works

The formula is straightforward: divide the number of correctly identified positives (true positives) by the total number of people who actually have the condition. That total includes both the people the test caught (true positives) and the people the test missed (false negatives).

Say a blood test is designed to detect a specific infection. You test 200 people who are confirmed to have the infection. The test flags 180 of them as positive and misses 20. The true positive rate is 180 divided by 200, or 90%. Those 20 missed cases are false negatives, people who have the condition but were told they don’t.
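The formula and the worked example above can be sketched in a few lines of Python (the function name is just illustrative):

```python
def true_positive_rate(true_positives: int, false_negatives: int) -> float:
    """TPR = TP / (TP + FN): the share of actual positives the test catches."""
    return true_positives / (true_positives + false_negatives)

# The blood-test example from above: 180 infections caught, 20 missed.
print(true_positive_rate(180, 20))  # 0.9, i.e. a 90% true positive rate
```

Note that the denominator counts everyone who truly has the condition, not everyone tested: healthy people never enter the calculation.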

In medical and clinical settings, the true positive rate goes by another name: sensitivity. The two terms mean exactly the same thing. When a doctor says a test has “high sensitivity,” they mean it has a high true positive rate: it rarely misses real cases.

Why a High True Positive Rate Matters

A highly sensitive test means few false negatives, and false negatives can be dangerous. When a screening test misses someone who actually has a disease, that person walks away believing they’re healthy. They skip follow-up testing, and correct diagnosis gets delayed. For diseases where early treatment dramatically improves outcomes (breast cancer being a well-studied example), a missed case can mean the difference between catching the disease at a treatable stage and finding it after it has progressed.

This is why screening tests are generally designed to prioritize a high true positive rate. The goal is to cast a wide net and catch as many real cases as possible, even if that means some healthy people get flagged for additional testing. A few extra follow-up appointments are a far smaller cost than missing someone with a serious condition.

Real-World Benchmarks

True positive rates vary widely depending on the test and the circumstances. Digital mammography for breast cancer detection has a sensitivity of about 92%, meaning it correctly identifies roughly 92 out of every 100 cancers present. That’s considered strong performance for a screening tool.

COVID-19 rapid antigen tests illustrate how context changes the numbers. In symptomatic people, rapid tests achieved a true positive rate of about 80%. In asymptomatic people, that dropped to roughly 55%, meaning the test missed nearly half of actual infections. The same physical test, applied to different populations, produced dramatically different true positive rates. This is one reason public health guidance often recommended confirming negative rapid tests with a more sensitive lab-based test, especially when symptoms were present.

The Trade-off With False Positives

Here’s the catch: you can almost always increase the true positive rate by making a test more aggressive about flagging positives. Lower the threshold for what counts as a “positive” result, and you’ll catch more real cases. But you’ll also flag more people who don’t actually have the condition, increasing the false positive rate.

This trade-off is visualized with something called an ROC curve (receiver operating characteristic curve). It plots the true positive rate against the false positive rate at every possible threshold. Each point on the curve represents a different cutoff: slide the threshold one direction and you catch more real cases but generate more false alarms; slide it the other way and you get fewer false alarms but miss more real cases. A perfect test would achieve a 100% true positive rate with a 0% false positive rate, but no real-world test hits that mark.
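The threshold trade-off is easy to see in code. This sketch uses made-up test scores (higher score = more suspicious) and true labels; the function and data are purely illustrative, not from any real test:

```python
# Hypothetical data: each sample has a test score and a true label
# (1 = has the condition, 0 = healthy).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   1,    0,   1,   0,   0,   0]

def rates_at_threshold(scores, labels, threshold):
    """Return (TPR, FPR) when scores >= threshold count as positive."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return tp / (tp + fn), fp / (fp + tn)

# Lowering the threshold raises the TPR and the FPR together:
for t in (0.75, 0.45, 0.15):
    tpr, fpr = rates_at_threshold(scores, labels, t)
    print(f"threshold={t:.2f}  TPR={tpr:.0%}  FPR={fpr:.0%}")
```

Sweeping the threshold across its full range and plotting each (FPR, TPR) pair is exactly how an ROC curve is drawn.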

Choosing the right threshold depends entirely on the stakes. For a screening test where missing a case could be fatal, you push the threshold toward higher sensitivity and accept more false positives. For a confirmatory test where a positive result triggers an invasive procedure, you might tolerate a slightly lower true positive rate to avoid putting healthy people through unnecessary treatment.

True Positive Rate vs. False Negative Rate

The true positive rate and the false negative rate are two sides of the same coin. If a test has a true positive rate of 90%, its false negative rate is 10%. The two always add up to 100%. This makes intuitive sense: every person who truly has the condition is either correctly identified (true positive) or missed (false negative). There’s no third option.

This relationship is useful when you encounter test performance data reported in different formats. Some sources report sensitivity, others report the miss rate. If you know one, you know the other. A test that “misses 5% of cases” has a true positive rate of 95%.
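The complement relationship is a one-liner; the helper below (an illustrative name, not a standard function) converts in either direction, since the conversion is symmetric:

```python
def complement_rate(rate: float) -> float:
    """TPR and FNR always sum to 1: every real case is either caught or missed.
    Pass a TPR to get the FNR, or an FNR to get the TPR."""
    return 1.0 - rate

# A test that "misses 5% of cases" (FNR = 0.05) has a TPR of 95%:
print(complement_rate(0.05))
```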

Beyond Medicine

The true positive rate applies anywhere a system makes yes-or-no classifications. Spam filters have a true positive rate for catching actual spam. Fraud detection systems have one for flagging real fraudulent transactions. Machine learning models are routinely evaluated by plotting their true positive rate against their false positive rate across different confidence thresholds, using the same ROC curve approach used in medicine.

In all of these cases, the core question is the same: out of all the things that genuinely are what you’re looking for, what percentage does your system actually find? That percentage is the true positive rate.