How Accurate Is DNA Testing for Crimes?

DNA testing is extraordinarily accurate when a clean, single-source sample is collected and analyzed properly. A standard forensic DNA profile examined across 20 or more genetic markers produces a random match probability as low as 1 in a quintillion (that’s a 1 followed by 18 zeros) for some populations. In practical terms, the chance of two unrelated people sharing the same full profile is vanishingly small. But “DNA testing” in criminal cases involves far more than the chemistry itself. The real accuracy depends on the quality of the sample, how many people’s DNA is mixed together, how the lab handles the evidence, and how the results are interpreted.

What Random Match Probability Actually Means

Forensic labs compare DNA at specific locations on the genome called short tandem repeats, or STRs. Modern kits test around 20 of these markers simultaneously. When a full profile is obtained from a single person, the statistical power is staggering. For African Americans, the probability that a random unrelated person would match is roughly 1 in 10^18. For U.S. Caucasians, it’s about 1 in 10^17. These numbers mean that, in a world of 8 billion people, you would need to test populations billions of times larger than Earth’s to expect a coincidental match.
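The arithmetic behind these figures is the "product rule": per-locus genotype frequencies are multiplied across independent markers. The sketch below uses made-up allele frequencies (the real values come from published population databases) just to show how modest per-locus numbers compound into astronomically small match probabilities.

```python
# Hedged sketch of the product rule behind random match probabilities.
# Allele frequencies are hypothetical placeholders, not real population data.

# Assume a heterozygous genotype at each of 20 STR loci. Under
# Hardy-Weinberg assumptions its population frequency is 2*p*q for
# allele frequencies p and q.
allele_pairs = [(0.25, 0.25)] * 20   # hypothetical, identical loci for simplicity

rmp = 1.0
for p, q in allele_pairs:
    rmp *= 2 * p * q                 # genotype frequency at this locus

print(f"Per-locus genotype frequency: {2 * 0.25 * 0.25:.3f}")
print(f"Combined random match probability: {rmp:.2e}")
# A per-locus frequency of 0.125 across 20 independent loci multiplies
# out to about 8.7e-19 -- already below 1 in a quintillion.
```

Even with a fairly common genotype at every single locus, independence across 20 markers drives the combined probability below the 1-in-10^18 range quoted above.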

This is the number prosecutors often cite in court, and it’s legitimate when the underlying sample is clean and complete. The problem is that crime scene samples are rarely clean and complete.

Where Accuracy Breaks Down: Mixed Samples

The single biggest challenge in forensic DNA work is interpreting mixtures: samples that contain genetic material from two or more people. A doorknob, a weapon handle, or a piece of clothing can easily carry DNA from multiple individuals. When those profiles overlap, analysts must try to untangle which genetic markers belong to whom.

Research coordinated by the National Institute of Justice found significant variation in how different analysts and different laboratories interpret the same mixed sample. Two-person mixtures are generally interpretable when the DNA concentration is high enough, but three-person mixtures push beyond the reliable limits of most lab protocols. The more contributors in a sample, the more judgment calls an analyst has to make about which peaks in the data are real and which are noise. A 2016 review by the President’s Council of Advisors on Science and Technology flagged this as a serious concern, calling for more standardized, automated methods. Researchers at the National Institute of Standards and Technology have confirmed that these interpretation problems persist.

Factors that make mixtures harder to read include low overall DNA quantity, imbalanced ratios between contributors (where one person’s DNA drowns out another’s), and overlapping genetic markers that could belong to either individual. In these situations, two equally qualified analysts can look at the same data and reach different conclusions about whether a suspect can be included or excluded.
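One way to quantify why mixtures carry less statistical weight is the classical "combined probability of inclusion" (CPI): at each locus, a random person is "included" if both of their alleles appear among the alleles observed in the mixture. (Modern labs increasingly use probabilistic genotyping and likelihood ratios instead; CPI is shown here only because the arithmetic is simple.) All frequencies below are hypothetical placeholders.

```python
# Hedged sketch: combined probability of inclusion (CPI) for a
# single-source sample versus a two-person mixture.
# Allele frequencies are hypothetical, not real population data.

def locus_inclusion_prob(mixture_allele_freqs):
    """P(a random person's two alleles both appear in the mixture's allele set),
    under Hardy-Weinberg assumptions."""
    total = sum(mixture_allele_freqs)
    return total ** 2

def combined(loci):
    prob = 1.0
    for freqs in loci:
        prob *= locus_inclusion_prob(freqs)
    return prob

# Single-source profile: two alleles observed per locus.
single = [[0.10, 0.15]] * 20
# Two-person mixture: up to four alleles observed per locus.
mixture = [[0.10, 0.15, 0.12, 0.20]] * 20

print(f"Single-source inclusion probability: {combined(single):.2e}")
print(f"Two-person mixture inclusion probability: {combined(mixture):.2e}")
```

With these toy numbers the mixture statistic is weaker by roughly fourteen orders of magnitude, because every extra allele at a locus enlarges the pool of people who cannot be excluded.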

The Problem of Tiny Samples

Modern DNA testing is sensitive enough to generate a profile from just a few cells, but that sensitivity is a double-edged sword. When the starting amount of DNA is extremely low, random effects during the copying process can cause certain markers to drop out entirely or appear artificially inflated. Labs set minimum thresholds to distinguish real genetic signals from background noise, but these thresholds involve a built-in tradeoff: set the bar too high and you miss real data, set it too low and you start reading noise as evidence.
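The threshold tradeoff above can be made concrete with a toy signal-detection model. Here noise peaks and true allele peaks are modeled as normal distributions with invented parameters; real instruments and validation studies determine the actual distributions, so this is an illustration of the tradeoff, not a lab procedure.

```python
# Hedged sketch of the analytical-threshold tradeoff: raising the
# threshold suppresses noise but misses more real alleles.
# All distribution parameters are hypothetical.

from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

NOISE = (20, 8)     # hypothetical mean/sd of noise peak heights (RFU)
ALLELE = (120, 50)  # hypothetical mean/sd of true allele peaks at low template

for threshold in (30, 50, 80):
    false_pos = 1 - normal_cdf(threshold, *NOISE)   # noise read as signal
    false_neg = normal_cdf(threshold, *ALLELE)      # real allele missed
    print(f"threshold={threshold:3d} RFU  "
          f"P(noise passes)={false_pos:.4f}  P(allele missed)={false_neg:.4f}")
```

As the threshold rises, the false-positive rate falls while the miss rate climbs; no setting drives both to zero, which is exactly the built-in tradeoff the paragraph describes.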

Below certain concentration levels, a marker from one copy of a chromosome might amplify while the paired marker doesn’t, making a person who carries two different versions of a gene look like they carry only one. This “allelic dropout” can cause a true contributor to be missed or, in mixed samples, distort the picture of who was present. Labs acknowledge that some classification errors are inherent at these low levels, with accepted error probabilities built into their threshold-setting procedures.
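The false-homozygote effect is simple arithmetic once a per-allele dropout probability is assumed. The value of d below is a hypothetical placeholder, not a validated model of any instrument.

```python
# Hedged sketch: how per-allele dropout makes a true heterozygote look
# homozygous. The dropout probability d is hypothetical.

d = 0.30  # hypothetical chance that any single allele fails to amplify

# A heterozygote carries two distinct alleles; assume independent dropout.
both_seen        = (1 - d) ** 2      # correct heterozygous call
looks_homozygous = 2 * d * (1 - d)   # exactly one allele drops out
full_dropout     = d ** 2            # locus produces no result at all

print(f"Correct het call:  {both_seen:.2f}")   # 0.49
print(f"False homozygote:  {looks_homozygous:.2f}")   # 0.42
print(f"Locus drops out:   {full_dropout:.2f}")   # 0.09
```

At a 30% per-allele dropout rate, a true heterozygote is miscalled as a homozygote almost as often as it is called correctly, which is why low-template results demand cautious interpretation.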

Secondary DNA Transfer

Your DNA doesn’t have to be at a crime scene for it to end up there. Secondary transfer occurs when DNA moves from one person to an object, and then from that object to another surface or person. You shake someone’s hand, they pick up a knife, and your DNA is now on that knife handle, even though you never touched it.

Studies on DNA transfer during stabbing scenarios found that the amount of DNA decreases at each step, from the original hand, to the first surface touched, to the second surface. But in some cases, the indirectly transferred DNA was present in quantities similar to those of the person who actually handled the object. This means that the sheer amount of DNA recovered is not always a reliable indicator of who directly touched an item. Analysts increasingly recognize that profile quality and relative contribution ratios matter more than raw quantity, but there is no consensus method for distinguishing direct from indirect transfer in every case.

Lab Standards and Human Error

U.S. forensic DNA laboratories must meet the FBI Director’s Quality Assurance Standards, which require formal accreditation by a nationally recognized forensic science organization. Every analyst must complete training, pass competency tests, and undergo external proficiency testing twice per year in each technology they use for casework. These proficiency tests are administered by outside providers who publish summary results.

These safeguards are real, but they don’t eliminate human error entirely. Sample switches, contamination during evidence handling, and mislabeling have all occurred in accredited labs. Cross-contamination can happen when items from the same case are packaged together, when tools or surfaces aren’t adequately cleaned between samples, or when investigators inadvertently transfer material during evidence collection. The Innocence Project has documented 353 DNA exonerations in the United States. In 45% of those cases, the original conviction involved a misapplication of forensic science, including flawed DNA analysis, faulty serology, or other forensic errors.

How Next-Generation Sequencing Improves Results

Traditional forensic DNA analysis is limited to roughly 20 to 30 STR markers per test. A newer approach called next-generation sequencing, or NGS, reads the actual DNA sequence across thousands of markers simultaneously. Each region of the genome can be covered by hundreds of independent reads, which provides a much higher degree of confidence in the result.

One NGS method called STR-Seq can characterize variants across more than 2,500 different STRs with over 83% accuracy per marker, a number that sounds modest until you consider the sheer volume of data points working together. More importantly for criminal cases, this method successfully identified minor contributors present at just 0.1% of a mixture, a level where traditional methods would dismiss the signal as stutter noise. For complex mixtures that have historically been uninterpretable, NGS represents a substantial leap in resolving power.
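A rough way to see why 83% per-marker accuracy is strong in aggregate: treat each marker call as an independent coin flip with an 83% success rate (a simplification; real errors are not perfectly independent) and ask how often a true profile would fall below, say, 75% concordance across 2,500 markers.

```python
# Hedged back-of-the-envelope: with many independent markers, even a
# modest per-marker accuracy separates true matches from chance.
# Independence and symmetry of errors are simplifying assumptions.

from math import lgamma, log, exp

def log_binom_pmf(n, k, p):
    """Log of the binomial probability mass function, via lgamma."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

n = 2500    # markers, per the STR-Seq figure cited above
acc = 0.83  # per-marker call accuracy

# Probability that fewer than 75% of markers are called correctly.
threshold = int(0.75 * n)
p_below = sum(exp(log_binom_pmf(n, k, acc)) for k in range(threshold))
print(f"P(correct calls < 75% of {n} markers): {p_below:.2e}")
```

The tail probability is vanishingly small: with thousands of markers, the expected concordance of a true match sits many standard deviations above any plausible chance agreement, so individually noisy calls still yield a confident aggregate result.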

What This Means in Practice

A full, single-source DNA profile matched to a suspect is among the most powerful pieces of evidence in criminal justice. The underlying chemistry is not in dispute. What varies is everything surrounding it: how the sample was collected, whether it was contaminated, how many contributors are present, whether the quantity was sufficient for reliable analysis, and how the analyst interpreted ambiguous data. A clean buccal swab compared against a blood stain from a single source is, for all practical purposes, definitive. A trace amount of touch DNA recovered from a surface handled by multiple people and interpreted by an analyst making subjective judgment calls is far less certain.

The gap between those two scenarios is where wrongful convictions and wrongful exclusions live. DNA evidence is not a single technology with a single accuracy rate. It is a chain of collection, preservation, analysis, and interpretation, and each link introduces its own potential for error.