When Did Fingerprinting Start Being Used in Forensics?

Fingerprinting entered forensic use in the 1890s, with the first criminal case solved by fingerprint evidence occurring in Argentina in 1892. But the path from ancient thumbprints pressed into clay to courtroom evidence spans thousands of years, and the system we rely on today didn’t fully take shape until the early 1900s.

Ancient Uses of Fingerprints

Long before anyone understood the science behind ridge patterns, people were pressing their fingers into clay or ink as a form of signature. Bricks from the storehouse of the first king of the Lagash dynasty in Mesopotamia, dating to roughly 3000 B.C., bear individual finger impressions on their surfaces. Assyrian clay tablets recording contracts and deeds carried both personal seals and finger impressions. Chaldean bricks have been found with impressions of all five fingers, likely serving as a brickmaker’s personal mark.

In China, the practice was even more explicit. A loan contract nearly 1,200 years old includes the fingerprints of both parties and their witnesses, with a formula stating that both sides “have found this just and clear, and have affixed the impressions of their fingers.” Whether these ancient peoples understood that fingerprints were truly unique to each person is debatable. But they clearly treated them as personal identifiers, a kind of biological signature that tied an individual to a document or object.

The 1880 Breakthrough in Nature

The modern idea that fingerprints could catch criminals emerged in the 1870s, when Henry Faulds, a Scottish doctor working in Japan, noticed ancient pottery marked with its creator’s fingerprints. That observation sent him down a path of systematic study. In 1880, he published a letter in the journal Nature proposing that “when bloody finger-marks or impressions on clay, glass, etc., exist, they may lead to the scientific identification of criminals.” It was the first published suggestion that fingerprints had forensic potential.

The following month, Nature published a reply from William Herschel, a British magistrate stationed in India. Herschel had been collecting fingerprints since the 1860s, using them to verify identities on contracts and prevent fraud. He was convinced each person’s print was unique, but he had never made the leap to criminal investigation. Together, their work laid the intellectual groundwork: Herschel demonstrated that fingerprints were permanent and individual, while Faulds saw how that fact could solve crimes.

The First Murder Solved by Fingerprints

The pivotal moment came in 1892 in Argentina. Juan Vucetich, a police official in Buenos Aires Province, had developed his own fingerprint classification system. That year, the two young children of a woman named Francisca Rojas were found murdered. Rojas accused a neighbor, but investigators found a bloody thumbprint at the scene. Vucetich’s system matched the print to Rojas herself, and she confessed. It was the first known murder conviction secured through fingerprint evidence, and it proved the concept could work in practice, not just in theory.

Building a System That Could Scale

Solving a single case was one thing. Searching through thousands of prints to find a match was another problem entirely. In 1897, Sir Edward Henry, a British official working in Bengal, India, introduced a classification system that made large-scale fingerprint filing practical. His method assigned numerical values to each finger based on whether a whorl pattern appeared. Using a fraction-style formula, with values from even-numbered fingers as the numerator and odd-numbered fingers as the denominator, the system sorted every set of ten prints into one of 1,024 primary groupings: each side of the fraction could take 32 possible values, and 32 × 32 = 1,024. This meant an examiner didn’t have to compare a new print against every record on file. They could narrow the search to a small subset almost immediately.
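
The arithmetic behind those groupings is simple enough to sketch in a few lines of code. Below is a minimal Python illustration, assuming the standard Henry point values (16, 8, 4, 2, 1 for successive finger pairs), the usual numbering of fingers from 1 (right thumb) to 10 (left little finger), and the convention of adding 1 to each side of the fraction; the function and variable names are invented for this example.

```python
# Minimal sketch of the Henry primary classification.
# Standard point values: fingers 1-2 -> 16, 3-4 -> 8, 5-6 -> 4, 7-8 -> 2, 9-10 -> 1.
FINGER_VALUES = {1: 16, 2: 16, 3: 8, 4: 8, 5: 4, 6: 4, 7: 2, 8: 2, 9: 1, 10: 1}

def henry_primary(whorl_fingers):
    """Return the primary classification as (numerator, denominator).

    whorl_fingers: the set of finger numbers (1-10) that show a whorl.
    Even-numbered fingers feed the numerator, odd-numbered fingers the
    denominator; adding 1 to each side gives values from 1 to 32, so the
    system yields 32 x 32 = 1,024 possible primary groupings.
    """
    numerator = 1 + sum(FINGER_VALUES[f] for f in whorl_fingers if f % 2 == 0)
    denominator = 1 + sum(FINGER_VALUES[f] for f in whorl_fingers if f % 2 == 1)
    return numerator, denominator

# Example: whorls on the right thumb (1) and right middle finger (3)
# give 1 over (1 + 16 + 8) = 25, so the card is filed under "1/25".
print(henry_primary({1, 3}))  # (1, 25)
```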

The Henry Classification System was adopted by Scotland Yard in 1901 and quickly spread to police forces around the world. It remained the dominant method for organizing fingerprint records for most of the 20th century.

Fingerprints Enter American Courts

In the United States, the landmark case was People v. Jennings in 1911. Thomas Jennings was charged with murder in Cook County, Illinois, and prosecutors introduced expert testimony matching his fingerprints to those found at the crime scene. His defense argued that fingerprint science was too new and had no American legal precedent. The Illinois Supreme Court disagreed, ruling that fingerprint comparison was admissible as expert evidence when supported by a reliable scientific basis and demonstrated use in other countries. Jennings was convicted and sentenced to death. The decision opened the door for fingerprint evidence in courtrooms across the country.

The FBI Takes Over

By the early 1920s, fingerprint records were scattered across local agencies with no central coordination. That changed on July 1, 1924, when Acting Director J. Edgar Hoover established the Identification Division of the Bureau of Investigation, the agency renamed the FBI in 1935. The new division consolidated 810,188 fingerprint files from two sources: the federal penitentiary in Leavenworth, Kansas, and the National Bureau of Criminal Identification, which had been managing crime data for the International Association of Chiefs of Police since 1896.

This consolidation transformed fingerprinting from a local tool into a national one. Police departments across the country could now submit prints to a single federal repository and search for matches against a growing database. Over the following decades, the FBI’s collection expanded into the millions, making it the largest fingerprint archive in the world.

Computers Change Everything

For most of the 20th century, fingerprint searches were done by hand. An examiner would classify a print, pull the corresponding file drawer, and visually compare cards one by one. Automated Fingerprint Identification Systems, known as AFIS, began changing that process in the 1980s. The first live-scan fingerprint readers arrived in 1988, though early versions were clunky and unreliable compared to what followed. As the cost of computing fell and sensor technology improved, automated systems became standard tools in law enforcement.

AFIS technology didn’t replace human examiners. Instead, it dramatically narrowed the field. A computer could scan millions of prints and return a shortlist of potential matches in minutes, a task that would have taken a human examiner days or weeks. The examiner then made the final comparison. This combination of speed and scale made fingerprinting far more powerful than it had ever been during the era of paper files.
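
The shortlist-then-verify workflow can be illustrated with a toy example. The sketch below is hypothetical: the feature sets and the similarity score are simple stand-ins rather than a real minutiae-matching algorithm, and the record layout and names are invented for illustration.

```python
# Hypothetical sketch of the AFIS-style workflow: score a query print against
# every filed record, return a small candidate list, and leave the final
# identification to a human examiner.

def similarity(query_features, record_features):
    # Placeholder score: number of shared feature labels between two prints.
    return len(query_features & record_features)

def shortlist(query_features, database, k=10):
    """Rank every filed print by similarity to the query and keep the top k."""
    ranked = sorted(database,
                    key=lambda record: similarity(query_features, record["features"]),
                    reverse=True)
    return ranked[:k]  # candidates handed to the examiner for side-by-side comparison

# Tiny illustrative file of prints.
database = [
    {"id": "A-001", "features": {"ridge_ending_3", "bifurcation_7"}},
    {"id": "A-002", "features": {"ridge_ending_3", "bifurcation_7", "island_2"}},
    {"id": "A-003", "features": {"island_9"}},
]
print(shortlist({"ridge_ending_3", "bifurcation_7"}, database, k=2))
```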

Reliability Questions

Despite more than a century of courtroom use, fingerprinting has faced serious scientific scrutiny. A 2009 report from the National Research Council described a field that had been used for decades without a sufficient scientific underpinning. The report raised several concerns: the analysis process depends on the subjective judgment of the examiner, there is no standardized threshold for declaring a match, results may not be reproducible from one examiner to another, and contextual bias, such as knowledge of other evidence in a case, has been shown to influence examiners’ decisions.

Perhaps most strikingly, the report noted that when examiners declare a match, they typically use language of absolute certainty, stating that the mark could not possibly have come from a different person. The report urged more modest claims and called for better statistical modeling of how fingerprint features vary across populations. Fingerprinting remains a widely accepted and useful forensic tool, but these critiques have pushed the field toward more rigorous standards and greater transparency about what a “match” actually means.