Fingerprinting was first used to solve a crime in 1892, when a bloody fingerprint helped convict a woman of murder in Argentina. But the path from ancient thumbprints to modern forensic databases spans thousands of years, with the key breakthroughs clustering in the late 1800s and early 1900s.
Ancient Fingerprints on Clay and Contracts
Long before anyone connected fingerprints to crime, people were pressing their fingers into clay and documents as personal marks. Bricks from a storehouse of the first king of the Lagash dynasty in Mesopotamia, dating to around 3000 B.C., bear finger impressions. A Chinese clay seal from no later than the third century B.C. shows a deep, deliberate thumbprint on one side and a name stamped on the other. The impression is too precise and well-positioned to be accidental, suggesting its owner pressed it as a form of personal authentication alongside the written name.
By roughly 800 A.D., Chinese contracts included fingerprints from both parties and witnesses. One loan document from that era closes with the formula: “The two parties have found this just and clear, and have affixed the impressions of their fingers.” These early uses show that people intuitively understood fingerprints were personal and hard to forge, even without any scientific framework for why.
The 1800s: Fingerprints Meet Science
The modern story begins in 1858, when William Herschel, a British administrator working in India, started requiring native vendors to press their fingerprints onto contracts. Over nearly twenty years, he observed his own prints and those of prisoners, eventually concluding that ridge patterns never changed during a person’s lifetime. That discovery of permanence was the first scientific pillar of fingerprint identification.
The second pillar came in 1880, when Henry Faulds, a Scottish physician working in Japan, published a letter in the journal Nature titled “On the Skin-Furrows of the Hand.” Faulds made the leap that Herschel hadn’t publicly articulated: fingerprints could catch criminals. He wrote that “when bloody finger-marks or impressions on clay, glass etc., exist, they may lead to the scientific identification of criminals,” because each pattern was unique. This was the first published proposal to use fingerprints as forensic evidence.
The First Crime Solved With a Fingerprint
In 1892, an Argentine police official named Juan Vucetich put the theory into practice. He had developed his own fingerprint classification system, and it got its first real test in the case of Francisca Rojas. Rojas claimed an intruder had killed her two children, but investigators found a bloody fingerprint at the scene that matched her own. Confronted with the evidence, she confessed. This is widely recognized as the first criminal conviction secured through fingerprint identification.
Fingerprints Enter U.S. Courts
The technique crossed into the American legal system in 1911 with People v. Jennings, a murder case in Illinois. Thomas Jennings had broken into a Chicago home through a kitchen window, killing the homeowner. The railing near the window had recently been painted, and the imprint of four fingers from someone’s left hand was found embedded in the fresh paint. Police matched those prints to Jennings, a recently paroled burglar whose fingerprint card was already on file.
Four fingerprint experts testified at trial that the prints were a conclusive match, and Jennings was convicted of murder on February 1, 1911. He appealed to the Supreme Court of Illinois, challenging whether fingerprint evidence should even be admissible. The court ruled that “there is a scientific basis for the system of fingerprint identification” and that the method was in such common use that courts could not refuse to recognize it. The conviction was upheld, setting a precedent that opened the door for fingerprint evidence across the United States.
Scotland Yard and the Henry System
For fingerprints to work at scale, police needed a way to organize and search through thousands of records. Before fingerprinting, European police relied on Bertillonage, a system of detailed body measurements (arm length, head width, ear size) to identify repeat offenders. It was slow, error-prone, and couldn’t handle large populations.
Edward Henry, a British official working in India, developed a classification system that sorted fingerprints by pattern type. The system assigned numerical values to each of the ten fingers based on whether they contained a whorl pattern, then expressed the result as a ratio. Additional layers of classification accounted for arches, loops, and ridge counts, creating enough categories to distinguish individuals within enormous collections. By 1900, the system’s success in India convinced Scotland Yard to abandon Bertillonage entirely. Henry transferred to London in 1901 and established Scotland Yard’s first central fingerprint bureau, training officers in the new method. Police departments around the world soon adopted variations of the Henry system.
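To make that arithmetic concrete, here is a minimal sketch of the Henry primary classification as it is usually described: fingers are numbered 1 through 10 starting with the right thumb, a whorl in each successive pair of fingers carries a value of 16, 8, 4, 2, or 1, and the primary class is the ratio of (1 + the sum of even-finger values) to (1 + the sum of odd-finger values), giving 32 × 32 = 1,024 possible classes. The function name below is ours for illustration, not a historical term.

```python
# Sketch of the Henry system's primary classification.
# Fingers are numbered 1-10 starting with the right thumb; a whorl in
# fingers 1-2 scores 16, in 3-4 scores 8, in 5-6 scores 4, in 7-8
# scores 2, and in 9-10 scores 1.

WHORL_VALUES = {1: 16, 2: 16, 3: 8, 4: 8, 5: 4, 6: 4, 7: 2, 8: 2, 9: 1, 10: 1}

def primary_classification(whorl_fingers):
    """Return the Henry primary class as a (numerator, denominator) ratio.

    whorl_fingers: set of finger numbers (1-10) that show a whorl pattern.
    """
    even = sum(WHORL_VALUES[f] for f in whorl_fingers if f % 2 == 0)
    odd = sum(WHORL_VALUES[f] for f in whorl_fingers if f % 2 == 1)
    # 1 is added to each side so a hand with no whorls still yields a
    # valid class (1/1); the scheme produces 32 x 32 = 1,024 classes.
    return (even + 1, odd + 1)

# Example: whorls on the right index finger (2) and the left ring
# finger (9) give a primary classification of 17/2.
print(primary_classification({2, 9}))
```

Filing cabinets could then be physically organized by these ratios, so a searcher only had to compare a new print against the small drawer sharing its class rather than against the whole collection.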
National Databases and the FBI
As fingerprinting spread, the need for centralized records grew. In 1924, the FBI established an Identification Division to serve as a national fingerprint repository. For decades, this meant physical cards, millions of them, filed and searched by hand using classification formulas derived from the Henry system. The process worked but was labor-intensive. A single search could take weeks.
That changed in 1975 when the FBI installed its first automated fingerprint reader, marking the beginning of computerized fingerprint identification. These early systems digitized print images and used algorithms to compare ridge patterns, compressing search times from weeks to hours, then eventually to minutes. Modern systems can scan a print against databases of tens of millions of records and return candidates almost instantly, a capability that would have been unimaginable to Herschel pressing vendors’ fingers into contract paper in 1858.
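The core idea behind those comparison algorithms, then and now, is matching minutiae: the points where a ridge ends or splits in two. As a rough illustration only (not the FBI’s actual method), a toy matcher might count how many minutiae in one print have a counterpart in another within small position and angle tolerances:

```python
import math

# Toy minutiae matcher, purely illustrative: real AFIS software uses far
# more sophisticated alignment and scoring. Each minutia is a ridge
# ending or bifurcation, represented here as (x, y, angle_in_degrees).

def count_matches(print_a, print_b, dist_tol=10.0, angle_tol=15.0):
    """Count minutiae in print_a that have a counterpart in print_b."""
    matched = 0
    used = set()
    for (xa, ya, ta) in print_a:
        for i, (xb, yb, tb) in enumerate(print_b):
            if i in used:
                continue
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            # Compare angles on a circle, so 359 and 2 degrees count as near.
            diff = abs(ta - tb) % 360
            aligned = min(diff, 360 - diff) <= angle_tol
            if close and aligned:
                matched += 1
                used.add(i)
                break
    return matched

latent = [(12, 40, 90), (55, 80, 135), (70, 22, 10)]
candidate = [(14, 38, 85), (56, 83, 140), (200, 200, 0)]
print(count_matches(latent, candidate))  # 2 of the 3 minutiae match
```

A high match count flags a candidate for a human examiner, which is why automated systems return a ranked list rather than a verdict.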