Police first used fingerprints to solve a crime in 1892, when an Argentine investigator named Juan Vucetich matched a bloody fingerprint to a murder suspect. But the routine, systematic use of fingerprints by police departments didn’t take hold until the early 1900s, when classification systems made it practical to file and search large numbers of prints. The full story spans decades, multiple continents, and a slow shift from an experimental curiosity to the backbone of criminal identification.
Fingerprints Before Police Work
The idea of using fingerprints for identification predates any police application by several decades. In 1858, a British magistrate named William Herschel, working in Jungipoor, India, required a local contractor named Rājyadhar Kōnāi to press his palmprint onto a road-building contract. Herschel’s original motive was simple intimidation: he wanted to discourage Kōnāi from later denying he had signed the agreement. But Herschel kept experimenting, eventually using fingerprints to authenticate contracts, deeds, payments, and warrants throughout his time as a colonial administrator. He noticed that prints stayed consistent over years, which hinted at their potential for identifying individuals reliably.
At this stage, nobody was thinking about crime scenes. Fingerprints were a bureaucratic tool, a way to prevent fraud in a system where many people couldn’t write their names. It took another three decades before anyone connected this idea to police work.
The 1892 Murder That Changed Everything
The first criminal case solved by fingerprint evidence happened in 1892 in Necochea, a coastal town in Buenos Aires Province, Argentina. Two young children were brutally murdered, and suspicion initially fell on a man named Velasquez, who had been courting their mother, Francisca Rojas. Investigators found a bloody fingerprint at the scene and contacted Juan Vucetich, an Argentine police official who had been developing a fingerprint identification system.
Vucetich compared the prints of both Rojas and Velasquez against the bloody mark. The print matched Rojas, who had denied touching the bodies. Confronted with the evidence, she confessed. This was the first time fingerprint identification had been used to solve a murder, and it demonstrated something powerful: a single print left at a crime scene could directly link a specific person to the act.
Scotland Yard and the Henry System
Solving one case in Argentina was a proof of concept, but making fingerprints useful for everyday policing required a way to organize and search thousands of records quickly. That problem was solved by Edward Henry, a British police official working in India. Henry developed a classification system that sorted fingerprints into four basic pattern types: arches, loops, whorls, and composites. By categorizing prints this way, an examiner could narrow down a large collection to a manageable subset instead of comparing every card by hand.
In July 1897, the Governor-General of India made fingerprinting official policy across British-controlled India. Henry then brought his system to London, where it was initially used alongside an older identification method based on body measurements. On July 1, 1901, Henry established the Metropolitan Police Fingerprint Bureau at Scotland Yard, Britain’s first dedicated fingerprint unit. The Henry Classification System spread rapidly to police forces around the world and remained the standard organizational method for fingerprint records well into the computer age.
How Fingerprinting Reached the United States
American police departments were slower to adopt fingerprinting. In 1903, New York’s state identification bureau introduced a fingerprint department “for the purpose of experiment and test,” but its superintendent predicted that the older body-measurement system would remain the approved method in the United States. Fingerprints were seen as promising but unproven.
A turning point came at the 1904 St. Louis World’s Fair. A Scotland Yard sergeant named John Kenneth Ferrier, stationed at the fair to guard British crown jewels on display, set up a booth demonstrating fingerprint techniques to American law enforcement visitors. By all accounts, Ferrier was persuasive. His demonstrations helped convince police agencies across the country that fingerprinting was practical and reliable.
The legal system caught up in 1911, when the Illinois Supreme Court ruled in People v. Jennings that fingerprint evidence was admissible in court. Chief Justice Orrin Carter acknowledged there was no prior American case or statute on the matter, but concluded “there is a scientific basis for the system of finger-print identification and that the courts are justified in admitting this class of evidence.” This ruling gave police departments legal confidence to build cases around fingerprint matches.
The FBI Creates a National Database
For fingerprinting to work at a national scale, records needed to be centralized. On July 1, 1924, J. Edgar Hoover, then acting director of the Bureau of Investigation (the FBI's forerunner), established the Bureau's Identification Division, consolidating 810,188 fingerprint files from two sources: the federal penitentiary in Leavenworth, Kansas, and the National Bureau of Criminal Identification, which had been collecting crime data for the International Association of Chiefs of Police since 1896.
This was the moment fingerprinting became a truly coordinated system in the United States. Instead of each city maintaining its own isolated card files, a police department in any state could submit prints and check them against a growing national collection. The FBI’s fingerprint archive expanded steadily over the following decades, eventually holding tens of millions of records.
Setting Professional Standards
As fingerprinting became widespread, practitioners needed shared standards. In October 1915, a group of about twenty-two identification specialists met in Oakland, California, at the urging of Inspector Harry Caldwell of the Oakland Police Department. They founded the International Association for Criminal Identification, later renamed the International Association for Identification, to professionalize the field.
One persistent question was how many matching points between two fingerprints were needed to declare a positive identification. Different countries and agencies adopted different thresholds, sometimes requiring as many as sixteen matching characteristics. After a three-year study, the Association settled the debate in 1973 by adopting a landmark standard: “No valid basis exists to require a predetermined number of characteristics to exist between two fingerprint impressions in order to establish positive identity.” In other words, the determination was left to the trained examiner’s judgment rather than an arbitrary numerical cutoff.
A Timeline of Key Dates
- 1858: William Herschel uses palmprints on contracts in India, the earliest systematic use of prints for identification.
- 1892: Juan Vucetich's fingerprint system solves the Rojas murder in Argentina, the first criminal case cracked by fingerprint evidence.
- 1897: Fingerprinting becomes official policy in British India under the Henry Classification System.
- 1901: Scotland Yard opens its Fingerprint Bureau, the first in Britain.
- 1903: New York introduces an experimental fingerprint department.
- 1904: Scotland Yard demonstrates fingerprinting to American police at the St. Louis World’s Fair.
- 1911: The Illinois Supreme Court rules fingerprint evidence admissible, the first such ruling in the U.S.
- 1924: The FBI establishes its Identification Division with over 810,000 consolidated fingerprint files.
From a single bloody fingerprint in an Argentine village to a centralized federal database holding hundreds of thousands of records, the adoption of fingerprinting by police took roughly three decades of incremental progress. Each step required not just scientific validation but practical tools for organizing records, legal rulings to make the evidence usable in court, and institutional buy-in from police departments that had been doing things differently for generations.

