When Did Fingerprinting Start for Police?

Police first used fingerprints to solve a crime in 1892, when a bloody print at a murder scene in Argentina led to a confession. But the technique didn’t become standard practice overnight. It took roughly two decades for fingerprinting to spread from a single case in South America to routine use in police departments across Europe and the United States.

The First Murder Solved by Fingerprints

In 1892, two young children were brutally murdered in the village of Necochea, near Buenos Aires, Argentina. Suspicion initially fell on a man named Velasquez, who had been courting the children’s mother, Francisca Rojas. But investigators found a bloody fingerprint at the crime scene and contacted Juan Vucetich, a police official who had been developing a system of fingerprint identification.

Vucetich compared the prints of both Rojas and Velasquez against the bloody mark. The print matched Rojas. She had denied touching the bloody bodies, but confronted with the evidence, she confessed. This was the first successful use of fingerprint identification in a murder investigation. Vucetich refined his system afterward, calling it “comparative dactyloscopy,” and the province of Buenos Aires officially adopted it in 1903. From there, it spread rapidly throughout the Spanish-speaking world.

Scotland Yard Builds the First Database

Meanwhile, British officials were working on their own approach. Sir Edward Henry, a police administrator who had studied fingerprint patterns in India, developed a classification system that sorted prints into categories based on their ridge patterns. He presented this system to Scotland Yard in 1901, and the Metropolitan Police began collecting and classifying fingerprints that same year. The Henry Classification System became the foundation for fingerprint databases worldwide and remained the dominant method of organizing print records for most of the 20th century.
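The arithmetic behind Henry’s primary classification can be sketched in a few lines of Python. The function name and data layout here are illustrative assumptions, but the pairing values and the resulting 1,024 primary classes reflect the historical scheme: fingers are numbered 1 through 10, only whorl-type patterns count, and each pair of fingers contributes a halving value from 16 down to 1.

```python
# Sketch of Henry's primary classification (illustrative; function name
# and input format are assumptions, the pairing values are historical).
# Fingers are numbered 1-10: right thumb = 1 ... left little finger = 10.
WHORL_VALUES = {1: 16, 2: 16, 3: 8, 4: 8, 5: 4, 6: 4, 7: 2, 8: 2, 9: 1, 10: 1}

def henry_primary(patterns):
    """patterns: dict mapping finger number (1-10) to a pattern name.

    Only "whorl" patterns contribute. Even-numbered fingers feed the
    numerator, odd-numbered fingers the denominator; 1 is added to each
    so the result ranges from 1/1 to 32/32 -- 1,024 pigeonholes in all.
    """
    num = 1 + sum(v for f, v in WHORL_VALUES.items()
                  if patterns.get(f) == "whorl" and f % 2 == 0)
    den = 1 + sum(v for f, v in WHORL_VALUES.items()
                  if patterns.get(f) == "whorl" and f % 2 == 1)
    return f"{num}/{den}"

# A hand with no whorls files under 1/1; whorls on all ten fingers give 32/32.
print(henry_primary({f: "loop" for f in range(1, 11)}))   # → 1/1
print(henry_primary({f: "whorl" for f in range(1, 11)}))  # → 32/32
```

A card filed under, say, 5/17 could be retrieved by recomputing this fraction from a fresh set of prints, which is what made manual searching of large card collections feasible at all.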

Henry’s system succeeded in part because of earlier statistical work by Sir Francis Galton, who calculated that the odds of two people sharing an identical fingerprint were roughly one in 64 billion. That scientific backing gave police and courts confidence that a fingerprint match was reliable, not just a hunch.

Fingerprinting Arrives in the United States

The U.S. was slightly behind. New York’s state prison system began collecting fingerprints from inmates in early 1904, initially alongside an older identification method called Bertillon measurements, which relied on body dimensions like arm length and head circumference. Captain James Parke, who had already fingerprinted nearly the entire New York State prison population using a classification system he developed independently, helped lead the transition. That same year, Parke represented New York’s Prison Department at the 1904 World’s Fair in St. Louis, where fingerprinting was showcased as a modern identification tool.

The real turning point for American law enforcement came in 1924, when the Bureau of Investigation (later renamed the FBI) established its Division of Identification. The new division consolidated criminal fingerprint records from the federal penitentiary at Leavenworth, Kansas, along with collections from the National Bureau of Criminal Identification, which had been gathering records from police departments since 1897. This gave the federal government a centralized fingerprint repository that local and state agencies could query, transforming fingerprinting from a local experiment into a national system.

The Court Case That Made Prints Admissible

Collecting fingerprints was one thing. Getting them accepted in court was another. The landmark case was People v. Jennings, decided by the Illinois Supreme Court in 1911. Thomas Jennings had been charged with murder, and prosecutors presented fingerprint comparison testimony as evidence of his identity at the scene. His defense argued that fingerprint evidence was too novel to be trusted.

The court disagreed. It ruled that fingerprint comparison was admissible as expert evidence of identity, holding that novelty alone does not bar a technique supported by recognized scientific materials and demonstrated use in other jurisdictions. The jury, the court said, could evaluate the weight and credibility of fingerprint evidence just as it would any other expert testimony. Jennings’s conviction and death sentence were affirmed. This ruling opened the door for fingerprint evidence in courtrooms across the country.

How Fingerprint Matching Works

Fingerprint examiners don’t compare entire prints as a single image. Instead, they look at specific features in the ridge patterns, called minutiae. These include points where ridges end, split into two (bifurcations), or form other distinctive shapes. A long-standing convention in many jurisdictions holds that two prints sharing at least 12 matched minutiae can be attributed to the same finger, though the threshold is not universal; the United States, for instance, abandoned fixed point-count standards in favor of examiner judgment. A trained examiner identifies these points manually, though computers now handle the initial searching.

The threshold reflects a simple probability argument: each additional matching point makes it far less likely that the two prints came from different people. Partial prints from crime scenes, which often contain fewer details than a clean set taken in a controlled setting, can still be matched if enough minutiae are visible.
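The point-counting idea can be sketched as a toy comparison in Python. Everything here, from the coordinate format to the tolerance values and function names, is an illustrative assumption; real examiners and matching software work with far richer features than position, angle, and type.

```python
import math

# Toy sketch of minutiae comparison (illustrative only). Each minutia is
# modeled as (x, y, ridge_angle_degrees, type), where type might be a
# ridge ending ("end") or a bifurcation ("bif").

def matches(m1, m2, dist_tol=10.0, angle_tol=15.0):
    """Two minutiae 'match' if they are close in position, similar in
    ridge angle, and of the same type. Tolerances are assumptions."""
    (x1, y1, a1, t1), (x2, y2, a2, t2) = m1, m2
    close = math.hypot(x1 - x2, y1 - y2) <= dist_tol
    angle_diff = abs((a1 - a2 + 180) % 360 - 180)  # wrap-around safe
    return close and angle_diff <= angle_tol and t1 == t2

def count_matches(print_a, print_b):
    """Greedily pair up minutiae between two prints and count the pairs.
    Under a 12-point convention, a count of 12+ supports a same-finger call."""
    used, count = set(), 0
    for m in print_a:
        for i, n in enumerate(print_b):
            if i not in used and matches(m, n):
                used.add(i)
                count += 1
                break
    return count
```

A partial crime-scene print might yield only a handful of usable minutiae, which is why the count from a latent-to-record comparison can fall short even when the prints come from the same finger.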

From File Cabinets to Computers

For most of the 20th century, fingerprint records were stored on physical cards, filed by classification type. Searching meant a human technician manually sorting through drawers of cards to find a potential match. This process could take weeks or longer, and it only worked if you already had a suspect whose prints you could compare.

That changed with the development of Automated Fingerprint Identification Systems, known as AFIS. These computer systems electronically encode, search, and store fingerprint and palmprint images, allowing police to run an unknown print from a crime scene against millions of records in minutes rather than weeks. States began implementing AFIS in the late 1980s. Missouri, for example, launched its first system in 1989. The FBI followed with its own national automated database, eventually linking state and local systems into a searchable network.

The shift to automation didn’t just speed things up. It made “cold searches” practical for the first time, meaning investigators could identify a suspect from a crime scene print even when they had no leads. Cases that would have gone unsolved under the old card system suddenly became solvable, and fingerprinting moved from a confirmation tool to an investigative one.
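A cold search of this kind can be sketched as a crude ranking over a database of encoded prints. Everything in this sketch, from the grid quantization to the record IDs and function names, is an illustrative assumption; production AFIS encoding and matching are vastly more sophisticated, but the shape of the operation — encode once, score every record, return the best candidates for human review — is the same.

```python
# Illustrative sketch of an AFIS-style "cold search" (all names and the
# encoding scheme are assumptions, not any real system's API).

def encode(minutiae, cell=8):
    """Quantize (x, y, type) minutiae into coarse grid cells so small
    measurement differences still land on the same key."""
    return frozenset((x // cell, y // cell, t) for x, y, t in minutiae)

def cold_search(latent, database, top_n=3):
    """Score every database record by shared quantized minutiae with an
    unknown latent print, and return the top candidates for review."""
    encoded = encode(latent)
    scored = [(len(encoded & encode(m)), rec_id)
              for rec_id, m in database.items()]
    scored.sort(reverse=True)
    return scored[:top_n]

# Toy database: record IDs mapped to stored minutiae.
db = {"A": [(12, 11, "end"), (49, 50, "bif")],
      "B": [(200, 200, "end")]}
latent = [(10, 10, "end"), (50, 52, "bif")]
print(cold_search(latent, db))  # record "A" ranks first
```

The key point is that no suspect is named up front: the latent print itself drives the search, which is exactly what the card-file era could not do at scale.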