Why Are Biometrics Bad? Spoofing, Surveillance, and More

Biometric systems carry a fundamental flaw that no other security method shares: if your data is stolen, you can’t reset it. You can change a compromised password in seconds, but you can’t change your fingerprints, your face, or your irises. That permanence creates a cascade of problems, from irreversible data breaches to racial bias in accuracy, surveillance overreach, and the exclusion of people whose bodies don’t fit the system’s expectations.

Stolen Biometrics Can’t Be Replaced

This is the single biggest problem with biometric security. When a company storing your fingerprint or facial scan gets breached, that data is compromised forever. There’s no “reset fingerprint” button. As one anti-fraud expert put it after a major government breach: “You only have 10 passwords, if you’re lucky to have all of your fingers.”

This isn’t hypothetical. In 2015, the U.S. Office of Personnel Management revealed that 5.6 million fingerprint records had been stolen in a data breach, more than five times the original estimate of 1.1 million. The CIA pulled officers from the U.S. Embassy in Beijing as a precaution, because adversaries could use the stolen biometric data to identify intelligence agents, defense personnel, and government contractors. A multi-agency working group involving the FBI, DHS, and the Department of Defense was created just to analyze the potential fallout. Years later, those 5.6 million fingerprints are still compromised, and there is nothing anyone can do about it.

Centralized biometric databases make this worse. When millions of biometric records sit in one location, that database becomes an extremely attractive target for hackers. A single successful breach exposes everyone at once. Decentralized storage, where biometric data stays on your personal device and is protected by cryptographic keys, reduces this risk significantly. But most government and corporate systems still rely on centralized databases.

Biometrics Are Less Accurate Than You Think

Biometric systems deal with two types of errors. A false acceptance means the system lets the wrong person in. A false rejection means it locks the right person out. These errors trade off against each other: tightening security to reduce false acceptances increases false rejections, and vice versa.

NIST testing found that a single index fingerprint can provide a 90% probability of correctly verifying someone’s identity, but only when accepting a 1% chance of falsely letting someone else through. Tighten that false acceptance threshold to 0.01%, and the correct verification rate drops to 77%. That means roughly 1 in 4 legitimate users gets rejected. For a system guarding something important, neither scenario is great: either you accept a meaningful chance of letting in impostors, or you lock out nearly a quarter of authorized users.
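The tradeoff described above can be made concrete with a small simulation. The numbers here are invented for illustration (they are not NIST's data): genuine users and impostors are modeled as two overlapping Gaussian score distributions, and sweeping the acceptance threshold shows that pushing the false acceptance rate down necessarily pushes the false rejection rate up.

```python
import random

random.seed(0)

# Toy match-score distributions: genuine attempts score high, impostor
# attempts score low, but the two overlap -- as with real sensors.
genuine = [random.gauss(0.75, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(10_000)]

def rates(threshold: float) -> tuple[float, float]:
    """Return (false_accept_rate, false_reject_rate) at a threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

for t in (0.55, 0.65, 0.75):
    far, frr = rates(t)
    print(f"threshold={t:.2f}  FAR={far:.1%}  FRR={frr:.1%}")
```

Raising the threshold always moves the two error rates in opposite directions; the only way to improve both at once is a better sensor or algorithm, not a different threshold.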

These error rates aren’t evenly distributed across the population, either. Facial recognition systems have consistently shown higher error rates for people with darker skin tones, women, and older adults. When a biometric system wrongly identifies someone in a law enforcement context, the consequences can range from missed flights to wrongful arrests.

Age and Disability Create Real Barriers

Biometric traits degrade over time. Fingerprints wear down with age and manual labor. Skin loses elasticity. Eyes change. The quality of a biometric scan captured from an older person tends to be worse than one from a younger person, leading to higher failure-to-enroll and failure-to-capture rates. If you’ve ever watched an elderly relative struggle repeatedly with a fingerprint scanner on a phone, you’ve seen this in action.

People with disabilities, injuries, or certain medical conditions face similar problems. Missing fingers, scarred skin, cataracts, or mobility limitations that make it hard to position yourself correctly for a scanner can all prevent enrollment entirely. When a biometric system is the only way to access a service, these failures don't just cause inconvenience. They exclude marginalized and disadvantaged populations from vital services and create extra barriers for the people who can least afford them.

Biometrics Can Be Spoofed

Despite their reputation as high-tech security, biometric systems can be fooled with surprisingly low-tech methods. Researchers have demonstrated successful attacks using printed fingerprints pressed onto polymer or gelatin molds, printed iris images, cosmetic contact lenses that mimic another person’s iris pattern, and photos or videos held up to facial recognition cameras instead of a live face. More sophisticated attacks use 3D-printed masks or synthetic fingerprints molded from latent prints left on surfaces.

These are called presentation attacks, and they exploit the fact that a biometric sensor is ultimately just a camera or scanner. It reads a pattern. If something else produces that same pattern convincingly enough, the system accepts it. Liveness detection (checking that the biometric sample comes from a real, present human) has improved, but it remains an arms race between attackers and defenders.

Function Creep and Surveillance

Biometric data collected for one stated purpose has a persistent tendency to get used for others. This phenomenon, called function creep or mission creep, is one of the most serious long-term risks of widespread biometric adoption.

Facial recognition technology illustrates this clearly. A system installed to help find missing children at an airport can be expanded to track the movements of protesters, identify people at political rallies, or monitor employee behavior. Government agencies at various levels have expanded their discretion to retain, access, monitor, and use biometric data more broadly over time. Commercial facial recognition services, meanwhile, are driven by economic interests: your faceprint might be used to target you with advertising or track your shopping habits.

Clearview AI built a database of more than 10 billion faceprints scraped from people’s online photos without their knowledge or consent. The company sold access to this database to law enforcement and private entities alike. A lawsuit under Illinois’s Biometric Information Privacy Act (BIPA) resulted in a settlement that permanently banned Clearview from selling its database to most private entities nationwide and barred it from providing access to any government entity in Illinois for five years. But the existence of 10 billion faceprints in a single company’s database illustrates how quickly biometric collection can scale beyond anything individuals agreed to.

The Chilling Effect on Public Life

When people know their faces, voices, or movements are being captured and analyzed, they behave differently. Researchers have documented how the normalization of constant biometric monitoring creates a chilling effect on free speech and association. People become less likely to attend protests, visit certain websites, or associate with certain groups when they know they might be identified.

This erosion of anonymity in public spaces represents a fundamental shift in how societies function. Historically, being in a crowd offered a degree of privacy. Biometric surveillance eliminates that. Facial recognition can track and identify individuals without their knowledge or consent, and the vast amounts of data collected in crowds can be repurposed far beyond their original intent, misused, or exploited to target specific demographic groups.

Legal Protections Are Uneven

Only a handful of jurisdictions have meaningful laws governing biometric data collection. Illinois's BIPA, adopted in 2008, remains the strongest in the United States. It requires companies to notify individuals and obtain written consent before collecting biometric identifiers like fingerprints, faceprints, or iris scans. Violations carry real penalties: Facebook paid $650 million to settle a BIPA class action over collecting faceprints without consent.

But most states and countries have no equivalent law. Without these frameworks, there is a real risk of biometric data being exploited for purposes far outside its original intent. Your employer might collect your fingerprint for a time clock, and nothing stops that data from being shared, sold, or subpoenaed unless a specific law prohibits it. The technology has outpaced regulation in most of the world, leaving people’s most permanent personal data protected by little more than a company’s privacy policy.