Facial recognition is real, widely deployed, and more accurate than most people realize. The top-performing algorithms tested by the National Institute of Standards and Technology fail to identify the correct person less than 0.1% of the time when searching against databases of millions of faces. The technology is already embedded in your phone, at airport security, and in police investigations across the world.
How the Technology Works
At its core, facial recognition captures an image of your face, extracts unique features from it, and compares those features against stored data. A camera picks up your face, and the software maps key landmarks: the distance between your eyes, the shape of your jawline, the contours of your cheekbones. Those measurements get compressed into a compact mathematical template, a kind of numerical fingerprint for your face.
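The enroll-then-compare idea can be sketched in a few lines. This is a toy illustration, not a real system: production software derives templates with deep neural networks, while here the "template" is just a normalized list of hypothetical landmark measurements, and the threshold is an invented placeholder.

```python
import math

def embed(landmarks):
    """Toy 'template': normalize a list of facial measurements so the
    result depends on relative proportions rather than absolute image
    size. Real systems use deep networks; these landmark distances are
    placeholder inputs for illustration."""
    norm = math.sqrt(sum(x * x for x in landmarks))
    return [x / norm for x in landmarks]

def similarity(a, b):
    """Cosine similarity between two templates (closer to 1.0 = more alike)."""
    return sum(x * y for x, y in zip(a, b))

# Hypothetical measurements (eye distance, jaw width, cheekbone span, ...)
enrolled = embed([62.0, 118.0, 131.0, 71.0])
probe    = embed([63.1, 117.2, 130.4, 70.6])  # same face, slightly different photo
stranger = embed([55.0, 140.0, 120.0, 90.0])  # a different face

THRESHOLD = 0.999  # invented; real systems tune this to trade false accepts vs. rejects
print(similarity(enrolled, probe) >= THRESHOLD)     # the probe matches
print(similarity(enrolled, stranger) >= THRESHOLD)  # the stranger does not
```

Deciding whether two templates belong to the same person then reduces to asking whether their similarity clears a threshold, which is why every such system has a tunable trade-off between false matches and false rejections.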
Early systems relied on flat, two-dimensional photos and relatively simple techniques to reduce the complexity of facial data into something a computer could compare quickly. Modern systems go further. Three-dimensional facial recognition uses depth-sensing cameras (like the infrared sensor on newer smartphones) to capture the actual shape of your face, not just a flat image. Research has shown that 3D face recognition is significantly more accurate than 2D for upright faces, because the extra depth information helps the system process your face as a whole structure rather than a collection of flat features.
Where It’s Already in Use
If you’ve traveled through a U.S. airport recently, your face has likely been scanned. U.S. Customs and Border Protection uses facial comparison technology at 238 airports for travelers entering the country, plus 59 locations for international departures. TSA also uses it for eligible PreCheck and Global Entry members at security checkpoints and bag drop counters, replacing the traditional ID check with a camera glance.
Your smartphone is another everyday example. Apple’s Face ID claims a false acceptance rate of less than one in a million, meaning the odds of a random stranger unlocking your phone with their face are extraordinarily low. Banks, payment apps, and building security systems increasingly rely on the same type of technology for identity verification.
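It helps to see what a "one in a million" false acceptance rate means in aggregate. A quick back-of-envelope calculation, assuming each attempt is independent (a simplification), shows why a single stranger is no threat while very large numbers of attempts would eventually matter:

```python
# Back-of-envelope: if each random stranger has a one-in-a-million chance
# of matching (the claimed false acceptance rate), what are the odds that
# at least one of n strangers unlocks the phone? Assumes independent
# attempts, which is a simplification.
FAR = 1e-6

def p_at_least_one_match(n):
    return 1 - (1 - FAR) ** n

print(p_at_least_one_match(1))       # one stranger: ~0.000001
print(p_at_least_one_match(10_000))  # ten thousand strangers: still under 1%
```

Even ten thousand independent tries leave the odds of a single false unlock below one percent, which is why a face match is considered acceptable for phone unlocking but is usually paired with a passcode fallback.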
Law enforcement represents one of the most consequential uses. Clearview AI, one of the most prominent vendors in this space, has built a database of over 50 billion images scraped from websites and social media. At least 600 police departments use its service. In 2022, U.S. law enforcement conducted more than one million searches through the platform. By 2023, that number had doubled. In the early months of the war with Russia, Ukrainian government agencies used the platform more than 5,000 times to identify individuals.
How Accurate It Really Is
The best algorithms are remarkably accurate under good conditions. NIST’s ongoing Face Recognition Technology Evaluation tests commercial systems against massive photo databases. The top-performing algorithms miss the correct match less than 0.07% of the time when searching a database of 12 million faces, and they do this while keeping the rate of false identifications extremely low.
But accuracy drops in less-than-ideal conditions. Poor lighting, unusual angles, aging, and low-resolution security footage all degrade performance. The gap between a controlled photo taken at a border checkpoint and a grainy still from a convenience store camera is enormous.
The more troubling accuracy gap is demographic. A landmark 2019 NIST study found that for one-to-one matching (confirming whether two photos show the same person), Asian and African American faces produced false positive rates 10 to 100 times higher than Caucasian faces, depending on the algorithm. American Indian and Pacific Islander groups had the highest false positive rates of all. African American women were disproportionately affected in one-to-many searches, the type police use to identify a suspect from a database. Interestingly, algorithms developed in Asian countries did not show the same disparity between Asian and Caucasian faces, suggesting the bias stems at least partly from the training data and development choices, not from anything inherent to the technology.
How It Can Be Fooled
Facial recognition systems are vulnerable to what security researchers call “spoofing.” Someone can try to trick a system using a printed photo, a video playing on a screen, a warped image, or even a realistic 3D mask. These attacks work more often than you might expect against basic systems that simply match a camera image to a stored template.
To counter this, modern systems use liveness detection: techniques that try to confirm a real, present human is in front of the camera rather than a photograph or video. These checks might analyze subtle skin texture, detect the micro-movements of a living face, or use infrared sensors to verify three-dimensional depth. Liveness detection adds computational cost and complexity, and the arms race between spoofing methods and countermeasures is ongoing. Consumer devices like iPhones use dedicated hardware for this, while cheaper or older systems may lack robust anti-spoofing features entirely.
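The depth-based variant of liveness detection can be illustrated with a toy check: a printed photo or phone screen held up to a depth camera is nearly flat, while a real face has centimeters of relief (the nose sits closer to the camera than the cheeks, the cheeks closer than the ears). The sensor readings and the 10 mm threshold below are invented for illustration; production systems fuse many signals rather than one number.

```python
def looks_three_dimensional(depth_samples_mm, min_relief_mm=10.0):
    """Toy depth-based liveness check.

    depth_samples_mm: distances from the camera to points across the
    face region. A flat spoof (printed photo, phone screen) shows almost
    no depth variation; a real face does. The 10 mm threshold is an
    invented placeholder, not a value from any real system.
    """
    relief = max(depth_samples_mm) - min(depth_samples_mm)
    return relief >= min_relief_mm

# Hypothetical sensor readings (millimeters from camera)
real_face  = [412.0, 398.5, 405.2, 431.8, 420.1]  # nose tip ~33 mm closer than ears
flat_photo = [500.1, 500.4, 499.8, 500.2, 500.0]  # a sheet of paper is flat

print(looks_three_dimensional(real_face))   # True
print(looks_three_dimensional(flat_photo))  # False
```

A real attacker could defeat this single check with a curved print or a 3D mask, which is exactly why deployed systems layer depth with texture analysis, micro-movement detection, and infrared sensing.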
Legal Restrictions Are Growing
The European Union’s AI Act, one of the most comprehensive AI regulations in the world, heavily restricts real-time facial recognition in public spaces. Police can only use live biometric identification systems in specific, narrow circumstances: searching for missing persons, preventing imminent terrorist threats, or locating suspects in serious crimes. Each use requires authorization from a judicial authority or independent administrative body, must be limited in time and geographic scope, and must be reported to data protection authorities. No legal decision with adverse effects on a person can be based solely on the output of a real-time facial recognition system.
Several U.S. cities, including San Francisco and Boston, have banned government use of facial recognition. Other jurisdictions have taken a lighter touch, requiring disclosure or limiting use to specific scenarios. The regulatory landscape is uneven and evolving quickly, but the trend is toward more oversight rather than less.
What This Means for You
Facial recognition is not science fiction or a fringe experiment. It is a mature, commercially deployed technology processing millions of faces daily. The version on your phone is highly accurate and difficult to spoof. The version used by law enforcement is powerful but carries real risks of misidentification, particularly for people of color. And the version scanning you at the airport is already so routine that most travelers pass through it without realizing it happened.
The core technology works. The open questions are about when it should be used, who gets to use it, and what safeguards prevent the inevitable errors from ruining someone’s life.

