False rejection rate (FRR) is the percentage of times a biometric or authentication system incorrectly denies access to a legitimate user. If your fingerprint scanner fails to recognize you one out of every hundred attempts, that system has a 1% FRR. It’s one of the two core error metrics used to evaluate any system that verifies identity, from facial recognition at airports to the fingerprint sensor on your phone.
How FRR Is Calculated
The formula is straightforward: divide the number of false rejections by the total number of legitimate authentication attempts. If 1,000 authorized users try to unlock a system and 15 are incorrectly turned away, the FRR is 1.5%. In technical literature, you’ll sometimes see the closely related term “false non-match rate” (FNMR); strictly speaking, FNMR counts errors in individual comparisons while FRR counts whole authentication attempts, but the two are often used interchangeably.
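The arithmetic above can be sketched in a few lines of Python (the function name and numbers are illustrative):

```python
def false_rejection_rate(false_rejections: int, legitimate_attempts: int) -> float:
    """Return FRR as a percentage: false rejections / total legitimate attempts."""
    if legitimate_attempts <= 0:
        raise ValueError("need at least one legitimate attempt")
    return 100.0 * false_rejections / legitimate_attempts

# 15 authorized users incorrectly turned away out of 1,000 attempts
print(false_rejection_rate(15, 1000))  # 1.5
```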
In statistics more broadly, a false rejection is known as a Type I error. It’s the system incorrectly saying “no” when the answer should be “yes.” The biometric world borrowed this concept and turned it into a practical measurement for evaluating real-world performance.
FRR vs. False Acceptance Rate
Every authentication system has two types of mistakes. FRR measures how often it rejects the right person. False acceptance rate (FAR) measures the opposite: how often it lets the wrong person in. These two errors sit on a seesaw. When you push one down, the other goes up.
This tradeoff is controlled by the system’s sensitivity threshold, essentially how strict the system is when comparing your biometric sample to the one stored on file. The system generates a similarity score for every attempt, then compares that score against a preset threshold. Raise the threshold and the system becomes pickier: fewer impostors get through (lower FAR), but more legitimate users get rejected (higher FRR). Lower the threshold and access becomes smoother for real users, but impostors have an easier time slipping past.
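The seesaw is easy to see with a handful of made-up similarity scores (all values below are hypothetical, chosen only to illustrate the tradeoff):

```python
def accepted(similarity: float, threshold: float) -> bool:
    """Accept the identity claim only if the score clears the threshold."""
    return similarity >= threshold

# Hypothetical similarity scores from known genuine users and known impostors
genuine_scores = [0.91, 0.88, 0.72, 0.95, 0.80]
impostor_scores = [0.40, 0.78, 0.55, 0.62]

for threshold in (0.70, 0.85):
    frr = sum(not accepted(s, threshold) for s in genuine_scores) / len(genuine_scores)
    far = sum(accepted(s, threshold) for s in impostor_scores) / len(impostor_scores)
    print(f"threshold {threshold}: FRR {frr:.0%}, FAR {far:.0%}")

# On this toy data, raising the threshold from 0.70 to 0.85 drops FAR
# from 25% to 0% but pushes FRR from 0% to 40%.
```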
The point where FAR and FRR are exactly equal is called the Equal Error Rate (EER), sometimes called the Crossover Error Rate. This single number gives you a quick way to compare systems. A facial recognition algorithm with an EER of 0.5% is more accurate overall than one with an EER of 2%, regardless of how each is tuned.
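Given labeled genuine and impostor scores, the EER can be estimated by sweeping the threshold until the two error rates meet. A minimal sketch, using toy data:

```python
def frr_at(threshold, genuine):
    """Fraction of genuine scores falling below the threshold (falsely rejected)."""
    return sum(s < threshold for s in genuine) / len(genuine)

def far_at(threshold, impostors):
    """Fraction of impostor scores at or above the threshold (falsely accepted)."""
    return sum(s >= threshold for s in impostors) / len(impostors)

def equal_error_rate(genuine, impostors, steps=1000):
    """Sweep thresholds; return (threshold, FRR, FAR) where the gap is smallest."""
    t = min((i / steps for i in range(steps + 1)),
            key=lambda x: abs(far_at(x, impostors) - frr_at(x, genuine)))
    return t, frr_at(t, genuine), far_at(t, impostors)

genuine = [0.90, 0.80, 0.70, 0.60]    # hypothetical genuine-user scores
impostors = [0.65, 0.50, 0.40, 0.30]  # hypothetical impostor scores
t, frr, far = equal_error_rate(genuine, impostors)
print(f"EER of about {frr:.0%} near threshold {t:.2f}")
```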
What Causes False Rejections
Biometric systems work by capturing a sample of something unique to you, extracting measurable features, and comparing them against a stored template. Anything that disrupts that chain can trigger a false rejection.
For fingerprint scanners, wet or dry fingers, cuts, dirt on the sensor, or simply placing your finger at a slightly different angle can all change the reading enough to cause a mismatch. Facial recognition systems are sensitive to lighting changes, new glasses, weight fluctuations, or aging. Voice recognition can stumble on background noise, a cold, or even emotional state affecting your vocal patterns.
Environmental conditions such as poor lighting, sensor degradation, and dirty hardware can compromise the data capture process, sometimes requiring multiple rounds of scanning. Over time, sensors wear down and templates go stale as your body changes, both of which gradually push FRR upward if the system isn’t maintained or re-enrolled.
How FRR Shapes Real-World Decisions
The right FRR depends entirely on context. A high-security facility like a data center or border checkpoint typically sets a high threshold, prioritizing low FAR even if it means more legitimate users get temporarily rejected. Being asked to scan your fingerprint twice is a minor inconvenience compared to an unauthorized person gaining access.
Consumer devices take the opposite approach. Apple’s Face ID, for instance, is designed for convenience. Apple’s published figure puts the chance of a random person unlocking your phone at about 1 in 1,000,000, but the system is tuned so that you, the owner, rarely experience a rejection. A phone that constantly fails to recognize its owner would be abandoned within a week, no matter how secure it is.
For user-facing applications like banking apps or workplace entry, even a modest FRR creates friction that compounds quickly. If a system with a 3% FRR handles 10,000 authentications per day, that’s 300 frustrated users daily, each one needing to retry or fall back to a password. Over time, high FRR erodes trust in the system and drives users toward workarounds that may be less secure.
How Modern Systems Perform
Today’s best facial recognition algorithms achieve remarkably low false rejection rates. NIST’s Face Recognition Technology Evaluation, which independently benchmarks algorithms against large datasets, shows that top-performing systems now reach FRR values as low as 0.14% to 0.2% on high-quality visa and passport photos, measured at an extremely strict false match rate of 1 in 1,000,000. That means roughly 1 to 2 out of every 1,000 legitimate comparisons fail.
Performance drops in less controlled conditions. The same NIST evaluations show FRR climbing to around 3.4% for border kiosk images, where lighting, pose, and image quality are harder to control. This gap between controlled and real-world conditions is one of the most important factors to consider when evaluating a biometric system’s claims.
How FRR Is Tested and Reported
The international standard for biometric performance testing is ISO/IEC 19795, a multipart framework that establishes how error rates should be measured and reported. It covers principles, testing methodologies for different scenarios, modality-specific testing (fingerprint vs. face vs. iris), and even on-card comparison algorithms.
One important detail the standard addresses: you can’t simply count how many times a system says “yes” or “no” and call it an FRR measurement. You need additional controls to confirm whether each identity claim was actually legitimate. Without that ground truth, you’re just measuring rejection rate, not false rejection rate. Proper testing requires knowing exactly who is presenting their biometric and whether they’re enrolled in the system, which is why credible benchmarks like NIST’s use carefully curated datasets with verified identities.
Tuning the Threshold
If you’re selecting or configuring a biometric system, the similarity threshold is your primary lever. A facial recognition system comparing two images might generate a similarity score from 0 to 1. Setting the threshold at 0.85 means any score below that triggers a rejection. Lowering it to 0.75 lets more borderline matches through, reducing FRR but increasing the risk of accepting an impostor.
Beyond threshold adjustment, system designers reduce FRR through better hardware (higher-resolution sensors, controlled lighting), improved algorithms (particularly deep learning models that handle more variation in appearance), and multimodal approaches that combine two biometric types, like face and fingerprint together. When both modalities must fail simultaneously for a false rejection to occur, the combined FRR drops dramatically.
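Under the (strong) assumption that the two modalities fail independently, the combined rate is simply the product of the individual rates:

```python
def combined_frr(frr_a: float, frr_b: float) -> float:
    """FRR of an OR-combined system: access is granted if either modality
    matches, so a false rejection requires both to fail at once.
    Assumes the two failure events are independent, which real systems
    only approximate."""
    return frr_a * frr_b

# Face and fingerprint each falsely rejecting 2% of the time
print(combined_frr(0.02, 0.02))  # roughly 0.0004, i.e. 0.04%
```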
Periodic re-enrollment also helps. Updating stored templates to reflect how a person looks or sounds today, rather than relying on data captured years ago, keeps the gap between live samples and stored references small enough that the system stays accurate.

