AFR (Annualized Failure Rate) is the raw percentage of hardware units, typically hard drives, expected to fail in a year. Adjusted AFR removes “No Trouble Found” returns from the count, giving a lower number that reflects only confirmed, genuine failures. The difference matters because a significant portion of returned drives turn out to have nothing wrong with them, and manufacturers prefer to report the adjusted figure.
How AFR Is Calculated
Annualized Failure Rate expresses the likelihood that a drive will fail within a one-year period, stated as a percentage. The basic idea is straightforward: take the number of failures observed, divide by the total operating time of all drives in the population, and scale that to a full year. If you have 1,000 drives running for a year and 15 fail, the AFR is 1.5%.
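The calculation above can be sketched in a few lines. The function name and the drive-day bookkeeping are my own illustration, not a standard API:

```python
def afr(failures, drive_days):
    """Annualized failure rate as a percentage.

    Total operating time is expressed in drive-days and converted to
    drive-years, so partial-year observations annualize correctly.
    """
    drive_years = drive_days / 365.25
    return failures / drive_years * 100

# 1,000 drives running a full year, 15 failures
print(round(afr(15, 1000 * 365.25), 2))  # 1.5
```

Because the input is drive-days rather than a drive count, the same function works for fleets observed over a quarter or with drives added mid-year.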
AFR is closely related to another reliability metric called MTBF (Mean Time Between Failures). The two are mathematically linked. If a drive’s MTBF is listed as 1,000,000 hours, you can estimate its AFR by dividing 8,766 (the number of hours in a year, averaging in leap years) by the MTBF value, which gives roughly 0.88%. The higher the MTBF, the lower the AFR.
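That conversion, as a quick sketch (this is the simple linear approximation, which is close to exact only when MTBF is much larger than one year):

```python
HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours

def mtbf_to_afr(mtbf_hours):
    """Approximate AFR (percent) from a quoted MTBF,
    assuming a constant failure rate."""
    return HOURS_PER_YEAR / mtbf_hours * 100

print(round(mtbf_to_afr(1_000_000), 2))  # 0.88
```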
Despite being widely used, there is no single accepted industry standard definition for AFR. IEEE has noted that various companies calculate and measure it differently, and the abbreviation “AFR” gets applied to several slightly different formulas. This inconsistency is one reason comparing AFR numbers across manufacturers requires caution.
What “Adjusted” Means in Adjusted AFR
When a drive is returned under warranty, the manufacturer tests it to determine what went wrong. In many cases, the returned unit passes every diagnostic test with no defect found. These are classified as “No Trouble Found” or NTF returns. The drive may have been pulled because of a cabling issue, a software problem, user error, or an intermittent condition that doesn’t reproduce in the lab.
Adjusted AFR subtracts these NTF returns from the failure count before calculating the rate. If 100 drives were returned in a given period but 30 tested as NTF, the adjusted calculation uses 70 failures instead of 100. This produces a noticeably lower failure rate. Manufacturers argue this is a more accurate reflection of actual hardware defects, since the NTF drives weren’t truly broken. Critics point out that those drives still caused real downtime and replacement costs for the customer, making the raw AFR more useful for capacity planning.
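The adjustment is just a subtraction before the rate is computed. A minimal sketch, reusing the 100-returns / 30-NTF example above (the function and its arguments are illustrative, not any manufacturer's actual methodology):

```python
def adjusted_afr(returns, ntf_count, drive_days):
    """AFR (percent) after excluding No Trouble Found returns."""
    confirmed_failures = returns - ntf_count
    return confirmed_failures / (drive_days / 365.25) * 100

# 100 returns, 30 NTF, over 10,000 drive-years of operation
raw = 100 / 10_000 * 100                               # 1.0% raw AFR
adj = adjusted_afr(100, 30, 10_000 * 365.25)           # 0.7% adjusted
print(raw, adj)
```

The gap between the two numbers scales directly with the NTF fraction: a 30% NTF rate knocks 30% off the reported figure.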
Why Manufacturers Prefer Adjusted AFR
Drive manufacturers almost always report the adjusted figure on their specification sheets. The reasoning is that NTF returns inflate the failure rate with events that aren’t the drive’s fault. A drive pulled from a server because of a faulty SATA cable, for instance, would count as a failure in raw AFR even though the drive itself was fine. Adjusted AFR filters that noise out.
The NTF rate varies by product line and manufacturer, but it can represent a meaningful chunk of all returns. This means the gap between raw AFR and adjusted AFR isn’t trivial. A drive with a raw AFR of 1.5% might carry an adjusted AFR closer to 1.0% once NTF returns are excluded. When you’re evaluating a manufacturer’s published reliability spec, you’re almost certainly looking at the adjusted number.
What Real-World Failure Rates Look Like
Backblaze, a cloud storage company that operates hundreds of thousands of hard drives, publishes quarterly and annual failure statistics from its fleet. Because Backblaze tracks every drive failure internally and doesn’t ship drives back to manufacturers for NTF classification, their reported AFR is effectively a raw, unadjusted number. This makes their data a useful counterpoint to manufacturer specs.
In their 2025 annual report, Backblaze’s fleet-wide AFR was 1.36%. Individual drive models varied widely. Some performed exceptionally well: a Seagate 16TB enterprise model came in at just 0.22%, and several Western Digital and Toshiba models sat below 0.60%. Others fared worse, with a Toshiba 16TB model reaching 6.30% and a Seagate 10TB model hitting 5.66%. These numbers reflect drives running 24/7 in data center conditions, which is more demanding than typical desktop use but also more consistent in temperature and vibration control.
Backblaze requires a minimum of 100 drives and 10,000 drive days in a quarter for a model to be included in their reporting, and 500 drives with 100,000 lifetime drive days for lifetime statistics. Drives still undergoing certification testing are excluded entirely. These thresholds help ensure the data is statistically meaningful rather than skewed by small sample sizes.
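Those inclusion thresholds are simple to encode. This sketch assumes a per-model record with `drives` and `drive_days` fields, which is my own framing rather than Backblaze's actual schema:

```python
def include_quarterly(model):
    """Backblaze's stated quarterly thresholds: 100 drives, 10,000 drive-days."""
    return model["drives"] >= 100 and model["drive_days"] >= 10_000

def include_lifetime(model):
    """Lifetime thresholds: 500 drives, 100,000 lifetime drive-days."""
    return model["drives"] >= 500 and model["drive_days"] >= 100_000

small_batch = {"drives": 60, "drive_days": 4_500}
print(include_quarterly(small_batch))  # False: too few drives and drive-days
```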
Which Number Should You Use
If you’re comparing drives on a spec sheet, know that the AFR you see is almost certainly adjusted. Two manufacturers using different NTF classification methods could report different adjusted AFRs for drives with identical real-world reliability. The adjusted number is useful for comparing products within the same manufacturer’s lineup, where the NTF methodology stays consistent.
For capacity planning, especially in environments where you’re managing dozens or hundreds of drives, raw AFR gives you a more conservative and practical estimate. It tells you how many drives you’ll actually need to replace in a year, regardless of whether the cause was a genuine defect or something else. A drive that gets pulled from production costs the same amount of labor and downtime whether it was truly broken or not.
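For planning purposes the arithmetic is straightforward: expected replacements per year is fleet size times the raw AFR. A minimal sketch:

```python
def expected_replacements(fleet_size, raw_afr_percent):
    """Expected annual drive replacements, using the raw (unadjusted) AFR."""
    return fleet_size * raw_afr_percent / 100

# 200 drives at a raw AFR of 1.5%: budget for about 3 replacements a year
print(expected_replacements(200, 1.5))  # 3.0
```

Using the adjusted AFR here would understate spares, labor, and downtime, since NTF pulls consume those resources just like confirmed failures.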
Research from Carnegie Mellon University has also highlighted that traditional reliability metrics, including AFR, tend to underrepresent early wear-out failures. IDEMA, the trade association for the disk drive industry, has proposed standards that break reliability into separate windows: the first few months of operation, a middle period, and long-term use out to five years. This matters because failure rates aren’t constant over a drive’s life. They tend to be higher in the first few months (infant mortality), lower during the middle years, and higher again as the drive ages. A single AFR number, adjusted or not, flattens all of that into one average.
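The windowed view can be illustrated with a piecewise rate. The window boundaries and rates below are made-up numbers chosen only to show the bathtub shape, not IDEMA's actual proposed values:

```python
def afr_by_age(age_years):
    """Illustrative piecewise AFR (percent) over a drive's life.

    Mimics the bathtub curve: elevated infant mortality, a stable
    middle period, and rising wear-out. All numbers are hypothetical.
    """
    if age_years < 0.25:   # first few months: infant mortality
        return 2.0
    elif age_years < 3.0:  # stable middle years
        return 0.9
    else:                  # wear-out toward end of service life
        return 2.5

print([afr_by_age(t) for t in (0.1, 1.0, 4.0)])  # [2.0, 0.9, 2.5]
```

A single lifetime AFR averages across all three regions, which is exactly the flattening the text describes.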
When evaluating any AFR figure, the most important question is what went into the count. A manufacturer’s adjusted AFR tells you about confirmed hardware defects. A fleet operator’s raw AFR tells you about operational reality. Both are valid, but they answer different questions.