Among the major accident causation theories taught in safety science, Perrow’s Normal Accident Theory (NAT) is the one rarely used in practice. Unlike other models that have been adapted into investigation tools and prevention frameworks, NAT has never been developed into a usable technique for analyzing specific incidents. Safety researchers have noted this gap and have generally advocated for more detailed, actionable approaches instead.
That said, Heinrich’s original Domino Theory from the 1930s, while historically influential, is also considered outdated in its original form. And the broader “person approach” to accidents, which blames individual human error, has fallen out of favor in most professional settings. Understanding why each of these has declined helps clarify what modern safety science actually values.
Why Normal Accident Theory Is Rarely Applied
Normal Accident Theory, developed by sociologist Charles Perrow, classifies systems along two dimensions: complexity and coupling. A system is “complex” when its internal interactions produce unplanned, unexpected sequences. It is “tightly coupled” when events happen so rapidly that there is little room for human intervention. NAT’s central claim is striking: when a system is both complex and tightly coupled, accidents are inevitable. They are, in Perrow’s term, “normal.”
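Perrow's two dimensions are easy to picture as a 2x2 grid. A minimal sketch of that classification, with quadrant labels and example systems chosen here for illustration rather than taken from Perrow's own tables:

```python
# Illustrative sketch of Perrow's complexity/coupling grid.
# Quadrant labels and example systems are assumptions for illustration,
# not terminology from Perrow's text.

def classify(interactions: str, coupling: str) -> str:
    """Return the quadrant for a system.

    interactions: "linear" or "complex"
    coupling:     "loose" or "tight"
    """
    if interactions == "complex" and coupling == "tight":
        # NAT's central claim: accidents in this quadrant are "normal".
        return "normal-accident-prone"
    if interactions == "complex":
        return "complex but loosely coupled"
    if coupling == "tight":
        return "tightly coupled but linear"
    return "linear and loosely coupled"

print(classify("complex", "tight"))  # nuclear power, per Perrow's argument
print(classify("linear", "loose"))   # e.g. most routine assembly work
```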
The theory offers a useful way to think about risk in industries like nuclear power or chemical processing, where tight coupling and hidden interactions are genuine features of the work. But that’s also its limitation. NAT describes why accidents happen in certain system types without offering a structured method for investigating a specific incident or designing targeted prevention measures. No formal incident analysis technique based on NAT has ever been developed, which is why safety practitioners and researchers have largely set it aside in favor of models that produce actionable findings.
In a 2025 review of accident models published in the Journal of Applied Clinical Medical Physics, the authors explicitly excluded NAT from their analysis for this reason, noting that safety researchers have advocated for more detailed views of accidents than NAT provides.
Heinrich’s Domino Theory: Influential but Outdated
Heinrich’s Domino Theory, proposed in 1932, was one of the first structured models of accident causation. It imagines an accident as a chain of five falling dominos: social environment and ancestry, fault of the person, unsafe act or condition, the accident itself, and injury. Remove any one domino and you break the chain.
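The chain logic is simple enough to state directly: an injury results only if every upstream domino falls, so removing any one of them stops the sequence. A toy sketch of that linear structure, using the five labels above:

```python
# Heinrich's five dominos as a strictly linear chain (labels from the text above).
DOMINOS = [
    "social environment and ancestry",
    "fault of the person",
    "unsafe act or condition",
    "accident",
    "injury",
]

def injury_occurs(removed: set[str]) -> bool:
    """The sequence completes only if no domino in the chain has been removed."""
    return all(d not in removed for d in DOMINOS)

print(injury_occurs(set()))                        # True: the full chain falls
print(injury_occurs({"unsafe act or condition"}))  # False: the chain is broken
```

The linearity is the point, and also the flaw the critics highlight: real incidents involve interacting causes, not a single removable link.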
This model shaped decades of workplace safety thinking, and versions of it are still referenced in some industrial settings. Bird and Loftus updated it in 1976, replacing “ancestry and social environment” with “management’s lack of control” and adding categories for basic causes and immediate causes. These second-generation domino theories remain among the most widely used models for accident mitigation in industry, even though the original version has serious problems.
The biggest criticism is that the original model puts almost all the weight on the individual worker. Four of the five dominos point at the person as the cause. In practice, this narrow focus encourages safety professionals to treat the "unsafe act" as the easiest domino to remove, while unsafe conditions go largely unaddressed. The theory also fails to account for complex interactions between multiple system components: it treats accidents as simple, linear chains, when real-world incidents rarely unfold that neatly. It never addresses how risk should be quantified at each domino, and compressing every accident into five or six categories is an overgeneralization.
So while the Domino Theory isn’t “rarely used” in the same way NAT is, its original 1930s form is widely considered a product of its era, reflecting a culture of mass production and standardization where blaming the worker was the default response.
The Decline of Person-Centered Models
More broadly, any accident theory that focuses primarily on individual human error has lost credibility in professional safety management. This “person approach” treats errors as moral failings or carelessness, leading organizations to respond with blame, retraining, or disciplinary action. It remains common in everyday thinking (someone made a mistake, so punish or replace them), but it has serious weaknesses that make it poorly suited to complex environments.
The person approach isolates unsafe acts from their system context. It overlooks two patterns that decades of research have made clear: the best, most experienced people make the worst mistakes, and the same set of circumstances tends to provoke similar errors regardless of who is involved. Errors are not random and they are not the monopoly of a careless few. They fall into recurrent patterns driven by system design, time pressure, communication gaps, and organizational culture.
By focusing on individuals, the person approach also undermines reporting culture. Workers who fear blame are less likely to report near misses and minor incidents. Without that data, organizations have no way to identify recurring hazards or understand where their safety margins are thinning until a serious event occurs.
What Modern Safety Science Uses Instead
The shift in accident theory over the past several decades has moved steadily from linear, person-focused models toward systems-based thinking. The most commonly referenced models in current practice fall into a few categories.
Reason’s Swiss Cheese Model remains widely used, particularly in healthcare and aviation. It treats accidents as the result of failures at multiple organizational layers, each with its own gaps (the “holes” in the cheese). An accident occurs when the holes in several layers line up simultaneously. This model directs attention away from individual blame and toward organizational defenses.
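A common textbook way to make the "holes lining up" intuition concrete (a numerical simplification, not part of Reason's own qualitative model) is to treat each defensive layer as failing independently with some small probability, so an accident requires every layer to fail at once and the probabilities multiply:

```python
from math import prod

# Hypothetical per-layer failure probabilities; the independence assumption
# is a simplification for illustration, since Reason's model is qualitative.
layer_failure_p = [0.1, 0.05, 0.02]  # e.g. design, supervision, frontline checks

# An accident requires the "holes" in every layer to line up simultaneously.
p_accident = prod(layer_failure_p)
print(f"{p_accident:.6f}")  # 0.000100
```

Even modestly reliable layers, stacked, drive the joint probability down sharply, which is why the model directs attention to maintaining multiple independent defenses rather than blaming the last person in the chain.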
Systems-theoretic models like STAMP (Systems-Theoretic Accident Model and Processes) go further, treating safety as a control problem rather than a failure problem. These approaches examine the entire system of constraints, feedback loops, and decision-making structures that are supposed to keep hazards under control. When those control structures degrade or become inadequate, accidents result.
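STAMP's control-loop framing can be sketched with a toy controller: a safety constraint is enforced only while the feedback channel works, and silently stops being enforced when feedback degrades. All names and numbers below are illustrative assumptions, not Leveson's notation:

```python
# Toy sketch of a STAMP-style control loop: a controller enforces the safety
# constraint "tank level stays below limit" using a sensor reading as feedback.

def control_step(level: float, sensor_ok: bool, limit: float = 100.0) -> float:
    """One control cycle: vent the tank when feedback shows the level is high."""
    reading = level if sensor_ok else 0.0  # degraded feedback under-reports the hazard
    if reading > limit * 0.9:
        level -= 20.0  # corrective action enforces the constraint
    return level

level = 95.0
print(control_step(level, sensor_ok=True))   # 75.0: constraint enforced
print(control_step(level, sensor_ok=False))  # 95.0: feedback lost, no correction
```

In STAMP terms, the second case is not a "component failure" in the classical sense: every part did what it was designed to do, yet the control structure as a whole stopped enforcing the safety constraint.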
International safety standards reflect this shift toward systems thinking. ISO 45001, the current global standard for occupational health and safety management systems, uses a Plan-Do-Check-Act framework that emphasizes hazard identification, risk assessment, worker participation, incident investigation, and continual improvement. It does not prescribe a single accident theory but embeds the principle that safety is an organizational and systemic responsibility, not just a matter of individual behavior.
How These Theories Compare at a Glance
- Normal Accident Theory (NAT): Rarely used in practice. Explains why some systems are inherently accident-prone but offers no investigation or prevention method.
- Heinrich’s Domino Theory (original): Historically important but considered outdated. Overly focused on individual fault and oversimplifies causation into a linear chain.
- Person-centered approaches: Still common in everyday blame culture but rejected by modern safety science for isolating errors from system context.
- Swiss Cheese Model: Widely used across industries. Highlights how organizational defenses fail at multiple levels.
- Systems-theoretic models (STAMP and similar): Growing in adoption for complex, tightly coupled systems. Treat safety as a control problem across entire organizations.
If you’re studying for an exam or certification in safety management, the most defensible answer to “which accident theory is rarely used” is Normal Accident Theory. It occupies a unique position: intellectually significant but practically abandoned because it was never translated into a tool that safety professionals could apply to real incidents.

