Driverless cars are, by the available data, significantly safer than human-driven vehicles. Over 7.14 million miles of rider-only driving, Waymo’s autonomous fleet, the largest operating without a safety driver, showed roughly an 80% reduction in injury-causing crashes compared with human drivers. But that headline number comes with important context: these vehicles operate in limited areas, under specific conditions, and face real vulnerabilities that human drivers don’t.
How the Numbers Compare to Human Drivers
The most rigorous comparison comes from a study published in late 2024 analyzing Waymo’s rider-only operations across Phoenix, San Francisco, and Los Angeles. The autonomous system was involved in 0.6 injury-related crashes per million miles, compared to 2.8 for human drivers in the same cities. For all police-reported crashes, the rate was 2.1 per million miles for Waymo versus 4.68 for humans, a 55% reduction.
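Those percentages follow directly from the per-million-mile rates. As a quick sanity check, here is the arithmetic using only the numbers cited above (a minimal Python sketch, purely illustrative):

```python
# Crash rates per million miles, as cited in the 2024 study above.
waymo_injury, human_injury = 0.6, 2.8    # injury-related crashes
waymo_police, human_police = 2.1, 4.68   # all police-reported crashes

def reduction(av_rate: float, human_rate: float) -> float:
    """Percentage reduction of the AV rate relative to the human rate."""
    return (1 - av_rate / human_rate) * 100

print(f"Injury crashes:  {reduction(waymo_injury, human_injury):.0f}% lower")  # ~79%, the ~80% headline
print(f"Police-reported: {reduction(waymo_police, human_police):.0f}% lower")  # ~55%
```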
For broader context, human-driven vehicles in the U.S. kill people at a rate of about 1.13 per 100 million miles traveled, based on NHTSA’s early 2024 estimates. That translates to roughly 8,650 deaths in just the first three months of 2024. No autonomous vehicle operating without a safety driver has been linked to a fatality at the time of writing, though the total miles driven by these fleets remain a small fraction of what humans collectively drive.
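The two national figures are mutually consistent. A rough back-calculation, again using only the numbers above, recovers the implied miles driven in the quarter:

```python
# NHTSA early-2024 estimates: ~8,650 deaths in Q1 at ~1.13 deaths
# per 100 million vehicle miles traveled.
deaths_q1 = 8_650
rate_per_100m_miles = 1.13

# Implied vehicle miles traveled in the quarter.
implied_miles = deaths_q1 / rate_per_100m_miles * 100_000_000
print(f"Implied Q1 mileage: {implied_miles / 1e9:.0f} billion miles")  # ~765 billion
```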
One methodological detail matters here: about half of Waymo’s reported collisions involved impact speeds under 1 mph, essentially parking-lot bumps that a human driver might not bother reporting. When those ultra-low-speed incidents were excluded, the safety advantage grew even larger in most comparisons.
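The exclusion itself is just a filter over per-collision records. A hypothetical sketch, assuming each record carries an impact-speed field; the study’s actual data schema and records are not public, so the values here are invented:

```python
# Hypothetical collision records: (impact_speed_mph, description).
# The study's real records are not public; these are illustrative only.
collisions = [
    (0.5, "parking-lot contact"),
    (0.8, "curb-side bump"),
    (12.0, "rear-ended at a light"),
    (25.0, "intersection sideswipe"),
]

THRESHOLD_MPH = 1.0  # the ultra-low-speed cutoff described above

# Keep only collisions a human driver would plausibly report.
reportable = [c for c in collisions if c[0] >= THRESHOLD_MPH]
print(f"{len(reportable)} of {len(collisions)} collisions at or above {THRESHOLD_MPH} mph")
```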
Where Driverless Cars Struggle
Autonomous vehicles have a distinct pattern of weaknesses that differs from human drivers. Humans cause crashes through distraction, impairment, speeding, and fatigue. Driverless cars never get drunk or check their phones, but they have their own failure modes.
The most common issue is overcaution. Autonomous vehicles tend to hesitate longer when turning at intersections or merging into heavy traffic. This sounds like a virtue, but it actually causes rear-end and sideswipe collisions because human drivers behind them don’t expect the hesitation. In mixed traffic, where autonomous and human-driven cars share the road, this mismatch in driving style creates friction.
Reading human behavior is another gap. A human driver can usually tell when a pedestrian is about to step off the curb or when a cyclist is preparing to swerve, based on body language and subtle social cues. Autonomous systems lack this psychological reasoning. They may not recognize a pedestrian’s intent, which can lead either to emergency braking (startling following drivers) or to delayed reactions. Current detection algorithms also struggle with small, partially occluded, or truncated objects (those cut off at the edge of a sensor’s view), categories that include children, cyclists, and people in wheelchairs.
Sensor Limits in Bad Weather
The sensors that give driverless cars their “vision” degrade meaningfully in rain, fog, and snow. LiDAR, the laser-based system most autonomous vehicles rely on for depth perception, loses about 25% of its detection range in fog and snowfall. In heavy rain above 30 millimeters per hour (a serious downpour), the number of data points the sensor collects drops by as much as 56%, and signal strength can fall by 73%.
In thick fog with visibility under 50 meters, the picture is similarly grim: data point collection drops up to 59%, with signal intensity falling 71%. These aren’t theoretical numbers. They come from real-world and simulation testing of the sensor hardware that autonomous vehicles depend on.
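To make those magnitudes concrete, a toy model can apply the cited worst-case losses to a baseline scan. The baseline point rate below is an illustrative assumption, and real degradation varies continuously with rain rate and fog density:

```python
# Worst-case degradation fractions taken from the figures cited above.
# The condition labels and baseline point rate are illustrative assumptions.
DEGRADATION = {
    # condition: (fraction of points lost, fraction of signal intensity lost)
    "clear":      (0.00, 0.00),
    "heavy rain": (0.56, 0.73),  # rain above ~30 mm/h
    "dense fog":  (0.59, 0.71),  # visibility under ~50 m
}

BASELINE_POINTS = 1_200_000  # assumed LiDAR points per second on a clear day

for condition, (point_loss, intensity_loss) in DEGRADATION.items():
    points = BASELINE_POINTS * (1 - point_loss)
    print(f"{condition:>10}: {points / 1e6:.2f}M points/s, "
          f"signal at {100 * (1 - intensity_loss):.0f}% of clear-sky strength")
```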
Modern systems compensate by fusing cameras, radar, and LiDAR, and newer vision algorithms are designed to handle rain and darkness. But the physical limitations of these sensors mean that driverless cars in a Minnesota blizzard face challenges that don’t exist on a clear day in Phoenix. This is one reason most commercial driverless services still operate in warm, dry cities.
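A crude way to see why fusion helps: if each modality independently misses an obstacle with some probability, requiring only one of the three to detect it drives the combined miss rate down sharply. The per-sensor miss probabilities below are invented for the sketch, and the independence assumption is itself an optimistic simplification:

```python
# Illustrative per-sensor probabilities of missing a single obstacle.
# These numbers are assumptions for the sketch, not measured values.
miss_prob = {
    "clear":      {"camera": 0.05, "radar": 0.10, "lidar": 0.02},
    "heavy rain": {"camera": 0.30, "radar": 0.12, "lidar": 0.25},
}

for weather, sensors in miss_prob.items():
    combined = 1.0
    for p in sensors.values():
        combined *= p  # the obstacle is missed only if every sensor misses it
    print(f"{weather}: combined miss probability {combined:.4%}")
```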
How These Vehicles Prevent Total Failures
The engineering philosophy behind driverless cars centers on eliminating any single point of failure. If one system breaks, a backup takes over. Steering systems use dual motors and dual controllers that constantly cross-check each other, ensuring that steering commands reach the wheels even if one motor fails. Braking works similarly, with redundant systems that can bring the vehicle to a controlled stop independently.
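The cross-checking pattern is worth sketching. The snippet below assumes a simplified setup where two independent controllers each compute a steering command and a supervisor compares them; real automotive designs use hardware lockstep and voting logic that is far more involved:

```python
# Simplified dual-controller cross-check. Real automotive systems use
# hardware lockstep and voting logic; this shows only the pattern.
TOLERANCE_DEG = 0.5  # invented: max allowed disagreement, in degrees

def cross_check(cmd_a: float, cmd_b: float) -> float:
    """Return a steering command if both controllers agree, else escalate."""
    if abs(cmd_a - cmd_b) <= TOLERANCE_DEG:
        return (cmd_a + cmd_b) / 2  # agreement: use the average command
    raise RuntimeError("controller disagreement: trigger safe fallback")

print(cross_check(3.1, 3.3))  # within tolerance -> 3.2
# cross_check(3.1, 7.9)       # would raise and hand control to the fallback path
```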
Power is another layer. The low-voltage electrical systems that run all the onboard computers use dual batteries and dual power converters. If one battery dies, the other keeps every critical function running. The goal is that any single hardware failure triggers a safe fallback: the car slows down, pulls to the shoulder, and stops. It won’t simply lose control.
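The fallback behavior is often described as a minimal-risk maneuver. Here is a toy state machine for that escalation, with invented state names and thresholds rather than anything from a published spec:

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    DEGRADED = auto()      # one redundant channel lost; plan a safe stop soon
    MINIMAL_RISK = auto()  # slow down, pull to the shoulder, stop

def next_mode(failed_channels: int) -> Mode:
    """Escalate based on how many independent channels have failed.
    The thresholds are illustrative, not from any published spec."""
    if failed_channels == 0:
        return Mode.NORMAL
    if failed_channels == 1:
        return Mode.DEGRADED   # the backup carries the load
    return Mode.MINIMAL_RISK   # any further failure: stop now

print(next_mode(0))  # Mode.NORMAL
print(next_mode(1))  # Mode.DEGRADED
print(next_mode(2))  # Mode.MINIMAL_RISK
```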
This redundancy is genuinely more reliable than the single-system design of most human-driven cars, where a brake line failure or power steering loss can be catastrophic. But redundancy has limits. It protects against component failure, not against the software misinterpreting what it sees.
Cybersecurity as a Safety Risk
Because driverless cars rely entirely on sensors and software, they’re vulnerable to attacks that have no equivalent in traditional driving. Sensor spoofing feeds false data into the car’s perception system: an attacker can project fake obstacles that trigger sudden braking, or mask real obstacles so the car doesn’t react. Sensor jamming overwhelms the sensors with noise, effectively blinding the vehicle.
These aren’t just theoretical concerns. Researchers have demonstrated that manipulated sensor inputs can cause erratic driving behavior. A car that suddenly “sees” a phantom object in its path will brake hard, endangering passengers and following traffic. A car whose sensors are jammed loses awareness of its surroundings entirely. The industry is actively developing countermeasures, but the attack surface of an autonomous vehicle is fundamentally larger than that of a conventional car.
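One class of countermeasure is a cross-sensor plausibility check: an object reported by only one modality, with no physical corroboration, is treated with suspicion. The detection format and threshold below are invented for illustration:

```python
# Hypothetical detections: object id -> the set of modalities reporting it.
detections = {
    "obj_1": {"lidar", "camera", "radar"},  # corroborated: almost certainly real
    "obj_2": {"camera"},                    # camera-only: possible projected image
    "obj_3": {"lidar"},                     # lidar-only: possible laser spoof
}

MIN_MODALITIES = 2  # invented threshold: require corroboration before hard braking

for obj, sources in detections.items():
    if len(sources) >= MIN_MODALITIES:
        print(f"{obj}: corroborated by {sorted(sources)} -> act on it")
    else:
        print(f"{obj}: single-source {sorted(sources)} -> flag and verify first")
```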
The Ethical Programming Problem
Human drivers make split-second instinctive decisions in unavoidable crash scenarios. Autonomous vehicles must have those decisions programmed in advance. When a collision is truly inevitable, how should the car prioritize? Passenger safety versus pedestrian safety? One pedestrian versus several? These questions have no universally agreed-upon answers, and manufacturers have been reluctant to discuss their specific decision logic publicly.
Current systems are designed to prioritize avoiding all collisions through early detection and conservative driving, sidestepping the trolley-problem scenarios as much as possible. But edge cases exist, and the lack of transparency about how these algorithms handle them remains a point of legitimate concern for regulators and ethicists.
What the Data Actually Tells Us
The honest answer is that driverless cars are safer than human drivers under the conditions where they currently operate. The crash data supports this clearly. But those conditions are narrow: specific cities, mapped routes, mostly good weather, with remote human operators available as backup. The technology has not yet been tested at scale in the full range of environments that human drivers navigate daily.
Human drivers kill roughly 40,000 people per year in the U.S. alone, overwhelmingly through preventable errors. Even an imperfect autonomous system that eliminates distraction, impairment, and fatigue as crash factors represents a substantial safety improvement. The question isn’t whether driverless cars are perfectly safe. It’s whether they need to be perfect, or just consistently better than the very imperfect humans they’re replacing.

