Self-driving cars promise safer roads, but the case against them is more substantial than most people realize. Critics point to real, documented problems: safety gaps in specific driving conditions, a legal system that cannot assign blame when software kills someone, cybersecurity vulnerabilities that could turn a car into a weapon, massive job losses, and a surveillance apparatus on wheels. None of these issues have been resolved, and some may be inherently unresolvable.
They Fail in Conditions Human Drivers Handle Fine
The headline statistic for self-driving cars is that they crash less often overall than human drivers. In early testing, Google’s self-driving cars in Mountain View, California, logged 2.19 police-reportable crashes per million miles, compared with 6.06 for human drivers. But that aggregate number hides critical weaknesses. A 2024 study published in Nature, analyzing more than 37,000 accidents, found that vehicles with advanced driving systems crash more than five times as often as human drivers in dawn and dusk light, and nearly twice as often when turning.
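To see how a reassuring aggregate can coexist with a serious conditional weakness, here is a back-of-the-envelope sketch. The mileage mix and the daylight crash rate are illustrative assumptions, not figures from the study; only the roughly fivefold dawn-and-dusk multiplier comes from the research above.

```python
# Illustrative only: a favorable overall crash rate can mask a much
# worse rate in specific conditions. Exposure shares and the daylight
# rate are assumed; the ~5x dawn/dusk multiplier is from the study.

human_rate = 6.06                    # police-reportable crashes per million miles
av_daylight_rate = 1.5               # assumed: AV safer in easy conditions
av_dawn_dusk_rate = 5 * human_rate   # ~5x worse in low light

daylight_share, dawn_dusk_share = 0.95, 0.05  # assumed mileage mix

overall_av_rate = (daylight_share * av_daylight_rate
                   + dawn_dusk_share * av_dawn_dusk_rate)

print(f"Overall AV rate: {overall_av_rate:.2f} per million miles")  # 2.94
print(f"Human rate:      {human_rate:.2f} per million miles")       # 6.06
# The AV still looks roughly twice as safe overall, even while being
# five times worse whenever the light is bad.
```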
These aren’t obscure edge cases. Turning and low-light driving happen on virtually every trip. The same research found that self-driving vehicles in work zones have a higher probability of causing moderate to severe injuries. Other documented failure triggers include unexpected obstacles, unclear road markings, sudden traffic congestion, and the unlawful but common behavior of other road users, like pedestrians jaywalking or drivers running red lights. The systems disengage most often because of failures in prediction and perception, the two capabilities they need most in dangerous situations.
One widely cited estimate found that autonomous vehicles would need to be driven hundreds of billions of miles before their fatality and injury rates could be statistically confirmed. No manufacturer is anywhere close to that threshold. We are, in effect, running a public safety experiment on real roads with incomplete data.
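The statistical logic behind that estimate can be sketched in a few lines. Take, as an assumption, the approximate U.S. human fatality rate of about 1.1 deaths per 100 million miles; the classic "rule of three" for zero-event data then gives the mileage needed for even the weakest possible claim.

```python
# Why fatality-rate claims need so many miles (rough sketch).
# Assumption: humans average ~1.1 fatalities per 100 million miles
# (approximate U.S. figure). With zero observed fatalities, the
# "rule of three" puts the 95% upper bound on a Poisson rate at 3/N.

human_fatality_rate = 1.1 / 100_000_000  # fatalities per mile

# Fatality-free miles needed just to show, at 95% confidence, that an
# AV fleet is no *worse* than human drivers:
miles_to_match = 3 / human_fatality_rate
print(f"{miles_to_match / 1e6:.0f} million miles")  # ~273 million

# Demonstrating the fleet is modestly *better* means distinguishing two
# rare-event rates from each other, which drives the requirement into
# the billions -- and toward "hundreds of billions" once injuries and
# specific driving conditions need their own confidence intervals.
```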
No One Is Legally Responsible When They Kill
When a human driver causes a fatal crash, the legal system knows what to do. There’s a person to charge, sue, or hold accountable. When a fully autonomous car kills someone, that framework collapses. As a Brookings Institution analysis put it directly: “The self-driving car accident occurred. It was caused by the misbehavior of the self-driving car. No fault can be traced to any person or legal entity.”
This isn’t a hypothetical. The liability gap is worst at the highest levels of automation, where the car handles all driving and there may be no human driver in the vehicle at all. Product liability law requires proving that a manufacturer made a defective product, but when a system operates at the technological frontier and its decision-making is opaque even to its creators, tracing fault to a specific design choice or software error can be nearly impossible. Victims of autonomous vehicle crashes face the real possibility of having no one to hold accountable.
Every Self-Driving Car Is a Hackable Target
A conventional car can be stolen. A self-driving car can be hijacked remotely, while you’re inside it. The cybersecurity vulnerabilities documented by researchers are extensive and alarming:
- Sensor manipulation: Attackers can feed false data to a car’s inclination sensor, causing it to brake or slow down for a hill that doesn’t exist. Tire pressure monitors can be spoofed to hide dangerous air leaks.
- Spoofing and message interception: In a man-in-the-middle attack, a hacker intercepts the messages a car receives from infrastructure or other vehicles, alters them, and sends the modified version on. The car might “see” a neighboring vehicle in a completely wrong location (a toy sketch of this attack follows the list below).
- Malware infection: A car’s information system can be infected through internet downloads or even a compromised CD. The resulting attack can crash the system that controls the vehicle.
- Entry system attacks: Remote keyless entry systems can be targeted to lock a person inside the car or prevent the car from locking at all.
- Smartphone exploits: Car-sharing services that use smartphones to unlock vehicles create another attack surface, letting hackers access the car by compromising the phone or the connection between phone and vehicle.
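To make the man-in-the-middle bullet concrete, here is a toy sketch in Python. It does not represent any real V2X protocol; the shared key, field names, and values are all invented for illustration.

```python
import hashlib
import hmac
import json

# Toy man-in-the-middle illustration, not a real V2X protocol. An
# unauthenticated position broadcast can be silently altered in transit;
# an HMAC makes the tampering detectable (though distributing keys
# across a whole fleet is its own hard problem).

KEY = b"shared-secret"  # assumed pre-shared key, for illustration only

def sign(msg: dict) -> bytes:
    payload = json.dumps(msg, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def verify(msg: dict, tag: bytes) -> bool:
    return hmac.compare_digest(sign(msg), tag)

msg = {"vehicle": "A", "lane": 2, "pos_m": 120.0}
tag = sign(msg)

# The attacker intercepts the broadcast and moves the "neighboring
# vehicle" to a completely different spot before forwarding it:
tampered = dict(msg, lane=1, pos_m=45.0)

print(verify(msg, tag))       # True  -- authentic message passes
print(verify(tampered, tag))  # False -- alteration is caught
# Without the tag, the receiving car has no way to notice the change.
```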
Attacks can also be passive. An attacker might quietly gather location data, travel patterns, and personal information over months before using or selling it. The more connected and autonomous a vehicle becomes, the larger its attack surface.
They Create a Surveillance Network on Public Roads
Self-driving cars bristle with cameras, radar, and lidar sensors that continuously scan their surroundings. This means they collect vast amounts of data not just about their passengers but about every pedestrian, cyclist, and bystander they pass. The Federal Trade Commission has flagged the types of data modern vehicles collect: biometric information, precise geolocation tracked persistently over time, video footage, and other personal data.
Geolocation data is particularly sensitive. The FTC has established in multiple enforcement actions that collecting and disclosing location data can be an unlawful practice, because it can reveal visits to medical clinics, places of worship, domestic abuse shelters, and other sensitive locations. In one case, a data company was found to be grouping consumers into highly sensitive advertising categories based on where they traveled. A fleet of autonomous vehicles operating 24 hours a day would generate an unprecedented map of human movement across entire cities, with no established limits on how that data gets stored, shared, or sold.
Hundreds of Thousands of Jobs Disappear
The U.S. Department of Transportation estimates that 300,000 to 500,000 long-haul trucking jobs are directly at risk from Level 4 and Level 5 automation. Another 105,000 transit bus operators could face displacement. These are not abstract projections about a distant future. Long-haul highway driving is the simplest driving task to automate, so these jobs are expected to be hit first.
The economic damage per worker is significant. Federal estimates put the lifetime income loss for each displaced professional driver at around $80,000. Across 300,000 to 500,000 workers, that is roughly $24 to $40 billion in lost lifetime earnings. These are disproportionately workers without college degrees, in regions with few alternative employers. Unlike past waves of automation that unfolded over decades, autonomous trucking could eliminate a large share of these positions within a compressed timeframe, long before displaced workers can retrain or relocate.
They Could Make Traffic Worse, Not Better
One of the least intuitive arguments against self-driving cars is that they may increase congestion rather than reduce it. The mechanism is straightforward: instead of paying for downtown parking, an autonomous car can simply circle the block with no one inside, waiting to be summoned. Research published in Transportation Research found that this self-interested cruising behavior leads to a 63% increase in average travel time for all road users and a 58.7% increase in total vehicle kilometers traveled across a road network.
These “zombie miles,” driven by empty vehicles avoiding parking fees, effectively shift cars from parking lots onto active roads. The result is more vehicles competing for the same lane space, slower speeds for everyone, and higher emissions. Cities that already struggle with gridlock would face a new category of traffic that serves no transportation purpose at all.
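The incentive driving this behavior is easy to quantify. The dollar figures below are illustrative assumptions, not numbers from the study, but they show why circling can beat parking.

```python
# Illustrative cruising-vs-parking economics. All figures are assumed
# for this sketch; the cited research models the same trade-off in detail.

parking_per_hour = 5.00       # assumed downtown parking rate, $/hour
cruise_cost_per_mile = 0.30   # assumed energy + wear cost, $/mile
cruise_speed_mph = 8          # empty car circling the block slowly

cruise_cost_per_hour = cruise_cost_per_mile * cruise_speed_mph

print(f"Park:   ${parking_per_hour:.2f}/hour")
print(f"Cruise: ${cruise_cost_per_hour:.2f}/hour")  # $2.40/hour
# Cruising wins -- and note the perverse detail: the *slower* the empty
# car circles, the cheaper each hour becomes, so the cost-minimizing
# choice is precisely the one that clogs traffic most.
```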
No Algorithm Should Decide Who Lives and Dies
Every self-driving car carries pre-programmed instructions for what to do in a crash scenario. This forces manufacturers to make ethical decisions that no company should be trusted to make. When Mercedes announced in 2016 that its autonomous vehicles would prioritize passenger safety over pedestrians in unavoidable crashes, the public backlash was immediate and fierce. The company reversed course, pledging instead to “avoid 100 percent of accidents” rather than program ethical trade-offs.
But avoidance isn’t always possible, and the question remains unanswered. The only formal ethical guidance comes from Germany’s Ethics Commission on Automated and Connected Driving, which ruled in 2017 that programming a system to accept inevitable loss of life is not permitted. Some researchers have proposed random selection in no-win scenarios, arguing that randomness represents a kind of respect for life by refusing to value one person over another. Others argue the entire framing is flawed, since no manufacturer should build a system on the premise that killing someone is an acceptable outcome.
The core problem is that these decisions are being made in corporate boardrooms and engineering labs, not through democratic processes. There is no public vote on whose life a car should prioritize. There is no transparency about what values are encoded in the software. And there is no mechanism for accountability when those hidden choices lead to someone’s death.
Regulation Has Not Kept Pace
In October 2023, California’s Department of Motor Vehicles suspended Cruise’s autonomous vehicle permits amid safety concerns, a reactive move that came only after a series of public incidents. State Senator Dave Cortese introduced legislation to give local communities more power over driverless car deployment, acknowledging that existing regulatory structures had failed to protect residents. But this kind of patchwork, after-the-fact regulation is the norm, not the exception.
There is no comprehensive federal framework governing autonomous vehicle safety, liability, data collection, or cybersecurity. Different states have different rules. Some allow fully driverless testing on public roads with minimal oversight. Others have no rules at all, which means companies can operate in a legal vacuum. The technology is advancing faster than any legislature can respond, and the people sharing the road with these vehicles have had no meaningful say in whether they consent to the risk.

