Driverless cars promise safer roads and more convenient travel, but they come with serious drawbacks that are often downplayed. From technical failures in bad weather to unresolved legal questions about who pays when things go wrong, autonomous vehicles introduce a new set of problems even as they try to solve old ones. Here’s what the technology still gets wrong.
They Struggle in Common Driving Conditions
Autonomous vehicles perform well in ideal conditions: clear skies, well-marked roads, predictable traffic. But driving isn’t always ideal. In snow, cameras can no longer recognize lane markings or traffic signs, and lidar, the laser-based sensor that helps cars “see” the world around them, degrades when rain, snow, or debris fills the air. These aren’t edge cases. They’re Tuesday in Minneapolis.
A U.S. Department of Transportation analysis of 2022 and 2023 disengagement reports (moments when the self-driving system hands control back to a human or a human intervenes) found that the most common reason for failure was the car incorrectly predicting what other road users would do. That category alone accounted for about 24% of all known disengagements. Problems with detecting objects came next, followed by motion planning errors and mapping discrepancies. In 86% of all disengagement cases, the human driver was the one who initiated the takeover, meaning the person in the car recognized a problem before the system did.
These aren’t theoretical risks. A 2024 study published in Nature found that while autonomous vehicles had fewer accidents overall in certain conditions, they were involved in crashes at 5.25 times the rate of human drivers during dawn and dusk lighting, and nearly twice the rate when making turns. The technology excels at highway cruising and struggles with the messy, ambiguous situations that make up much of real-world driving.
No One Agrees on Who’s Liable in a Crash
When a human driver causes an accident, liability is usually straightforward: the driver is at fault. With autonomous vehicles, the question fractures. Was the crash caused by the driver, who was supposed to be monitoring the system? The car manufacturer? The company that wrote the software? The sensor supplier? Current legal frameworks in most countries were built on the assumption that a human is controlling the car, and they haven’t caught up.
A 2025 comparative analysis of liability laws across multiple countries found widespread inconsistency. China’s traffic liability system still assumes human oversight and doesn’t account for scenarios where the system is fully in control. Regional regulations in other jurisdictions frequently fail to include software developers and data service providers as potentially liable parties. Many people, including judges and insurers, struggle to distinguish between a driver’s negligence and a defect in the product itself, especially when driver and machine are sharing control.
Germany has moved further than most, requiring autonomous vehicles to carry a “black box” that records operating data to help determine fault after a crash. If the accident happens during manual control, the driver is liable. If it happens while the system is driving or due to a system failure, liability shifts to the manufacturer. But this model is the exception. In most places, getting compensation after an autonomous vehicle crash means navigating a legal gray zone that can take years to resolve.
They Could Make Traffic Worse, Not Better
One of the biggest selling points of driverless cars is that they’ll reduce congestion. The evidence suggests the opposite may happen. Research modeling the effects of widespread autonomous vehicle adoption found that higher penetration of self-driving cars increases both total vehicle miles traveled and average congestion. The reason is a phenomenon called induced demand: when driving becomes easier and more comfortable, people do more of it. If you can work, sleep, or watch a movie in a car that drives itself, a 90-minute commute stops feeling like a burden. That encourages people to live farther from work, accelerating urban sprawl.
During peak hours, traffic congestion is projected to rise as autonomous vehicles enter the road in larger numbers, increasing travel times and worsening air pollution, greenhouse gas emissions, and noise. A fleet of empty cars circling the block rather than paying for parking, or shuttling back and forth to pick up family members, adds vehicle trips that didn’t exist before. Without aggressive policy intervention, driverless cars could generate more traffic, not less.
Millions of Driving Jobs Are at Risk
Trucking is one of the most common occupations in the United States, and it’s directly in the path of automation. Long-haul highway driving is the easiest type to automate: predictable roads, fewer pedestrians, steady speeds. The International Transport Forum modeled a scenario of rapid autonomous truck adoption and estimated that out of roughly 6.4 million truck-driving jobs projected for 2030, between 3.4 and 4.4 million could become redundant.
Those numbers represent the most aggressive adoption scenario, and the actual timeline will likely be slower. But the direction is clear. Taxi and rideshare drivers face similar pressure as companies like Waymo and Cruise expand robotaxi services. These aren’t jobs that transition easily into other fields. The affected workers tend to be older, without college degrees, and concentrated in regions where alternative employment is limited. The economic disruption won’t be spread evenly across the population. It will hit specific communities hard.
They Create New Cybersecurity Threats
A conventional car can be stolen. A driverless car can be stolen, hijacked remotely, or weaponized. Researchers at the University of Michigan’s Mcity identified a range of vulnerabilities specific to autonomous vehicles: data thieves targeting personal and financial information stored in the car, spoofing attacks that feed false signals to the vehicle’s sensors or navigation system, and denial-of-service attacks that could shut down a car the same way they shut down a website.
Practical examples include a hacker sending a false signal to disable a car’s remote-parking feature, or a thief spoofing that same signal to steal the vehicle outright. As cars become more connected, with over-the-air software updates and constant data transmission, the attack surface grows. A fleet of autonomous taxis controlled by a single software platform represents a particularly attractive target: compromise one system, and you potentially compromise thousands of vehicles at once.
Moral Decisions Get Programmed, Not Made
Human drivers make split-second ethical decisions in emergencies without thinking about them consciously. Autonomous vehicles need those decisions coded in advance. The Moral Machine experiment, one of the largest studies ever conducted on machine ethics, collected tens of millions of responses from people in 233 countries and territories about how a self-driving car should behave in unavoidable crash scenarios. Should it prioritize passengers or pedestrians? The young or the old?
The results revealed three major cultural clusters with fundamentally different moral preferences, shaped by deep cultural values and institutional norms. There is no universal answer. A car programmed to protect pedestrians over passengers might be ethical in one culture and unacceptable in another. And any decision a manufacturer codes into the software becomes a corporate policy about who lives and who dies, one that will inevitably face legal and public scrutiny after a fatal crash. This isn’t a problem that better engineering solves. It’s a philosophical disagreement with no resolution, embedded in software that has to pick one answer.
The Data Collection Problem
To navigate the world, autonomous vehicles collect enormous amounts of data: video feeds from multiple cameras, precise GPS coordinates, mapping of every street and building they pass, and in some cases, interior cabin footage of passengers. This creates a detailed, continuous record of where you go, when, and with whom.
The Federal Trade Commission has flagged the broader issue of car manufacturers collecting and using consumer data, and the problem intensifies with fully autonomous systems that depend on constant data transmission to function. Who owns the data your car collects about your daily routine? Can it be sold to advertisers, subpoenaed by law enforcement, or breached by hackers? These questions don’t have consistent answers across jurisdictions, and the vehicles are already on the road collecting data while regulators work to catch up.