Self-driving cars promise safer roads and more convenience, but they come with serious drawbacks that don’t get enough attention. From sensor failures in bad weather to unsolved legal questions about who pays when things go wrong, the technology introduces new risks even as it tries to eliminate old ones. Here’s a closer look at the specific problems.
Sensors Fail in Common Weather
Self-driving cars rely heavily on LIDAR (light detection and ranging, a laser-based sensor) and cameras to “see” the road. These sensors degrade significantly in rain, fog, and snow. Fog is especially problematic: because fog droplets are much smaller than raindrops, LIDAR light scatters off them more readily, causing a steep drop in the sensor’s ability to detect objects. Once visibility falls below about 100 meters (roughly the length of a football field), LIDAR struggles to reliably detect low-reflectivity targets such as dark vehicles or pedestrians in dark clothing.
Rain causes a different issue. LIDAR penetrates rain more readily than fog, but heavy rain still introduces distance measurement errors and false detections. Snow compounds both problems by coating sensors and obscuring the lane markings the car depends on for navigation. These aren’t edge cases. Rain, fog, and snow are everyday driving conditions across much of the world, and no production self-driving system handles them reliably.
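The fog penalty can be sketched with a standard atmospheric-optics approximation. The snippet below is an illustrative model, not a real perception pipeline: it uses the textbook Koschmieder relation to estimate the fog’s extinction coefficient from visibility, then applies Beer–Lambert attenuation to see how much of a LIDAR pulse survives the round trip.

```python
import math

def two_way_transmittance(range_m, visibility_m):
    """Fraction of LIDAR pulse energy surviving the round trip to a
    target and back, via Beer-Lambert attenuation. The extinction
    coefficient comes from the Koschmieder relation (alpha ~ 3.912 / V)."""
    alpha = 3.912 / visibility_m           # extinction coefficient, per meter
    return math.exp(-2 * alpha * range_m)  # out and back

# At the ~100 m visibility threshold mentioned above, a target only
# 50 m away returns about 2% of the pulse energy it would in clear air.
print(round(two_way_transmittance(50, 100), 3))  # → 0.02
```

This ignores geometric spreading and receiver noise, so real detection ranges are shorter still; the point is the exponential penalty fog imposes on every returned pulse.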
The Accident Record Is Mixed
Supporters often cite Google’s early self-driving car data from Mountain View, California, which showed a lower rate of police-reportable crashes than human drivers: 2.19 per million miles versus 6.06. No fatalities occurred during that testing period, compared to roughly 1 death per 108 million miles for human drivers in California overall.
But context matters. A separate analysis found that when you account for the limited, less demanding conditions autonomous vehicles actually operate in (good weather, well-marked roads, no snow), they have a higher accident rate per million miles than human drivers. Self-driving cars are currently tested in the easiest possible driving environments and still have collisions. Comparing those results to human drivers who navigate blizzards, dirt roads, and chaotic city streets isn’t apples to apples.
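Putting the headline numbers side by side makes both the claim and the caveat concrete. The figures below are taken directly from the comparison above:

```python
# Police-reportable crashes per million miles, as cited above.
av_rate = 2.19       # Google test fleet, Mountain View
human_rate = 6.06    # human-driver baseline in the same comparison

print(round(human_rate / av_rate, 2))  # → 2.77: humans crashed ~2.8x as often

# But the human baseline covers all conditions, while the fleet drove
# only easy ones -- the apples-to-oranges problem described above.
```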
Humans Are Bad at Taking Back Control
Most current self-driving systems aren’t fully autonomous. They sit at what SAE calls Level 3, “conditional automation”: the car drives itself but expects a human to take over when it can’t handle a situation. This creates a dangerous gap. Research on driver takeover times shows that drivers who have been in autonomous mode for less than 30 minutes need about 3.4 seconds on average to regain control. After more than 30 minutes, that stretches to about 4.3 seconds. At highway speeds, 4.3 seconds covers roughly 120 meters, or nearly 400 feet.
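The distance figure is basic kinematics. This sketch assumes a highway speed of 100 km/h (about 62 mph), an assumption not specified in the studies themselves:

```python
# Distance traveled during a 4.3-second takeover at an assumed
# highway speed of 100 km/h (~62 mph).
speed_kmh = 100
speed_mps = speed_kmh * 1000 / 3600   # ≈ 27.8 m/s
takeover_s = 4.3

distance_m = speed_mps * takeover_s
print(round(distance_m))              # → 119 m
print(round(distance_m * 3.28084))    # → 392 ft
```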
The problem goes deeper than slow reaction times. Drivers in autonomous mode mentally check out. In studies, some participants became so absorbed in watching a movie or other activity that they failed to notice the car’s takeover request at all. This phenomenon, sometimes called automation complacency, means the safety backup that the entire system depends on (an alert human ready to intervene) is often not alert and not ready. The car creates a false sense of security that undermines its own safety model.
Roads Aren’t Ready for the Technology
Self-driving cars need well-maintained lane markings, clear signage, and consistent road design to function. The reality of American infrastructure falls short. Lane markings fade. Signs vary in style and placement across all 50 states. Rural roads often lack markings entirely. The National Academy of Engineering has noted that harmonizing lane markings, signage, and traffic signals nationwide is critical for autonomous vehicles, but achieving that requires updating and enforcing the federal Manual on Uniform Traffic Control Devices (MUTCD) across every jurisdiction.
The cost of these upgrades would be significant, and in remote areas with low traffic volume, it may not be economically justifiable to install the infrastructure autonomous vehicles need. This creates an uneven rollout where self-driving technology works well in wealthy, well-maintained urban corridors but fails in the communities that might benefit most from transportation alternatives.
Nobody Knows Who’s Liable in a Crash
Current U.S. liability law was built around a simple assumption: the human driver caused the accident. Driver error accounts for roughly 90% of crashes, so lawsuits target the person behind the wheel. Product liability claims against manufacturers are rare. When a fully autonomous vehicle crashes with no human input at all, that framework breaks down completely.
The list of potentially responsible parties is long: the car manufacturer, the software developer, the sensor supplier, the software operator, the vehicle owner, or even the occupants. No clear legal standard has emerged. Some legal scholars argue that liability should rest with the manufacturer whenever the vehicle is in autonomous mode but could shift back to the driver if they were supposed to be paying attention. Others push for a strict manufacturer liability model. States regulate insurance and liability individually, while federal agencies handle vehicle safety standards, adding another layer of complexity. For someone injured by a self-driving car today, the path to compensation is murky and expensive to navigate.
They’re Vulnerable to Hacking
Self-driving cars are computers on wheels, and they carry the same cybersecurity risks as any networked system. In 2015, security researchers demonstrated that they could remotely exploit a Jeep Cherokee’s infotainment system and take control of its steering and brakes. Researchers have also remotely compromised Tesla vehicles and manipulated critical systems. BMW’s ConnectedDrive system was breached in part because its in-vehicle network gateway lacked basic security measures.
The attack surface goes beyond traditional hacking. Researchers have demonstrated that autonomous vehicles can be fooled by manipulated road signs or specially crafted images that confuse their perception systems, causing incorrect driving decisions. A strategically placed sticker on a stop sign, for example, could make the car misread it entirely. As vehicles become more connected and more autonomous, the potential consequences of a successful attack escalate from data theft to physical danger.
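The principle behind such attacks can be shown with a deliberately tiny toy, not any real perception model: for a linear scorer, nudging every input feature slightly in the direction of its weight (the idea behind fast-gradient-sign attacks) is enough to flip the decision. All numbers here are invented for illustration.

```python
# Hypothetical linear classifier: score > 0 means "speed limit sign",
# score < 0 means "stop sign". Weights and inputs are made up.
w = [0.5, -1.2, 0.8]
x = [0.2, 0.4, 0.1]

score = sum(wi * xi for wi, xi in zip(w, x))   # -0.30 -> "stop sign"

# Adversarial nudge: shift each feature a little (eps) in whichever
# direction raises the score -- a sticker-sized change to the input.
eps = 0.3
x_adv = [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

score_adv = sum(wi * xi for wi, xi in zip(w, x_adv))  # +0.45 -> misread
print(score < 0 < score_adv)  # → True: a small perturbation flips the label
```

Real perception networks are nonlinear and far larger, but the same gradient-following logic is what makes a strategically placed sticker effective.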
A Massive Surveillance Network on Wheels
Self-driving cars are packed with cameras, LIDAR sensors, and GPS systems that continuously collect data about everything around them, including people who never consented to being recorded. The Federal Trade Commission has flagged the collection of biometric information, geolocation data, video, and telematic data from connected vehicles as a serious privacy concern.
Geolocation data is particularly sensitive. The FTC has noted it can be used to track visits to medical clinics, places of worship, domestic abuse shelters, and other locations people reasonably expect to visit privately. News reports have documented connected car data being used to stalk people and to affect their insurance rates. When any company collects this volume of sensitive data, it also raises national security concerns if that data is shared with or accessed by foreign actors. Self-driving cars don’t just know where you’re going. They build a detailed, continuous map of where everyone around them is going, too.
More Driving, More Sprawl
One of the less obvious risks is that self-driving cars could make transportation problems worse by encouraging people to drive more. When riding in a car becomes as easy as sitting on a couch, the calculus changes. Longer commutes become tolerable, trips that would have been skipped get taken, and people choose private vehicles over public transit or walking. Researchers studying urban sustainability have warned that widespread autonomous vehicle adoption could significantly increase total miles driven.
That increase in driving feeds urban sprawl. If you can work, sleep, or watch a movie during a 90-minute commute, living far from city centers becomes more attractive. Development pushes outward, consuming land, increasing energy demand, and hollowing out the public transit systems that denser communities depend on. Without deliberate policy intervention, self-driving cars could lock in a more car-dependent, more spread-out version of cities rather than the efficient transportation future their proponents imagine.
Millions of Jobs at Stake
Trucking is one of the most common occupations in the United States, and it’s directly in the path of autonomous vehicle technology. Long-haul trucking, which involves predictable highway driving, is the easiest segment to automate. The International Transport Forum has modeled scenarios in which 3.4 to 4.4 million trucking jobs out of an estimated 6.4 million could become redundant by 2030 under aggressive adoption timelines. Even moderate projections point to massive displacement concentrated among workers who often lack the credentials to transition easily into other industries.
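In proportional terms, the ITF scenario implies the following, using only the figures cited above:

```python
total_m = 6.4             # estimated trucking jobs, per the ITF scenario above
low_m, high_m = 3.4, 4.4  # jobs modeled as redundant by 2030

print(round(low_m / total_m * 100))   # → 53 (% of jobs, low end)
print(round(high_m / total_m * 100))  # → 69 (% of jobs, high end)
```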
The disruption extends beyond truckers to delivery drivers, taxi and rideshare drivers, and the entire ecosystem of businesses that serve them: truck stops, roadside motels, and rural diners. These jobs tend to be concentrated in communities with fewer economic alternatives, meaning the pain of displacement won’t be evenly distributed.