Self-driving cars work by combining sensors that constantly scan the environment, ultra-detailed maps that know every lane and curb, and artificial intelligence that interprets all of that data to steer, brake, and accelerate without human input. The entire process, from detecting a pedestrian to adjusting the steering wheel, must happen within 100 milliseconds. Here’s how each piece fits together.
How the Car Sees the World
A self-driving car doesn’t rely on a single type of sensor. It layers several together so each one compensates for the others’ weaknesses. Waymo’s vehicles, for example, carry 29 cameras that provide a simultaneous 360-degree view, designed with high dynamic range to work in both daylight and low-light conditions. Those cameras can spot traffic lights, construction zones, and other vehicles from hundreds of meters away.
LiDAR (Light Detection and Ranging) fires millions of laser pulses per second in every direction, then measures how long each pulse takes to bounce back. This creates a detailed 3D map of the surroundings that works regardless of lighting conditions, day or night. Radar rounds out the sensor suite, using millimeter-wave radio signals to measure how far away an object is and, via the Doppler effect, how fast it’s moving. Radar’s big advantage is that it cuts through rain, fog, and snow, conditions that can degrade both cameras and LiDAR.
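The distance math behind each LiDAR return is simple time-of-flight arithmetic: the pulse travels out and back at the speed of light, so the range is half the round-trip distance. Here’s a minimal Python sketch (the 667-nanosecond example value is ours, chosen to land near 100 meters):

```python
# Time-of-flight range calculation behind every LiDAR return (illustrative sketch).
C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_seconds: float) -> float:
    """One-way distance to the reflecting surface: the pulse travels
    out and back, so range is half the round-trip distance."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 667 nanoseconds hit something ~100 m away.
print(f"{lidar_range(667e-9):.1f} m")  # -> 100.0 m
```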
These sensors feed data into the car’s onboard computers simultaneously. The system fuses all three data streams into a single, rich picture of the world: camera images provide color and detail, LiDAR provides precise depth and shape, and radar provides speed and distance even in bad weather.
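As a rough illustration of that division of labor, the sketch below fuses one hypothetical detection from each sensor into a single object, letting each modality contribute the attribute it measures best. The class names and fields are invented for illustration; production stacks do this association probabilistically, typically with Kalman-style filters:

```python
from dataclasses import dataclass

# Hypothetical per-sensor detections of the same object.
@dataclass
class CameraDetection:
    label: str            # what the object is (cameras excel at classification)
    confidence: float

@dataclass
class LidarDetection:
    position_m: tuple     # precise 3D position from the point cloud

@dataclass
class RadarDetection:
    range_m: float        # distance, robust to rain and fog
    velocity_mps: float   # relative speed via Doppler

@dataclass
class FusedObject:
    label: str
    position_m: tuple
    velocity_mps: float

def fuse(cam: CameraDetection, lidar: LidarDetection,
         radar: RadarDetection) -> FusedObject:
    # Each modality contributes the attribute it measures best.
    return FusedObject(label=cam.label,
                       position_m=lidar.position_m,
                       velocity_mps=radar.velocity_mps)

obj = fuse(CameraDetection("pedestrian", 0.97),
           LidarDetection((12.3, -1.8, 0.9)),
           RadarDetection(12.4, -1.1))
print(obj)  # FusedObject(label='pedestrian', position_m=(12.3, -1.8, 0.9), velocity_mps=-1.1)
```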
Knowing Exactly Where the Car Is
GPS alone isn’t accurate enough for driving. It can place you within a few meters, but a self-driving car needs to know its position to within centimeters. This is where high-definition maps come in. These maps record every lane marking, curb, median, and traffic sign, typically to within 10 to 20 centimeters. Some commercial HD map systems achieve lateral and longitudinal errors under 7 centimeters on normal roads.
The car continuously matches what its LiDAR and cameras see against this pre-built HD map, a technique that achieves localization errors of 10 centimeters or less. A complementary approach, SLAM (Simultaneous Localization and Mapping), builds a map of the environment on the fly while tracking the car’s position within it, fusing data from GPS, motion sensors, and wheel speed sensors to correct accumulated drift as it goes. For fully autonomous driving, these maps need frequent updates to reflect construction, new signage, or changed lane configurations.
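One way to picture the fusion step is inverse-variance weighting in a single dimension: blend a drift-prone dead-reckoned estimate with a sharper LiDAR-to-map match, trusting whichever source has less noise. The variances below are invented for illustration; real localizers use full Kalman or particle filters:

```python
# A minimal 1D sketch of map-based localization: blend a dead-reckoned
# estimate (GPS + wheel odometry, which drifts) with a LiDAR-to-HD-map
# match (much tighter). Variances are invented for illustration.

def localize(dead_reckoned_m: float, map_match_m: float,
             dr_variance: float = 4.0, mm_variance: float = 0.01) -> float:
    # Inverse-variance weighting: the less noisy source gets more trust.
    w = mm_variance / (dr_variance + mm_variance)
    return w * dead_reckoned_m + (1 - w) * map_match_m

# GPS/odometry says 105.2 m along the lane; map matching says 104.97 m.
print(f"{localize(105.2, 104.97):.3f} m")  # ~104.971 m, dominated by the map match
```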
How the AI Makes Decisions
Once the car knows what’s around it and where it is, it needs to decide what to do. This happens in a pipeline: perception, prediction, planning, and control.
Perception is where the car identifies objects. Deep learning models, particularly convolutional neural networks and transformer architectures, process camera images and LiDAR point clouds to classify everything the car sees: other cars, cyclists, pedestrians, lane boundaries, and obstacles. Modern systems achieve around 73% mean average precision in 3D object detection, a benchmark score that rewards both finding objects and placing their 3D bounding boxes accurately.
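Mean average precision rests on a simple matching rule: a predicted box counts as correct only if it overlaps a ground-truth box above an intersection-over-union (IoU) threshold. The sketch below shows that check in 2D for brevity (3D benchmarks apply the same idea to volumes); the box coordinates are made up:

```python
# The matching rule underneath mean average precision: a detection counts
# as a hit only if its IoU with a ground-truth box clears a threshold.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

predicted    = (10.0, 10.0, 50.0, 50.0)   # detector's box (made-up numbers)
ground_truth = (12.0, 11.0, 52.0, 49.0)   # annotated box
print(f"IoU = {iou(predicted, ground_truth):.2f}")  # ~0.86 -> a hit at the common 0.5 threshold
```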
Prediction comes next. The car doesn’t just need to know where other road users are right now; it needs to forecast where they’ll be in two, five, or ten seconds. AI models trained on millions of miles of driving data predict likely trajectories for every detected object. Transformer-based prediction models can forecast another vehicle’s path with an average displacement error of about 2.8 meters, significantly more accurate than simple physics-based models that assume constant speed or acceleration.
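To make those two ideas concrete, here is the constant-velocity baseline and the average displacement error (ADE) metric in a few lines of Python. The trajectory numbers are invented: a car assumed to keep driving straight while it actually curves away:

```python
import math

# The simple physics baseline mentioned above: assume constant velocity and
# extrapolate. ADE measures how far the predicted waypoints fall from where
# the vehicle actually went.

def constant_velocity_forecast(pos, vel, horizon_s, dt=1.0):
    """Predict future (x, y) positions by straight-line extrapolation."""
    steps = int(horizon_s / dt)
    return [(pos[0] + vel[0] * dt * k, pos[1] + vel[1] * dt * k)
            for k in range(1, steps + 1)]

def ade(predicted, actual):
    """Mean Euclidean distance between predicted and actual waypoints."""
    return sum(math.dist(p, a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical car at the origin doing 10 m/s east, but it actually curves north.
pred = constant_velocity_forecast((0.0, 0.0), (10.0, 0.0), horizon_s=5)
actual = [(10, 0.5), (20, 2.0), (29, 4.5), (38, 8.0), (46, 12.5)]
print(f"ADE over 5 s: {ade(pred, actual):.2f} m")  # -> 5.70 m: the baseline misses the turn
```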
Path planning is where the car charts its course. The system generates candidate trajectories, evaluates them against the predicted movements of other road users, and selects the safest, most efficient path. Researchers categorize planning approaches into traditional algorithms (graph-based or optimization-based), machine learning methods, and hybrid approaches that combine both. The field is trending toward hybrid algorithms, which now account for about 27% of current research, because they merge the reliability of traditional methods with the adaptability of AI.
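Here’s a deliberately tiny version of the sample-and-score idea behind many planners: propose a few lateral offsets, penalize proximity to a predicted obstacle and deviation from the lane center, and keep the cheapest candidate. The cost terms and weights are invented for illustration:

```python
# Toy sample-and-score planner: costs and weights are invented; real
# planners score full trajectories over time, not single offsets.

OBSTACLE_Y = 1.5        # predicted lateral position of another road user (m)
LANE_CENTER_Y = 0.0

def cost(candidate_y: float) -> float:
    clearance = abs(candidate_y - OBSTACLE_Y)
    safety = 1.0 / (clearance + 0.1)            # cost soars near the obstacle
    comfort = abs(candidate_y - LANE_CENTER_Y)  # prefer staying centered
    return 5.0 * safety + 1.0 * comfort

candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]        # lateral offsets to evaluate (m)
best = min(candidates, key=cost)
print(f"chosen offset: {best} m")               # -> -0.5 m: edges away from the obstacle
```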
Control is the final step: translating the chosen path into precise steering, throttle, and brake commands. This entire loop, from raw sensor data to wheel movement, repeats many times per second.
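One classic controller for that last step is pure pursuit: aim at a lookahead point on the planned path and compute the front-wheel angle that arcs the car through it. The wheelbase and target point below are illustrative, not from any particular production system:

```python
import math

# Pure pursuit steering: a standard geometric path-tracking controller.
WHEELBASE_M = 2.9  # illustrative wheelbase

def pure_pursuit_steering(lookahead_x: float, lookahead_y: float) -> float:
    """Steering angle (radians) to reach a point given in the car's own
    frame (x forward, y left)."""
    lookahead_dist = math.hypot(lookahead_x, lookahead_y)
    alpha = math.atan2(lookahead_y, lookahead_x)   # bearing to the target point
    return math.atan2(2.0 * WHEELBASE_M * math.sin(alpha), lookahead_dist)

# Target point 10 m ahead and 1 m to the left -> a gentle left steer.
angle = pure_pursuit_steering(10.0, 1.0)
print(f"steer {math.degrees(angle):.1f} degrees")  # ~3.3 degrees
```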
The Computing Power Behind It
Processing this much data in real time requires serious hardware. Self-driving systems consume between several hundred watts and a full kilowatt of power, comparable to running a high-end gaming PC at full tilt, continuously. Early systems like the Nvidia Drive PX2 (used in Tesla’s Autopilot from 2016 to 2018) could perform 12 trillion operations per second (12 TOPS). Newer platforms designed for full autonomy, built around chips like the Nvidia AGX Orin, reach as much as 2,000 TOPS.
Speed matters as much as raw power. The hard deadline for processing any single input is 100 milliseconds. If the system takes longer than that to recognize a stopped car and begin braking, it’s too slow to drive safely.
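In scheduling terms, that makes the perception-to-control loop a hard real-time task. The sketch below times one cycle against the 100-millisecond budget; the sleep-based stages are placeholders standing in for real workloads:

```python
import time

# Sketch of the hard real-time constraint: every perceive-plan-act cycle
# must finish inside the 100 ms window described above.

DEADLINE_S = 0.100

def run_cycle(stages) -> bool:
    start = time.perf_counter()
    for stage in stages:
        stage()
    elapsed = time.perf_counter() - start
    if elapsed > DEADLINE_S:
        # A real vehicle would enter a degraded or safe-stop mode here,
        # not just print a warning.
        print(f"deadline miss: {elapsed * 1000:.1f} ms")
        return False
    return True

# Placeholder stages that sleep to simulate work.
stages = [lambda: time.sleep(0.02),   # perception
          lambda: time.sleep(0.01),   # prediction
          lambda: time.sleep(0.01)]   # planning + control
print("on time" if run_cycle(stages) else "too slow")
```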
Levels of Self-Driving Capability
Not every “self-driving” car does the same thing. SAE International defines six levels of automation, from 0 to 5. Level 0 is a conventional car where the human does everything; features like cruise control or automatic emergency braking don’t count as automation because they don’t continuously handle driving tasks. Levels 1 and 2 offer increasing driver assistance (lane keeping, adaptive cruise control), but the human must stay engaged and ready to take over at all times.
The meaningful jump happens at Level 3, where the car handles all driving tasks in specific conditions but may request that the human take over. Level 4 systems handle all driving in defined areas or conditions with no human intervention needed. This is where companies like Waymo operate today: fully driverless ride-hailing within mapped city zones. Level 5 is the theoretical ceiling, a car that drives itself anywhere a human could, under any conditions, with no steering wheel required. No production vehicle has reached Level 5.
How Safe Are They Compared to Humans?
Waymo published safety data covering 7.14 million fully driverless miles across Phoenix, San Francisco, and Los Angeles through October 2023. The results showed a significant gap in favor of the automated system. For crashes involving any reported injury, the self-driving system had 0.6 incidents per million miles compared with 2.8 for human drivers, roughly an 80% reduction. For all police-reported crashes, the rate was 2.1 per million miles versus 4.68 for humans, a 55% reduction. These differences were statistically significant across multiple cities.
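The percentage figures follow directly from the quoted rates; here’s the arithmetic:

```python
# Reproducing the percentage reductions from the crash rates quoted above.
def reduction(automated_rate: float, human_rate: float) -> float:
    return (1 - automated_rate / human_rate) * 100

print(f"injury crashes:  {reduction(0.6, 2.8):.0f}% lower")   # ~79%, the roughly 80% cited
print(f"police-reported: {reduction(2.1, 4.68):.0f}% lower")  # ~55%
```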
When researchers filtered out the most minor fender-benders (collisions with a speed change of less than 1 mph), about half of the self-driving car’s reported collisions dropped out, widening the safety gap further. This suggests that many of the system’s “crashes” are extremely low-speed contact events that wouldn’t typically be reported in human-driven incidents.
Where Weather Still Causes Problems
Bad weather remains the biggest technical challenge. U.S. Department of Transportation field tests found that rain and ice can block both radar and camera sensors, preventing them from functioning as intended. Some test vehicles interpreted heavy rain as a physical object and braked harshly. Others failed to detect nearby vehicles in rain and actually sped up. Even a small amount of snow covering the ground was enough to make one vehicle lose track of lane lines entirely.
This is one reason current commercial deployments operate in relatively mild climates like Phoenix and parts of California. Solving all-weather driving will likely require advances in sensor hardware combined with redundancy: if one sensor type fails, others must pick up the slack reliably enough to maintain that 100-millisecond response window.
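A toy version of that fallback logic: when one modality’s health score drops (say, a rain-blinded camera), reweight the remaining sensors instead of dropping the frame, and trigger a safe stop if nothing trustworthy remains. The health scores and weighting scheme here are invented for illustration:

```python
# Toy sensor-redundancy logic: health scores and thresholds are invented.

def fuse_with_fallback(readings: dict, health: dict) -> float:
    """Weighted average of per-sensor range estimates (meters), skipping
    sensors whose health score marks them as unusable."""
    usable = {s: r for s, r in readings.items() if health[s] > 0.2}
    if not usable:
        raise RuntimeError("no trustworthy sensors: trigger safe stop")
    total = sum(health[s] for s in usable)
    return sum(readings[s] * health[s] / total for s in usable)

ranges = {"camera": 41.0, "lidar": 40.2, "radar": 40.5}
heavy_rain = {"camera": 0.1, "lidar": 0.5, "radar": 0.9}  # camera washed out
print(f"{fuse_with_fallback(ranges, heavy_rain):.1f} m")   # ~40.4 m, leaning on radar + lidar
```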
Communicating Beyond What Sensors Can See
Even the best sensors can only see what’s in their direct line of sight. A child behind a parked truck or a car running a red light at a blind intersection is invisible until it’s almost too late. Vehicle-to-everything (V2X) communication aims to solve this by letting cars, traffic lights, and road infrastructure share information wirelessly.
With V2X, a self-driving car approaching a blind intersection could receive data from another vehicle already in the cross street, or from sensors embedded in the intersection itself. This extends the car’s perception range far beyond what onboard sensors can achieve. V2X can also deliver information sensors simply can’t capture on their own: real-time traffic status, detailed states of nearby vehicles, and warnings about construction zones ahead. The technology is still being deployed in limited areas, but it represents a significant layer of safety that complements onboard sensing.
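Conceptually, the fusion step is set subtraction: which reported road users do the car’s own sensors not account for? The sketch below illustrates that with a made-up message format (loosely inspired by the kinds of fields in SAE J2735 basic safety messages) and an invented matching threshold:

```python
import math
from dataclasses import dataclass

# Illustrative V2X fusion: the message format and threshold are invented.

@dataclass
class V2XMessage:
    sender_id: str
    x_m: float          # reported position in a shared road frame
    y_m: float
    speed_mps: float

def hidden_hazards(messages, seen_positions, match_radius_m=3.0):
    """Return messages reporting road users our own sensors have NOT seen."""
    def seen(msg):
        return any(math.hypot(msg.x_m - sx, msg.y_m - sy) < match_radius_m
                   for sx, sy in seen_positions)
    return [m for m in messages if not seen(m)]

onboard = [(50.0, 2.0)]                                   # what our sensors detect
incoming = [V2XMessage("veh-17", 50.5, 2.2, 12.0),        # matches an onboard track
            V2XMessage("veh-42", 80.0, -6.0, 15.0)]       # behind a building: new info
for hazard in hidden_hazards(incoming, onboard):
    print(f"unseen road user {hazard.sender_id} at ({hazard.x_m}, {hazard.y_m})")
```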
The Regulatory Picture
In the United States, there is no federal law that specifically prohibits self-driving cars from being sold or operated, as long as they comply with existing Federal Motor Vehicle Safety Standards. Manufacturers self-certify that their vehicles meet these standards. The catch is that current safety standards were written with the assumption that a human would be driving, so they require things like steering wheels, brake pedals, and turn signals. A company building a vehicle without those controls needs to apply for an exemption from the National Highway Traffic Safety Administration. New standards specifically designed for vehicles without traditional driver controls are still being developed.
State-level regulation varies widely. Some states have detailed frameworks permitting driverless testing and commercial operation, while others have no specific rules on the books. This patchwork means a self-driving car operating legally in Arizona may face entirely different requirements when it expands into another state.