Automated technology in a car refers to any system that takes over part or all of the driving task, from keeping you centered in your lane to braking before a collision. These systems range from simple assists you already use daily, like backup cameras, to experimental technology that can drive without human input. Most cars sold today include at least some level of automation, and by 2028, more than half of registered vehicles in the U.S. are projected to have front crash prevention and blind spot monitoring.
The Six Levels of Driving Automation
The automotive industry uses a scale from Level 0 to Level 5, defined by SAE International, to describe how much control a car’s technology has over driving. Understanding where a feature falls on this scale tells you exactly how much attention you still need to pay.
At Level 0, the car has no sustained control. Warning features, such as a lane departure chime or a blind spot alert, fall here: they inform you, but you do everything. Level 1 means the car handles either steering or speed, but not both. Adaptive cruise control is the classic example: the car manages your following distance, but you steer.
Level 2 is where most “self-driving” marketing lives today. The car manages both steering and speed simultaneously, like Tesla’s Autopilot or GM’s Super Cruise. You must keep your eyes on the road and stay ready to take over at any moment. The system is helping, not driving.
Level 3 is the first stage where the car is genuinely responsible for driving in certain conditions. You can look away from the road, but must be ready to take control when the system asks. Only a handful of vehicles have reached this level with regulatory approval. In late 2025, China approved its first Level 3 vehicles from Changan Auto and BAIC Motor, but only for designated areas in Chongqing and Beijing with speed limits of 50 and 80 km/h respectively. Mercedes-Benz has offered a similar system in limited markets.
Levels 4 and 5 represent full autonomy. Level 4 can handle all driving in specific conditions (a robotaxi operating within a city, for example) with no human backup needed. Level 5 would drive anywhere a human could, under any conditions. No consumer vehicle currently operates at Level 4 or 5.
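A compact way to keep the scale straight is to ask one question at each level: who is responsible for watching the road? The Python sketch below encodes that rule of thumb. The level names follow the SAE definitions above, but the helper function and its logic are purely an illustration, not anyone's production code.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, paraphrased."""
    NO_AUTOMATION = 0           # warnings only; you do everything
    DRIVER_ASSISTANCE = 1       # steering OR speed, never both
    PARTIAL_AUTOMATION = 2      # steering AND speed; you must supervise
    CONDITIONAL_AUTOMATION = 3  # car drives in set conditions; you're on standby
    HIGH_AUTOMATION = 4         # full driving within a limited domain
    FULL_AUTOMATION = 5         # full driving anywhere, any conditions

def driver_must_watch_road(level: SAELevel) -> bool:
    # Through Level 2 the human is always driving; from Level 3 up,
    # the system is responsible within its operating conditions.
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_watch_road(SAELevel(2)))  # True: eyes on the road
print(driver_must_watch_road(SAELevel(3)))  # False: the system must ask for takeover
```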
Common Automated Features in Today’s Cars
The automated technology most drivers interact with falls under the umbrella of Advanced Driver Assistance Systems, or ADAS. These are the Level 1 and Level 2 features that have become widespread over the past decade.
- Automatic emergency braking (AEB): Uses sensors to detect an imminent front collision and applies the brakes if you don't react in time. A large joint study by NHTSA and automakers found that AEB reduces front-to-rear crashes by 49% across the 2015 to 2023 model years, with the newest systems (2021 to 2023) achieving a 52% reduction. A simplified sketch of this decision logic appears after this list.
- Adaptive cruise control: Maintains a set speed like traditional cruise control, but automatically slows down and speeds up to match traffic. More advanced versions pair this with lane centering to keep the car positioned in its lane, though only about 22% of registered vehicles are projected to have that combination by 2028.
- Lane departure warning and lane keep assist: The warning version alerts you when you drift out of your lane. Lane keep assist goes further by gently steering you back. Cameras that read the painted lane markings make both features work.
- Blind spot monitoring: Sensors detect vehicles in your blind spots and display a visual warning, typically a light in or near the side mirror. About 53% of registered vehicles are expected to have this by 2028.
- Rear cameras and parking sensors: The most common automated feature on the road. Roughly 76% of registered vehicles are projected to have rear cameras by 2028, and 65% to have rear parking sensors.
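To make the AEB logic mentioned above concrete, here is a minimal sketch of a time-to-collision check in Python. The 0.8-second and 1.8-second thresholds are illustrative assumptions only; production systems tune their thresholds by speed range, sensor confidence, and many other factors.

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")      # the gap isn't shrinking
    return gap_m / closing_speed_mps

def aeb_decision(gap_m: float, closing_speed_mps: float,
                 driver_braking: bool) -> str:
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc < 0.8 and not driver_braking:
        return "FULL_BRAKE"      # impact imminent and the driver hasn't reacted
    if ttc < 1.8:
        return "WARN"            # chime and flash before intervening
    return "NONE"

# 10 m gap, closing at 15 m/s -> about 0.67 s to impact: brake now.
print(aeb_decision(gap_m=10.0, closing_speed_mps=15.0, driver_braking=False))
```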
How Cars See the Road
Automated systems rely on three main types of sensors to perceive the world, each with distinct strengths. Most advanced systems combine all three.
Cameras work much like your eyes. They distinguish shapes, colors, and object types, making them the only sensor that can read lane markings, traffic signs, and traffic light colors. Some camera systems add infrared illumination to help at night. Their weakness mirrors ours: heavy rain, snowstorms, and sandstorms degrade their performance significantly.
Radar sends out radio waves that bounce off objects and return, revealing an object’s location, speed, and direction of travel. Its key advantage is consistency. Radio waves aren’t affected by visibility, lighting, or noise, so radar performs the same in a blizzard as on a clear day. This reliability is why radar serves as the default sensor for emergency braking. It excels at detecting moving objects but produces low-detail models, meaning it knows something is there and how fast it’s moving, but not exactly what it is.
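The speed measurement comes from the Doppler effect: an object moving toward the radar compresses the reflected wave, shifting its frequency. Here is a worked example, assuming the 77 GHz carrier frequency common in automotive radar:

```python
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # assumed automotive radar carrier frequency, Hz

def doppler_shift(closing_speed_mps: float) -> float:
    """Frequency shift of the radar echo for an object closing at this speed.

    f_d = 2 * v * f0 / c  (the factor of 2: the wave travels out and back)
    """
    return 2 * closing_speed_mps * F_CARRIER / C

# A car closing at 30 m/s (~108 km/h) shifts the echo by about 15.4 kHz:
print(f"{doppler_shift(30.0):.0f} Hz")  # 15400 Hz
```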
LiDAR fires hundreds of thousands of laser pulses per second and measures how long each takes to bounce back, creating a detailed 3D point cloud of everything around the car. This gives the most spatially accurate picture of the environment. LiDAR is common in robotaxis and more expensive automated systems but remains too costly for most consumer vehicles.
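Each point in the cloud comes from simple time-of-flight arithmetic: distance equals the speed of light times the round-trip time, divided by two, placed in 3D using the beam's angles. A minimal sketch:

```python
import math

C = 3.0e8  # speed of light, m/s

def pulse_to_point(round_trip_s: float, azimuth_rad: float,
                   elevation_rad: float) -> tuple[float, float, float]:
    """Convert one LiDAR echo into an (x, y, z) point, in meters."""
    r = C * round_trip_s / 2  # halve the time: the pulse goes out and back
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# An echo that returns after 200 nanoseconds came from ~30 m straight ahead:
print(pulse_to_point(200e-9, azimuth_rad=0.0, elevation_rad=0.0))
```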
How Software Makes Sense of Sensor Data
Raw data from cameras, radar, and LiDAR would be useless without software that combines and interprets it. This process, called sensor fusion, merges inputs from multiple sensors into a single, unified picture of the car’s surroundings. The goal is to offset each sensor’s weaknesses with another’s strengths: radar confirms an object’s speed while the camera identifies it as a pedestrian, for instance.
There are different approaches to fusion. Some systems let each sensor independently detect and track objects, then compare the results at the end, an approach often called late fusion. Others fuse the raw data from all sensors first (early fusion), retaining the maximum amount of information before any detection happens. A middle path, feature-level fusion, extracts specific features from each sensor, like color from camera images and location from LiDAR point clouds, then combines those features before classifying what the car sees. Modern systems increasingly use neural networks trained on millions of driving scenarios to handle this classification, which is how the car distinguishes a plastic bag blowing across the highway from a child running into the street.
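As an illustration of the late-fusion idea, the sketch below pairs a camera detection (which knows what the object is) with the nearest radar return (which knows where it is and how fast it is closing). The class names and the bearing-matching rule are invented for this example:

```python
from dataclasses import dataclass

@dataclass
class CameraDetection:
    bearing_deg: float   # direction of the object
    label: str           # e.g. "pedestrian", "car"

@dataclass
class RadarDetection:
    bearing_deg: float
    range_m: float
    speed_mps: float     # closing speed

@dataclass
class FusedTrack:
    label: str
    range_m: float
    speed_mps: float

def late_fuse(cams: list[CameraDetection],
              radars: list[RadarDetection],
              max_bearing_gap_deg: float = 2.0) -> list[FusedTrack]:
    """Pair each camera detection with the nearest radar return by bearing."""
    tracks = []
    for cam in cams:
        best = min(radars,
                   key=lambda r: abs(r.bearing_deg - cam.bearing_deg),
                   default=None)
        if best and abs(best.bearing_deg - cam.bearing_deg) <= max_bearing_gap_deg:
            # The camera says WHAT it is; the radar says WHERE and HOW FAST.
            tracks.append(FusedTrack(cam.label, best.range_m, best.speed_mps))
    return tracks

print(late_fuse([CameraDetection(1.0, "pedestrian")],
                [RadarDetection(1.4, 22.0, 1.2)]))
```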
Drive-by-Wire: How the Car Executes Commands
For a computer to steer, brake, or accelerate, it needs electronic control over systems that were traditionally mechanical. Drive-by-wire technology replaces physical linkages with electronic signals.
In a traditional throttle system, pressing the gas pedal physically opens a valve controlling airflow to the engine. In a throttle-by-wire system, the pedal activates a sensor that sends a voltage to an electronic control unit, which then sends its own signal to the throttle valve (or, in electric vehicles, directly to the motor controller). The computer can override or adjust this signal, which is how adaptive cruise control manages your speed without moving the pedal.
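Here is a minimal sketch of that signal chain, with an assumed 0.5 to 4.5 volt pedal sensor range (a common convention, but not universal) and a simple rule for how an adaptive cruise request might combine with the pedal. Real control units are far more involved:

```python
def pedal_voltage_to_throttle(v: float, v_min: float = 0.5,
                              v_max: float = 4.5) -> float:
    """Map the pedal sensor voltage to a 0-100% throttle request."""
    if not (v_min <= v <= v_max):
        return 0.0  # implausible voltage suggests a faulty sensor: fail safe
    return 100.0 * (v - v_min) / (v_max - v_min)

def throttle_command(pedal_v: float, acc_request_pct: float | None) -> float:
    driver_pct = pedal_voltage_to_throttle(pedal_v)
    if acc_request_pct is not None:
        # Adaptive cruise can open the throttle without pedal movement,
        # but a driver pressing harder than the system is never ignored.
        return max(driver_pct, acc_request_pct)
    return driver_pct

# Foot off the pedal, cruise control asking for 35% throttle:
print(throttle_command(pedal_v=0.5, acc_request_pct=35.0))  # 35.0
```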
Steer-by-wire replaces the mechanical steering column with an electric motor that turns the wheels based on electronic input. A position sensor on the steering wheel measures your input, and the system acts on it, but it can also receive input from the car’s automated driving software. Brake-by-wire similarly uses sensors and actuators to translate pedal pressure into braking force through a master cylinder, eliminating the traditional brake servo and allowing the computer to apply brakes independently for features like automatic emergency braking.
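One way such a system can arbitrate between the driver's pedal and an automatic braking request is to honor whichever asks for more braking. This is a simplified illustration, not any manufacturer's actual logic:

```python
def brake_command(driver_request_pct: float, aeb_request_pct: float) -> float:
    """Arbitrate between the brake pedal and automatic emergency braking.

    Taking the maximum means AEB can brake when the driver doesn't,
    and a driver braking harder than AEB is never overridden.
    """
    return max(driver_request_pct, aeb_request_pct)

# Driver hasn't touched the pedal, but AEB demands a full stop:
print(brake_command(driver_request_pct=0.0, aeb_request_pct=100.0))  # 100.0
```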
Built-in Backup Systems
At Level 3 and above, the driver isn’t expected to be immediately available if something fails. This makes hardware redundancy a critical safety requirement. If the primary steering system malfunctions, a backup must keep the car controllable long enough to reach a safe stop.
Redundancy can take different forms. A steering system might contain its own duplicate components, so a failure in one part doesn’t disable the whole system. Alternatively, the car can use differential braking, applying more braking force to one side, to steer the vehicle even if the steering system fails entirely. The backup system doesn’t need to drive normally; it just needs enough capability to bring the car safely to a stop. Power supply redundancy is equally important, ensuring the steering and braking systems have enough energy to function even if the main electrical system goes down.
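A deliberately crude sketch of the differential braking idea: braking one side of the car harder than the other yaws it toward that side, which is enough to limp to a safe stop. The gains below are invented for illustration; a real fallback controller would close the loop on a measured yaw rate:

```python
def differential_brake(desired_yaw: float) -> tuple[float, float]:
    """Steer with brakes alone after a steering failure.

    desired_yaw runs from -1.0 (hard left) to +1.0 (hard right).
    Returns (left_brake, right_brake) as 0-1 fractions: braking one
    side harder yaws the car toward that side.
    """
    yaw = max(-1.0, min(1.0, desired_yaw))
    base = 0.3  # steady baseline braking toward a safe stop
    left = base + 0.2 * max(-yaw, 0.0)
    right = base + 0.2 * max(yaw, 0.0)
    return (left, right)

# Nudge right while slowing: the right wheels brake harder than the left.
print(differential_brake(0.5))  # (0.3, 0.4)
```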
What Still Limits These Systems
Adverse weather remains the most persistent barrier to higher levels of automation. Heavy rain, snow, and fog degrade camera and LiDAR performance, and even radar, while unaffected by visibility, can struggle with wet road surfaces that scatter radio waves unpredictably. Most Level 2 systems will warn you and disengage when conditions become too difficult for the sensors to read the road reliably.
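Conceptually, that disengagement logic can be as simple as tracking a health score per sensor and handing control back when any score falls too low. The scores and threshold below are illustrative only; real systems use far richer diagnostics and stage the handover with escalating alerts:

```python
def supervise(sensor_confidences: dict[str, float],
              threshold: float = 0.6) -> str:
    """Decide whether the driver assist can keep operating."""
    worst = min(sensor_confidences.values())
    if worst < threshold:
        return "WARN_AND_DISENGAGE"  # hand control back to the driver
    return "ENGAGED"

# Heavy rain has degraded the camera, so the system bows out:
print(supervise({"camera": 0.35, "radar": 0.90}))  # WARN_AND_DISENGAGE
```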
Unusual scenarios, sometimes called edge cases, also challenge automated systems. A construction zone with temporary lane markings that contradict permanent ones, a traffic officer waving you through a red light, or an object the system has never seen before can all exceed the system’s training. This is the core reason fully autonomous consumer vehicles remain years away: not the average driving scenario, which current technology handles well, but the unpredictable situations that happen just often enough to matter.

