Odometry is a method robots and vehicles use to estimate their position by tracking how far they’ve moved. At its simplest, it works by counting wheel rotations and converting that into distance and direction. It’s one of the most fundamental tools in robotics navigation, used in everything from warehouse robots to the Perseverance rover on Mars.
How Odometry Works
The core idea is straightforward. A robot's wheels have sensors called encoders attached to them. These encoders count wheel rotations, usually in small fractions called ticks, and since the wheel's circumference is known, the robot can calculate the distance each wheel has traveled. By comparing the left and right wheels, it can also figure out whether it turned and by how much. From there, it updates its estimated position on an x-y coordinate system, tracking where it thinks it is relative to where it started.
Think of it like a hiker counting paces. If you know your stride length and you count 100 steps heading north, then 50 steps heading east, you can estimate where you are on a map. Odometry does the same thing, just with much more precise measurements and math running many times per second.
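The update described above can be sketched in a few lines. This is a minimal differential-drive example; the tick resolution, wheel radius, and wheel spacing are illustrative values, not taken from any particular robot.

```python
import math

def update_pose(x, y, theta, left_ticks, right_ticks,
                ticks_per_rev=360, wheel_radius=0.05, wheel_base=0.30):
    """Update a differential-drive pose estimate from one encoder reading.

    Geometry parameters are illustrative, not from a specific robot.
    """
    # Convert tick counts to distance traveled by each wheel
    meters_per_tick = 2 * math.pi * wheel_radius / ticks_per_rev
    d_left = left_ticks * meters_per_tick
    d_right = right_ticks * meters_per_tick

    # Average forward motion, and heading change from the wheel difference
    d_center = (d_left + d_right) / 2
    d_theta = (d_right - d_left) / wheel_base

    # Advance the pose along the (midpoint) heading
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    theta += d_theta
    return x, y, theta
```

With equal tick counts on both wheels, the heading stays constant and the robot advances in a straight line; a difference between the wheels rotates the estimate, exactly as described above.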
Types of Odometry
Wheel encoders are the oldest and most common approach, but they’re far from the only one. Different sensors give robots different ways to estimate motion.
- Wheel odometry uses physical encoders on the wheels. Some measure rotation with magnetic sensors (Hall-effect encoders), while others use light passing through a slotted disc (optical encoders). Robots like the TurtleBot and Segway RMP rely on wheel encoders for basic position tracking.
- Visual odometry uses cameras to track how the surrounding scene shifts between frames. By analyzing how features in the image move, the system calculates how the camera (and the robot carrying it) has moved. This works well in good lighting with plenty of visual detail, but struggles in dark, featureless, or rapidly changing environments.
- Laser odometry uses LiDAR scanners that send out laser pulses and measure how they bounce back. The system compares successive scans to determine movement. It handles poor lighting better than cameras but can fail in environments dominated by large flat surfaces, like long hallways, where successive scans look nearly identical.
- Inertial odometry uses accelerometers and gyroscopes to measure acceleration and rotation directly, without relying on wheels or external features at all. This is useful for drones, legged robots, or any system where wheels aren’t the primary way of moving.
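To make the inertial case concrete, here is a toy one-dimensional dead-reckoning sketch: acceleration samples are integrated once to get velocity and again to get position. The sample values and the simple Euler integration are illustrative; real inertial odometry works in three dimensions and must also handle gravity and sensor bias.

```python
def integrate_imu(accels, dt):
    """Dead-reckon 1-D position from accelerometer samples (toy sketch)."""
    v, x = 0.0, 0.0
    positions = []
    for a in accels:
        v += a * dt       # integrate acceleration -> velocity
        x += v * dt       # integrate velocity -> position
        positions.append(x)
    return positions
```

Because the position comes from integrating twice, any small bias in the accelerometer gets integrated twice too, which is why purely inertial estimates drift quickly.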
Each type has blind spots, which is why modern systems often combine two or more. A visual-LiDAR system, for example, can use camera data (running at 60 frames per second) to handle fast movements, while the laser data (running at about 1 frame per second) corrects for drift and handles situations where the camera can’t see well.
The Drift Problem
Odometry’s biggest weakness is that errors accumulate over time. Every measurement has a tiny inaccuracy, and because each new position estimate builds on the last one, those small errors stack up. This is called drift. A robot that thinks it’s traveling in a straight line might actually be veering slightly to one side, and after enough distance, its estimated position can be meters away from reality.
Research from Monash University's robotics lab found that for straight-line travel, the variance of the position error in the direction of motion grows in proportion to the distance traveled. But the variance of the error perpendicular to the direction of motion, the sideways drift, grows in proportion to the cube of the distance. That means lateral drift gets dramatically worse the farther a robot goes without correction.
Three main factors cause this drift: wheel slippage (the wheel turns but the robot doesn’t move the expected distance), uneven floor surfaces that change the effective wheel contact, and the limited precision of the encoder’s counting mechanism. Loose gravel, sand, ice, or even a slightly polished floor can throw off wheel-based estimates significantly. Researchers have tested machine learning systems that detect when wheels are slipping on gravel or sand and compensate in real time, but simple wheel odometry on its own has no way to know the ground isn’t cooperating.
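A short simulation makes the drift behavior visible. The robot below intends to drive straight, but each step adds a small random heading error; the noise magnitude and step size are arbitrary illustrative values.

```python
import math
import random

def simulate_drift(steps=1000, step_len=0.01, heading_noise=0.002, seed=0):
    """Drive 'straight' while small random heading errors accumulate.

    Noise magnitudes are arbitrary, chosen only to illustrate drift.
    """
    rng = random.Random(seed)
    x, y, theta = 0.0, 0.0, 0.0
    for _ in range(steps):
        theta += rng.gauss(0.0, heading_noise)  # per-step heading error
        x += step_len * math.cos(theta)         # along-track motion
        y += step_len * math.sin(theta)         # cross-track (drift) motion
    return x, y
```

Each heading error persists into every subsequent step, so the sideways offset `y` is not just noise, it is integrated noise, which is exactly why lateral drift compounds faster than along-track error.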
How Robots Correct for Drift
Since odometry alone drifts over time, practical systems pair it with correction methods. The most common approach is sensor fusion, where data from multiple sensors gets combined using a mathematical tool called the Kalman filter. The filter works in two alternating steps: it predicts where the robot should be based on odometry, then corrects that prediction when a more reliable measurement comes in from another sensor, like a laser range finder or GPS.
During the prediction step, the system’s uncertainty about its position grows. During the correction step, that uncertainty shrinks. The result is a position estimate that’s consistently more accurate than either sensor could provide alone. Experiments on the TurtleBot robot using this approach showed significant improvement in position accuracy compared to odometry by itself.
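The predict/correct cycle can be sketched in one dimension. This is a minimal scalar Kalman filter, not the implementation from the TurtleBot experiments; the noise variances `q` and `r` are illustrative placeholders.

```python
def kalman_step(x, p, u, z, q=0.1, r=0.5):
    """One predict/correct cycle of a 1-D Kalman filter.

    x, p : current position estimate and its variance
    u    : odometry-reported displacement since the last step
    z    : absolute position measurement from another sensor
    q, r : process and measurement noise variances (illustrative)
    """
    # Predict: apply the odometry displacement; uncertainty grows
    x_pred = x + u
    p_pred = p + q

    # Correct: blend in the measurement; uncertainty shrinks
    k = p_pred / (p_pred + r)           # Kalman gain (trust in measurement)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

Note the two effects described above in the arithmetic: `p_pred = p + q` is the growing uncertainty of the prediction step, and `p_new = (1 - k) * p_pred` is the shrinking uncertainty of the correction step.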
Odometry vs. SLAM
You’ll often see odometry mentioned alongside SLAM, which stands for Simultaneous Localization and Mapping. They solve related but different problems. Odometry estimates where a robot is right now relative to where it was a moment ago. It’s focused on local, step-by-step tracking. SLAM goes further: it builds a map of the entire environment while simultaneously figuring out where the robot is within that map.
The critical difference is what happens when a robot revisits a place it’s been before. Odometry has no way to recognize that. SLAM can detect these “loop closures” and use them to correct the accumulated drift across the entire path. If a robot drives in a big circle and SLAM recognizes that the endpoint matches the starting point, it can retroactively fix the drift that built up during the loop. Odometry, by contrast, would show a spiral that doesn’t quite close.
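The loop-closure correction can be illustrated with a deliberately naive sketch: if the path is known to end where it started, spread the endpoint error linearly back along the path. Real SLAM back-ends solve this with pose-graph optimization rather than linear interpolation; this toy version only shows the idea of retroactively fixing drift.

```python
def close_loop(path):
    """Naively close a loop: spread the endpoint drift linearly along the path.

    path : list of (x, y) estimates whose last point *should* equal the first.
    A toy stand-in for pose-graph optimization in a real SLAM back-end.
    """
    n = len(path) - 1
    ex = path[-1][0] - path[0][0]   # accumulated drift in x
    ey = path[-1][1] - path[0][1]   # accumulated drift in y
    # Each point gets a share of the correction proportional to how far
    # along the path it is, so early points move little and late points move most.
    return [(x - ex * i / n, y - ey * i / n)
            for i, (x, y) in enumerate(path)]
```

After the correction the endpoint coincides with the start, which is precisely what plain odometry cannot do: it has no notion that the two points should be the same place.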
Real-World Applications
Odometry is everywhere in robotics, but one of its most impressive deployments is on Mars. NASA’s Perseverance rover uses visual odometry as a core part of its autonomous navigation system, called AutoNav. The rover takes images of the terrain, compares consecutive frames to estimate its movement, and uses that information to plan safe paths around rocks and slopes. AutoNav has driven 88% of the rover’s 17.7-kilometer journey during its first Martian year, with the autonomy software planning nearly 95% of the driving in some campaigns. It has set records including the longest distance driven without any human review: 699.9 meters in a single stretch.
On Earth, self-driving cars use a combination of wheel, visual, and laser odometry alongside GPS and detailed pre-built maps. Warehouse robots use wheel odometry to navigate between shelves. Robotic vacuum cleaners use simpler versions to cover a room without missing spots. Even VR headsets use a form of visual odometry to track your head’s position in space.
The concept scales from a $50 hobby robot counting wheel ticks to a billion-dollar Mars mission combining cameras, inertial sensors, and terrain mapping. What stays the same across all of them is the fundamental question odometry answers: given where I was, and what my sensors just measured, where am I now?

