IMU calibration is the process of measuring and correcting the errors built into an inertial measurement unit’s sensors so their readings match reality. Every IMU ships with small imperfections from manufacturing, and without calibration, those tiny errors compound rapidly. A bias of just 0.01 m/s² in an accelerometer, integrated twice into position, produces roughly a 4.5-meter error after only 30 seconds of use (½ · 0.01 · 30² = 4.5 m). Calibration identifies these imperfections and mathematically removes them.
What an IMU Actually Measures
An inertial measurement unit combines multiple motion sensors into one package. At minimum, it pairs an accelerometer (which measures linear acceleration, including the constant pull of gravity) with a gyroscope (which measures the rate of rotation). Many IMUs also include a magnetometer, which senses magnetic fields and acts like a digital compass. A unit with all three is often called a 9-degree-of-freedom sensor because it tracks three axes for each of the three sensor types.
These sensors work together to determine how a device is oriented and how it’s moving through space. Your phone uses one to detect screen rotation. Drones rely on them to stay level. Robots, cars, VR headsets, and medical devices all depend on IMU data being accurate, which is exactly why calibration matters.
The Three Errors Calibration Fixes
IMU errors fall into three categories, and calibration targets all of them.
- Bias (offset): Even when the sensor is perfectly still, it reports a small nonzero reading. Think of a kitchen scale that reads 3 grams with nothing on it. Every measurement it takes will be off by that amount. In an IMU, fixed biases are the most straightforward errors to detect and remove.
- Scale factor error: This describes how accurately the sensor’s output corresponds to the actual force or rotation it’s experiencing. If a gyroscope reports 98 degrees of rotation when the true rotation was 100 degrees, it has a 2% scale factor error. The sensor’s “ruler” is slightly the wrong size.
- Misalignment error: The three sensing axes inside an accelerometer or gyroscope are supposed to be perfectly perpendicular to each other. In practice, manufacturing tolerances mean they’re slightly off. This causes readings on one axis to bleed into another, an effect sometimes called cross-coupling.
For both the accelerometer and gyroscope, a full calibration typically solves for twelve correction values: three bias offsets, three scale factors, and six misalignment terms. Once these are known, a simple mathematical correction is applied to every future reading.
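Once the twelve values are known, applying them is a single matrix operation per reading. A minimal sketch in Python (the bias, scale, and misalignment numbers below are purely illustrative, not from any real sensor):

```python
import numpy as np

# Hypothetical calibration results for one triaxial sensor.
bias = np.array([0.02, -0.01, 0.03])       # three bias offsets
scale = np.diag([1.002, 0.998, 1.001])     # three scale factors
misalign = np.array([[1.0,    0.001, -0.002],   # six off-diagonal
                     [0.003,  1.0,    0.001],   # misalignment terms
                     [-0.001, 0.002,  1.0]])

def correct(raw):
    """Remove bias, then apply scale and misalignment correction."""
    return misalign @ scale @ (raw - bias)
```

A reading equal to the bias vector maps to zero, exactly as a stationary, ideal sensor should report.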
Why Small Errors Become Big Problems
IMUs often feed their data into a process called dead reckoning, where the device continuously calculates its position by adding up all the small movements it detects. This requires integrating acceleration data twice to get position. Because of that double integration, any uncorrected constant bias produces a position error that grows with the square of elapsed time.
The consequences are dramatic. A 0.1% error in peak acceleration leads to a 10% offset in estimated position after just 10 seconds. Without correction, position tracking from a single IMU is only reliable for one to two seconds. After that, errors can grow to be several orders of magnitude larger than the actual movement. This is why even consumer devices like phones run calibration routines: the math simply doesn’t work without them.
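The quadratic growth is easy to reproduce numerically. This sketch double-integrates nothing but a constant 0.01 m/s² bias (no real motion at all) and recovers the roughly 4.5-meter drift after 30 seconds:

```python
import numpy as np

dt = 0.01                          # 100 Hz sample rate
t = np.arange(0.0, 30.0, dt)       # 30 seconds of samples
bias = 0.01                        # uncorrected accelerometer bias, m/s^2

# Integrate the bias alone twice: first to velocity, then to position.
velocity_err = np.cumsum(np.full_like(t, bias)) * dt
position_err = np.cumsum(velocity_err) * dt

# Matches the closed form 0.5 * bias * t^2 = 4.5 m at t = 30 s.
```

Halving the elapsed time cuts the position error by a factor of four, which is the quadratic signature described above.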
How Accelerometers Are Calibrated
Accelerometer calibration uses gravity as a free, perfectly consistent reference. The most common method is the six-position tumble test. You place the sensor in six orientations so that gravity acts along each axis in both directions: positive X, negative X, positive Y, negative Y, positive Z, and negative Z. In each position, you know exactly what the sensor should read (1g on one axis, zero on the other two), so you can compare the expected values against what the sensor actually reports.
The difference between expected and actual readings across all six positions gives you enough data to calculate every bias, scale factor, and misalignment term. Software then fits these raw measurements to a mathematical model. In the simplest version, the model maps each raw sensor output to a corrected value using a scale factor and an offset for each axis. More advanced models add a matrix that accounts for misalignment between axes.
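For the simplest per-axis model, the arithmetic is short. Across the six positions, each axis sees +1g in one orientation and −1g in another, which is enough to solve for that axis’s bias and scale factor (the readings below are hypothetical):

```python
G = 9.81  # reference gravity, m/s^2

# Hypothetical readings of one axis in its +1g and -1g orientations.
r_plus, r_minus = 9.93, -9.69

# The bias shifts both readings the same way; the scale stretches them.
bias = (r_plus + r_minus) / 2           # offset: 0.12 m/s^2
scale = (r_plus - r_minus) / (2 * G)    # scale factor: 1.0 here

# Applying the correction recovers the true 1g reading.
corrected = (r_plus - bias) / scale
```

Repeating this for all three axes yields the per-axis offsets and scale factors; the misalignment terms require the full matrix fit described above.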
How Gyroscopes Are Calibrated
Gyroscope calibration starts with measuring the zero-rate bias: the value the gyroscope outputs when it’s completely stationary. Ideally, a motionless gyroscope should read exactly zero on all axes. In practice, it doesn’t, and that offset changes with temperature.
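A minimal zero-rate bias estimate is just an average taken while the device is known to be still. A sketch with simulated stationary samples (the bias and noise values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stationary gyro output (deg/s): a true bias plus noise.
true_bias = np.array([0.15, -0.08, 0.02])
samples = true_bias + rng.normal(0.0, 0.05, size=(2000, 3))

# Averaging while the sensor is verifiably motionless isolates the bias.
est_bias = samples.mean(axis=0)
corrected = samples - est_bias   # mean is now ~zero on all axes
```

In practice the device must detect stillness reliably first (for example, by checking the sample variance), since averaging during motion would fold real rotation into the bias estimate.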
Temperature sensitivity is one of the biggest challenges in gyroscope calibration, especially for MEMS sensors (the tiny, affordable type found in phones and drones). The bias drifts as the sensor heats up or cools down, and the relationship between temperature and drift is nonlinear and shows hysteresis: the bias at a given temperature can differ depending on whether the sensor is warming up or cooling down. Advanced calibration systems model this thermal behavior and continuously compensate for it using filtering algorithms that track temperature alongside rotation data.
Consumer-grade gyroscopes have a zero-bias stability worse than 15 degrees per hour, which sounds small but adds up quickly in applications that need precise orientation tracking over time.
How Magnetometers Are Calibrated
Magnetometer calibration is about removing magnetic interference from nearby materials. There are two types of distortion.
Hard-iron distortion comes from permanently magnetized components near the sensor, like a speaker magnet or magnetized metal on a circuit board. These produce a constant offset that shifts every reading by a fixed amount, regardless of which direction the sensor faces. Correcting this requires finding three offset values (one per axis) and subtracting them.
Soft-iron distortion is subtler. It happens when nearby ferromagnetic materials (things that aren’t permanently magnetized but respond to magnetic fields) warp the earth’s magnetic field around the sensor. This doesn’t shift the readings by a fixed amount. Instead, it stretches and skews them differently depending on orientation. Correcting it requires a symmetric 3-by-3 matrix, which has six independent terms.
Together, a full magnetometer calibration solves for ten parameters: three for hard-iron offset, six for the soft-iron matrix, and one for local magnetic field strength. When raw magnetometer data is plotted in 3D, an uncalibrated sensor traces an ellipsoid (a squished, off-center sphere). Calibration transforms that ellipsoid back into a sphere centered at the origin, which represents clean, interference-free compass readings.
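Applied to a raw reading, the hard-iron and soft-iron corrections combine into one expression (the parameter values below are hypothetical, standing in for the output of a real calibration):

```python
import numpy as np

# Hypothetical results of a completed magnetometer calibration.
hard_iron = np.array([12.0, -3.5, 7.2])         # constant offset (uT)
soft_iron = np.array([[0.98,  0.02, -0.01],     # symmetric 3x3 matrix
                      [0.02,  1.03,  0.00],     # with six independent
                      [-0.01, 0.00,  0.99]])    # terms

def correct_mag(raw):
    """Map a raw reading back onto the interference-free sphere."""
    return soft_iron @ (raw - hard_iron)
```

Subtracting the hard-iron vector recenters the ellipsoid at the origin; multiplying by the soft-iron matrix reshapes it back into a sphere.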
The Math Behind It
Most IMU calibration algorithms use a technique called ellipsoid fitting. The idea is straightforward: when you rotate a triaxial sensor through many orientations, the raw readings should trace a perfect sphere if the sensor is ideal. Real sensors produce an ellipsoid because of bias, scale factor differences, and misalignment. The algorithm finds the set of correction parameters that best transforms this ellipsoid into a sphere, minimizing the average error between the corrected readings and the known reference value (gravity for accelerometers, Earth’s field for magnetometers).
This minimization can use either linear or nonlinear optimization, depending on how many error terms the model includes. Simpler models that only correct for scale and offset can be solved with straightforward linear algebra. Models that also correct for axis misalignment require more complex nonlinear methods.
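The linear case can be sketched directly. This example generates synthetic readings on a biased, per-axis-scaled sphere (an axis-aligned ellipsoid, deliberately omitting misalignment so the fit stays linear) and recovers the bias and scale factors with ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic readings: points on a unit sphere, stretched by per-axis
# scale errors and shifted by a bias (values chosen for illustration).
true_scale = np.array([1.1, 0.9, 1.05])
true_bias = np.array([0.2, -0.1, 0.3])
u = rng.normal(size=(500, 3))
sphere = u / np.linalg.norm(u, axis=1, keepdims=True)
raw = sphere * true_scale + true_bias

# Fit the axis-aligned ellipsoid a*x^2 + b*x + c*y^2 + d*y + e*z^2 + f*z = 1
# as a linear least-squares problem in the six coefficients.
x, y, z = raw.T
A = np.column_stack([x*x, x, y*y, y, z*z, z])
(a, b, c, d, e, f), *_ = np.linalg.lstsq(A, np.ones(len(raw)), rcond=None)

# Completing the square per axis recovers the center (bias) and scales.
center = np.array([-b / (2*a), -d / (2*c), -f / (2*e)])
k = 1 / (1 + a*center[0]**2 + c*center[1]**2 + e*center[2]**2)
scales = 1 / np.sqrt(np.array([a, c, e]) * k)
```

Because the model is linear in its coefficients, one `lstsq` call solves it; adding cross-axis (misalignment) terms turns this into the nonlinear problem mentioned above.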
The Figure-8 Motion on Phones
If you’ve ever been told to wave your phone in a figure-8 pattern to fix a compass, you’ve performed a magnetometer calibration. The figure-8 movement is designed to rotate the sensor through three dimensions and around all three axes as thoroughly as possible in a short time. This gives the calibration algorithm enough data points across enough orientations to solve for the hard-iron and soft-iron parameters. A few seconds of smooth figure-8 motion can collect the spread of measurements the ellipsoid-fitting math needs to work.
Phones and other consumer devices typically run simplified versions of these calibration routines automatically in the background, but the figure-8 prompt appears when the software detects that its current correction model has drifted too far from reality, often because the device’s magnetic environment has changed (a new phone case with a magnetic clasp, for example).
Factory vs. Field Calibration
High-end IMUs used in aerospace, surveying, or autonomous vehicles often undergo factory calibration using precision turntables, thermal chambers, and known reference inputs. This produces highly accurate correction models but is expensive and not practical for consumer devices.
Consumer and industrial MEMS sensors instead rely on field calibration: methods the user or the device’s software can perform without specialized equipment. The six-position test, figure-8 motion, and stationary bias sampling are all field techniques. They’re less precise than factory methods but good enough for applications like navigation, gaming, and fitness tracking. The tradeoff is cost. Low-cost devices need calibration approaches that are simple, fast, and require no external equipment, even if they sacrifice some accuracy.

