What Is Time of Flight? ToF Technology Explained

Time of flight (ToF) is a measurement principle based on a simple idea: send a signal out, wait for it to bounce back, and use the travel time to calculate distance. The signal can be light, sound, or even subatomic particles. By knowing exactly how fast the signal travels and how long the round trip takes, you get a precise measurement of how far away something is. This core concept powers everything from smartphone face unlock to self-driving car sensors to medical imaging.

The Basic Principle

At its simplest, time of flight means emitting energy toward an object, then measuring how long it takes for the reflection to return. Since light travels at a known, constant speed (about 300 million meters per second), even tiny differences in return time translate into measurable differences in distance. Sound-based systems work the same way but at a far slower speed (roughly 343 meters per second in air), which makes the timing easier and is why sonar and ultrasonic sensors are typically used for closer-range measurements.
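The round-trip arithmetic above fits in a few lines. This is a minimal sketch, not any particular sensor's firmware; the function and constant names are illustrative, and the speed of sound assumes air at about 20 °C:

```python
# Convert a measured round-trip time into a distance.
# The division by 2 accounts for the signal traveling out AND back.

C_LIGHT = 299_792_458.0   # speed of light in vacuum, m/s
C_SOUND_AIR = 343.0       # speed of sound in air at ~20 °C, m/s

def tof_distance(round_trip_seconds: float, speed: float = C_LIGHT) -> float:
    """Distance to the target, given the round-trip travel time."""
    return speed * round_trip_seconds / 2

# A light pulse returning after 1 microsecond: target is ~150 m away.
print(tof_distance(1e-6))                 # ≈ 149.9 m
# The same 1 µs round trip with sound covers far less ground.
print(tof_distance(1e-6, C_SOUND_AIR))    # ≈ 0.00017 m
```

The huge gap between the two results is the whole story of the section: light-based ToF needs extremely fast timing, while sound-based ToF can get away with ordinary microcontroller timers.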

The concept also appears in classical physics as a projectile motion calculation. When you launch an object at an angle from level ground, the total time it spends airborne is its time of flight, calculated as (2 × initial velocity × sin(launch angle)) / g, where g ≈ 9.8 m/s² is the acceleration due to gravity. That version matters in physics courses, but the measurement principle is what drives the technology applications most people encounter.
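The classical formula drops straight into code. A quick sketch, assuming launch and landing at the same height (the names are mine, not from any textbook library):

```python
import math

G = 9.8  # gravitational acceleration, m/s²

def projectile_time_of_flight(v0: float, launch_angle_deg: float) -> float:
    """Total airborne time for a projectile launched from level ground."""
    return 2 * v0 * math.sin(math.radians(launch_angle_deg)) / G

# 20 m/s at 30°: t = (2 × 20 × 0.5) / 9.8 ≈ 2.04 s
print(projectile_time_of_flight(20, 30))
```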

How ToF Sensors Measure Depth

Modern ToF sensors use infrared light pulses to build 3D maps of their surroundings. They come in two main types: direct and indirect.

Direct ToF sends out a short pulse of light and uses a high-speed timer to measure exactly when the reflection arrives. This is the approach used in LiDAR systems. Pulsed LiDAR can work at distances from a few meters to several kilometers, with a typical depth resolution of about 1 centimeter.

Indirect ToF takes a cleverer approach to avoid the need for impossibly fast stopwatch circuits. Measuring millimeter-level distances with direct timing would require picosecond-precision clocks at every single pixel, which isn’t practical. Instead, indirect sensors emit light in a repeating, modulated pattern and then measure how the pattern shifts when it bounces back. By comparing how returning photons distribute across two offset detectors, the sensor reconstructs the phase shift and converts it into distance. This is the method used in most smartphone depth cameras and indoor 3D sensors.
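The phase-to-distance conversion can be sketched with an idealized single-frequency model: distance d = c·Δφ / (4π·f), unambiguous only up to c / (2f), after which the phase wraps around. This is a simplification of what real sensor pipelines do (they typically combine multiple modulation frequencies), and the 20 MHz figure below is just an illustrative value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance implied by a measured phase shift of the modulated signal."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum distance before the phase wraps and repeats."""
    return C / (2 * mod_freq_hz)

# At 20 MHz modulation, the measurable range wraps every ~7.5 m,
# which is why indirect ToF suits indoor, room-scale sensing.
print(unambiguous_range(20e6))            # ≈ 7.49 m
# A phase shift of π/2 at 20 MHz corresponds to ~1.87 m.
print(itof_distance(math.pi / 2, 20e6))
```

The wrap-around is the trade-off behind the design: a higher modulation frequency gives finer phase resolution but a shorter unambiguous range.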

ToF in Smartphones and Consumer Tech

If you’ve unlocked an iPhone with Face ID, you’ve used close-range depth sensing, though strictly speaking Apple’s TrueDepth camera is a structured-light system rather than a pure ToF sensor: it projects thousands of infrared dots onto your face, measures the depth of each point, and builds a 3D map for authentication. The LiDAR Scanner on Pro-model iPhones and iPads, by contrast, is a true direct ToF sensor. Both feed augmented reality features through Apple’s ARKit framework, letting apps track facial movements and overlay digital objects onto the real world.

Rear-facing ToF cameras on smartphones serve a different purpose. They help with autofocus speed, portrait mode depth effects, and AR applications that need to understand room geometry. The sensor gives the phone a real sense of spatial awareness rather than relying on software guesses from a flat 2D image.

LiDAR and Self-Driving Cars

Autonomous vehicles rely heavily on ToF-based LiDAR to see the road. These systems scan the environment with laser pulses hundreds of thousands of times per second, building a detailed 3D point cloud of everything around the car: other vehicles, pedestrians, lane markings, curbs.

Pulsed ToF LiDAR remains the most common choice for automotive use because it works well outdoors and handles long distances. A typical system operates in the 20 to 150 meter range for real-time driving decisions, though pulsed systems can reach much farther. Depth resolution sits around 1 centimeter with standard timing precision of about 0.1 nanoseconds.
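The link between timing precision and depth resolution is just the round-trip formula applied to the timing error: Δd = c·Δt / 2. A back-of-the-envelope sketch (function name is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def depth_resolution(timing_precision_s: float) -> float:
    """Smallest resolvable depth difference for a given timing precision."""
    return C * timing_precision_s / 2

# 0.1 ns of timing precision corresponds to about 1.5 cm of depth,
# consistent with the roughly centimeter-level figure quoted above.
print(depth_resolution(0.1e-9))  # ≈ 0.015 m
```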

A newer approach called frequency-modulated continuous wave (FMCW) LiDAR pushes resolution dramatically further, achieving depth precision down to 150 micrometers, roughly the width of a human hair. That’s nearly a hundredfold improvement over pulsed systems and could become increasingly important as autonomous driving demands finer detail.

ToF in Medical Imaging

Time of flight has transformed PET scans (positron emission tomography), a type of imaging used to detect cancer, heart disease, and brain disorders. In a PET scan, a radioactive tracer injected into the body emits pairs of photons that fly off in opposite directions. Detectors surrounding the patient pick up both photons and use the time difference between their arrivals to estimate where the emission originated.

Without ToF information, the scanner only knows the emission happened somewhere along the line between two detectors. With ToF, it narrows the location considerably. The spatial uncertainty depends on timing resolution: halving the timing error halves the positional uncertainty. Assessments by nuclear medicine physicians found that ToF-enhanced PET images showed better definition of small lesions, improved uniformity across the image, and noticeably lower noise compared to non-ToF reconstructions. The images also converge faster during processing, meaning shorter computation times for the same quality.
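The halving relationship follows from the same formula as before, applied along the line between the two detectors: Δx = c·Δt / 2. A sketch with illustrative numbers (the 400 ps and 200 ps figures are plausible timing resolutions for ToF-PET hardware, not specifications of any particular scanner):

```python
C = 299_792_458.0  # speed of light, m/s

def pet_position_uncertainty(timing_resolution_s: float) -> float:
    """Spatial uncertainty along the line of response between two detectors."""
    return C * timing_resolution_s / 2

# 400 ps timing resolution localizes an emission to within ~6 cm;
# improving to 200 ps halves that to ~3 cm.
print(pet_position_uncertainty(400e-12))  # ≈ 0.060 m
print(pet_position_uncertainty(200e-12))  # ≈ 0.030 m
```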

The practical benefit for patients is that ToF-PET can detect smaller tumors more reliably and reduce the kinds of image artifacts that might lead to ambiguous results.

Industrial and Warehouse Automation

ToF cameras have become a workhorse in logistics and manufacturing. In automated warehouses, they provide real-time 3D mapping of storage areas so that robots and automated guided vehicles (AGVs) can navigate aisles, identify open shelf spaces, and retrieve specific items by recognizing their positions and height profiles.

For pick-and-place operations on assembly lines, ToF sensors let robotic arms detect an object’s exact location, orientation, and distance before grasping it. In logistics, they measure package dimensions on conveyor belts to optimize how items are stacked on pallets or packed into shipping containers. AGVs equipped with ToF cameras can navigate through busy warehouse floors, avoiding both stationary shelves and moving workers or forklifts by continuously building and updating a 3D map of their surroundings.

What Limits ToF Accuracy

ToF sensors aren’t perfect, and certain conditions cause measurement errors. The biggest challenge for indirect ToF systems is multipath interference, where the emitted light bounces off multiple surfaces before reaching the detector. This creates “ghost” signals that distort the depth reading. There are four main sources: reflections off transparent objects like glass, light bouncing between nearby surfaces, scattering beneath translucent materials like skin or wax, and volumetric scattering from fog, smoke, or dust in the air.

Surface reflections tend to cause the largest errors, especially when two reflected signals arrive at nearly the same time. The sensor can’t easily separate them, producing inaccurate depth data at those points. Bright sunlight also floods the detector with ambient photons, reducing the signal-to-noise ratio and making outdoor use more difficult for indirect ToF systems. This is one reason pulsed direct ToF dominates outdoor applications like automotive LiDAR, while indirect ToF sensors excel indoors.

How ToF Sensor Technology Is Improving

The biggest leap in ToF performance comes from a type of detector called a single-photon avalanche diode, or SPAD. These sensors can detect individual photons with timing precision in the low picoseconds, which translates to sub-millimeter depth accuracy. SPAD arrays are already used in LiDAR, medical PET scanners, and scientific imaging. Advances in 3D chip stacking and new semiconductor materials are making SPAD sensors more sensitive in near-infrared wavelengths, which is exactly where most ToF systems operate. Combined with computational imaging techniques that use algorithms to extract more information from fewer photons, these detectors are pushing ToF systems toward higher resolution, longer range, and better performance in difficult lighting conditions.