What Is a Time-of-Flight Sensor and How Does It Work?

A time-of-flight (ToF) sensor measures distance by sending out a pulse of light and timing how long it takes to bounce back. The basic principle is simple: since light travels at a known, constant speed, the round-trip time reveals exactly how far away an object is. These sensors show up in everything from the face-unlock camera on your phone to autonomous robots navigating warehouse floors.

How a ToF Sensor Measures Distance

A ToF sensor has two core parts: a light source and a detector. The light source is typically a laser or LED that emits near-infrared light, usually at wavelengths around 850 or 940 nanometers, which is invisible to the human eye. The sensor fires this light toward a scene; the light reflects off surfaces and objects, and the returning photons hit the detector array. The sensor then calculates distance based on the delay between emission and return.
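The core arithmetic is just the speed of light times half the round-trip time. A minimal sketch (the function name is ours, not from any sensor API):

```python
# Speed of light in meters per second
C = 299_792_458

def distance_from_round_trip(t_seconds: float) -> float:
    """Convert a measured round-trip time into a one-way distance.

    The light travels out to the object and back, so the one-way
    distance is half the total path the photons covered.
    """
    return C * t_seconds / 2

# A 10-nanosecond round trip corresponds to roughly 1.5 meters.
print(f"{distance_from_round_trip(10e-9):.3f} m")  # → 1.499 m
```

The numbers also show why the timing problem is hard: resolving distance to a centimeter means resolving time to tens of picoseconds.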

There are two main approaches to measuring that delay. The pulsed method fires a short burst of light and directly measures how long the photons take to come back. The continuous-wave method takes a slightly different route: instead of timing individual pulses, it sends out a continuously modulated light signal and measures the phase shift between the outgoing and returning waves. That phase difference translates directly into distance. Both methods rely on the same underlying physics, but continuous-wave sensors tend to be more common in consumer devices because they’re easier to manufacture at small scales.
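For the continuous-wave method, the phase shift maps to distance through the modulation frequency, which also sets a maximum unambiguous range before the phase wraps around. A sketch of that relationship (the 20 MHz figure is an illustrative choice, not tied to a specific sensor):

```python
import math

C = 299_792_458  # speed of light, m/s

def distance_from_phase(phase_shift_rad: float, f_mod_hz: float) -> float:
    """Continuous-wave ToF: convert a measured phase shift to distance.

    A full 2*pi of phase corresponds to one modulation wavelength of
    round-trip travel, so distance = c * phase / (4 * pi * f_mod).
    """
    return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz: float) -> float:
    """Beyond this distance the phase wraps and distances alias."""
    return C / (2 * f_mod_hz)

# With a 20 MHz modulation frequency:
print(unambiguous_range(20e6))              # ≈ 7.49 m max unambiguous range
print(distance_from_phase(math.pi, 20e6))   # half a cycle ≈ 3.75 m
```

This is why CW sensors often modulate at two frequencies: the combination extends the unambiguous range without sacrificing resolution.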

Direct vs. Indirect ToF

ToF sensors split into two families: direct (dToF) and indirect (iToF). Direct ToF sensors measure the actual flight time of individual photons. They use extremely sensitive detectors called single-photon avalanche diodes, which can register a single photon hitting the sensor. The sensor fires thousands of light pulses, logs when each returning photon arrives, and builds a histogram of arrival times. The peak of that histogram reveals the most likely distance to the object. This is the same basic approach used in LiDAR systems on self-driving cars.
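The histogram step described above can be sketched in a few lines. This is a toy model of the dToF pipeline, not a real SPAD driver: we quantize simulated photon arrival times into bins, take the most populated bin as the true round-trip time, and convert to distance.

```python
import random
from collections import Counter

def dtof_distance(arrival_times_ns, bin_width_ns=0.5):
    """Direct ToF: histogram photon arrival times, take the peak bin.

    Each arrival time is quantized into a time bin; the most populated
    bin gives the most likely round-trip time, which converts to
    distance at half the speed of light.
    """
    c_m_per_ns = 0.299792458  # speed of light in meters per nanosecond
    bins = Counter(round(t / bin_width_ns) for t in arrival_times_ns)
    peak_bin, _ = bins.most_common(1)[0]
    return c_m_per_ns * (peak_bin * bin_width_ns) / 2

# Simulated arrivals: a true return near 20 ns plus ambient-light noise
# spread uniformly across the measurement window.
random.seed(42)
signal = [random.gauss(20.0, 0.2) for _ in range(2000)]
noise = [random.uniform(0, 100) for _ in range(500)]
print(f"{dtof_distance(signal + noise):.2f} m")  # peak at ~20 ns → ~3 m
```

Note how the histogram makes the method robust: the noise photons scatter thinly across many bins while the real return piles up in one.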

Indirect ToF sensors skip the direct timing and instead calculate distance from the phase difference between emitted and received light waves. They come in two subtypes: continuous-wave, which works with sinusoidal signals, and pulsed, which uses square waves. Indirect sensors are generally simpler and cheaper to build, which is why they dominate in consumer electronics. Direct sensors offer better performance at longer ranges, making them the go-to for automotive and industrial applications where precision at distance matters.
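One common iToF demodulation scheme samples the returning wave at four points per modulation cycle (0°, 90°, 180°, 270°) and recovers the phase with a single arctangent. A sketch under that assumption; sign conventions vary between sensors, and the synthetic samples below stand in for real pixel charge readings:

```python
import math

C = 299_792_458  # speed of light, m/s

def itof_distance(a0, a1, a2, a3, f_mod_hz):
    """Indirect ToF four-sample demodulation.

    a0..a3 are pixel readings integrated at 0, 90, 180 and 270 degrees
    of the modulation period. Their differences isolate the sine and
    cosine of the return phase, which converts to distance.
    """
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod_hz)

# Synthesize the four samples for a target 2.0 m away at 20 MHz.
f = 20e6
true_phase = 4 * math.pi * f * 2.0 / C
samples = [math.cos(true_phase - k * math.pi / 2) for k in range(4)]
print(f"{itof_distance(*samples, f):.3f} m")  # → 2.000 m
```

The differencing (a1 - a3, a0 - a2) has a side benefit: any constant ambient-light offset present in all four samples cancels out.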

What’s Inside the Sensor

The light source in most modern ToF sensors is a vertical-cavity surface-emitting laser (VCSEL). These tiny lasers are the same technology phones use for biometric face unlock. Older and lower-cost systems sometimes use infrared LEDs instead, but VCSELs produce a more focused, uniform beam that improves accuracy. The light passes through a diffuser that spreads it to match the camera’s field of view, illuminating the entire scene at once rather than scanning point by point.

On the detection side, a bandpass filter sits in front of the sensor to block as much ambient sunlight as possible, improving the signal-to-noise ratio. The filtered light hits the ToF sensor array, where each pixel independently measures the phase or timing of returning photons. Software then decodes these signals and converts them into a 3D point cloud, a map of distances for every pixel in the frame. High-end ToF cameras can perform up to nine million distance measurements per second with millimeter-level accuracy.
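The depth-map-to-point-cloud step is a standard pinhole back-projection. A minimal sketch; the intrinsics (fx, fy, cx, cy) would come from the sensor's factory calibration, and the values below are illustrative:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into 3D points using a
    pinhole camera model.

    depth is a 2D grid of distances in meters; fx/fy are focal lengths
    in pixels and cx/cy the principal point. Pixels with depth <= 0
    are treated as invalid measurements and skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A toy 2x2 depth frame (meters) with one invalid pixel.
cloud = depth_to_point_cloud([[1.0, 1.0], [0.0, 2.0]],
                             fx=500, fy=500, cx=0.5, cy=0.5)
print(len(cloud))  # → 3 valid points
```

Real pipelines vectorize this over the whole frame, but the geometry per pixel is exactly this.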

Common Uses in Phones and Consumer Devices

If you’ve used face recognition to unlock a smartphone, you’ve used a ToF sensor. The sensor maps the 3D contours of your face, creating a depth profile that’s far harder to fool than a flat 2D photo. Beyond security, ToF sensors improve portrait photography by accurately measuring the distance between you and the background, letting the camera software create a natural-looking blur (bokeh) effect without guessing where edges are.

Augmented reality is another major use. When you place a virtual couch in your living room using an AR app, a ToF sensor is scanning the floor, walls, and furniture to understand the room’s geometry. This lets virtual objects sit convincingly on real surfaces, cast shadows in the right direction, and stay anchored as you move your phone. ToF sensors also enable space scanning, where you can walk through a room and generate a rough 3D model of it in minutes.

Industrial and Automotive Applications

In warehouses and logistics centers, ToF sensors automate volume measurement. A sensor mounted above a conveyor belt can measure the dimensions of packages as they pass underneath, replacing manual measurement and reducing errors in shipping cost calculations. One research system combined a simple overhead camera with a ToF sensor to measure object volumes for under $110 in hardware, making it practical for mass deployment at delivery stations and storage facilities.
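The overhead-volume idea reduces to integrating per-pixel heights above the belt. A simplified sketch that assumes a downward-facing sensor over a flat belt; the pixel footprint and mounting height below are illustrative, and a real system would derive them from the sensor's field of view and calibration:

```python
def box_volume(depth, belt_height_m, pixel_area_at_belt_m2):
    """Estimate package volume from an overhead depth frame.

    For each pixel, object height is the sensor-to-belt distance minus
    the measured distance; summing height * pixel footprint integrates
    the volume. Heights under 1 cm are treated as belt-surface noise.
    """
    volume = 0.0
    for row in depth:
        for d in row:
            h = belt_height_m - d
            if h > 0.01:
                volume += h * pixel_area_at_belt_m2
    return volume

# 3x3 frame: sensor 2.0 m above the belt, a 0.5 m-tall box under four
# pixels, each pixel covering 0.01 m^2 of belt.
frame = [[2.0, 2.0, 2.0],
         [2.0, 1.5, 1.5],
         [2.0, 1.5, 1.5]]
print(box_volume(frame, 2.0, 0.01))  # 4 px * 0.5 m * 0.01 m^2 = 0.02 m^3
```

Tilted boxes and perspective distortion complicate the real problem, but the core computation stays this simple.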

The automotive industry uses ToF sensors for both interior and exterior functions. Inside the cabin, they power driver monitoring systems that track head position, eye gaze, and whether the driver is paying attention. Outside, ToF-based LiDAR systems help autonomous vehicles build real-time 3D maps of their surroundings. Robotics and drone navigation rely on ToF sensors for obstacle detection and spatial awareness, particularly in GPS-denied environments like indoor spaces.

How ToF Compares to Other 3D Sensing Methods

ToF isn’t the only way to capture depth. Stereo vision uses two cameras (like human eyes) and calculates distance from the slight difference between their views. Structured light projects a known pattern onto a scene and measures how the pattern distorts. Each approach has tradeoffs.

ToF sensors excel at range. They work over longer distances and larger measurement areas than either stereo vision or structured light. They also perform well in low light, since they bring their own illumination and don’t rely on surface texture, contrast, or visible features. Structured light shares that low-light advantage but struggles in bright sunlight because the projected pattern gets washed out. Stereo vision handles bright conditions better since it relies on ambient light rather than fighting against it.

Where ToF sensors have a clear edge is speed and simplicity. They capture an entire depth frame in a single shot, making them compact, less computationally demanding, and less expensive than structured light systems that need to project and process multiple sequential patterns. Structured light generates a high computing load and fails with moving objects because it stitches together several images taken in sequence. ToF handles motion reasonably well, though not perfectly.

Limitations and Challenges

ToF sensors aren’t perfect. One of the biggest challenges is multipath interference. When light bounces off multiple surfaces before reaching the detector (say, off a wall, then a floor, then back to the sensor), the extra travel time produces distance errors. This is especially problematic in corners, concave objects, and rooms with highly reflective surfaces. Correcting for multipath effects computationally is expensive: some simulation methods take around two hours to process a single depth image.
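The bias multipath introduces is easy to see in a toy continuous-wave model: a pixel receives the sum of the direct return and the bounced one, and the phase of that mixture lands somewhere between the two true phases. A sketch, with made-up amplitudes and path lengths:

```python
import math

C = 299_792_458  # speed of light, m/s

def cw_pixel_phase(paths, f_mod_hz):
    """Sum several returns as phasors (amplitude, one-way distance) and
    return the phase the pixel actually measures.

    Illustrative model: a real pixel integrates this mixture and has no
    way to separate the individual paths.
    """
    re = sum(a * math.cos(4 * math.pi * f_mod_hz * d / C) for a, d in paths)
    im = sum(a * math.sin(4 * math.pi * f_mod_hz * d / C) for a, d in paths)
    return math.atan2(im, re)

def phase_to_distance(phase, f_mod_hz):
    return C * (phase % (2 * math.pi)) / (4 * math.pi * f_mod_hz)

f = 20e6
direct_only = cw_pixel_phase([(1.0, 2.0)], f)
with_bounce = cw_pixel_phase([(1.0, 2.0), (0.4, 3.5)], f)  # wall bounce
print(f"{phase_to_distance(direct_only, f):.3f} m")  # → 2.000 m
print(f"{phase_to_distance(with_bounce, f):.3f} m")  # biased long, toward 3.5
```

The stronger and longer the secondary path, the worse the bias, which is why concave corners (where a large fraction of the light arrives indirectly) are the worst case.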

Ambient light is another issue. Although ToF sensors use bandpass filters to block unwanted wavelengths, intense sunlight in the same near-infrared range can still overwhelm the sensor and degrade accuracy. This makes outdoor performance in direct sunlight harder than indoor use. Highly reflective or very dark surfaces also cause trouble, since they return either too much or too little light for the sensor to measure accurately.

Range and precision trade off against each other depending on the sensor class. Consumer-grade ToF sensors in phones typically work well within a few meters. Industrial sensors designed for high precision can achieve accuracy down to about 27 micrometers (roughly a quarter the width of a human hair), but only at very short measurement distances of 20 to 30 millimeters. Extending that precision to longer ranges remains an active engineering challenge. Standard ToF range cameras with pulse modulation deliver precision in the range of a few millimeters, which is excellent for navigation and AR but not sufficient for tasks like precision machining or coordinate measurement.

Where ToF Technology Is Headed

ToF sensors are spreading into new territory. The integration of AI with ToF imaging is improving how raw depth data gets processed, reducing noise and correcting errors that would have required expensive hardware upgrades in the past. Autonomous robots and drones increasingly rely on ToF for navigation, and IoT-connected depth-sensing platforms are bringing 3D awareness to smart infrastructure like building management systems and retail analytics. The healthcare and aerospace industries are also adopting ToF for applications ranging from 3D body scanning to spatial mapping in environments where GPS isn’t available.