LiDAR, which stands for Light Detection and Ranging, is a remote sensing technology that uses laser light to measure distances and build detailed three-dimensional maps of the surrounding environment. It works by firing rapid pulses of laser light at a surface and measuring how long each pulse takes to bounce back. That simple principle, repeated millions of times per second, produces extraordinarily detailed 3D models of terrain, buildings, forests, ocean floors, and even fast-moving objects like other cars on a highway.
The Core Principle: Time of Flight
A LiDAR unit fires a short burst of laser light toward a target. That light travels at roughly 300,000 kilometers per second, hits a surface, and reflects back to a sensor on the unit. The system records exactly how long the round trip took, then calculates distance with a straightforward formula: distance equals the speed of light multiplied by the travel time, divided by two (d = c · t / 2). You divide by two because the light had to travel out and back.
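That calculation is simple enough to sketch in a few lines of Python (the 667-nanosecond round trip below is an illustrative value, not a figure from any particular sensor):

```python
# Time-of-flight distance: the pulse travels out and back, so the
# one-way distance is half the round trip, d = (c * t) / 2.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance in meters from a round-trip travel time in seconds."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 667 nanoseconds traveled ~100 m each way.
print(tof_distance(667e-9))
```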
This measurement happens incredibly fast. Modern systems fire hundreds of thousands to millions of laser pulses per second, each one returning a single distance measurement. Every return is logged as a “point” with a precise X, Y, and Z coordinate. Stack enough of those points together and you get what’s called a point cloud: a dense, three-dimensional representation of whatever the laser scanned. Point clouds can range from around 7 to 8 points per square meter for basic surveys up to 700 or more points per square meter for close-range, high-detail scans.
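As a rough back-of-envelope, point density for an airborne scan can be estimated by dividing the pulse rate by the ground area swept per second. The function and numbers below are illustrative assumptions, not the specs of a real sensor:

```python
# Rough point-density estimate: if pulses are spread evenly across the
# swath, density is pulses fired per second divided by the ground area
# swept per second. All inputs here are assumed example values.

def points_per_sq_meter(pulse_rate_hz: float,
                        ground_speed_m_s: float,
                        swath_width_m: float) -> float:
    area_per_second = ground_speed_m_s * swath_width_m  # m^2 swept each second
    return pulse_rate_hz / area_per_second

# 300 kHz pulse rate, a 60 m/s aircraft, and a 600 m swath give roughly
# 8 points per square meter, in the "basic survey" range from the text.
print(points_per_sq_meter(300_000, 60.0, 600.0))
```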
What’s Inside a LiDAR System
A typical LiDAR setup has three core components working together. The laser itself fires the light pulses. A GPS receiver tracks the exact X, Y, and Z position of the sensor at the moment each pulse is emitted. And an Inertial Measurement Unit (IMU) records the orientation of the platform, tracking its roll, pitch, and yaw. This matters because if the system is mounted on an airplane banking into a turn, the sensor needs to know it's tilted 15 degrees to the left so it can correct the geometry of every measurement taken during the turn.
Without the GPS and IMU working in sync, the millions of distance measurements would have no spatial context. You’d know how far away something was, but not where in the world that “something” actually sits. All three components log data simultaneously, and post-processing software stitches it all together into an accurate 3D model.
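The roll-correction idea can be sketched as follows. This is a deliberately simplified model that handles only the roll axis for a straight-down beam; a real workflow applies a full roll-pitch-yaw rotation plus lever-arm and boresight calibrations in post-processing software:

```python
import math

# Georeferencing sketch: combine one range measurement with the GPS
# position and IMU attitude recorded at the instant of the pulse.
# Simplified to a single rotation about the roll axis.

def ground_point(sensor_xyz, range_m, roll_deg):
    """Project a nominally nadir-pointing return into world coordinates,
    correcting for the platform's roll."""
    r = math.radians(roll_deg)
    # In the sensor frame the beam points straight down: (0, 0, -range).
    # Rolling the platform tilts that beam sideways.
    dy = -range_m * math.sin(r)
    dz = -range_m * math.cos(r)
    x, y, z = sensor_xyz
    return (x, y + dy, z + dz)

# Aircraft at 1,000 m altitude, banked 15 degrees, slant range ~1,035 m.
print(ground_point((0.0, 0.0, 1000.0), 1035.0, 15.0))
```

Plugging in the banking-airplane example from above, the return lands roughly 268 meters to the side of where a naive straight-down assumption would put it, which is exactly why the IMU record is indispensable.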
Pulse-Based vs. Continuous Wave LiDAR
The time-of-flight method described above is the traditional approach: fire a discrete pulse, wait for it to return, measure the delay. But there’s a newer technique called frequency-modulated continuous wave (FMCW) LiDAR that works differently. Instead of sending out individual pulses, FMCW systems emit a continuous beam of light whose frequency gradually shifts over time. When the reflected light returns, the system compares the frequency of the returning signal to the frequency being emitted at that instant. The difference reveals both the distance to the object and its velocity in a single measurement.
Traditional pulse-based systems can also estimate velocity by comparing successive distance readings, but FMCW does it natively and with greater precision. This makes FMCW particularly attractive for applications like autonomous driving, where knowing both where an object is and how fast it’s moving is critical.
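One common way FMCW systems separate the two quantities is a triangular sweep: the up-sweep and down-sweep beat frequencies each mix a range term and a Doppler term, and averaging versus differencing them untangles distance and velocity. The sketch below assumes illustrative sweep parameters (1 GHz bandwidth, 10 µs sweep, a 1,550 nm laser) and a simple sign convention; real designs vary:

```python
# FMCW sketch: recover range and radial velocity from the beat
# frequencies of a triangular chirp. All sweep parameters are
# illustrative assumptions, not a real sensor's specification.

C = 299_792_458.0     # speed of light, m/s
WAVELENGTH = 1550e-9  # laser wavelength, m (assumed)
BANDWIDTH = 1e9       # sweep bandwidth B, Hz (assumed)
SWEEP_TIME = 10e-6    # sweep duration T, s (assumed)

def range_and_velocity(f_beat_up: float, f_beat_down: float):
    """Range (m) and radial velocity (m/s) from up/down beat frequencies."""
    f_range = (f_beat_up + f_beat_down) / 2.0    # range-only component
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # Doppler component
    distance = C * f_range * SWEEP_TIME / (2.0 * BANDWIDTH)
    velocity = WAVELENGTH * f_doppler / 2.0      # positive = closing (convention assumed)
    return distance, velocity

# Forward-model a target at 50 m closing at 20 m/s, then recover it.
f_r = 2 * 50.0 * BANDWIDTH / (C * SWEEP_TIME)  # beat from range alone
f_d = 2 * 20.0 / WAVELENGTH                    # beat from Doppler alone
print(range_and_velocity(f_r - f_d, f_r + f_d))  # → (50.0, 20.0)
```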
Topographic vs. Bathymetric LiDAR
Not all laser wavelengths behave the same way when they hit different surfaces, and LiDAR systems exploit this. Topographic LiDAR, used to map land surfaces, fires near-infrared light at a wavelength of 1,064 nanometers. This wavelength reflects well off terrain, buildings, and vegetation but gets absorbed almost immediately by water.
Bathymetric LiDAR, designed to measure underwater surfaces like riverbeds and shallow coastal floors, uses green light at 532 nanometers. Green light penetrates water with far less absorption than infrared, allowing it to reach the bottom and reflect back. These systems can typically measure depths of up to about three times the Secchi depth, a standard measure of water clarity. Many bathymetric systems actually carry both lasers, using the infrared beam to map the water's surface and the green beam to map the bottom, then combining both datasets.
Mechanical Spinning vs. Solid-State Sensors
The earliest commercial LiDAR units used a mechanical spinning design: a laser and detector mounted on a motor that physically rotates 360 degrees, sweeping the environment with each spin. These systems produce excellent coverage but are bulky, have moving parts that can wear out, and historically came with steep price tags. A 64-beam Velodyne unit, the workhorse of early self-driving car research, cost around $80,000. Current mechanical units from multiple suppliers have come down to the $10,000 to $20,000 range, but that’s still far too expensive for mass-market vehicles.
Solid-state LiDAR eliminates the spinning mechanism entirely. Instead, it steers the laser beam electronically or uses a fixed “flash” approach that illuminates a whole scene at once, similar to how a camera flash works. With no moving parts, solid-state units are smaller, more durable, and dramatically cheaper to manufacture. Several major suppliers, including Hesai, RoboSense, and Luminar, have announced long-term cost targets below $500 per unit. MicroVision has gone further, designing a solid-state automotive sensor intended to reach production pricing below $200. That kind of price drop is what would make LiDAR standard equipment on everyday cars rather than a luxury reserved for robotaxis.
How Self-Driving Cars Use LiDAR
For autonomous vehicles traveling at highway speeds, LiDAR needs to see far and see sharply. At 140 kilometers per hour, a vehicle needs a minimum detection range of 200 meters to have enough time to identify and avoid a stationary obstacle ahead. At that distance, the system requires a horizontal and vertical angular resolution of 0.075 degrees to reliably identify something as large as a parked car. For smaller objects like motorcycles in a rear-view application, the requirement tightens to 0.04 degrees.
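The link between angular resolution and object detection is just arc length: the lateral gap between adjacent beams is roughly the range multiplied by the angle in radians. A quick sketch (the 1.8 m car width is an assumption for illustration):

```python
import math

# Angular resolution → point spacing at range. The gap between
# adjacent beams grows linearly with distance, which is why the
# resolution requirement is quoted together with the 200 m range.

def point_spacing_m(range_m: float, resolution_deg: float) -> float:
    """Approximate lateral distance between adjacent beams at a given range."""
    return range_m * math.radians(resolution_deg)

spacing = point_spacing_m(200.0, 0.075)
print(spacing)        # ~0.26 m between neighboring points at 200 m
print(1.8 / spacing)  # ~7 points across a 1.8 m-wide car (width assumed)
```

At 0.04 degrees the spacing at 200 m shrinks to about 0.14 m, which is what it takes to put enough points on a motorcycle-sized target.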
These numbers explain why automotive LiDAR is one of the most demanding applications. The sensor has to generate a dense, accurate point cloud of everything within 200 meters, update it many times per second, and do it all in rain, fog, dust, and direct sunlight. It’s also why the industry is investing so heavily: the global LiDAR market was valued at roughly $3.1 billion in 2025, with projections reaching $44.5 billion by 2035. The short-range segment, driven by industrial robotics and automotive safety features, is currently the fastest-growing category.
Mapping Hidden Landscapes in Archaeology
One of LiDAR’s most striking applications is in archaeology, where airborne scans can reveal ancient structures hidden beneath dense forest canopy. The key is a process called ground filtering. When a laser pulse hits a forest, some pulses reflect off the treetops, some off mid-level branches, and some penetrate all the way to the ground. Each of these returns is recorded at a different elevation.
Specialized algorithms then classify each point in the cloud as ground, low vegetation, medium vegetation, high vegetation, or clutter. By stripping away everything except ground points, researchers generate what’s called a bare-earth digital terrain model. This effectively “removes” the forest digitally, exposing the shape of the land beneath it. In practice, the filtering is more nuanced than a single pass. Areas with extremely dense vegetation require different filter settings than open areas, so researchers often generate multiple filtered models and combine them. The results have been transformative: entire ancient cities, road networks, and agricultural terraces have been discovered in places like Guatemala, Cambodia, and the Mediterranean, features completely invisible from the ground or from satellite imagery.
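A toy version of the ground-filtering idea bins points into a horizontal grid and keeps only the lowest return per cell. Production classifiers are far more sophisticated (and, as noted above, tuned per terrain type), but lowest-return selection is the intuition most of them build on:

```python
# Toy ground filter: grid the point cloud horizontally and keep the
# lowest return in each cell, on the logic that the deepest return
# under a canopy is most likely the ground.

def bare_earth(points, cell_size=1.0):
    """points: iterable of (x, y, z) tuples. Returns the lowest point per cell."""
    lowest = {}
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in lowest or z < lowest[cell][2]:
            lowest[cell] = (x, y, z)
    return list(lowest.values())

# Canopy return at 18 m, branch return at 9 m, ground return at 1.2 m,
# all over the same spot: only the ground return survives.
cloud = [(0.5, 0.5, 18.0), (0.6, 0.4, 9.0), (0.5, 0.6, 1.2)]
print(bare_earth(cloud))  # → [(0.5, 0.6, 1.2)]
```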
From Forest Floors to Factory Floors
Beyond archaeology and self-driving cars, LiDAR shows up in a wide range of fields. Utility companies use it to survey power line corridors, measuring the distance between sagging cables and nearby trees to prevent wildfire ignitions. Urban planners use airborne LiDAR to create precise 3D city models for flood simulation, solar panel placement, and building code enforcement. Forestry researchers measure canopy height across entire national forests, estimating timber volume and carbon storage without setting foot in the woods. In agriculture, drone-mounted LiDAR maps crop height variations across fields, flagging areas of poor growth for targeted treatment.
What ties all these uses together is the same basic loop: fire a laser pulse, time the return, log the distance, repeat millions of times. The math is simple. The engineering required to do it accurately, quickly, and affordably at scale is what keeps pushing the technology forward.

