How Is Lidar Data Collected From Air, Land, and Water

Lidar collects data by firing rapid laser pulses at a target and measuring how long each pulse takes to bounce back. A sensor fires anywhere from 100,000 to 2,000,000 pulses per second, and each returning pulse becomes a precise 3D point. Millions of these points together form a detailed “point cloud” that maps the shape of terrain, buildings, vegetation, or any other surface the laser hits.

How a Single Measurement Works

Every lidar measurement starts with a laser pulse leaving the sensor, striking a surface, and reflecting back to a detector. The sensor records the round-trip travel time, multiplies it by the speed of light, and divides by two (since the light traveled out and back) to calculate the distance to that surface. This is called the time-of-flight method, and it forms the foundation of most lidar systems.
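The time-of-flight calculation can be sketched in a few lines. This is a minimal illustration, not any vendor's firmware; the function name and the example travel time are made up for demonstration.

```python
# Minimal sketch of time-of-flight ranging: distance = (c * t) / 2.
C = 299_792_458.0  # speed of light, m/s (vacuum value; air is ~0.03% slower)

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a round-trip pulse travel time (seconds) to one-way distance (meters)."""
    return C * round_trip_s / 2.0

# A pulse returning after ~6.67 microseconds traveled to a target ~1 km away.
print(tof_distance_m(6.67e-6))  # ~999.8 m
```

The halving is the key step: the timer measures the out-and-back journey, but the coordinate of interest is the one-way range.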

A second approach, used when higher precision is needed at close range, replaces individual pulses with a continuous laser beam. Instead of timing a round trip, the sensor measures how much the returning light wave has shifted out of sync with the outgoing wave. This phase-shift method can deliver finer accuracy because subtle shifts in a wave are easier to measure precisely than tiny differences in travel time. Industrial scanning and some terrestrial systems rely on this technique.
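The phase-shift geometry can be shown with a short sketch. The 10 MHz modulation frequency and the function name here are illustrative assumptions; real phase-shift scanners use multiple modulation frequencies to resolve the whole-cycle ambiguity that `n_cycles` stands in for.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_shift_distance_m(phase_rad: float, mod_freq_hz: float, n_cycles: int = 0) -> float:
    """Distance from the measured phase shift of a continuously modulated beam.
    The measurement is ambiguous beyond half the modulation wavelength;
    n_cycles supplies the number of whole half-wavelengths already traveled."""
    half_wavelength_m = C / (2.0 * mod_freq_hz)
    return (n_cycles + phase_rad / (2.0 * math.pi)) * half_wavelength_m

# A half-cycle shift (pi radians) at 10 MHz corresponds to ~7.49 m.
print(phase_shift_distance_m(math.pi, 10e6))
```

Because the detector resolves a tiny fraction of a wave cycle more easily than a tiny fraction of a nanosecond, this approach wins on close-range precision, at the cost of limited unambiguous range.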

The Four Components Every System Needs

Regardless of whether lidar is mounted on an airplane, a car, or a tripod, the collection system combines four core pieces of hardware working together.

  • Laser source: Fires the pulses. Topographic systems typically use an infrared wavelength (1064 nm), while systems designed to penetrate water use a green wavelength (532 nm).
  • Scanner and optics: A rotating mirror or similar mechanism steers the laser beam across the scene in a systematic pattern, ensuring full coverage rather than a single line of points.
  • GNSS receiver: A satellite positioning system (GPS or its equivalents) records the sensor’s location in three dimensions. This provides a steady, low-frequency position fix that anchors every measurement to real-world coordinates.
  • Inertial measurement unit (IMU): Records the sensor’s tilt, rotation, and acceleration many times per second. While GNSS tracks the sensor’s broad position, the IMU captures the rapid, high-frequency movements that happen between satellite fixes, like the vibration of an aircraft wing or a car hitting a pothole.

By fusing GNSS and IMU data together, the system knows exactly where the sensor was and which direction it was pointing at the instant each pulse left the laser. Without that pairing, the 3D coordinates of every point would drift.
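The georeferencing step can be sketched as a rotation plus a translation: rotate the laser vector from the sensor's frame (using the IMU attitude) into world axes, then add the GNSS position. This is a simplified illustration with an assumed yaw-pitch-roll rotation order; production systems also apply boresight and lever-arm calibrations that are omitted here.

```python
import math

def georeference(sensor_xyz, yaw_rad, pitch_rad, roll_rad, beam_xyz):
    """World coordinates of a lidar return: rotate the sensor-frame laser
    vector by the IMU attitude, then add the GNSS position.
    Rotation order Rz(yaw) @ Ry(pitch) @ Rx(roll) is a simplifying assumption."""
    cy, sy = math.cos(yaw_rad), math.sin(yaw_rad)
    cp, sp = math.cos(pitch_rad), math.sin(pitch_rad)
    cr, sr = math.cos(roll_rad), math.sin(roll_rad)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    x, y, z = beam_xyz
    rotated = [R[i][0] * x + R[i][1] * y + R[i][2] * z for i in range(3)]
    return tuple(s + r for s, r in zip(sensor_xyz, rotated))

# Level sensor at (100, 200, 500) firing 50 m straight down lands a point at z = 450.
print(georeference((100.0, 200.0, 500.0), 0.0, 0.0, 0.0, (0.0, 0.0, -50.0)))
```

A small attitude error rotates the entire beam vector, which is why even brief IMU drift between satellite fixes would smear every point it touches.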

Airborne Collection

The most common way to map large areas is from a fixed-wing aircraft or helicopter. The lidar sensor is mounted in the belly of the plane, firing pulses downward while the aircraft flies a series of parallel lines over the survey area, much like mowing a lawn. The U.S. Geological Survey recommends a minimum 10% overlap (sidelap) between adjacent flight lines, and many project specifications call for 25% or more to ensure no gaps in coverage. Overlap also gives data processors a way to check accuracy by comparing how well the edges of neighboring strips agree.

Point density, the number of measurements per square meter, depends on the combination of flying height, aircraft speed, and pulse rate. A sensor operating at 100 kHz (100,000 pulses per second) from a lower altitude will produce a denser point cloud than the same sensor flying faster and higher. Modern airborne sensors from manufacturers like Riegl and Optech operate at effective rates up to roughly 1,300 kHz, allowing surveys to capture dozens of points per square meter even from typical flight altitudes.
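The relationship between pulse rate, speed, and altitude can be sketched as a back-of-the-envelope calculation: pulses fired per second spread over the ground area swept per second. The scan half-angle and the specific numbers are illustrative assumptions, and the formula ignores sidelap and multiple returns per pulse.

```python
import math

def point_density_per_m2(pulse_rate_hz, speed_m_s, altitude_m, scan_half_angle_deg):
    """Rough single-pass point density for an oscillating airborne scanner:
    swath width = 2 * altitude * tan(half-angle); area swept per second =
    speed * swath. Ignores flight-line overlap and multi-return pulses."""
    swath_m = 2.0 * altitude_m * math.tan(math.radians(scan_half_angle_deg))
    return pulse_rate_hz / (speed_m_s * swath_m)

# 100 kHz sensor at 60 m/s, 1,000 m altitude, +/-20 degree scan: ~2.3 pts/m^2.
print(point_density_per_m2(100_000, 60.0, 1000.0, 20.0))
```

Halving the altitude halves the swath width and roughly doubles the density, which matches the intuition in the text: lower and slower means denser.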

An important detail for terrain mapping: point density is not uniform across hilly ground. The highest terrain features sit closer to the aircraft, so they receive denser coverage, while valleys farther from the sensor receive slightly fewer points per square meter.

Mobile and Vehicle-Mounted Collection

For mapping roads, highways, rail corridors, and urban streetscapes, lidar sensors are mounted on cars, trucks, or trains. The collection principle is the same as airborne, but the positioning challenge is different. Tall buildings, tunnels, and overpasses block satellite signals, so mobile systems add an odometry sensor (essentially a wheel-based distance tracker) to fill in when GNSS drops out.

Specialized algorithms then correct for the distortion caused by the vehicle’s movement during each scan rotation. Without this motion compensation, straight walls would appear warped and curbs would zigzag. The algorithms fuse data from the GNSS receiver, the IMU, and the odometer into a single, smooth trajectory that keeps every point cloud measurement in the right place.
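The core of motion compensation is a trajectory lookup: for each return's timestamp, interpolate where the vehicle was at that instant and shift the point accordingly. The sketch below uses 2D linear interpolation for clarity; real systems interpolate full six-degree-of-freedom poses from the fused GNSS/IMU/odometer trajectory, and all names here are illustrative.

```python
def interpolate_position(traj, t):
    """Linearly interpolate a (time, x, y) trajectory to a pulse timestamp.
    Real pipelines interpolate position AND orientation (6-DOF poses)."""
    for (t0, x0, y0), (t1, x1, y1) in zip(traj, traj[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    raise ValueError("timestamp outside trajectory")

def motion_compensate(points, traj):
    """Shift each (t, local_x, local_y) scan point by the vehicle position at
    its exact timestamp, so a scan rotation isn't smeared by vehicle motion."""
    return [(lx + px, ly + py)
            for (t, lx, ly) in points
            for (px, py) in [interpolate_position(traj, t)]]

# Vehicle moves 10 m in 1 s; a point scanned at t=0.5 is placed 5 m along track.
traj = [(0.0, 0.0, 0.0), (1.0, 10.0, 0.0)]
print(motion_compensate([(0.5, 0.0, 2.0)], traj))  # [(5.0, 2.0)]
```

Without this per-timestamp correction, every point in a scan rotation would be plotted as if the vehicle were frozen at one position, producing the warped walls and zigzag curbs described above.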

Bathymetric Collection for Underwater Surfaces

Standard infrared lidar cannot penetrate water because the wavelength is absorbed almost immediately at the surface. Bathymetric lidar systems solve this by adding a green laser at 532 nm, a wavelength that passes through water and reflects off the bottom. The sensor fires both colors simultaneously: the infrared pulse bounces off the water surface while the green pulse continues down and returns from the lakebed, riverbed, or seafloor. The time difference between the two returns gives the water depth.
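The depth calculation mirrors ordinary time-of-flight, except the green pulse travels through water, where light moves about 1.33 times slower. The sketch below assumes a nadir-pointing pulse; real bathymetric processing also corrects for the refraction bend of off-nadir beams at the water surface, which is omitted here.

```python
C = 299_792_458.0          # speed of light in air/vacuum, m/s
WATER_REFRACTIVE_INDEX = 1.33  # light travels ~1.33x slower in water

def water_depth_m(delta_t_s: float) -> float:
    """Depth from the time gap between the infrared surface return and the
    green bottom return, assuming a straight-down (nadir) pulse."""
    c_water = C / WATER_REFRACTIVE_INDEX
    return c_water * delta_t_s / 2.0

# A ~89-nanosecond gap between the two returns indicates ~10 m of water.
print(water_depth_m(8.87e-8))
```

Note that using the in-air speed of light here would overstate the depth by a third, which is why the refractive index correction is not optional.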

Penetration depth depends on water clarity, measured by a standard called Secchi depth (how deep you can see a white disc before it disappears). Most bathymetric systems can map the bottom down to 1.5 to 3 times the Secchi depth. In very clear coastal water with a Secchi depth of 10 meters, a high-end system could reach 25 to 30 meters. In turbid rivers, effective depth might be only a few meters.

Discrete Return vs. Full Waveform Recording

When a lidar pulse hits a tree canopy, parts of the pulse bounce back from the treetop, parts from mid-canopy branches, and parts from the ground below. How the sensor records those multiple reflections matters.

Most commercial systems use discrete return recording, which picks out the strongest peaks in the returning signal and logs each one as a separate point. A single outgoing pulse typically generates one to five return points this way. There is a trade-off, though: after detecting one return, the sensor has a brief “blind zone” of roughly 1.2 to 5.0 meters in which it cannot detect another surface. That means closely spaced layers, like dense understory beneath a canopy, can be missed.

Full waveform systems take a different approach, digitizing the entire shape of the returning signal at very fine time intervals (1 to 5 nanoseconds). Instead of isolated points, you get a continuous profile of reflected energy. This captures more detail about the vertical structure of whatever the pulse passed through, which is particularly useful for measuring forest composition and detecting deadwood. The downside is significantly larger data files and more complex processing. For most general-purpose mapping, discrete return collection is sufficient and far more common.
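The discrete-return trade-off can be demonstrated with a toy peak picker. This is a simplified model, not a real sensor's detection logic: the waveform values, threshold, and fixed dead zone are all illustrative assumptions, but they reproduce the blind-zone behavior described above.

```python
def discrete_returns(waveform, threshold, dead_zone_samples):
    """Pick local peaks above a threshold from a digitized return waveform,
    skipping a fixed dead zone after each detection - a simplified model of
    discrete-return recording."""
    returns = []
    i = 1
    while i < len(waveform) - 1:
        if (waveform[i] >= threshold
                and waveform[i] >= waveform[i - 1]
                and waveform[i] >= waveform[i + 1]):
            returns.append(i)
            i += dead_zone_samples  # sensor is "blind" for a short interval
        else:
            i += 1
    return returns

# Three surfaces: the middle peak (index 4) falls inside the dead zone
# that follows the first detection, so only two returns are logged.
wf = [0, 1, 5, 1, 4, 1, 0, 0, 6, 0]
print(discrete_returns(wf, threshold=3, dead_zone_samples=4))  # [2, 8]
```

A full waveform system would store all ten samples of `wf`, so the missed middle peak would remain recoverable in post-processing, at the cost of keeping the entire digitized signal.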

Weather and Environmental Limits

Lidar is an active sensor, meaning it supplies its own light source rather than relying on the sun, so it works at night. But it is not immune to weather. Fog, rain, and snow all scatter or absorb the laser pulses before they reach the target.

Fog is the most disruptive condition. In testing, some sensors lost the ability to detect targets entirely in thick fog, while others saw detection rates drop sharply once visibility fell below about 2,000 meters. Rain causes a more gradual decline: at moderate rainfall (around 50 mm per hour), some sensors lose detection of distant targets, though closer objects remain visible. Snow creates a dual problem, both scattering pulses in the air and physically building up on the sensor housing, which blocks the beam entirely.

Direct sunlight can also interfere. When the sun falls within the sensor’s field of view, the intense light can exceed the detector’s threshold and create false points in the data. In automotive lidar testing, one sensor began generating false detections at illuminance levels around 20,000 lux, roughly the brightness of full daylight with the sun low on the horizon. For airborne surveys, flights are typically planned to avoid extreme sun angles for this reason.

Accuracy Standards for Collected Data

The quality of lidar data is measured by how closely the collected points match reality. The American Society for Photogrammetry and Remote Sensing (ASPRS) sets the industry benchmarks, expressing accuracy as root mean square error (RMSE): the square root of the average of the squared positioning errors, which roughly corresponds to the typical size of the errors across a dataset.

Horizontal accuracy classes are defined by the project’s needs. A high-precision urban mapping project might require an RMSE of 7.5 cm or better, meaning the collected points must land within about 3 inches of their true horizontal position on average. Vertical accuracy is assessed separately for open ground and vegetated areas. On bare or paved surfaces, the data must meet a strict pass/fail threshold. Under tree canopy, vertical accuracy is tested and reported but does not carry the same hard cutoff, because vegetation introduces inherent variability in where pulses actually reach the ground.
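The RMSE calculation itself is straightforward. The sketch below works on one axis at a time (the checkpoint values are made up for illustration); in practice it is computed per axis against surveyed ground checkpoints, and horizontal RMSE combines the X and Y components.

```python
import math

def rmse(measured, truth):
    """Root mean square error between collected coordinates and surveyed
    checkpoint coordinates, for a single axis (apply per axis for X/Y/Z)."""
    errors = [m - t for m, t in zip(measured, truth)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Points landing within a few centimeters of truth yield an RMSE of ~0.039 m,
# comfortably inside a 7.5 cm requirement.
print(rmse([10.05, 19.96, 30.02], [10.00, 20.00, 30.00]))
```

Because the errors are squared before averaging, a few large outliers raise the RMSE disproportionately, which is exactly the behavior an accuracy standard wants: occasional badly placed points cannot hide behind many good ones.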