What Is a Turbidity Meter? Types, Units, and Uses

A turbidity meter is an instrument that measures how cloudy or hazy a liquid is by shining a beam of light through a sample and detecting how much of that light gets scattered by suspended particles. The cloudier the liquid, the more light scatters, and the higher the reading. These meters are standard equipment in drinking water treatment, food and beverage production, and environmental monitoring, where clarity serves as a quick indicator of contamination or process quality.

How a Turbidity Meter Works

The core principle is simple: particles suspended in a liquid deflect light. A turbidity meter exploits this by directing a focused beam of light into a sample and measuring what happens to it. In the most common design, called a nephelometer, a detector sits at a 90-degree angle to the light beam. When particles in the sample scatter light sideways, that detector picks it up. More particles mean more scattered light and a higher turbidity reading.

An older approach, called turbidimetry, measures the light that passes straight through the sample instead. Cloudier samples block more light, so the transmitted intensity drops. Nephelometry (measuring scattered light) is more sensitive at low turbidity levels, which is why it became the standard for drinking water testing.
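The difference in sensitivity can be sketched numerically. The model below is idealized and illustrative only: it assumes Beer-Lambert-style exponential attenuation for the transmitted beam and a roughly linear scattered signal at low particle loads, with made-up constants rather than calibrated values.

```python
import math

def transmitted_signal(particle_conc, i0=1.0, k=0.05, path_cm=2.5):
    """Turbidimetry: light passing straight through the sample decays
    exponentially with particle concentration (Beer-Lambert-like)."""
    return i0 * math.exp(-k * particle_conc * path_cm)

def scattered_signal(particle_conc, k=0.01):
    """Nephelometry: at low concentrations, the 90-degree scattered
    signal rises roughly linearly from zero with particle load."""
    return k * particle_conc

# At low turbidity the transmitted beam barely changes from its clean-water
# value, while the scattered signal climbs from zero -- which is why
# nephelometry is more sensitive for near-clear drinking water.
for conc in (0, 1, 5, 20):
    print(conc, round(transmitted_signal(conc), 3), round(scattered_signal(conc), 3))
```

The key point the numbers show: a small change in a near-maximal transmitted signal is harder to resolve than the same change in a near-zero scattered signal.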

Inside the instrument, you’ll find four main components: a light source (typically an LED or tungsten lamp), a photodetector that converts light into an electrical signal, a sample chamber that holds the liquid, and calibration standards used to verify accuracy. The type of light source matters more than you might expect, because it determines which international method the meter complies with and which units the readings are reported in.

Units of Measurement

Turbidity readings come in several units, and the differences trace back to the instrument and method used. The most common unit in the United States is the Nephelometric Turbidity Unit (NTU), which applies specifically to meters following EPA Method 180.1. These instruments use a white or broadband light source with a peak output between 400 and 680 nanometers.

The Formazin Nephelometric Unit (FNU) is used with meters that follow ISO 7027, the European drinking water protocol. These instruments use a near-infrared light source (780 to 900 nanometers) instead. Because the light wavelength differs, NTU and FNU readings from the same sample won’t always match, even though both methods place the detector at 90 degrees. Formazin Turbidity Units (FTU) appear in older spectrophotometric methods and are roughly comparable to NTU in value. An even older unit, the Jackson Turbidity Unit (JTU), is no longer in common use.
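The unit/method pairings above can be summarized in a small lookup. This is a hypothetical helper, not a real library; the class and function names are illustrative, and the wavelength figures come from the methods described in the text.

```python
from dataclasses import dataclass

@dataclass
class TurbidityMethod:
    unit: str
    method: str
    light_source: str
    wavelength_nm: str

# Unit-to-method pairings as described above (illustrative helper).
METHODS = {
    "NTU": TurbidityMethod("NTU", "EPA Method 180.1", "white/broadband", "peak 400-680"),
    "FNU": TurbidityMethod("FNU", "ISO 7027", "near-infrared", "780-900"),
}

def describe(unit: str) -> str:
    m = METHODS[unit.upper()]
    return f"{m.unit}: {m.method}, {m.light_source} source ({m.wavelength_nm} nm)"

print(describe("FNU"))
```

A lookup like this is mainly useful for labeling readings correctly in logging or reporting software, since NTU and FNU values from the same sample are not interchangeable.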

Types of Turbidity Meters

Turbidity meters come in three broad categories, each designed for a different setting.

Portable (handheld) meters are built for field use. They’re compact, battery-powered, and rugged enough to handle outdoor conditions. Modern portables deliver accuracy on par with laboratory instruments, making them practical for on-site water testing at treatment plants, rivers, or wells.

Benchtop (laboratory) meters sit in a lab and handle samples collected from multiple sources. They’re the go-to choice for compliance monitoring, periodic analysis of raw or settled water, and calibrating other instruments. If you’re comparing water quality across dozens of sites, a benchtop meter is the efficient option.

Process (inline) meters are installed directly in a pipeline or treatment system and take continuous, real-time readings of flowing water. Drinking water plants and some wastewater facilities use these to monitor every step of filtration. If something goes wrong mid-process, an inline meter flags it immediately rather than waiting for someone to grab a sample.

Where Turbidity Meters Are Used

Drinking water treatment is the highest-profile application. U.S. federal regulations set the maximum contaminant level for turbidity at 1 turbidity unit as a monthly average, measured at representative entry points to the distribution system. Up to 5 turbidity units may be allowed only if the water supplier can demonstrate to the state that the higher level doesn’t interfere with disinfection, prevent maintenance of an effective disinfectant throughout the system, or interfere with microbiological testing. A two-consecutive-day average cannot exceed 5 turbidity units. These thresholds exist because turbidity can shelter harmful microorganisms from disinfection.
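The thresholds quoted above lend themselves to a simple automated check. This is a minimal sketch against those numbers only (monthly average of 1 turbidity unit, or 5 with state approval, and no two-consecutive-day average above 5); the function names and sample readings are illustrative, not part of any regulation.

```python
def monthly_ok(daily_readings, state_approved_5tu=False):
    """Monthly average must stay at or below 1 TU (5 TU if the
    state has approved the higher level)."""
    limit = 5.0 if state_approved_5tu else 1.0
    return sum(daily_readings) / len(daily_readings) <= limit

def two_day_ok(daily_readings):
    """No average of two consecutive days may exceed 5 TU."""
    return all((a + b) / 2 <= 5.0
               for a, b in zip(daily_readings, daily_readings[1:]))

readings = [0.4, 0.6, 0.9, 1.2, 0.5]  # illustrative daily readings in TU
print(monthly_ok(readings), two_day_ok(readings))  # → True True
```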

Wastewater treatment facilities rely on turbidity meters to verify that suspended solids and pollutants are being removed effectively before discharge, helping meet environmental regulations. The readings offer a fast, continuous check on whether the treatment process is performing as expected.

The food and beverage industry uses turbidity measurement at nearly every production stage. Breweries monitor turbidity in wort and beer to maintain batch consistency. Juice and soft drink producers check clarity to ensure products are free of suspended particles before bottling. Dairy processors track the clarity of milk at various points in processing. In all these cases, turbidity serves as a proxy for product quality and visual appeal.

Calibration and Standards

A turbidity meter is only as reliable as its calibration. The gold standard for calibration is Formazin, a chemical suspension that produces a known, reproducible level of turbidity. All calibrations must ultimately be traceable back to a Formazin primary standard.

In practice, handling Formazin for every calibration check isn’t always convenient, especially in the field. Secondary standards fill that gap. These include sealed containers of liquid latex or stabilized Formazin, as well as glass rods, plastic cylinders, or mirror devices designed for specific meter models. Secondary standards are easier to transport and store, but they need periodic verification against a primary Formazin standard to stay trustworthy.
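Calibration against standards amounts to fitting the meter's raw response to known turbidity values. The sketch below uses a pure-Python least-squares line through hypothetical raw detector counts for Formazin standards; the counts and standard values are made up for illustration, and real instruments may use more than a simple linear fit.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit, returning (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Known Formazin standard values (NTU) vs. raw detector counts
# (both columns are illustrative numbers, not real instrument data).
standards_ntu = [0.0, 1.0, 10.0, 100.0]
raw_counts = [12.0, 62.0, 512.0, 5012.0]

slope, intercept = fit_linear(raw_counts, standards_ntu)

def counts_to_ntu(raw):
    """Convert a raw detector reading to NTU via the calibration line."""
    return slope * raw + intercept

print(round(counts_to_ntu(262.0), 2))  # → 5.0
```

Re-running a fit like this against fresh standards, and checking that the slope and intercept haven't drifted, is essentially what a calibration verification does.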

What Affects Accuracy

Several factors can throw off a turbidity reading if you’re not careful. Finely divided air bubbles trapped in the sample scatter light just like particles do, producing artificially high readings. Letting the sample sit briefly or gently swirling it can help release trapped air before measurement.

Color in the water works in the opposite direction. Dissolved substances that absorb light (what’s called “true color”) reduce the amount of light reaching the detector, making turbidity readings come out lower than the actual particle load. This effect is generally minor in drinking water but can matter in highly colored surface water or industrial samples.

Light-absorbing materials like activated carbon, if present in significant concentrations, also suppress readings. And floating debris or coarse sediment that settles quickly can give misleadingly low results because it drops out of the light path before the measurement is taken. Consistent sample handling, including mixing the sample gently and measuring promptly, reduces these errors significantly.