How to Measure Turbidity by Spectrophotometer: Step-by-Step

A spectrophotometer measures turbidity by shining a beam of light through your sample and detecting how much light is lost to scattering by suspended particles. The instrument reads this loss as an absorbance value, which you then convert to turbidity units using a calibration curve built from standards of known turbidity. The process is straightforward, but wavelength selection, calibration, and sample handling all affect whether your results are reliable.

How a Spectrophotometer Detects Turbidity

A true turbidity meter (nephelometer) measures scattered light at an angle, typically 90 degrees from the light source. A spectrophotometer does something different: it measures the light that passes straight through the sample and reaches the detector on the other side. Suspended particles in the sample scatter photons out of the beam path, so less light arrives at the detector. The instrument reports this loss as higher absorbance, even though the light wasn’t truly absorbed by dissolved substances; it was redirected by solid matter.

This distinction matters. Because spectrophotometers and nephelometers use fundamentally different optical designs, results from different instruments can differ markedly, even when measuring the same sample. A nephelometer is the gold standard for regulatory turbidity reporting, but a spectrophotometer works well for routine lab monitoring, microbiology (tracking bacterial growth), and industrial quality checks where relative comparisons matter more than absolute regulatory values.

Choosing the Right Wavelength

Wavelength selection is one of the most important decisions you’ll make. Two main ranges are used:

  • 400 to 600 nm (visible light): This is the range specified by U.S. EPA Method 180.1, which calls for a tungsten filament lamp. It offers higher sensitivity to small particles but is susceptible to interference from dissolved color in the sample.
  • 860 nm (near-infrared): This is the wavelength prescribed by ISO 7027, the international standard commonly used in Europe. Light above 800 nm minimizes interference from dissolved substances that absorb visible light, such as humic acids found in surface water.

If your samples have any color to them, whether from organic matter, tannins, or industrial dyes, a near-infrared wavelength will give you more accurate turbidity readings. Colored organic substances cause visible-light instruments to read lower than the true turbidity because the dissolved color absorbs some of the light, masking the scattering effect of the particles. For colorless samples like treated drinking water or laboratory buffers, 600 nm is a common and practical choice that most UV-Vis spectrophotometers can handle easily.
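The decision rule above can be sketched as a small helper. This is illustrative only; the function name and the two wavelength choices simply encode the guidance in this section (860 nm per ISO 7027 for colored samples, 600 nm for colorless ones).

```python
# Hypothetical helper encoding the wavelength decision rule described above:
# colored samples favor near-infrared to avoid color interference;
# colorless samples can use a visible wavelength.

def choose_wavelength_nm(sample_is_colored: bool) -> int:
    """Return a measurement wavelength (nm) based on sample color."""
    return 860 if sample_is_colored else 600

print(choose_wavelength_nm(True))   # colored surface water
print(choose_wavelength_nm(False))  # treated drinking water
```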

Preparing Formazin Calibration Standards

You cannot convert absorbance readings to turbidity units without a calibration curve, and that curve needs to be built from standards with known turbidity values. Formazin is the universal reference standard for turbidity calibration. It’s a polymer suspension created by mixing two chemical solutions and letting them react.

To prepare the stock suspension, you mix equal volumes of a hydrazine sulfate solution (1 g per 100 mL) and a hexamethylenetetramine solution (10 g per 100 mL) in a volumetric flask, then let the mixture sit undisturbed at 25°C for 24 hours. After diluting to volume with demineralized water, you have a concentrated stock. Diluting 10 mL of that stock to 100 mL produces a suspension defined as 40 NTU (nephelometric turbidity units). You can further dilute this to create a series of standards covering your range of interest, typically 0 to 100 NTU for general water analysis.
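The dilution series follows the standard relation C1·V1 = C2·V2. Here is a minimal sketch of computing the stock volumes needed for a working-standard series; the 40 NTU working suspension is from the text, while the 100 mL flask size and the target values are illustrative assumptions.

```python
# Compute the volume of 40 NTU formazin suspension to dilute to a given
# final volume to hit a target turbidity, using C1*V1 = C2*V2.

def stock_volume_ml(target_ntu, stock_ntu=40.0, final_ml=100.0):
    """Volume of stock (mL) to dilute to final_ml to reach target_ntu."""
    return target_ntu * final_ml / stock_ntu

for ntu in (5, 10, 20, 40):
    vol = stock_volume_ml(ntu)
    print(f"{ntu:>3} NTU standard: dilute {vol:.1f} mL of 40 NTU stock to 100 mL")
```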

The stock suspension should be prepared fresh monthly, and the diluted working standards fresh weekly. Formazin particles settle over time, so each standard needs thorough mixing immediately before use. Pre-made stabilized formazin standards are also commercially available if you prefer to skip the preparation step.

Step-by-Step Measurement Procedure

Once your spectrophotometer is warmed up and your standards are ready, follow this sequence:

Set your spectrophotometer to the chosen wavelength. Zero the instrument using a cuvette filled with demineralized water as your blank. This establishes your baseline for zero turbidity.

Measure each formazin standard in order from lowest to highest concentration. Record the absorbance reading for each. Plot absorbance on the y-axis against known turbidity (in NTU or FAU) on the x-axis to build your calibration curve. Most instruments or associated software can fit a linear regression through these points. The curve should be linear at low to moderate turbidity levels, though it may flatten at high concentrations as multiple scattering events start to interfere with each other.
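The regression step above can be sketched with a least-squares fit. The standard concentrations and absorbance readings below are illustrative numbers, not real measurements from any instrument.

```python
# Minimal sketch of fitting a linear calibration curve:
# absorbance (y) vs. known turbidity (x), as described in the procedure.
import numpy as np

turbidity_fau = np.array([0.0, 10.0, 20.0, 40.0, 80.0])       # known standards
absorbance = np.array([0.000, 0.021, 0.043, 0.085, 0.168])    # example readings

# First-degree polynomial fit returns (slope, intercept).
slope, intercept = np.polyfit(turbidity_fau, absorbance, 1)
r = np.corrcoef(turbidity_fau, absorbance)[0, 1]

print(f"A = {slope:.5f} * FAU + {intercept:.5f}  (r = {r:.4f})")
```

A correlation coefficient near 1 confirms the curve is linear over the range tested; if it drops, restrict the calibration to the lower standards.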

For your samples, bring them to room temperature if possible. Mix each sample thoroughly to disperse all suspended solids evenly. Pour the sample into a clean cuvette and wait for air bubbles to rise out before placing it in the instrument. Read the absorbance, then use your calibration curve to convert that value to turbidity units.
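Converting a sample reading back to turbidity is just the inverse of the fitted line. The slope and intercept below are assumed example values standing in for the output of your own calibration.

```python
# Invert the calibration line A = slope * FAU + intercept to get turbidity
# from a measured absorbance. Slope and intercept are illustrative values
# from a hypothetical prior calibration.

def absorbance_to_fau(a, slope=0.0021, intercept=0.0004):
    """Convert a measured absorbance to turbidity (FAU) via the fitted line."""
    return (a - intercept) / slope

reading = 0.052
print(f"Absorbance {reading} -> {absorbance_to_fau(reading):.1f} FAU")
```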

Understanding Turbidity Units

When you measure turbidity on a spectrophotometer rather than a nephelometer, the correct unit is FAU (formazin attenuation units) rather than NTU. Both units are based on the same formazin standard, so a 40-FAU reading and a 40-NTU reading refer to the same concentration of formazin. But because the two instruments detect light differently (transmitted vs. scattered), their readings on real-world samples won’t always agree, especially for samples with unusual particle shapes or sizes. If your work requires NTU values for regulatory compliance, you’ll need a dedicated nephelometer. For lab and process monitoring, FAU values from a spectrophotometer are perfectly functional.

Common Sources of Error

Turbidity measurements are sensitive to several pitfalls that can push your readings either too high or too low.

Air bubbles are the most frequent problem. Even tiny bubbles clinging to the inside of the cuvette scatter light and create falsely high readings. Always let your sample sit briefly after pouring to allow bubbles to escape. If your sample contains dissolved gases that tend to effervesce, you may need a degassing step before measurement. Gently swirling (not shaking) the cuvette can help dislodge stubborn bubbles.

Dirty or scratched cuvettes cause erratic readings. Fingerprints, smudges, or etching on the optical surfaces scatter light in unpredictable ways. Handle cuvettes only by the top edges or frosted sides, never by the clear faces where the light beam passes through. Discard cuvettes once they become visibly scratched. For glass cuvettes, a thin coating of silicone oil can mask minor surface imperfections.

Particle settling is another concern. Turbidity is time-sensitive. If you let a sample sit too long before reading, heavier particles sink to the bottom and your reading drops. Biodegradation can also reduce turbidity in biological samples over time. The U.S. Geological Survey recommends measuring turbidity as soon as possible after collection, ideally on-site, because changes in pH during transport can cause minerals to precipitate or humic acids to behave differently.

Dissolved color in the sample is a subtler issue. Colored substances absorb light at visible wavelengths, reducing the amount of light reaching the detector independently of any particle scattering. This produces a reading that blends true turbidity with color interference, typically biasing results low at visible wavelengths because the instrument can’t distinguish between light lost to scattering and light lost to absorption. Switching to 860 nm largely eliminates this problem.

Optimizing Accuracy Across Instruments

Because spectrophotometer designs vary in beam geometry, detector sensitivity, and path length, turbidity readings from one instrument may not match another, even with the same sample and wavelength. This is a fundamental limitation of turbidimetric measurement. To keep your data consistent, always calibrate on the same instrument you use for samples, run at least one standard each time you measure, and note which instrument and wavelength you used when recording results.

For samples above roughly 40 NTU, accuracy tends to decline as the relationship between absorbance and turbidity becomes nonlinear. Diluting the sample with particle-free water and multiplying the result by the dilution factor gives more reliable numbers. Keep your calibration curve within the range where it’s genuinely linear, and verify that range periodically by running a mid-range check standard.
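The dilute-and-multiply correction above is simple arithmetic; here is a sketch, with the sample volumes and reading chosen purely for illustration.

```python
# Correct a diluted sample's turbidity reading back to the original sample:
# true turbidity = measured value * (final volume / sample volume).

def diluted_turbidity(measured_fau, sample_ml, final_ml):
    """Multiply the diluted reading by the dilution factor."""
    dilution_factor = final_ml / sample_ml
    return measured_fau * dilution_factor

# e.g. 10 mL of a turbid sample diluted to 50 mL reads 22 FAU on the curve:
print(diluted_turbidity(22.0, sample_ml=10.0, final_ml=50.0))  # 110.0
```

Keeping the diluted reading well inside the linear range of your calibration curve is what makes the multiplied result trustworthy.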