How Are Gases Measured? Methods, Sensors & Units

Gases are measured using sensors and instruments that detect their concentration, composition, or flow rate. The method depends on what you need to know: whether a specific gas is present, how much of it is in the air, or how fast it’s moving through a pipe. In workplaces, environmental monitoring, and industrial settings, different technologies handle different types of gases, and the units used vary depending on the context.

Units Used to Express Gas Concentration

Gas concentration is most commonly expressed in parts per million (ppm) or parts per billion (ppb). These describe how many molecules of a target gas exist within a million or billion molecules of air. For very high concentrations, you’ll see percentages instead. Carbon dioxide in a room, for example, might be reported as 0.04% (400 ppm).
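These conversions are simple scalings; a quick sketch in Python, using the carbon dioxide figure above:

```python
def percent_to_ppm(percent):
    """Convert a volume fraction in percent to parts per million (1% = 10,000 ppm)."""
    return percent * 10_000

def ppm_to_ppb(ppm):
    """Convert parts per million to parts per billion."""
    return ppm * 1_000

# Atmospheric CO2 at roughly 0.04% by volume:
co2_ppm = percent_to_ppm(0.04)  # ~400 ppm
```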

In workplace safety, OSHA sets exposure limits using two main formats. An 8-hour Time Weighted Average (TWA) represents the maximum average concentration a worker can be exposed to over a standard 8-hour shift in a 40-hour work week. A Ceiling Value, marked with a “C,” is a concentration that must never be exceeded at any point, even briefly. If continuous monitoring isn’t available, ceiling values are assessed as a 15-minute time weighted average. Some substances also have Short-Term Exposure Limits (STELs), which cap exposure over shorter windows, typically 15 minutes.
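The TWA arithmetic itself is straightforward; a minimal sketch in Python, with hypothetical exposure intervals:

```python
def eight_hour_twa(samples):
    """8-hour TWA from (concentration_ppm, duration_hours) pairs.

    The sum of concentration * time is divided by the full 8-hour
    shift, so unexposed time still counts in the denominator.
    """
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical shift: 4 h at 30 ppm, 2 h at 50 ppm, 2 h unexposed
twa = eight_hour_twa([(30.0, 4.0), (50.0, 2.0), (0.0, 2.0)])  # 27.5 ppm
```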

For combustible gases, concentration is expressed as a percentage of the Lower Explosive Limit (LEL). The LEL is the lowest concentration of a gas mixed with air that can ignite. The Upper Explosive Limit (UEL) is the highest. Gas detectors in facilities like refineries and chemical plants are set to trigger alarms well before the LEL is reached, giving workers time to ventilate the area or shut off the gas supply.
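Converting a raw volume reading into %LEL is a single division. A sketch using commonly cited LEL values (illustrative only; consult an authoritative table for real work):

```python
# Approximate lower explosive limits, % volume in air (illustrative values).
LEL_PERCENT_VOL = {"methane": 5.0, "propane": 2.1}

def percent_lel(gas, concentration_vol_pct):
    """Express a gas concentration as a percentage of its LEL."""
    return 100.0 * concentration_vol_pct / LEL_PERCENT_VOL[gas]

# 0.5% methane by volume reads as 10% LEL -- far below the ignition
# threshold, but enough to trip a typical low alarm.
reading = percent_lel("methane", 0.5)  # 10.0
```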

Electrochemical Sensors

Electrochemical sensors are among the most common sensing elements in portable gas detectors. They work by allowing the target gas to diffuse through a porous membrane to an electrode inside the sensor. At that electrode, the gas undergoes a chemical reaction (either oxidation or reduction) that generates a small electrical current. The strength of that current is directly proportional to the gas concentration.
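Because the output current scales linearly with concentration, converting a reading is a one-line calculation. A sketch with a hypothetical sensitivity figure (real values come from the sensor’s datasheet):

```python
def current_to_ppm(current_na, sensitivity_na_per_ppm, baseline_na=0.0):
    """Convert an electrochemical sensor's output current (nA) to a gas
    concentration, assuming the linear response described above."""
    return (current_na - baseline_na) / sensitivity_na_per_ppm

# Hypothetical CO sensor with a sensitivity of 70 nA/ppm:
co_ppm = current_to_ppm(3500.0, 70.0)  # 50.0 ppm
```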

These sensors are widely used for detecting gases like carbon monoxide, hydrogen sulfide, oxygen, and carbon dioxide. Because they rely on a chemical reaction at the electrode, they work best with gases that are electrochemically active. One limitation is cross-sensitivity: a sensor designed for one gas can sometimes respond to others. A sensor tuned to detect ethylene oxide, for example, requires a highly reactive electrode that may also respond to alcohols and carbon monoxide, potentially producing false readings.

Infrared (NDIR) Sensors

Non-Dispersive Infrared sensors, known as NDIR, measure gases based on how they absorb infrared light. The sensor sends infrared radiation through a chamber containing the gas sample. Certain gas molecules absorb infrared light at specific wavelengths. Carbon dioxide, for instance, absorbs strongly at a wavelength of 4.26 micrometers. The sensor compares the amount of light transmitted at this wavelength with the amount transmitted at a reference wavelength (3.95 micrometers), where the gas doesn’t absorb.

As gas concentration increases, more infrared light gets absorbed, and less reaches the detector on the measurement channel. The reference channel stays constant, giving the sensor a built-in baseline for comparison. This relationship follows a well-established physics principle called the Lambert-Beer law, which links light absorption to gas concentration and the length of the path the light travels. NDIR sensors are popular for carbon dioxide monitoring in buildings, greenhouses, and industrial processes because they’re stable over long periods and don’t consume the gas they’re measuring.
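That relationship can be inverted to recover concentration from the two channel intensities. A sketch under the Lambert-Beer law, with illustrative constants (the absorption coefficient and path length are instrument-specific):

```python
import math

def ndir_concentration(i_meas, i_ref, absorption_coeff, path_length_cm):
    """Gas concentration from NDIR intensities via the Lambert-Beer law:
    I = I0 * exp(-a * c * L), so c = -ln(I / I0) / (a * L).

    The reference channel stands in for the unabsorbed intensity I0.
    The units of the result depend on the units chosen for a.
    """
    transmittance = i_meas / i_ref
    return -math.log(transmittance) / (absorption_coeff * path_length_cm)

# Illustrative numbers: 90% transmittance, a = 2e-4 per (ppm * cm), 5 cm path
c = ndir_concentration(0.90, 1.00, 2.0e-4, 5.0)  # ~105 ppm
```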

Catalytic Bead Sensors for Flammable Gases

Catalytic bead sensors, also called pellistors, are purpose-built for detecting combustible gases like methane and propane. Inside the sensor, a small bead coated with a catalyst sits on a heated wire coil. When a flammable gas contacts the bead, it burns on the surface. That combustion raises the temperature of the bead, which increases the electrical resistance of the wire coil inside it.

The sensor uses a circuit called a Wheatstone bridge that compares the heated detector bead to a second, inactive reference bead. Temperature fluctuations from the environment affect both beads equally, so they cancel out. The voltage difference between the two beads is proportional to the concentration of combustible gas in the chamber. These sensors are standard equipment in oil and gas facilities, mines, and confined space operations where flammable gas buildup could reach explosive levels.
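A minimal sketch of that bridge arithmetic, modelling each half of the bridge as a voltage divider (the supply voltage and resistances are hypothetical):

```python
def bridge_output(v_supply, r_detector, r_reference, r_fixed):
    """Output of a Wheatstone bridge with the detector and reference
    beads in opposite arms, each in series with a fixed resistor.

    With both beads at equal resistance the bridge balances to 0 V;
    combustion on the detector bead raises r_detector and unbalances it.
    """
    v_det_node = v_supply * r_fixed / (r_detector + r_fixed)
    v_ref_node = v_supply * r_fixed / (r_reference + r_fixed)
    return v_ref_node - v_det_node

bridge_output(3.0, 100.0, 100.0, 100.0)  # 0.0 V: balanced, no gas
bridge_output(3.0, 105.0, 100.0, 100.0)  # positive: combustible gas present
```

Because ambient temperature shifts both beads by the same amount, it changes both divider voltages equally and the difference stays near zero, which is the cancellation described above.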

Photoionization Detectors for Volatile Organic Compounds

Photoionization detectors (PIDs) measure volatile organic compounds (VOCs) like benzene, toluene, and other chemicals that evaporate easily. A PID uses an ultraviolet lamp to emit photons at a specific energy level, commonly 10.6 electron volts. When gas molecules with an ionization potential below that energy level pass through the lamp’s beam, the photons knock electrons loose, creating positively charged ions.

These ions are drawn to a cathode while the freed electrons move toward an anode, producing an electrical current. That current is amplified and converted to a voltage proportional to the VOC concentration. PIDs are lightweight and respond in real time, making them valuable tools for environmental monitoring, hazmat response, and industrial hygiene surveys where workers need immediate readings on airborne chemical levels.
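The detectability rule is a simple threshold comparison: a compound can be ionized only if its ionization potential is below the lamp’s photon energy. A sketch using commonly cited values (illustrative; consult a reference table for survey work):

```python
# Approximate ionization potentials in eV (illustrative values).
IONIZATION_POTENTIAL_EV = {
    "benzene": 9.24,
    "toluene": 8.82,
    "methane": 12.61,
}

def pid_detectable(compound, lamp_ev=10.6):
    """True if a PID lamp of the given photon energy can ionize the
    compound, i.e. its ionization potential is below the lamp energy."""
    return IONIZATION_POTENTIAL_EV[compound] < lamp_ev

pid_detectable("benzene")  # True: 9.24 eV < 10.6 eV
pid_detectable("methane")  # False: a 10.6 eV lamp cannot ionize methane
```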

Gas Chromatography for Complex Mixtures

When you need to identify and quantify individual gases within a complex mixture, laboratory instruments like gas chromatography-mass spectrometry (GC-MS) are the standard. A gas sample is injected into a long, narrow capillary column (typically 0.25 mm in diameter) where different gas components travel at different speeds based on their chemical properties. This separates the mixture into individual compounds over time.

As each separated compound exits the column, a mass spectrometer breaks it into fragments and measures their masses, producing a unique fingerprint for identification. The system can detect components at remarkably low concentrations, with a lower limit of about 1 nanogram per component injected onto the column. Sample capacity on the narrow columns used for GC-MS tops out around 50 to 100 nanograms per component, so concentrated samples often need to be diluted before analysis. This technique is used for air quality research, forensic analysis, and industrial quality control where portable sensors lack the specificity to tell similar compounds apart.
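The capacity figures above imply a quick pre-injection check: if the expected mass of a component exceeds the column’s capacity, the sample needs dilution. A sketch using the roughly 100-nanogram upper bound mentioned in the text:

```python
import math

def dilution_factor(expected_ng, column_capacity_ng=100.0):
    """Smallest whole-number dilution that brings the injected mass of
    a component under the column capacity (1 means no dilution needed)."""
    if expected_ng <= column_capacity_ng:
        return 1
    return math.ceil(expected_ng / column_capacity_ng)

dilution_factor(40.0)   # 1: within capacity as-is
dilution_factor(750.0)  # 8: dilute roughly 1:8 before injection
```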

Measuring Gas Flow Rate

Measuring how much gas is flowing through a pipe or duct is a separate challenge from measuring concentration. Several technologies handle this, each suited to different conditions.

  • Thermal mass flow meters place a heated sensor in the gas stream and measure how much heat the flowing gas carries away. The cooling effect is proportional to the mass flow rate. These are common in semiconductor manufacturing and environmental monitoring.
  • Ultrasonic flow meters send sound pulses both upstream and downstream through the gas. The difference in how long each pulse takes to arrive is proportional to the flow velocity. They work best with clean gases and require no moving parts.
  • Flow nozzles and orifice plates create a pressure drop as gas passes through a restriction. The size of that pressure drop correlates to flow rate. These are simpler and cheaper but less precise, and they’re often used for steam and gas measurement in HVAC and chemical processing.
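The transit-time calculation behind an ultrasonic meter can be sketched directly. Assuming each pulse travels a path of length L inclined at angle θ to the pipe axis, the transit times are t = L / (c ± v·cosθ), and the speed of sound c conveniently cancels out of the result:

```python
import math

def flow_velocity(t_up, t_down, path_length_m, angle_deg):
    """Axial flow velocity from upstream/downstream ultrasonic transit
    times. Derived from t = L / (c +/- v*cos(theta)); the speed of
    sound c drops out of the final expression."""
    cos_t = math.cos(math.radians(angle_deg))
    return path_length_m * (t_up - t_down) / (2.0 * cos_t * t_up * t_down)
```

With equal transit times (no flow) the result is zero; a longer upstream time indicates flow in the downstream direction.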

Keeping Gas Measurements Accurate

Gas sensors degrade over time. Chemical sensors slowly lose reactivity, infrared lamps dim, and environmental conditions shift baselines. Two maintenance practices keep readings trustworthy: bump testing and calibration.

A bump test is a quick functional check. You briefly expose the sensor to a known concentration of calibration gas above the alarm set point. If the sensor responds and the alarm triggers, it passes. No adjustments are made during a bump test. For most portable gas detectors, a bump test is recommended before each day of use.

Calibration is more thorough. The instrument is exposed to a precise concentration of calibration gas, and its readings are adjusted so the displayed value matches the known concentration. This corrects for sensor drift and degradation. The standard recommendation is to calibrate before first use and monthly afterward. Newer instruments with dual-sensor technology can extend the interval between bump tests, but monthly calibration remains the baseline for reliable measurements.
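The adjustment made during calibration can be sketched as a simple span correction (a single-point span calibration; real instruments typically also zero the sensor in clean air):

```python
def span_factor(displayed_ppm, cal_gas_ppm):
    """Correction factor from a span calibration: the instrument read
    displayed_ppm while sampling a known cal_gas_ppm concentration."""
    return cal_gas_ppm / displayed_ppm

def corrected_reading(raw_ppm, factor):
    """Apply the stored span factor to subsequent raw readings."""
    return raw_ppm * factor

# Instrument has drifted to read 46 ppm against 50 ppm calibration gas:
factor = span_factor(46.0, 50.0)
corrected_reading(46.0, factor)  # 50.0 ppm
```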