What Is Full Scale Output in Sensors and Systems?

Full scale output (FSO) is the total range of signal a sensor or instrument can produce, measured as the difference between its lowest and highest output values. If a pressure sensor outputs 0 millivolts at zero pressure and 80 millivolts at maximum pressure, its full scale output is 80 mV. This number is the foundation for nearly every other specification on a sensor’s data sheet, from accuracy to temperature stability.

How Full Scale Output Works

Every sensor converts a physical measurement (pressure, weight, flow, temperature) into an electrical signal. Full scale output describes the complete electrical range that signal covers. A load cell with a sensitivity of 2 mV/V powered by a 10-volt supply, for example, produces a full scale output of 20 millivolts when loaded to its maximum rated capacity.

The calculation is straightforward:

Output Signal (mV) = Rated Sensitivity × (Applied Load / Maximum Capacity) × Excitation Voltage

So a sensor rated at 2 mV/V on a 10V supply carrying half its maximum load would output 10 mV, exactly half the full scale output. This linear relationship between the physical input and the electrical output is what makes sensors useful in the first place.
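The calculation above can be sketched in a few lines of Python (the function and argument names are illustrative, not from any particular library):

```python
def load_cell_output_mv(sensitivity_mv_per_v, excitation_v, applied_load, max_capacity):
    """Output (mV) = rated sensitivity x (applied load / max capacity) x excitation voltage."""
    return sensitivity_mv_per_v * (applied_load / max_capacity) * excitation_v

# A 2 mV/V load cell on a 10 V supply:
full_scale_mv = load_cell_output_mv(2.0, 10.0, 100.0, 100.0)  # 20.0 mV at rated capacity
half_load_mv = load_cell_output_mv(2.0, 10.0, 50.0, 100.0)    # 10.0 mV at half load
```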

Full Scale Output vs. Full Scale Span

These two terms are often used interchangeably, but they differ in one important way. Full scale output measures from the sensor’s true zero (no input applied) to its maximum output. Full scale span measures from the sensor’s actual output at the low end of its range to its output at the high end. The difference matters because most sensors don’t output exactly zero when nothing is being measured; they produce a small residual voltage called an offset.

If a pressure sensor has an offset voltage of 2 mV and a maximum output of 82 mV, its full scale span is 80 mV (the difference between the two endpoint outputs), while its full scale output, measured from true zero, is 82 mV. Spec sheets may reference either term, so knowing which one a manufacturer is using prevents errors when calibrating equipment or selecting a sensor for a project.
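To make the distinction concrete, here is a minimal sketch (hypothetical helper names) that computes span from the two endpoint outputs and removes the offset from a raw reading:

```python
def full_scale_span_mv(zero_input_output_mv, max_input_output_mv):
    """Span is the difference between the outputs at the two ends of the range."""
    return max_input_output_mv - zero_input_output_mv

def offset_corrected_mv(raw_mv, zero_input_output_mv):
    """Subtract the residual offset so readings are referenced to a true zero."""
    return raw_mv - zero_input_output_mv

span = full_scale_span_mv(2.0, 82.0)        # 80.0 mV, matching the example above
corrected = offset_corrected_mv(42.0, 2.0)  # 40.0 mV after removing the 2 mV offset
```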

Why Accuracy Is Expressed as a Percentage of FSO

One of the most common places you’ll encounter full scale output is in accuracy specifications. When a sensor is rated at ±1% of full scale, that error is a fixed absolute value across the entire measurement range. For a flow meter with a full scale of 100 liters per minute, ±1% FS means the reading could be off by up to 1 liter per minute regardless of what flow rate you’re measuring.

This has a practical consequence that catches people off guard. At full flow (100 L/min), a 1 L/min error is genuinely 1% of the reading. But at half flow (50 L/min), that same 1 L/min error becomes 2% of the actual reading. At 10 L/min, it’s 10%. Sensors specified as “percent of full scale” are most accurate near the top of their range and progressively less accurate as the measured value drops.

Some instruments specify accuracy as a percentage of reading instead, meaning the error scales proportionally with the measurement. Higher-end instruments sometimes combine both: ±1.75% of reading plus ±0.25% of full scale, for instance. Understanding which spec you’re looking at tells you where in the range the sensor will perform best, and whether you’ve chosen a sensor with an appropriate range for your application. Oversizing a sensor (picking one with a much larger range than you need) means you’ll spend most of your time in the lower portion of the range where percent-of-full-scale errors hit hardest.
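One way to see where a percent-of-full-scale spec bites is to convert it into percent of reading at each operating point (illustrative code, not a standard API):

```python
def fs_error_as_pct_of_reading(full_scale, accuracy_pct_fs, reading):
    """A +/-X% FS spec is a fixed absolute error; express it relative to the reading."""
    abs_error = full_scale * accuracy_pct_fs / 100.0
    return abs_error / reading * 100.0

# +/-1% FS on a 0-100 L/min flow meter:
fs_error_as_pct_of_reading(100.0, 1.0, 100.0)  # 1.0  -> 1% of reading at full flow
fs_error_as_pct_of_reading(100.0, 1.0, 50.0)   # 2.0  -> 2% at half flow
fs_error_as_pct_of_reading(100.0, 1.0, 10.0)   # 10.0 -> 10% at a tenth of range
```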

Standard Signal Ranges in Industrial Systems

In industrial settings, sensors transmit their full scale output using standardized electrical signals. The two most common are 0 to 10 volts and 4 to 20 milliamps. A pressure transmitter rated for 0 to 500 psi with a 4-20 mA output sends 4 mA at zero pressure and 20 mA at 500 psi. The signal span in this case is 16 mA (the difference between 4 and 20), which is the range available to represent the measurement.

The 4-20 mA standard has a built-in advantage: because zero pressure corresponds to 4 mA rather than 0 mA, a broken wire (which reads 0 mA) is immediately distinguishable from a legitimate low reading. The current signal is also far more resistant to electrical noise and signal loss over long cable runs than a voltage signal, which is why it dominates in factory and process environments where cables may run hundreds of feet.
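Scaling a loop current back into engineering units follows directly from the span, and the live zero gives a cheap fault check. A sketch for the 0-500 psi transmitter above (the 3.5 mA fault threshold is an illustrative assumption, not a universal standard):

```python
def pressure_from_loop_ma(current_ma, low_ma=4.0, high_ma=20.0, fs_psi=500.0):
    """Map a 4-20 mA loop current onto a 0-500 psi measurement range."""
    if current_ma < 3.5:  # well below the 4 mA live zero: likely a broken wire
        raise ValueError("current below live zero -- possible open loop")
    return (current_ma - low_ma) / (high_ma - low_ma) * fs_psi

pressure_from_loop_ma(4.0)   # 0.0 psi at the live zero
pressure_from_loop_ma(12.0)  # 250.0 psi at mid-span
pressure_from_loop_ma(20.0)  # 500.0 psi at full scale
```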

Full Scale Output in Digital Systems

When an analog signal gets converted to a digital number, full scale output determines how precisely the signal can be represented. An analog-to-digital converter (ADC) divides the full scale voltage range into discrete steps based on its bit depth. A 12-bit ADC has 4,096 steps. If it’s digitizing a 10-volt full scale output, each step (called the least significant bit) represents about 2.4 millivolts (10 V ÷ 4,096).

A 3-bit ADC covering a 1-volt range, by contrast, has only 8 steps, so each step is 0.125 volts. Higher bit depth means finer resolution relative to the full scale range. An 8-bit ADC resolves one part in 256, or about 0.4% of the full scale range. A 16-bit converter resolves one part in 65,536, roughly 0.0015%. Matching the ADC resolution to the sensor’s full scale output ensures you’re not throwing away measurement precision during the conversion from analog to digital.
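The resolution figures above fall out of the bit depth directly; a quick sketch (helper names are illustrative):

```python
def lsb_volts(full_scale_v, bits):
    """Voltage represented by one ADC step (the least significant bit)."""
    return full_scale_v / (2 ** bits)

def resolution_pct_of_fs(bits):
    """One part in 2^bits, as a percentage of the full scale range."""
    return 100.0 / (2 ** bits)

lsb_volts(10.0, 12)       # ~0.00244 V, about 2.4 mV per step
lsb_volts(1.0, 3)         # 0.125 V per step for the 3-bit case
resolution_pct_of_fs(8)   # ~0.39% of full scale
resolution_pct_of_fs(16)  # ~0.0015% of full scale
```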

How Temperature Affects Full Scale Output

Sensor outputs drift with temperature. Manufacturers quantify this as thermal span shift (or thermal sensitivity shift), typically expressed as a percentage of full scale per degree of temperature change. A spec like ±0.01% FS/°F means that for every degree Fahrenheit the sensor’s environment changes, the full scale output could shift by up to 0.01% of its rated value.

Over a 50°F temperature swing, that accumulates to ±0.5% of full scale, which may be larger than the sensor’s baseline accuracy spec. This is why precision applications use temperature-compensated sensors and why data sheets specify a “compensated temperature range” within which the thermal shift numbers apply. Operating outside that range can produce errors that far exceed the headline accuracy number.
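A worst-case budget for that drift is just the coefficient times the temperature excursion, added to the baseline accuracy. This sketch assumes a simple additive model; some data sheets combine error terms differently (e.g. root-sum-square), and the numbers here are hypothetical:

```python
def thermal_shift_pct_fs(coeff_pct_fs_per_degf, delta_t_degf):
    """Accumulated span shift over a temperature excursion, in % of full scale."""
    return coeff_pct_fs_per_degf * abs(delta_t_degf)

def worst_case_error_pct_fs(baseline_accuracy_pct_fs, coeff, delta_t_degf):
    """Simple additive worst case: baseline accuracy plus thermal span shift."""
    return baseline_accuracy_pct_fs + thermal_shift_pct_fs(coeff, delta_t_degf)

thermal_shift_pct_fs(0.01, 50.0)           # 0.5 % FS over a 50 degF swing
worst_case_error_pct_fs(0.25, 0.01, 50.0)  # 0.75 % FS total, with drift dominating
```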

Choosing the Right Full Scale Range

Selecting a sensor comes down to matching its full scale output to your actual measurement needs. A sensor with too large a range wastes resolution and amplifies the effective error at your operating point. A sensor with too small a range risks overloading or clipping the signal when readings approach the maximum.

The ideal choice puts your typical operating range in the upper third to upper half of the sensor’s full scale range. This keeps percent-of-full-scale errors relatively small as a fraction of your actual reading, avoids saturation at the top, and leaves headroom for occasional peaks. When comparing sensors, convert all accuracy specs to the same basis (percent of reading at your expected operating point) so you’re making an apples-to-apples comparison rather than being misled by a headline number that only applies at full scale.
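Converting competing specs to absolute error at your expected operating point, as suggested above, can be sketched like so (the two candidate meters and their numbers are hypothetical):

```python
def abs_error_at_point(full_scale, pct_fs, pct_reading, reading):
    """Total +/- error in engineering units for a combined spec:
    (pct_reading % of the reading) + (pct_fs % of full scale)."""
    return reading * pct_reading / 100.0 + full_scale * pct_fs / 100.0

# Two 0-100 L/min meters, both evaluated at a 30 L/min operating point:
meter_a = abs_error_at_point(100.0, 1.0, 0.0, 30.0)    # 1.0 L/min   (+/-1% FS)
meter_b = abs_error_at_point(100.0, 0.25, 1.75, 30.0)  # 0.775 L/min (+/-1.75% rdg + 0.25% FS)
```

Despite the scarier-looking headline number, the combined spec is tighter at this operating point, which is exactly why the comparison has to be made at the reading you actually expect.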