Control limits are boundaries on a chart that separate normal process variation from signals that something has changed. They consist of an upper control limit (UCL) and a lower control limit (LCL), drawn above and below a center line that represents the process average. When data points fall within these limits, the process is behaving as expected. When a point lands outside them, it’s a statistical signal that something unusual is affecting the process.
The concept comes from statistical process control (SPC), a method developed in the 1920s for monitoring whether a manufacturing or business process is stable over time. But control limits show up far beyond factory floors, appearing in healthcare quality tracking, software development, financial monitoring, and any field where you need to distinguish a meaningful shift from ordinary noise.
How Control Limits Work
Every process has some natural variation. A coffee shop won’t serve every latte in exactly 47 seconds, and a machine won’t cut every part to exactly 10.00 millimeters. This built-in randomness is called common cause variation. It’s the baseline wobble that exists even when everything is running properly.
Control limits define the expected range of that wobble. They’re calculated from your own process data, not from external targets or customer requirements. Once you plot new data points on a control chart, any point that falls outside the limits suggests something beyond normal randomness is at work. That “something” is called special cause variation: a broken tool, a new supplier, a software bug, a staffing change. The whole point of control limits is to tell you when to investigate and when to leave the process alone.
The Three-Sigma Standard
Control limits are almost always set at three standard deviations above and below the process mean. The formulas look like this:
- Upper Control Limit (UCL) = process mean + (3 × standard deviation)
- Lower Control Limit (LCL) = process mean − (3 × standard deviation)
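As a minimal sketch of these formulas in Python (the service-time data is hypothetical):

```python
import statistics

def control_limits(data, sigma=3):
    """Center line and control limits computed from the process's own data."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)  # sample standard deviation
    return mean - sigma * sd, mean, mean + sigma * sd

# Hypothetical latte service times, in seconds
times = [46.1, 47.3, 48.0, 45.8, 47.5, 46.9, 48.2, 47.1, 46.4, 47.8]
lcl, center, ucl = control_limits(times)
print(f"LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}")
```

In practice you would estimate the mean and standard deviation from a baseline period, then hold the limits fixed while plotting new points against them.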
The three-sigma threshold isn’t arbitrary. For a normally distributed process, 99.73% of data points will naturally fall within three standard deviations of the mean, leaving only a 0.27% chance that any single point lands outside the limits when nothing has actually changed. According to NIST, this translates to an average run length of about 370 points: you could plot roughly that many in-control points, on average, before one randomly trips a limit by chance alone.
This balance matters. Set the limits too narrow (say, two standard deviations) and you’ll constantly chase false signals. Set them too wide and you’ll miss real problems. Three sigma hits a practical sweet spot for most applications.
Calculating Limits for Different Chart Types
The basic three-sigma principle stays the same across chart types, but the specific formulas change depending on what you’re measuring.
Variable Data Charts
When you’re measuring continuous data like weight, temperature, or time, the most common approach uses an X-bar and R chart. Rather than computing standard deviation directly, these charts use precalculated constants based on your subgroup size. For example, with subgroups of 5 observations, you’d multiply the average range by a factor of 0.577 to get the distance from the center line to each control limit. These constants (labeled A2, D3, and D4 in standard reference tables) adjust for small sample sizes and make the math straightforward. With a subgroup size of 5, the upper limit for the range chart uses a D4 factor of 2.114, while D3 doesn’t apply because it’s zero for subgroups smaller than 7.
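A short sketch of the range-based calculation, assuming subgroups of 5 and the A2/D3/D4 values quoted above (the measurement data is made up):

```python
# Factors for subgroup size n = 5, from standard SPC constant tables
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """X-bar and R chart limits from equal-size subgroups (n = 5 assumed here)."""
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    grand_mean = sum(xbars) / len(xbars)   # center line of the X-bar chart
    r_bar = sum(ranges) / len(ranges)      # center line of the R chart
    return (grand_mean - A2 * r_bar, grand_mean + A2 * r_bar), (D3 * r_bar, D4 * r_bar)

# Two hypothetical subgroups of 5 measurements (millimeters)
groups = [[10.0, 10.2, 9.8, 10.1, 9.9], [10.1, 10.0, 10.2, 9.9, 10.3]]
(x_lcl, x_ucl), (r_lcl, r_ucl) = xbar_r_limits(groups)
print(f"X-bar chart: {x_lcl:.4f} .. {x_ucl:.4f}")
print(f"R chart:     {r_lcl:.4f} .. {r_ucl:.4f}")
```

Note that a real study would use 20 or more subgroups; two are shown only to keep the example small.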
Attribute Data Charts
When you’re counting defects or tracking pass/fail rates rather than measuring a continuous variable, different formulas apply. A p-chart tracks the proportion of defective items; its control limits are the average proportion plus or minus three times the square root of p̄(1 − p̄)/n, where n is the sample size. A c-chart tracks the count of defects per unit, using the average count plus or minus three times the square root of that average. Both still follow the three-sigma logic, just adapted to the binomial and Poisson distributions that describe proportion and count data.
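Both sets of limits can be sketched in a few lines; the proportions and counts below are hypothetical:

```python
import math

def p_chart_limits(p_bar, n):
    """p-chart limits for an average proportion defective p_bar and sample size n."""
    spread = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    # A negative lower limit is conventionally clamped to zero
    return max(0.0, p_bar - spread), p_bar + spread

def c_chart_limits(c_bar):
    """c-chart limits for an average defect count per unit c_bar."""
    spread = 3 * math.sqrt(c_bar)
    return max(0.0, c_bar - spread), c_bar + spread

p_lcl, p_ucl = p_chart_limits(0.05, 100)   # 5% defective, samples of 100
c_lcl, c_ucl = c_chart_limits(4.0)         # 4 defects per unit on average
print(f"p-chart: {p_lcl:.4f} .. {p_ucl:.4f}")
print(f"c-chart: {c_lcl:.1f} .. {c_ucl:.1f}")
```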
Rules for Detecting Out-of-Control Signals
A single point beyond the upper or lower control limit is the most obvious signal, but it’s not the only one. Over the decades, analysts developed pattern-based rules that catch subtler problems even when no individual point crosses a limit.
The Western Electric rules, widely used since the 1950s, flag patterns like two out of three consecutive points falling more than two standard deviations from the center on the same side, four out of five points beyond one standard deviation on the same side, or eight consecutive points all above (or all below) the center line. Nelson rules extend this further, flagging things like nine consecutive points on the same side of the center line, six points in a row steadily increasing or decreasing, or fourteen points alternating up and down in a sawtooth pattern.
Each of these patterns is statistically unlikely to happen by chance in a stable process. A run of eight points above the center line, for instance, suggests the process mean has shifted even though no single point breached a limit. These rules help you catch drift and trends before they become large enough to produce an obvious out-of-limit point.
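Two of these checks are simple enough to sketch in code; the series below is hypothetical, and only the out-of-limit test and the run-of-eight rule are shown:

```python
def beyond_limits(points, lcl, ucl):
    """Indices of points outside the control limits (the most obvious signal)."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

def run_of_eight(points, center):
    """Indices that end a run of eight consecutive points on one side of the center line."""
    flags = []
    for i in range(7, len(points)):
        window = points[i - 7 : i + 1]
        if all(x > center for x in window) or all(x < center for x in window):
            flags.append(i)
    return flags

# Hypothetical series with a sustained shift above the center line from index 4 on
series = [9.9, 10.1, 9.8, 10.0, 10.3, 10.4, 10.2, 10.3, 10.5, 10.4, 10.3, 10.6]
print(beyond_limits(series, 9.4, 10.6))  # no single point breaches a limit
print(run_of_eight(series, 10.0))        # but the run rule flags the shift
```

This illustrates the point above: no individual point crosses a limit, yet the run rule still detects that the mean has shifted.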
Control Limits vs. Specification Limits
This is one of the most commonly confused distinctions in quality management, and getting it wrong leads to bad decisions. Control limits describe what your process actually does. Specification limits describe what the customer (or a regulation, or a design requirement) says the process should do. One quality pioneer described it neatly: specification limits are “the voice of the customer,” while control limits are “the voice of the process.”
These two sets of boundaries are calculated completely differently and serve different purposes. Specification limits come from design requirements, customer needs, or regulatory standards. Control limits come purely from your process data. A process can be perfectly in statistical control, with every point inside its control limits, yet still produce output that violates specification limits. The reverse is also true: a process might meet specs today while being statistically out of control, meaning it’s unstable and could drift out of spec at any time.
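A tiny numeric sketch of that distinction, with made-up numbers: the point below sits inside the control limits (ordinary process variation) yet outside the specification limits (nonconforming output):

```python
# Hypothetical process: in statistical control, but not capable of meeting spec
mean, sd = 10.06, 0.03                     # estimated from the process's own data
lcl, ucl = mean - 3 * sd, mean + 3 * sd    # voice of the process: 9.97 .. 10.15
lsl, usl = 9.95, 10.05                     # voice of the customer (spec limits)

point = 10.12
within_control = lcl <= point <= ucl       # stable: no special cause to chase
within_spec = lsl <= point <= usl          # but the part still fails the spec
print(within_control, within_spec)
```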
When a process is in control but outside specifications, the fix is a fundamental process change, like better equipment, different materials, or a redesigned workflow. When a process is out of control, the fix is identifying and removing the special cause. Confusing the two leads you to apply the wrong type of corrective action.
Setting Up Control Limits in Practice
You can’t set meaningful control limits from a handful of data points. The FDA recommends collecting 20 to 30 subgroups (or individual results) during an initial period when the process is operating in a reasonably stable manner. This gives you enough data to estimate the true process mean and variation without being thrown off by a few unusual readings.
During this trial period, you calculate preliminary control limits, then review the chart for any out-of-control signals. If you find points that were caused by identifiable, correctable special causes (a power outage, a training error), you remove those points and recalculate. The remaining data establishes your baseline limits going forward.
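The exclude-and-recalculate step can be sketched as follows (the data and the excluded index are hypothetical):

```python
def baseline_limits(data, exclude=()):
    """Trial limits after dropping points tied to identified special causes."""
    kept = [x for i, x in enumerate(data) if i not in set(exclude)]
    mean = sum(kept) / len(kept)
    sd = (sum((x - mean) ** 2 for x in kept) / (len(kept) - 1)) ** 0.5
    return mean - 3 * sd, mean, mean + 3 * sd

# Index 3 was traced to a known power outage, so it is removed from the baseline
lcl, center, ucl = baseline_limits([10, 12, 11, 50, 10, 11], exclude=(3,))
print(f"LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}")
```

Only points with an identified, correctable cause should be excluded this way; unexplained outliers stay in the data.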
These limits aren’t permanent. As you improve the process, reduce variation, or change materials, you should periodically recalculate. Control limits that were set two years ago may no longer reflect how the process actually behaves today. Recalculating after a deliberate process improvement is especially important, since tighter variation should produce tighter limits, making your chart more sensitive to future problems.
Common Mistakes With Control Limits
The most frequent error is recalculating control limits every time new data comes in. This defeats the purpose entirely. If the limits shift with every batch, you lose the stable reference frame that makes the chart useful. Limits should be recalculated only when you have a deliberate reason, like a confirmed process improvement or a planned change in materials.
Another common mistake is treating control limits as goals. Being “in control” doesn’t mean “good.” It means “predictable.” A process can be perfectly stable and predictably producing mediocre or even unacceptable output. Control limits tell you whether the process is behaving consistently. Whether that consistent behavior meets your needs is a separate question answered by comparing process performance to specification limits or capability indices.
Finally, applying control limits to data that isn’t collected in time order undermines the entire method. Control charts are fundamentally about sequence. They detect shifts, trends, and cycles that unfold over time. If you plot data points out of order or lump together measurements from different time periods, the patterns that signal real problems become invisible.

