Standard deviation is a number that tells you how spread out a set of values is from the average. If the standard deviation is small, most values are clustered close to the average. If it’s large, the values are scattered more widely. Think of it as a measure of how much variety or inconsistency exists in a group of numbers.
A Simple Example With Test Scores
Imagine two classrooms that both scored an average of 80 on a math test. In the first classroom, every student scored between 75 and 85. In the second, scores ranged from 50 to 100. Both classes have the same average, but the experience in each room was very different. The first class has a small standard deviation (scores are tightly grouped), while the second has a large one (scores are all over the place).
This is why the average alone can be misleading. Two datasets can share the same average yet look completely different. The average tells you the center of the data. Standard deviation tells you how much the individual values scatter around that center. Taken together, these two numbers give you a much clearer mental picture of what’s really going on.
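To make this concrete, here is a short Python sketch using made-up scores that match the scenario above (both sets of scores average exactly 80):

```python
import statistics

# Hypothetical scores: both classes average 80, but the spread differs
class_a = [75, 78, 80, 82, 85]    # tightly grouped
class_b = [50, 75, 80, 95, 100]   # scattered widely

print(statistics.mean(class_a), statistics.mean(class_b))  # 80 80
print(round(statistics.pstdev(class_a), 1))                # 3.4
print(round(statistics.pstdev(class_b), 1))                # 17.6
```

Same average, but the second class's standard deviation is more than five times larger, which is exactly the difference you'd feel standing in each room.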
The 68-95-99.7 Rule
When data follows a bell-shaped curve (which many natural measurements do, like heights, blood pressure readings, or test scores), standard deviation becomes especially powerful. A pattern called the empirical rule kicks in:
- 68% of values fall within one standard deviation of the average.
- 95% of values fall within two standard deviations.
- 99.7% of values fall within three standard deviations.
Say the average adult male height in a population is 5′10″ with a standard deviation of 3 inches. That means about 68% of men are between 5′7″ and 6′1″. About 95% fall between 5′4″ and 6′4″. And virtually everyone (99.7%) is between 5′1″ and 6′7″. The farther you get from the average, the rarer those values become.
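The bands in the height example can be computed directly. A minimal Python sketch, using 70 inches for 5′10″:

```python
# Empirical-rule bands for the height example: mean 70 inches, SD 3 inches
mean, sd = 70, 3
for k, pct in [(1, "68"), (2, "95"), (3, "99.7")]:
    low, high = mean - k * sd, mean + k * sd
    print(f"about {pct}% of heights fall between {low} and {high} inches")
```

This prints bands of 67-73, 64-76, and 61-79 inches, matching the 5′7″-6′1″, 5′4″-6′4″, and 5′1″-6′7″ ranges above.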
How It’s Calculated
You don’t need to calculate standard deviation by hand very often (spreadsheets and calculators handle it), but understanding the steps helps the concept click. Here’s the basic process:
First, find the average of all your values. Next, measure how far each individual value is from that average. Then square each of those distances (this prevents negative numbers from canceling out positive ones). Average all those squared distances together. Finally, take the square root of that result. The number you get is the standard deviation.
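The steps above translate almost line for line into code. Here is a Python sketch, checked against the standard library's `statistics.pstdev` (this is the "population" version, which divides by the number of values; sample formulas divide by one less):

```python
import math
import statistics

def std_dev(values):
    """Standard deviation, computed step by step (population version)."""
    avg = sum(values) / len(values)              # 1. find the average
    squared = [(x - avg) ** 2 for x in values]   # 2-3. distances, squared
    variance = sum(squared) / len(values)        # 4. average the squares
    return math.sqrt(variance)                   # 5. take the square root

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(std_dev(data))            # 2.0
print(statistics.pstdev(data))  # 2.0 -- same result from the stdlib
```

Note that `variance` on the fourth step is exactly the variance described in the next paragraph; the square root is the only thing separating the two concepts.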
The squaring-then-square-rooting step is what distinguishes standard deviation from a related concept called variance. Variance is just the step before the square root: the average of the squared distances. The problem with variance is that it’s expressed in squared units, which aren’t intuitive. If you’re measuring heights in inches, variance comes out in “inches squared,” which is meaningless in everyday terms. Taking the square root converts the result back into the original units (inches), making it something you can actually interpret.
Standard Deviation in Investing
One of the most common real-world uses of standard deviation is measuring investment risk. When financial analysts talk about a stock’s “volatility,” they’re usually talking about its standard deviation.
A stock with a high standard deviation swings dramatically in price. It might soar one month and drop the next. A stock with a low standard deviation stays relatively close to its average price over time, making it more predictable. For example, if a stock's price has averaged $45 with a standard deviation of $5, and its prices follow a roughly bell-shaped pattern, the empirical rule suggests it will trade between $35 and $55 (two standard deviations in either direction) about 95% of the time.
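As a simplified sketch (analysts typically measure volatility on returns rather than raw prices, but the intuition is the same), here are two made-up price histories with the same average but very different spreads:

```python
import statistics

# Hypothetical monthly closing prices for two made-up stocks,
# both averaging $45
steady = [44, 46, 45, 47, 44, 45, 46, 43]
volatile = [30, 55, 40, 60, 35, 50, 45, 45]

print(round(statistics.pstdev(steady), 1))    # 1.2 -- low volatility
print(round(statistics.pstdev(volatile), 1))  # 9.4 -- high volatility
```

An average price alone would make these two stocks look identical; the standard deviation is what reveals that one is far riskier than the other.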
This is why stable blue-chip stocks tend to have low standard deviations, while speculative or high-growth stocks have high ones. The higher the standard deviation, the wider the range of possible outcomes, and the greater the chance of both large gains and large losses. Investors use this alongside other risk metrics to decide whether a particular investment fits their tolerance for uncertainty.
Standard Deviation in Manufacturing
Factories rely on standard deviation to keep products consistent. If a machine is supposed to cut parts to exactly 10 centimeters, a standard deviation of 1 millimeter means most parts are very close to the target. A standard deviation of 5 millimeters signals a problem: sizes are fluctuating too much, and defective products are likely slipping through.
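A quality check along these lines might look like the following sketch, with made-up measurements and a hypothetical 1 mm control limit:

```python
import statistics

# Made-up measurements (in cm) from a cutting machine targeting 10 cm
parts = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97, 10.00]

sd_cm = statistics.pstdev(parts)
print(f"standard deviation: {sd_cm * 10:.2f} mm")  # well under 1 mm here

# Flag the production run if spread exceeds a 1 mm (0.1 cm) limit
if sd_cm > 0.1:
    print("spread too high -- investigate the machine")
```

In a real plant the limit would come from the product's engineering tolerances, but the logic is the same: track the spread, and alarm when it grows.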
This idea sits at the heart of the “Six Sigma” quality control system used by manufacturers worldwide. The goal is to shrink standard deviation so tightly that defects become extraordinarily rare. By tracking standard deviation over time, quality teams can spot when something has gone wrong, whether that’s a machine falling out of alignment, a batch of raw materials that doesn’t meet spec, or an operator who needs additional training. Consistently high deviation is a red flag that triggers investigation.
High vs. Low: What the Number Tells You
A standard deviation close to zero means the values in your dataset are nearly identical. Picture a group of professional runners whose mile times are all within a second of each other. A large standard deviation means there’s a wide range of outcomes. Picture a community fun run where some people sprint and others walk.
There’s no universal threshold for what counts as “high” or “low.” It always depends on context. A standard deviation of 5 pounds is tiny if you’re weighing elephants and enormous if you’re weighing hamsters. The number only has meaning relative to the scale of what you’re measuring and the average you’re comparing it to. When you see a standard deviation reported alongside an average, ask yourself: is that spread large or small compared to the average itself? That comparison is what turns a raw number into useful information.