Standard deviation is a number that tells you how spread out a set of values is from the average. A small standard deviation means the values cluster tightly around the average, while a large one means they’re scattered more widely. It’s one of the most common tools in statistics, showing up everywhere from test scores to medical research to financial reports.
How It Works in Plain Terms
Think of standard deviation as a measure of consistency. Say two classrooms of students both score an average of 80 on a test. In one classroom, every student scored between 75 and 85. In the other, scores ranged from 50 to 100. Both classes have the same average, but the second class has a much larger standard deviation because the scores are more spread out. The average alone doesn’t tell you how typical that number is of the individual scores. Standard deviation fills that gap.
A standard deviation close to zero means nearly every data point sits right on top of the average. As the number grows, the data points fan out further. The standard deviation is always expressed in the same units as the original data. If you’re measuring heights in centimeters, the standard deviation is also in centimeters. If you’re looking at dollars, the standard deviation is in dollars.
How to Calculate It Step by Step
You don’t need special software to calculate standard deviation by hand, though a calculator helps. The process has five steps:
- Find the mean. Add up all the values and divide by how many there are.
- Find each distance from the mean. Subtract the mean from each data point, then square the result. Squaring removes negative signs so that values below the mean don’t cancel out values above it.
- Add up those squared distances.
- Divide by the number of data points. This gives you a value called the variance.
- Take the square root. This converts the variance back into the original units, giving you the standard deviation.
Here’s a quick example. Suppose five friends weigh 150, 155, 160, 165, and 170 pounds. The mean is 160. The squared distances from the mean are 100, 25, 0, 25, and 100. Those add up to 250. Divide by 5 to get a variance of 50. The square root of 50 is about 7.07, so the standard deviation is roughly 7 pounds. That tells you a typical person in this group is about 7 pounds away from the average.
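The five steps above can be sketched in a few lines of Python, using the same weights as the worked example:

```python
import math

weights = [150, 155, 160, 165, 170]

# Step 1: the mean
mean = sum(weights) / len(weights)                     # 160.0

# Steps 2-3: squared distances from the mean, summed
squared_distances = [(w - mean) ** 2 for w in weights]  # [100, 25, 0, 25, 100]
total = sum(squared_distances)                          # 250.0

# Step 4: variance (dividing by the full count)
variance = total / len(weights)                         # 50.0

# Step 5: the square root converts back to the original units
std_dev = math.sqrt(variance)                           # about 7.07 pounds
```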
The 68-95-99.7 Rule
When data follows a bell-shaped curve (which many natural measurements do, from blood pressure readings to adult heights), standard deviation unlocks a powerful pattern:
- 68% of all data points fall within one standard deviation of the mean.
- 95% fall within two standard deviations.
- 99.7% fall within three standard deviations.
This is called the empirical rule, and it lets you quickly judge how unusual a value is. If the average adult male height is 5’9″ with a standard deviation of about 3 inches, roughly 68% of men are between 5’6″ and 6’0″. Someone who is 6’3″ is two standard deviations above the mean, placing them taller than about 97.5% of the population. Someone 6’6″ or taller is more than three standard deviations out, meaning fewer than 1 in 600 men are that tall.
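One way to see the empirical rule in action is to simulate bell-curve data and count how many points land within each band. This sketch uses the height figures from the example (a mean of 69 inches and a standard deviation of 3 inches) and Python's standard library random-number generator:

```python
import random

random.seed(42)
mean, sd = 69, 3  # 5'9" average height, 3-inch standard deviation
heights = [random.gauss(mean, sd) for _ in range(100_000)]

fractions = {}
for k in (1, 2, 3):
    within = sum(1 for h in heights if abs(h - mean) <= k * sd)
    fractions[k] = within / len(heights)
    print(f"within {k} SD: {fractions[k]:.1%}")
```

With a sample this large, the three printed fractions come out very close to 68%, 95%, and 99.7%.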
Population vs. Sample Standard Deviation
There’s one important wrinkle. The five-step calculation above works when you have data for an entire group (the full “population”). But most of the time, you’re working with a sample, a smaller slice of the whole. In that case, you divide by the number of data points minus one instead of the full count. This adjustment makes the estimate slightly larger, which corrects for the fact that a sample tends to underestimate the true spread of the whole population.
The logic is intuitive: your sample mean always sits somewhere in the middle of the data you collected, but the true population mean could be further out. By dividing by a smaller number, you nudge the result upward to account for that gap. If your sample is large (hundreds or thousands of data points), this correction barely changes the result. For small samples, it matters a lot.
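Python's statistics module exposes both versions, which makes the difference easy to see on the five-weight example from earlier:

```python
import statistics

weights = [150, 155, 160, 165, 170]

# Population SD: divide by n (use when you have the whole group)
pop_sd = statistics.pstdev(weights)    # sqrt(250 / 5), about 7.07
# Sample SD: divide by n - 1 (use when the data is a sample)
samp_sd = statistics.stdev(weights)    # sqrt(250 / 4), about 7.91
```

As the text describes, the sample version always comes out slightly larger, and the gap shrinks as the number of data points grows.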
Standard Deviation vs. Variance
Variance and standard deviation are closely related. Variance is simply the standard deviation squared, or equivalently, standard deviation is the square root of variance. Both measure spread, but they differ in units. If your data is in pounds, the variance is in “pounds squared,” which isn’t easy to interpret. Standard deviation brings the number back into the original units, which is why it’s used far more often in everyday reporting.
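The relationship is easy to verify in code, continuing with the weight data from the earlier example:

```python
import math
import statistics

weights = [150, 155, 160, 165, 170]

variance = statistics.pvariance(weights)  # 50, in "pounds squared"
std_dev = statistics.pstdev(weights)      # about 7.07, back in pounds

# Standard deviation is exactly the square root of variance
assert math.isclose(std_dev, math.sqrt(variance))
```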
Standard Deviation vs. Standard Error
You’ll sometimes see “standard error” (or “standard error of the mean”) reported alongside standard deviation, especially in scientific studies. They answer different questions. Standard deviation describes how much individual data points vary within a single group. Standard error describes how much the average itself would vary if you repeated the entire study many times.
Standard error is always smaller than standard deviation because averages are more stable than individual measurements. When a study reports results as “the average improvement was 12 points (SD = 8),” it’s telling you how much individual patients varied. When it reports “12 points (SE = 2),” it’s telling you how confident you can be in that average. Both are useful, but they communicate very different things.
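Concretely, the standard error of the mean is the standard deviation divided by the square root of the sample size, which is why it shrinks as studies get larger. The study figures above would line up if the sample held 16 patients (an assumed number, chosen for illustration):

```python
import math

sd = 8   # spread of individual patients, from the example
n = 16   # assumed sample size for illustration

se = sd / math.sqrt(n)  # 8 / 4 = 2.0, matching "12 points (SE = 2)"
```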
Spotting Outliers With Standard Deviation
Standard deviation gives you a straightforward way to flag unusual data points. A value’s distance from the mean, measured in standard deviations, is called a z-score. A z-score of 0 means the value is exactly average. A z-score of 2 means it’s two standard deviations above the mean.
In practice, data points more than two or three standard deviations from the mean often get flagged for a closer look. The National Institute of Standards and Technology notes that some researchers recommend flagging potential outliers using a modified z-score (one based on the median rather than the mean) above 3.5, since ordinary z-scores can be misleading in small data sets. There’s no single universal cutoff; the right threshold depends on context. But the principle is the same: the further a value sits from the mean in standard-deviation terms, the more unusual it is.
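A minimal z-score filter looks like this. The data values are invented for illustration, and the cutoff of 2 is one common choice rather than a universal rule:

```python
import statistics

data = [102, 98, 101, 99, 100, 97, 103, 100, 99, 135]  # one suspicious value

mean = statistics.mean(data)
sd = statistics.pstdev(data)

# z-score: distance from the mean, measured in standard deviations
flagged = [x for x in data if abs(x - mean) / sd > 2]  # catches 135
```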
Why It Shows Up Everywhere
Standard deviation is one of the first statistics reported in almost any study, survey, or data summary because the average by itself can be deeply misleading. Two investments might both return 8% per year on average, but one might swing between negative 20% and positive 40% while the other stays between 5% and 11%. The standard deviation makes that difference visible in a single number.
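The investment comparison can be made concrete with two illustrative return series, invented here to match the ranges described, that share the same average but differ wildly in spread:

```python
import statistics

# Invented annual returns, in percent
steady   = [5, 7, 8, 9, 11, 8, 7, 9]
volatile = [-20, 40, 30, -10, 25, -5, 20, -16]

# Same average return for both...
mean_steady = statistics.mean(steady)      # 8.0
mean_volatile = statistics.mean(volatile)  # 8.0

# ...but very different standard deviations
sd_steady = statistics.pstdev(steady)      # about 1.7
sd_volatile = statistics.pstdev(volatile)  # about 21.8
```

The single number that separates these two investments is exactly the standard deviation.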
In health research, standard deviation shows whether a treatment works consistently across patients or helps some dramatically while doing nothing for others. In manufacturing, it tracks whether products come off the line at a uniform size or with unacceptable variation. In education, it reveals whether a test meaningfully separates students by ability or lumps everyone into a narrow band. Anywhere you see an average, standard deviation is the context that makes it meaningful.

