What Is Standard Deviation? Examples Explained

Standard deviation is a number that tells you how spread out a set of values is from their average. A small standard deviation means most values cluster close to the average, while a large one means values are scattered more widely. It’s one of the most common tools in statistics, and it’s easier to understand with a concrete example than with a formula alone.

A Simple Example With Test Scores

Imagine two classrooms of five students each take the same quiz, and both classes end up with the same average score of 80. In Class A, the scores are 78, 79, 80, 81, and 82. In Class B, the scores are 60, 70, 80, 90, and 100. Both averages are identical, but the spread is completely different. Standard deviation captures that difference in a single number.

For Class A, the standard deviation is about 1.4. For Class B, it’s about 14.1. That tells you at a glance that Class B’s scores are far more spread out, even though the “typical” student scored the same in both rooms. The average alone hides this information. Standard deviation reveals it.
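
A quick way to check the spread of both classes is Python’s standard-library statistics module; pstdev computes the population standard deviation (dividing by n, as in the step-by-step walkthrough below):

```python
from statistics import mean, pstdev

class_a = [78, 79, 80, 81, 82]
class_b = [60, 70, 80, 90, 100]

# Same average in both rooms...
print(mean(class_a), mean(class_b))   # 80 80
# ...but very different spreads.
print(round(pstdev(class_a), 2))      # 1.41
print(round(pstdev(class_b), 2))      # 14.14
```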

How to Calculate It Step by Step

Let’s walk through Class A’s scores (78, 79, 80, 81, 82) to see exactly where that 1.4 comes from.

  • Step 1: Find the mean. Add all the values and divide by the count. (78 + 79 + 80 + 81 + 82) ÷ 5 = 80.
  • Step 2: Subtract the mean from each value. These gaps are called deviations: −2, −1, 0, +1, +2.
  • Step 3: Square each deviation. Squaring removes the negative signs so they don’t cancel out: 4, 1, 0, 1, 4.
  • Step 4: Add those squared deviations. 4 + 1 + 0 + 1 + 4 = 10.
  • Step 5: Divide by the number of values. 10 ÷ 5 = 2. This result is called the variance.
  • Step 6: Take the square root. √2 ≈ 1.41. That’s the standard deviation.
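
The six steps above translate almost line for line into Python; here is a minimal sketch using only the standard library:

```python
import math

scores = [78, 79, 80, 81, 82]

# Step 1: the mean
m = sum(scores) / len(scores)              # 80.0
# Steps 2 and 3: squared deviations from the mean
sq_devs = [(x - m) ** 2 for x in scores]   # [4.0, 1.0, 0.0, 1.0, 4.0]
# Steps 4 and 5: the variance (divide by n)
variance = sum(sq_devs) / len(scores)      # 2.0
# Step 6: the square root is the standard deviation
std_dev = math.sqrt(variance)              # about 1.41
```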

The square root in that last step brings the number back into the same units as the original data. If you’re measuring quiz points, the standard deviation is also in quiz points, which makes it easy to interpret.

Population vs. Sample Standard Deviation

The example above treated those five scores as the entire group we care about. That’s a population standard deviation, and you divide by n (the total count). But if those five students are just a sample drawn from a larger group, you divide by n − 1 instead of n. For Class A, that would be 10 ÷ 4 = 2.5, giving a standard deviation of about 1.58 instead of 1.41.

Dividing by n − 1 slightly increases the result, which corrects for the fact that a sample tends to underestimate the true spread of the full population. In practice, if you’re working with survey data, experiment results, or any subset of a bigger group, use n − 1. If you genuinely have every single data point (every student in a school, every transaction in a quarter), use n.
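
Python’s standard library keeps the two versions separate: pstdev divides by n (population), while stdev divides by n − 1 (sample). For Class A’s scores:

```python
from statistics import pstdev, stdev

scores = [78, 79, 80, 81, 82]

print(round(pstdev(scores), 2))   # 1.41 (population: divide by n)
print(round(stdev(scores), 2))    # 1.58 (sample: divide by n - 1)
```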

A Real-World Example: Human Height

Standard deviation becomes especially useful when you apply it to real measurements. The average height of adult men in the United States is about 68.9 inches (roughly 5′9″), according to CDC data. The standard deviation for adult male height is typically around 3 inches.

That means most men fall within 3 inches of the average, so between about 5′6″ and 6′0″. A man who is 6′3″ stands about two standard deviations above the mean, putting him taller than the vast majority of the population. Standard deviation gives you a ruler for measuring how unusual any single value is.
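
Counting how many standard deviations a value sits from the mean is called a z-score, and it takes one line of arithmetic. The numbers below are the approximate figures from the text:

```python
mean_height = 68.9   # inches, approximate CDC average from the text
sd = 3.0             # typical standard deviation for adult male height

height = 75.0        # 6 ft 3 in, expressed in inches
z = (height - mean_height) / sd   # how many SDs above the mean
print(round(z, 2))   # 2.03, i.e. about two standard deviations
```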

The 68-95-99.7 Rule

When data follows a bell-shaped (normal) distribution, standard deviation unlocks a powerful shortcut. About 68% of all values fall within one standard deviation of the mean. About 95% fall within two standard deviations. And about 99.7% fall within three.

Back to the height example: if the mean is 68.9 inches and the standard deviation is 3 inches, then roughly 68% of men are between 65.9 and 71.9 inches tall. About 95% are between 62.9 and 74.9 inches. And nearly everyone (99.7%) is between 59.9 and 77.9 inches. This pattern, sometimes called the empirical rule, works for anything that’s normally distributed: birth weights, blood pressure readings, manufacturing tolerances, standardized test scores.
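
Given only the mean and the standard deviation, the three empirical-rule ranges fall out mechanically (again using the approximate height figures from the text):

```python
mean_height = 68.9   # inches
sd = 3.0

coverage = {1: "68%", 2: "95%", 3: "99.7%"}
ranges = {}
for k in (1, 2, 3):
    low, high = mean_height - k * sd, mean_height + k * sd
    ranges[k] = (low, high)
    print(f"about {coverage[k]} fall between {low:.1f} and {high:.1f} inches")
```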

What High and Low Values Tell You

A standard deviation close to zero means values are nearly identical. Think of a factory producing bolts that are all supposed to be 10 mm long. If the standard deviation is 0.01 mm, quality is extremely consistent. If it’s 2 mm, something is seriously wrong with the process.

In medicine, this idea shows up in blood pressure monitoring. A person with a true average systolic blood pressure of 130 mm Hg typically sees readings that bounce around with a standard deviation of about 13 mm Hg. That means a single reading of 143 or 117 is completely normal variation, not a sign that anything changed. Understanding the standard deviation helps you avoid overreacting to a single measurement.

Context matters, though. A standard deviation of 5 means something very different for quiz scores (out of 100) than for the number of children in a family (typically 0 to 4). You always interpret it relative to the scale and mean of the data.

Standard Deviation vs. Standard Error

These two terms sound similar but answer different questions. Standard deviation describes how scattered individual data points are. If you measure the heights of 200 people, the standard deviation tells you how much those 200 heights vary from one another.

Standard error, on the other hand, tells you how precise your calculated average is likely to be. If you sampled a different 200 people, how much would the average shift? Standard error shrinks as your sample gets larger, because bigger samples produce more stable averages. Standard deviation doesn’t necessarily shrink with a bigger sample, because it’s describing the natural spread of the data itself, not the reliability of your estimate.
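
A small simulation makes the contrast visible. The data below are made up (normally distributed heights with a fixed random seed), and the standard error is computed with the usual formula: standard deviation divided by the square root of the sample size.

```python
import random
from statistics import stdev

random.seed(0)  # fixed seed so the illustration is reproducible

results = []
for n in (50, 200, 800):
    # Hypothetical sample: heights drawn from a normal distribution
    sample = [random.gauss(68.9, 3) for _ in range(n)]
    sd = stdev(sample)      # spread of individuals: does not systematically shrink
    se = sd / n ** 0.5      # precision of the mean: shrinks as n grows
    results.append((n, sd, se))
    print(f"n={n}: SD = {sd:.2f}, SE = {se:.2f}")
```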

When you see error bars on a chart in a news article or research summary, check whether they represent standard deviation or standard error. Standard deviation bars are always the wider of the two, because the standard error is the standard deviation divided by the square root of the sample size, and the two bar types answer fundamentally different questions.

Where You’ll See It in Everyday Life

Standard deviation appears in more places than most people realize. Investment firms report it as a measure of volatility: a mutual fund with a standard deviation of 20% is a wilder ride than one with a standard deviation of 8%. Weather forecasts use it implicitly when they give a range for tomorrow’s high temperature. Schools use it to convert raw test scores into percentiles. Sports analysts use it to identify outlier performances.

Any time someone says a result is “within the normal range” or “two sigma from the mean,” they’re using standard deviation as the yardstick. Once you recognize the concept, you’ll spot it everywhere, giving you a much sharper sense of what numbers actually mean and how much trust to place in any single measurement.