What Is Mean and Standard Deviation in Statistics?

The mean is the average of a set of numbers, and the standard deviation tells you how spread out those numbers are around that average. Together, they give you a two-part summary of any dataset: where the center is, and how tightly the values cluster around it. Understanding both is essential for reading research, interpreting test scores, and making sense of data in everyday life.

How the Mean Works

The mean is what most people think of as “the average.” You add up all the values and divide by how many there are. If five students score 70, 80, 85, 90, and 95 on an exam, the mean is (70 + 80 + 85 + 90 + 95) ÷ 5 = 84.
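The exam example above can be reproduced in a few lines of Python (the `mean` function and `scores` list here are just illustrative names):

```python
def mean(values):
    # Sum all the values, then divide by how many there are.
    return sum(values) / len(values)

scores = [70, 80, 85, 90, 95]
print(mean(scores))  # 84.0
```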

The mean gives you a single number that represents the center of a dataset. It’s useful, but it has a limitation: it tells you nothing about the range or consistency of the values. Two classrooms could both have a mean test score of 80, but in one class everyone scored between 75 and 85, while in the other scores ranged from 40 to 100. That’s where standard deviation comes in.

What Standard Deviation Measures

Standard deviation is a measure of how dispersed data points are relative to the mean. A small standard deviation means the values are clustered tightly around the average. A large standard deviation means they’re spread farther apart. A standard deviation of zero means no variation at all: every data point equals the mean.

Think of it this way: if the mean is the bullseye on a dartboard, the standard deviation describes how scattered the darts are. A skilled player has a small standard deviation (darts grouped near the center). A beginner has a large one (darts all over the board), even if both players happen to have the same average position.

How to Calculate Standard Deviation

The calculation builds on a related concept called variance. Here are the steps:

  • Find the mean of your data.
  • Subtract the mean from each data point to get the difference.
  • Square each difference (this makes every value non-negative, so differences above and below the mean can’t cancel out).
  • Average those squared differences to get the variance.
  • Take the square root of the variance to get the standard deviation.

Standard deviation is simply the square root of the variance. Variance measures spread in squared units, which isn’t intuitive. Taking the square root converts it back into the same units as your original data. If you’re measuring heights in inches, the standard deviation is also in inches, making it much easier to interpret.
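The five steps above can be sketched directly in Python. This is the population version (dividing by n, as in step four); the n−1 sample adjustment is covered in the next section. Function names are illustrative:

```python
import math

def variance(values):
    # Step 1: find the mean.
    m = sum(values) / len(values)
    # Steps 2-3: squared differences from the mean.
    squared_diffs = [(x - m) ** 2 for x in values]
    # Step 4: average the squared differences (population variance).
    return sum(squared_diffs) / len(values)

def std_dev(values):
    # Step 5: square root of the variance.
    return math.sqrt(variance(values))

scores = [70, 80, 85, 90, 95]
print(variance(scores))  # 74.0
print(std_dev(scores))   # about 8.6
```

Note that for the exam scores, the standard deviation (about 8.6) is in the same units as the scores themselves, while the variance (74) is in squared points.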

Population vs. Sample Formulas

There’s one important difference depending on what your data represents. If your dataset includes every member of a group you care about (a full population), you divide by the total number of data points when calculating variance. If your data is a sample drawn from a larger population, you divide by one fewer than the number of data points. This adjustment, dividing by n−1 instead of n, corrects for the fact that a sample tends to underestimate the true variability in a population. In practice, most real-world data is a sample, so the n−1 version is the one you’ll use most often.
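Python’s standard library exposes both versions, so you don’t have to remember which divisor goes where: `statistics.pstdev` divides by n (population), while `statistics.stdev` divides by n−1 (sample). Using the same exam scores as before:

```python
import statistics

data = [70, 80, 85, 90, 95]

# Population standard deviation: divide by n.
print(statistics.pstdev(data))  # about 8.60

# Sample standard deviation: divide by n - 1.
print(statistics.stdev(data))   # about 9.62
```

The sample version is always slightly larger, reflecting the correction for underestimated variability.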

The 68-95-99.7 Rule

When data follows a bell-shaped (normal) distribution, the mean and standard deviation unlock a powerful shortcut known as the empirical rule. Roughly 68% of all values fall within one standard deviation of the mean. About 95% fall within two standard deviations. And 99.7%, nearly everything, falls within three standard deviations.

This means that if you know the mean and standard deviation, you can quickly estimate what counts as typical and what counts as unusual. Anything beyond two standard deviations from the mean only happens about 5% of the time. Anything beyond three standard deviations is genuinely rare, occurring in less than 0.3% of cases.
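You can check the empirical rule yourself by simulating normally distributed data and counting how many values land within one, two, and three standard deviations. This sketch uses an arbitrary mean of 100 and standard deviation of 15 (the specific numbers don’t matter, only the fractions):

```python
import random

random.seed(42)
mean, sd = 100, 15
samples = [random.gauss(mean, sd) for _ in range(100_000)]

def fraction_within(k):
    # Share of samples within k standard deviations of the mean.
    low, high = mean - k * sd, mean + k * sd
    return sum(low <= x <= high for x in samples) / len(samples)

for k in (1, 2, 3):
    print(k, round(fraction_within(k), 3))
```

With a large enough sample, the printed fractions come out close to 0.68, 0.95, and 0.997.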

A Real-World Example

Health research uses mean and standard deviation constantly. In a long-term cardiovascular study called the Jackson Heart Study, researchers reported systolic blood pressure among 757 participants as 110.1 (6.6) at the first visit. That notation means the average blood pressure was 110.1 mmHg, with a standard deviation of 6.6. Most participants’ readings fell within about 6.6 points above or below 110.1, giving you a quick picture of both the typical reading and how much variation existed in the group.

By the third visit, the overall group’s blood pressure had shifted to 118.6 (13.9). Not only did the average go up, but the standard deviation nearly doubled, from 6.6 to 13.9. That tells you the group became less uniform over time: some participants’ blood pressure stayed low while others’ rose significantly. The growing standard deviation reveals something the mean alone would miss.

Z-Scores: Locating Individual Values

Once you have the mean and standard deviation, you can figure out exactly where any individual data point sits relative to the rest. This is called a z-score, and the formula is straightforward: subtract the mean from your data point, then divide by the standard deviation.

A z-score of 0 means the value is exactly average. A z-score of 1.5 means it’s one and a half standard deviations above average. A z-score of −2 means it’s two standard deviations below. Positive z-scores sit above the mean, negative ones sit below. Any z-score above 3 or below −3 is generally considered unusual, because values that far from the center are extremely rare in a normal distribution.

Standardized tests use this principle. If the mean SAT score is 1060 with a standard deviation of 200, and you scored 1260, your z-score is (1260 − 1060) ÷ 200 = 1.0. You’re one standard deviation above average, meaning you scored higher than roughly 84% of test-takers.
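The z-score formula and the SAT example translate directly to code. The percentile lookup uses `statistics.NormalDist`, which assumes the scores follow a normal distribution:

```python
from statistics import NormalDist

def z_score(value, mean, sd):
    # How many standard deviations the value sits from the mean.
    return (value - mean) / sd

# SAT example from the text: mean 1060, standard deviation 200.
z = z_score(1260, 1060, 200)
print(z)  # 1.0

# Fraction of test-takers below that score, assuming a normal distribution.
print(round(NormalDist().cdf(z), 3))  # about 0.841
```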

Standard Deviation vs. Standard Error

You’ll sometimes see data reported with “standard error” instead of standard deviation, especially in scientific papers. These measure different things. Standard deviation describes how spread out the individual data points are. Standard error describes how confident you can be in the calculated mean itself, essentially how much that average might shift if you collected a new sample.

Standard error is always smaller than standard deviation for the same dataset, which is why some researchers use it (intentionally or not) to make their results look more precise than they are. When you’re trying to understand the variability among individuals in a group, standard deviation is the number you want. Standard error matters more when you’re evaluating how reliable an estimated average is.
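The standard error of the mean is computed as the sample standard deviation divided by the square root of the sample size, which is why it’s always smaller than the standard deviation itself (this formula isn’t stated above, but it’s the standard definition). A minimal sketch:

```python
import math
import statistics

data = [70, 80, 85, 90, 95]

sd = statistics.stdev(data)        # spread of the individual values
se = sd / math.sqrt(len(data))     # uncertainty in the estimated mean
print(round(sd, 2), round(se, 2))  # sd is about 9.62, se about 4.30
```

Collecting more data shrinks the standard error (the mean estimate gets more reliable) but not the standard deviation (the individuals are as varied as ever).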

Why Both Numbers Matter Together

Reporting a mean without a standard deviation is like giving directions without a distance. Saying “the average commute is 30 minutes” is useful, but knowing the standard deviation is 5 minutes (most people commute between 25 and 35 minutes) paints a very different picture than a standard deviation of 25 minutes (commutes range wildly from 5 minutes to nearly an hour). The mean tells you what’s typical. The standard deviation tells you how much you should trust “typical” as a description of the whole group.