How Do You Find the Standard Deviation, Step by Step

To find the standard deviation, you calculate how far each data point sits from the average, then combine those distances into a single number. The result tells you how spread out your data is. A small standard deviation means values cluster tightly around the mean, while a large one means they’re scattered widely.

The process involves five clear steps, and once you understand the logic behind each one, the whole concept clicks into place.

The Five Steps to Calculate It by Hand

Start with any set of numbers. Say you have five test scores: 80, 85, 90, 75, and 95.

Step 1: Find the mean. Add all your values together and divide by how many you have. Here, that’s 425 ÷ 5 = 85.

Step 2: Find each deviation from the mean. Subtract the mean from every data point. Your deviations are: -5, 0, 5, -10, and 10. Some are negative (below average) and some positive (above average).

Step 3: Square each deviation. This removes the negative signs so that values below the mean don’t cancel out values above it. You get: 25, 0, 25, 100, and 100.

Step 4: Average those squared deviations. Add them up (250 total), then divide. If you’re working with a full population, divide by the number of data points (5). If you’re working with a sample, divide by one less than that (4). This result is called the variance. For our sample: 250 ÷ 4 = 62.5.

Step 5: Take the square root. The square root of the variance is the standard deviation. √62.5 ≈ 7.91. That number is in the same units as your original data, which is what makes standard deviation more useful than variance for everyday interpretation.
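The five steps above can be sketched in a few lines of Python, using the same test scores and the same divide-by-4 sample choice as the worked example:

```python
# The five steps, applied to the example test scores.
scores = [80, 85, 90, 75, 95]

# Step 1: the mean.
mean = sum(scores) / len(scores)                    # 425 / 5 = 85.0

# Steps 2-3: deviations from the mean, squared.
squared_devs = [(x - mean) ** 2 for x in scores]    # [25, 0, 25, 100, 100]

# Step 4: the variance. Divide by n - 1 for a sample, n for a population.
sample_variance = sum(squared_devs) / (len(scores) - 1)  # 250 / 4 = 62.5

# Step 5: the square root gives the standard deviation.
sample_sd = sample_variance ** 0.5
print(round(sample_sd, 2))                          # 7.91
```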

Population vs. Sample: Why the Denominator Changes

The most common source of confusion is knowing when to divide by n and when to divide by n – 1. The difference comes down to whether your data set includes every possible value you care about or just a slice of it.

If you measured the height of every single employee in a company, that’s a population. You divide by n, the total count. But if you surveyed 50 out of 500 employees, that’s a sample. You divide by n – 1 instead, which slightly increases the result. This correction exists because a sample tends to underestimate the true spread of the full population. Dividing by a smaller number compensates for that bias and gives you a more accurate estimate.

In practice, most real-world data sets are samples. Unless you have every possible observation, use n – 1.
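To see the effect of the two denominators side by side, here is a small sketch (the data values are made up for illustration):

```python
data = [4, 8, 6, 5, 7]  # hypothetical measurements
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations

pop_sd = (ss / n) ** 0.5         # population: divide by n
samp_sd = (ss / (n - 1)) ** 0.5  # sample: divide by n - 1

# The smaller denominator always makes the sample version slightly larger.
print(pop_sd < samp_sd)  # True
```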

How to Read the Result

A standard deviation close to zero means your data points are nearly identical to each other. The farther the number climbs from zero, the more variation exists in your data. But “high” and “low” are always relative to the scale you’re measuring. A standard deviation of 5 is huge if your mean is 10, but tiny if your mean is 10,000.

The most practical way to interpret standard deviation is through the 68-95-99.7 rule, which applies when your data follows a bell-shaped (normal) distribution:

  • 68% of values fall within one standard deviation of the mean
  • 95% fall within two standard deviations
  • 99.7% fall within three standard deviations

So if the average adult male height is 70 inches with a standard deviation of 3 inches, roughly 68% of men are between 67 and 73 inches tall, and 95% are between 64 and 76 inches. Anything beyond three standard deviations (below 61 or above 79 inches) is extremely rare.
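The three ranges in the height example can be generated mechanically from the mean and standard deviation:

```python
# The 68-95-99.7 ranges for the height example: mean 70 in., sd 3 in.
mean, sd = 70, 3

for k, pct in [(1, 68), (2, 95), (3, 99.7)]:
    low, high = mean - k * sd, mean + k * sd
    print(f"~{pct}% of heights fall between {low} and {high} inches")
```

Remember that these percentages only hold when the data is roughly normal.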

Calculating It in Spreadsheets

You rarely need to do this by hand outside of a classroom. In Excel or Google Sheets, two built-in functions handle it:

  • STDEV.S calculates the sample standard deviation (divides by n – 1). Use this when your data is a subset of a larger group.
  • STDEV.P calculates the population standard deviation (divides by n). Use this when your data represents the complete set.

Type either function into a cell, select your data range, and you’re done. For example, =STDEV.S(A1:A50) returns the sample standard deviation for 50 values in column A. If you’re unsure which to use, STDEV.S is the safer default.
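The same sample/population split exists outside spreadsheets. In Python's standard library, for instance, statistics.stdev plays the role of STDEV.S and statistics.pstdev plays the role of STDEV.P:

```python
import statistics

values = [80, 85, 90, 75, 95]  # the test scores from earlier

print(statistics.stdev(values))   # sample, like STDEV.S  -> ~7.91
print(statistics.pstdev(values))  # population, like STDEV.P -> ~7.07
```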

Where Standard Deviation Shows Up in Real Life

In finance, standard deviation is the go-to measure of investment risk. A stock with a standard deviation of 2% in monthly returns is relatively stable. One with a standard deviation of 15% swings wildly. Portfolio managers use it to compare the predictability of different assets.

In healthcare, hospitals track standard deviation in costs per procedure to spot inconsistencies. A procedure with a high standard deviation in cost, even if the average looks reasonable, signals that some patients are paying far more than others. That variation often points to differences in severity, technique, or resource use that administrators want to investigate. Smaller-volume procedures tend to show wider standard deviations simply because a few unusual cases have more influence on a small data set.

In manufacturing, standard deviation determines whether a production line is staying within quality tolerances. In education, it helps distinguish whether a test effectively separates students by ability or if everyone scored about the same.

One Important Limitation: Sensitivity to Outliers

Standard deviation is calculated from the mean, and the mean itself gets pulled toward extreme values. This makes standard deviation particularly sensitive to outliers. A single unusually large or small number in your data set can inflate the result and make your data look more spread out than it really is.

Consider a set of salaries at a small company: $50K, $55K, $52K, $48K, and $500K. That one executive salary drags the mean up and inflates the standard deviation dramatically, even though four of the five employees earn similar amounts. In cases like this, the median absolute deviation, which measures spread around the median instead of the mean, gives a more honest picture because the median barely budges when one extreme value enters the data.
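A quick sketch makes the contrast concrete. Using the salary figures above (in thousands), the standard deviation explodes while the median absolute deviation barely registers the outlier:

```python
import statistics

salaries = [50, 55, 52, 48, 500]  # in thousands, from the example above

sd = statistics.stdev(salaries)   # ~200.7: dominated by the one outlier

# Median absolute deviation: spread around the median instead of the mean.
med = statistics.median(salaries)                     # 52
mad = statistics.median(abs(x - med) for x in salaries)  # 3

print(round(sd, 1), mad)
```

Four of the five salaries sit within 4 of the median, and the MAD of 3 reflects that; the standard deviation of roughly 200 does not.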

If your data set has obvious outliers or isn’t symmetrically distributed, keep this limitation in mind. The standard deviation is still valid mathematically, but it may not tell the story you think it does.

Variance vs. Standard Deviation

Variance and standard deviation measure the same thing: spread. Variance is simply the value you have right before taking the square root (the result of step 4 in the calculation above). The reason we take that extra step is practical. Variance is expressed in squared units, which makes it hard to interpret. If you’re measuring heights in inches, variance comes out in “square inches,” which is meaningless for height. Standard deviation brings you back to inches, making it directly comparable to your original measurements.
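A short sketch shows why the units matter (the heights here are hypothetical). Because the standard deviation is in the data's own units, "mean ± one standard deviation" reads directly as a range of heights, which the variance cannot do:

```python
import statistics

heights = [64, 66, 70, 72, 68]  # hypothetical sample, in inches

variance = statistics.variance(heights)  # 10.0 -> in "square inches"
sd = statistics.stdev(heights)           # ~3.16 -> back in inches

# sd shares the data's units, so mean +/- sd is directly meaningful.
mean = statistics.mean(heights)
print(f"typical range: {mean - sd:.1f} to {mean + sd:.1f} inches")
```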

Variance does appear in more advanced statistical formulas and is useful behind the scenes. But for describing and communicating how spread out a data set is, standard deviation is the version people use.