What Is Range and Standard Deviation in Statistics?

Range and standard deviation are two ways to measure how spread out a set of numbers is. Range tells you the gap between the highest and lowest values. Standard deviation tells you how far, on average, each data point sits from the mean. Both describe variability, but they do it very differently and are useful in different situations.

How Range Works

Range is the simplest measure of spread in statistics. You subtract the smallest value from the largest value, and that’s it. If the highest test score in a class is 98 and the lowest is 62, the range is 36.

This version, where you simply calculate highest minus lowest, is called the exclusive range. There’s also an inclusive range, which adds 1 to the result (so 37 in the example above). The inclusive range is sometimes used when working with whole numbers to account for the fact that each value represents a full unit. In most everyday contexts and introductory courses, the exclusive range is what people mean when they say “range.”
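Both versions are one-liners. Here's a minimal Python sketch (the function names are just for illustration, not a standard API):

```python
def exclusive_range(values):
    """Largest value minus smallest value."""
    return max(values) - min(values)

def inclusive_range(values):
    """Exclusive range plus 1, sometimes used for whole-number data."""
    return exclusive_range(values) + 1

scores = [62, 71, 85, 90, 98]
print(exclusive_range(scores))  # 36
print(inclusive_range(scores))  # 37
```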

The appeal of range is its simplicity. You can calculate it in seconds, and it gives an immediate sense of how wide your data stretches. The problem is that it only looks at two numbers: the maximum and the minimum. Every other data point is ignored. If 29 students scored between 85 and 98, and one student scored 62, the range of 36 makes the data look far more spread out than it actually is for almost everyone in the class.

How Standard Deviation Works

Standard deviation measures spread by looking at every single data point, not just the extremes. It calculates how far each value is from the mean (the average), then combines those distances into a single number that represents the typical amount of variation in the dataset.

The calculation works in steps. First, find the mean. Then subtract the mean from each data point and square the result (squaring eliminates negative signs so that values above and below the mean don’t cancel each other out). Average those squared differences, then take the square root to bring the result back to the original units. That final number is the standard deviation.

There’s one important wrinkle: the formula changes slightly depending on whether you’re working with an entire population or a sample drawn from a larger population. When you have the full population, you divide by the total number of data points. When you have a sample, you divide by one fewer than the number of data points. This adjustment (dividing by n − 1 instead of n, known as Bessel’s correction) compensates for the fact that a sample tends to underestimate the true variability of the larger population it came from.
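The steps above translate directly into code. This sketch implements both versions; Python’s standard library also provides them ready-made as `statistics.stdev` (sample) and `statistics.pstdev` (population):

```python
import math

def std_dev(values, sample=True):
    """Standard deviation, following the steps described above.

    sample=True divides by n - 1 (sample standard deviation);
    sample=False divides by n (population standard deviation).
    """
    n = len(values)
    mean = sum(values) / n
    # Square each deviation so values above and below the mean don't cancel
    squared_diffs = [(x - mean) ** 2 for x in values]
    divisor = n - 1 if sample else n
    # Square root brings the result back to the original units
    return math.sqrt(sum(squared_diffs) / divisor)

scores = [76, 80, 78, 84, 82]
print(round(std_dev(scores, sample=False), 2))  # 2.83 (population)
print(round(std_dev(scores, sample=True), 2))   # 3.16 (sample)
```

Note that the sample version is always slightly larger, reflecting the n − 1 adjustment.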

A small standard deviation means data points cluster tightly around the mean. A large one means they’re spread widely. If two classrooms both average 80 on an exam but one has a standard deviation of 4 and the other has a standard deviation of 15, the first class performed consistently while the second had a wide mix of high and low scores.

Variance and Standard Deviation

You’ll often see variance mentioned alongside standard deviation, and the relationship is straightforward: standard deviation is the square root of variance. Variance uses the same calculation but skips the final square root step, leaving you with squared units. If you’re measuring heights in centimeters, variance is expressed in “square centimeters,” which isn’t intuitive. Taking the square root converts it back to centimeters, giving you a number you can directly compare to the original data. That’s why standard deviation is more commonly reported and easier to interpret.
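The square-root relationship is easy to confirm with the standard library (the heights here are made up for illustration):

```python
import math
import statistics

heights_cm = [160, 165, 170, 175, 180]

var = statistics.pvariance(heights_cm)  # population variance, in cm squared
sd = statistics.pstdev(heights_cm)      # population standard deviation, in cm

print(var)           # 50
print(round(sd, 2))  # 7.07
print(math.isclose(sd ** 2, var))  # True: sd is the square root of variance
```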

The 68-95-99.7 Rule

Standard deviation becomes especially powerful when data follows a bell curve (a normal distribution). In that case, a predictable pattern emerges:

  • 68% of data falls within one standard deviation of the mean
  • 95% of data falls within two standard deviations
  • 99.7% of data falls within three standard deviations

This is called the empirical rule, and it turns standard deviation into a practical tool for spotting unusual values. If the average resting heart rate in a group is 72 beats per minute with a standard deviation of 5, you’d expect almost everyone to fall between 57 and 87 (three standard deviations in each direction). A reading of 95 would be a clear statistical outlier.
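The "how many standard deviations from the mean" measure is called a z-score, and it makes the heart-rate check above a one-line calculation:

```python
def z_score(value, mean, sd):
    """How many standard deviations a value sits from the mean."""
    return (value - mean) / sd

mean_hr, sd_hr = 72, 5
print(z_score(87, mean_hr, sd_hr))  # 3.0 -> at the edge of the 99.7% band
print(z_score(95, mean_hr, sd_hr))  # 4.6 -> well beyond it, a clear outlier
```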

How Medical Labs Use Both

One of the most common real-world applications shows up in medical lab results. When your doctor orders blood work, the “normal range” printed next to each result is formally called a reference interval, and it’s typically built using standard deviation. Labs test a large group of healthy people, calculate the mean and standard deviation for each measurement, then define the normal reference interval as the mean plus or minus two standard deviations. That captures the central 95% of healthy values, which is why roughly 1 in 20 perfectly healthy people will have a result flagged as slightly outside the normal range on any given test.
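The mean-plus-or-minus-two-standard-deviations construction can be sketched in a few lines. This is a simplification (real labs use large reference groups and more careful statistical methods), and the sample values below are hypothetical:

```python
import statistics

def reference_interval(healthy_values, k=2):
    """Mean plus/minus k standard deviations.

    With k=2 this captures roughly the central 95% of healthy values
    when the measurement is approximately normally distributed.
    """
    mean = statistics.mean(healthy_values)
    sd = statistics.stdev(healthy_values)
    return (mean - k * sd, mean + k * sd)

# Hypothetical measurements from a healthy reference group
healthy = [88, 92, 95, 98, 100, 100, 102, 105, 108, 112]
low, high = reference_interval(healthy)
print(round(low, 1), round(high, 1))  # the printed "normal range"
```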

Some lab reports also express results in terms of how many standard deviations a value is from the population mean, making it easy to see at a glance whether a result is borderline or dramatically abnormal.

Why Standard Deviation Is More Reliable

Range and standard deviation both measure spread, but they differ sharply in how much information they use and how they handle extreme values.

Range depends entirely on two data points: the maximum and minimum. A single outlier on either end will inflate the range dramatically, even if the rest of the data is tightly packed. Standard deviation, because it factors in every data point, gives a more stable and complete picture of variability. It can still be pulled by outliers, but no single extreme value dominates the calculation the way it does with range.

Think of it this way: if you’re comparing the consistency of two manufacturing processes and one had a single defective measurement out of a thousand, the range would make that process look wildly inconsistent. The standard deviation would barely budge, correctly reflecting that the vast majority of output was consistent.
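A quick simulation of that manufacturing scenario makes the contrast concrete:

```python
import statistics

# 999 perfectly consistent measurements plus one defective reading
with_defect = [100.0] * 999 + [150.0]

data_range = max(with_defect) - min(with_defect)
sd = statistics.pstdev(with_defect)

print(data_range)      # 50.0 -- the range is dominated by the single outlier
print(round(sd, 2))    # 1.58 -- the standard deviation barely budges
```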

A Quick Shortcut Between Them

There’s a handy approximation called the range rule of thumb: divide the range by 4, and you get a rough estimate of the standard deviation. If exam scores span from 60 to 100, dividing that range of 40 by 4 gives an estimated standard deviation of about 10. This shortcut works reasonably well for data that’s roughly bell-shaped and for sample sizes around 30, but it becomes less accurate with smaller samples, skewed distributions, or heavy outliers. It’s a useful sanity check, not a replacement for the real calculation.
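The shortcut and the real calculation are easy to compare side by side (the exam scores below are invented, roughly bell-shaped data):

```python
import statistics

def estimate_sd_from_range(values):
    """Range rule of thumb: range divided by 4 as a rough SD estimate."""
    return (max(values) - min(values)) / 4

# Hypothetical, roughly bell-shaped exam scores spanning 60 to 100
scores = [60, 68, 72, 75, 78, 80, 80, 82, 85, 88, 92, 100]

print(estimate_sd_from_range(scores))      # 10.0 (rule-of-thumb estimate)
print(round(statistics.stdev(scores), 1))  # 10.8 (actual sample SD)
```

Here the estimate lands close to the true value, as expected for approximately bell-shaped data.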

When to Use Each One

Range is useful when you need a fast, rough sense of spread or when you’re describing the boundaries of a dataset (the youngest and oldest participant in a study, the cheapest and most expensive option in a product line). It’s also the go-to in casual contexts where precision isn’t critical.

Standard deviation is the better choice whenever you need to make comparisons, draw conclusions, or describe how consistent a set of measurements is. It’s the standard in scientific papers, quality control, finance (where it measures investment volatility), and anywhere that a rigorous measure of variability matters. In academic writing, standard deviation is typically reported to one decimal place alongside the mean, giving readers both the center and the spread of the data in a compact format.

In practice, you’ll often see both reported together. A study might note that participants had a mean age of 45.2 with a standard deviation of 8.6 and a range of 22 to 71. The mean and standard deviation describe the typical participant, while the range shows the full boundaries of who was included.