In statistics, SE stands for standard error, a measure of how precise your sample estimate is. While standard deviation tells you how spread out individual data points are, standard error tells you how much your sample’s average (or other estimate) would vary if you repeated the study many times. Think of it as a margin of uncertainty around your result.
Standard Error vs. Standard Deviation
These two terms get confused constantly, so it helps to see exactly where each one lives. Standard deviation (SD) describes the spread of individual measurements in your data. If you measured the heights of 50 people, the SD tells you how scattered those heights are around the group average.
Standard error, on the other hand, zooms out to a bigger question: if you grabbed a different group of 50 people and calculated their average height, how different would that average be from the one you already have? The SE quantifies that uncertainty. A small SE means your sample average is a reliable estimate of the true population average. A large SE means there’s more wiggle room, and another sample could give you a noticeably different result.
A useful rule of thumb: use standard deviation when you want to describe how variable your data is. Use standard error when you want to express how confident you are in an estimate like a mean or a proportion.
How Standard Error Is Calculated
The most common form is the standard error of the mean (SEM). The formula is straightforward: divide the standard deviation by the square root of your sample size. If your sample of 50 people has a standard deviation of 10 cm, the standard error is 10 divided by the square root of 50, which comes out to about 1.41 cm. That tells you the sample mean is precise to roughly plus or minus 1.41 cm.
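The arithmetic above is small enough to check directly. Here is a minimal sketch in Python, using the numbers from the height example:

```python
import math

def sem(sd: float, n: int) -> float:
    """Standard error of the mean: standard deviation / sqrt(sample size)."""
    return sd / math.sqrt(n)

# The example above: SD of 10 cm across a sample of 50 people.
print(round(sem(10, 50), 2))  # → 1.41
```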
For proportions (like the percentage of people who answer “yes” to a survey question), the formula changes slightly. You multiply the proportion by one minus the proportion, divide by the sample size, then take the square root. So if 60% of 200 respondents said yes, the standard error of that proportion is about 0.035, or 3.5 percentage points.
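The proportion version translates just as directly. A quick sketch with the survey numbers above:

```python
import math

def se_proportion(p: float, n: int) -> float:
    """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

# 60% of 200 respondents said yes.
print(round(se_proportion(0.6, 200), 3))  # → 0.035
```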
Why Sample Size Matters So Much
Because sample size sits under a square root in the formula, increasing it shrinks the standard error, but with diminishing returns. Penn State’s statistics program illustrates this clearly: in one dataset, a sample of 10 produced a standard error of 0.936, while bumping the sample to 100 dropped the SE to 0.296. In another example, the SE went from 0.143 at a sample size of 10 down to 0.044 at 100.
The practical takeaway is that quadrupling your sample size cuts the standard error in half. Going from 25 participants to 100 makes your estimate twice as precise. This is why large studies carry more statistical weight: their smaller standard errors mean their estimates are closer to the true value.
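The diminishing-returns pattern is easy to see by tabulating the formula for a few sample sizes. This sketch uses a hypothetical standard deviation of 10, purely for illustration:

```python
import math

sd = 10.0  # illustrative standard deviation, not from a real study
ses = {n: sd / math.sqrt(n) for n in (25, 100, 400, 1600)}
for n, se in ses.items():
    print(f"n={n:4d}  SE={se:.2f}")
# Each quadrupling of n halves the SE: 2.00, 1.00, 0.50, 0.25.
```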
The Central Limit Theorem Connection
Standard error works because of a powerful statistical principle called the central limit theorem. It states that if you take many random samples from a population and calculate their means, those means will form a bell-shaped (normal) distribution, regardless of what the original data looks like, as long as the population has a finite variance. The spread of that bell curve is the standard error.
This is why SE is defined the way it is. Basic sampling theory shows that the variance of sample means equals the population variance divided by the sample size; take the square root of that variance and you get the standard error. The central limit theorem adds that those means are approximately normally distributed, which is what lets us attach probabilities to them. It's not just a convenient formula; it reflects how sampling actually behaves in the real world.
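You can watch this happen in a simulation. The sketch below repeatedly samples from a heavily skewed (exponential) population, where individual values look nothing like a bell curve, and checks that the spread of the sample means still matches sigma divided by the square root of n (the population and sample sizes here are arbitrary choices for illustration):

```python
import math
import random
import statistics

random.seed(42)

# Draw 5,000 samples of size 50 from a skewed exponential population
# (rate 1, so the population standard deviation is 1) and record each mean.
n = 50
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(5000)]

# The spread of the sample means should land close to sigma / sqrt(n).
print(round(statistics.stdev(means), 3))  # empirical SE
print(round(1 / math.sqrt(n), 3))         # theoretical SE, about 0.141
```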
How SE Builds Confidence Intervals
One of the most common uses of standard error is constructing confidence intervals, the ranges you see reported as “plus or minus” some number. A 95% confidence interval takes your sample mean and adds or subtracts 1.96 times the standard error in each direction. For a 99% confidence interval, you use 2.576 instead.
For example, if your sample mean is 92 and your standard error is 2.14, the 95% confidence interval runs from about 87.8 to 96.2. That range is your best estimate of where the true population value falls. Smaller standard errors produce narrower intervals, which is another way of saying your estimate is more precise.
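The interval arithmetic is a one-liner. A sketch using the example figures above (swap in 2.576 for a 99% interval):

```python
def confidence_interval(mean: float, se: float, z: float = 1.96):
    """Return (lower, upper) bounds of a z-based confidence interval.

    z defaults to 1.96 for a 95% interval; use 2.576 for 99%.
    """
    return mean - z * se, mean + z * se

lo, hi = confidence_interval(92, 2.14)
print(round(lo, 1), round(hi, 1))  # → 87.8 96.2
```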
Standard Error in Hypothesis Testing
When researchers compare two groups (say, a treatment group and a placebo group), the standard error plays a central role. The t-statistic, one of the most widely used test statistics, is calculated by dividing the difference between the two group means by the standard error of that difference, often built from a pooled estimate of the two groups' variance. The result tells you how many standard errors apart the two groups are.
A t-value of 1 means the groups differ by one standard error, which isn’t very convincing since that much difference could easily happen by chance. A t-value of 3 or 4 means the gap is several standard errors wide, making it far less likely that random sampling alone explains the difference. This is how SE feeds directly into p-values and statistical significance.
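Here is a minimal sketch of the pooled two-sample t-statistic, with made-up example data (real analyses would typically use a library such as SciPy):

```python
import math
import statistics

def pooled_t(a: list, b: list) -> float:
    """Two-sample t: difference in means / pooled SE of that difference."""
    na, nb = len(a), len(b)
    # Pooled variance weights each group's variance by its degrees of freedom.
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    se_diff = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (statistics.fmean(a) - statistics.fmean(b)) / se_diff

# Hypothetical treatment vs. placebo scores.
treatment = [5, 6, 7, 8, 9]
placebo = [3, 4, 5, 6, 7]
print(pooled_t(treatment, placebo))  # → 2.0 (groups are two SEs apart)
```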
Standard Error in Regression
In regression analysis, SE appears in two places. First, the standard error of the estimate (also called the residual standard error) measures how much unexplained noise remains in the outcome variable after accounting for the predictors. Smaller values mean the model fits the data more tightly.
Second, each individual predictor in a regression gets its own standard error, which reflects how precisely that predictor’s effect has been estimated. Dividing a coefficient by its standard error produces a t-value, and that determines whether the predictor is statistically significant. A large coefficient with a large standard error may not be meaningful, while a modest coefficient with a tiny standard error can be highly significant. The standard errors in regression also depend on sample size, decreasing as more data is collected, which is one reason larger datasets tend to produce more statistically significant results.
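For a one-predictor regression, both quantities can be computed by hand. The sketch below fits a line to made-up data, then divides the slope by its standard error to get the t-value (real work would use a library such as statsmodels, which reports these directly):

```python
import math
import statistics

def slope_and_se(x: list, y: list):
    """Simple OLS: return the slope estimate and its standard error."""
    n = len(x)
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    # Residual standard error: leftover noise after the fit (n - 2 df).
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    s = math.sqrt(sse / (n - 2))
    return slope, s / math.sqrt(sxx)

# Hypothetical data that rises roughly 2 units per step.
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
slope, se = slope_and_se(x, y)
print(round(slope / se, 1))  # t-value: a modest slope, but a tiny SE,
                             # so the predictor is highly significant
```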
One important caveat: standard errors in regression assume the model is correctly specified. If key variables are missing, relationships are nonlinear, or the errors aren’t normally distributed, the reported standard errors (and the confidence intervals built from them) can be misleading.