How to Interpret Skewness and Kurtosis in Statistics

Skewness tells you whether your data leans to one side, and kurtosis tells you how likely your data is to produce extreme values. A perfectly symmetrical, normal distribution has a skewness of 0 and a kurtosis of 3 (or 0 if your software reports “excess kurtosis”). Understanding how far your values stray from those benchmarks, and whether that matters for your analysis, is what interpretation is really about.

What Skewness Actually Measures

Skewness quantifies how asymmetrical your data is. A skewness of zero means the data is evenly balanced around the center. Positive skewness means the right tail is longer, with most values clustered on the left. Negative skewness means the left tail is longer, with most values clustered on the right.

A practical way to see this: in a positively skewed distribution (right-skewed), the mean is typically greater than the median, which is greater than the mode. Think of household income data, where most people earn moderate amounts but a small number of very high earners pull the mean upward. In a negatively skewed distribution (left-skewed), the pattern reverses: the mean is usually the smallest of the three, the mode is the largest, and the median sits between them. An example would be exam scores on an easy test, where most students score high but a few very low scores drag the mean down.
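
The mean-versus-median pattern is easy to verify with simulated data. A minimal sketch using NumPy, with lognormal draws standing in for household incomes (the parameters are arbitrary, chosen only to produce a right skew):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "household incomes": lognormal data is strongly right-skewed.
incomes = rng.lognormal(mean=10.5, sigma=0.8, size=10_000)

mean, median = incomes.mean(), np.median(incomes)
print(f"mean:   {mean:,.0f}")
print(f"median: {median:,.0f}")
# In a right-skewed sample like this, the mean sits above the median.
```

Running this, the mean lands well above the median, exactly the pattern the income example describes.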

Interpreting Skewness Values

The most widely cited rule of thumb comes from Hair et al. (1998), and it’s straightforward:

  • Between -0.5 and +0.5: approximately symmetric
  • Between -1 and -0.5 or +0.5 and +1: moderately skewed
  • Below -1 or above +1: highly skewed

These thresholds give you a quick read on shape, but the real question is usually whether the skewness is severe enough to cause problems. George and Mallery (2010) consider skewness values between -2 and +2 acceptable for assuming a roughly normal distribution. West et al. (1996) set the bar slightly higher, treating an absolute skewness value greater than 2.1 as a substantial departure from normality. If your skewness falls within that -2 to +2 window, you’re generally safe using standard parametric tests.
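
The Hair et al. (1998) cutoffs are simple enough to wrap in a small helper. The function below is a convenience sketch of that rule of thumb, not a routine from any library:

```python
def classify_skewness(skew: float) -> str:
    """Label a skewness value using the Hair et al. (1998) rule of thumb."""
    if abs(skew) <= 0.5:
        return "approximately symmetric"
    if abs(skew) <= 1.0:
        return "moderately skewed"
    return "highly skewed"

print(classify_skewness(0.3))   # approximately symmetric
print(classify_skewness(-0.8))  # moderately skewed
print(classify_skewness(1.7))   # highly skewed
```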

What Kurtosis Really Tells You

Kurtosis is widely described as measuring the “peakedness” of a distribution. This is wrong, and the misconception has persisted for over a century despite repeated corrections from statisticians. A 2014 paper in The American Statistician (Westfall, “Kurtosis as Peakedness, 1905–2014. R.I.P.”) put it bluntly: “Kurtosis tells you virtually nothing about the shape of the peak. Its only unambiguous interpretation is in terms of tail extremity.” Distributions with identical kurtosis values can have dramatically different peak shapes.

What kurtosis actually measures is how heavy the tails of your distribution are, meaning how prone your data is to producing outliers. High kurtosis means your data has more extreme values (or a greater tendency to produce them) than a normal distribution would. Low kurtosis means the tails are thinner and outliers are rarer.

The Three Types of Kurtosis

Distributions are classified into three categories based on their kurtosis value. The baseline is a normal distribution, which has a kurtosis of 3.

  • Mesokurtic (kurtosis ≈ 3): Tail behavior similar to a normal distribution. This is the reference point.
  • Leptokurtic (kurtosis > 3): Heavier tails than normal. Your data contains more extreme values than you’d expect. Stock market returns are a classic example: most daily changes are small, but crashes and surges happen far more often than a normal distribution would predict.
  • Platykurtic (kurtosis < 3): Thinner tails than normal. Extreme values are rarer. The data tends to stay closer to the center without dramatic outliers.
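
You can see the leptokurtic case directly by comparing simulated samples. The sketch below uses SciPy's `stats.kurtosis`, which reports excess kurtosis by default, so the normal baseline shows up near 0 rather than 3; Student's t with 5 degrees of freedom is a standard heavy-tailed example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
normal = rng.normal(size=100_000)
heavy = rng.standard_t(df=5, size=100_000)  # t(5): heavier tails than normal

# scipy.stats.kurtosis defaults to excess kurtosis, so normal lands near 0.
print(f"normal excess kurtosis: {stats.kurtosis(normal):.2f}")
print(f"t(5) excess kurtosis:   {stats.kurtosis(heavy):.2f}")
```

The t-sample's kurtosis comes out far above the normal sample's, even though a histogram of the two would look broadly similar near the center.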

Excess Kurtosis vs. Standard Kurtosis

This is a common source of confusion. Because a normal distribution has a kurtosis of 3, many software packages subtract 3 and report what’s called “excess kurtosis” instead. With excess kurtosis, the baseline for a normal distribution becomes 0 rather than 3. Excel’s KURT function, SPSS, and SciPy all report excess kurtosis by default; in R it depends on the package (moments::kurtosis returns Pearson kurtosis, while e1071::kurtosis defaults to excess), and the convention isn’t always documented clearly.

Before interpreting your number, check which convention your software uses. If the output says “excess kurtosis,” then 0 is your normal benchmark, positive values mean heavier tails, and negative values mean lighter tails. If it reports standard (Pearson) kurtosis, then 3 is the benchmark. Misreading this will throw off your entire interpretation.
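
In SciPy, for instance, the `fisher` flag on `stats.kurtosis` switches between the two conventions, and the difference is exactly the constant 3:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=50_000)

excess = stats.kurtosis(x)                 # fisher=True (default): normal ≈ 0
pearson = stats.kurtosis(x, fisher=False)  # Pearson convention: normal ≈ 3

print(f"excess:  {excess:.3f}")
print(f"pearson: {pearson:.3f}")
# The two values differ by exactly 3.
```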

Interpreting Kurtosis Values

When working with excess kurtosis (where normal equals 0), the commonly recommended thresholds for normality are:

  • Between -2 and +2: Generally acceptable for assuming normality (George and Mallery, 2010)
  • Between -7 and +7: A more lenient threshold used in structural equation modeling and multivariate analysis (Hair et al., 2010; Curran et al., 1996)
  • Above +7 or below -7: Substantial departure from normality that could affect your results

A positive excess kurtosis of, say, 1.5 means your data has somewhat heavier tails than normal but nothing alarming. An excess kurtosis of 10 means you have far more extreme values than a normal distribution would produce, and you need to investigate those outliers carefully before running standard analyses.

How Skewness and Kurtosis Affect Your Analysis

If you’re running common parametric tests like t-tests or ANOVA, moderate skewness and kurtosis typically don’t invalidate your results. Research examining the actual impact found that skewed or kurtotic data generally pushes results in a conservative direction, meaning you’re less likely to find statistical significance rather than more likely to get false positives. In other words, the risk is missing a real effect, not fabricating one.

The bigger concern is multivariate outliers. When extreme values appear across multiple variables simultaneously, they can genuinely bias results. If your data shows high kurtosis, inspect the specific data points driving it. A single extreme observation in a small dataset can inflate kurtosis dramatically.

For techniques that are more sensitive to normality assumptions, like structural equation modeling or factor analysis, the -2/+2 skewness threshold and -7/+7 kurtosis threshold from Curran et al. (1996) are the standard benchmarks. If your values exceed those limits, consider transforming the data (log or square root transformations work well for positive skewness) or using methods that don’t assume normality.
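
The effect of a log transformation on positive skew is easy to demonstrate. A sketch with simulated lognormal data (note that `np.log` requires strictly positive values; for data containing zeros, `np.log1p` is the usual substitute):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=20_000)

# Taking logs of lognormal data recovers a (roughly) normal shape,
# so the skewness should drop from strongly positive to near zero.
transformed = np.log(skewed)

print(f"before: skew = {stats.skew(skewed):.2f}")
print(f"after:  skew = {stats.skew(transformed):.2f}")
```

Real data rarely collapses to perfect symmetry this cleanly, but a large drop in skewness after a log or square-root transform is the typical pattern for right-skewed measurements.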

Quartile-Based Alternatives

The standard skewness formula uses every data point, which means a handful of extreme values can heavily influence the result. When outliers are a concern, two alternative measures are worth knowing.

Bowley’s coefficient of skewness uses quartiles instead of means. It compares how far Q1 and Q3 sit from the median: if the upper quartile is farther from the median than the lower quartile, the data is positively skewed. Because it only considers the middle 50% of observations, extreme values on either end don’t affect it at all. Kelly’s coefficient works on a similar principle but uses the 10th and 90th percentiles, keeping 80% of the data in play while still ignoring the most extreme 10% on each side. Both produce values that are easier to interpret when your data has genuine outliers you don’t want driving the skewness calculation.
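
Neither coefficient ships with NumPy or SciPy, but both are a few lines of percentile arithmetic. A sketch (the function names are my own, not standard API):

```python
import numpy as np

def bowley_skewness(x):
    """Quartile-based skewness: (Q3 + Q1 - 2*median) / (Q3 - Q1)."""
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (q3 + q1 - 2 * q2) / (q3 - q1)

def kelly_skewness(x):
    """Same idea using the 10th and 90th percentiles."""
    p10, p50, p90 = np.percentile(x, [10, 50, 90])
    return (p90 + p10 - 2 * p50) / (p90 - p10)

rng = np.random.default_rng(3)
right_skewed = rng.exponential(size=10_000)  # a textbook right-skewed sample
print(f"Bowley: {bowley_skewness(right_skewed):.2f}")
print(f"Kelly:  {kelly_skewness(right_skewed):.2f}")
```

Both coefficients are bounded between -1 and +1, with 0 indicating symmetry in the quantiles used, which makes them easier to compare across datasets than the moment-based measure.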

A Quick Reference for Reading Your Output

When you get skewness and kurtosis values from your software, here’s how to read them in practice:

  • Skewness near 0: Data is roughly symmetric. No concerns.
  • Skewness between -1 and +1: Mild asymmetry. Fine for most analyses.
  • Skewness between -2 and +2: Moderate asymmetry. Still acceptable for parametric tests in most cases.
  • Skewness beyond ±2: Substantial skew. Consider transformations or non-parametric methods.
  • Excess kurtosis near 0: Tail behavior similar to a normal distribution.
  • Excess kurtosis between -2 and +2: Acceptable range for most analyses.
  • Excess kurtosis beyond ±7: Serious departure from normality. Investigate outliers and consider alternative methods.

Always pair these numbers with a visual check. A histogram or Q-Q plot will often tell you more about what’s happening in your data than any single statistic can. Skewness and kurtosis give you a numerical summary, but seeing the shape of your distribution makes interpretation concrete.
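
If you want the Q-Q check as a number alongside the picture, SciPy's `stats.probplot` returns the plot coordinates together with a least-squares fit; an r close to 1 means the sample tracks the normal reference line closely. A sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sample = rng.normal(loc=10, scale=2, size=1_000)

# probplot returns the Q-Q coordinates plus a straight-line fit;
# r near 1 means the points hug the reference line (consistent with normality).
(osm, osr), (slope, intercept, r) = stats.probplot(sample, dist="norm")
print(f"Q-Q correlation r = {r:.4f}")
```

Passing the same result to matplotlib (or calling `probplot` with the `plot=` argument) draws the familiar Q-Q plot; heavy tails show up as points peeling away from the line at both ends.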