Is H₀ the Null Hypothesis? What It Means in Stats

Yes, H₀ (often written as “H0” and read aloud as “H-naught” or “H-zero”) is the standard notation for the null hypothesis in statistics. It represents the default assumption in any statistical test: that no effect, no difference, or no relationship exists. The subscript zero signals “nothing going on,” which is exactly what the null hypothesis proposes.

What H₀ Actually Means

The null hypothesis is the starting assumption a researcher tries to disprove. Rather than setting out to prove that something works or that two groups differ, statistical testing flips the question: assume nothing is happening, then see if the data are strong enough to overturn that assumption.

Ronald Fisher, the statistician who formalized this approach in his 1935 book The Design of Experiments, put it plainly: “The null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation.” In his original example, a colleague claimed she could taste whether milk was added to a cup before or after the tea. Instead of trying to prove she had this ability, Fisher tested the null hypothesis that her correct answers were just random luck.
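Fisher’s actual design used eight cups, four prepared each way, and the taster had to pick out the four milk-first cups. Under the null hypothesis of pure guessing, the chance of a perfect score is easy to compute; a minimal sketch in Python:

```python
from math import comb

# Under H0 (pure guessing), every way of choosing which 4 of the
# 8 cups were milk-first is equally likely.
total_ways = comb(8, 4)          # 70 possible selections
p_all_correct = 1 / total_ways   # chance of a perfect score by luck alone

print(total_ways, round(p_all_correct, 4))  # -> 70 0.0143
```

A perfect score has only about a 1.4% chance under H₀, which is why Fisher treated it as strong evidence against pure guessing.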

In mathematical notation, H₀ typically includes an equals sign. If you’re testing whether the average weight of a product matches its label of 500 grams, the null hypothesis would be written as H₀: μ = 500. The Greek letter μ stands for the true population average. The null hypothesis always contains a condition of equality, stating that the value is equal to some specific number.
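To make the notation concrete, here is a minimal sketch of computing the test statistic for H₀: μ = 500 from a sample of weights. The numbers are invented for illustration, and only Python’s standard library is used:

```python
import math
import statistics

# Hypothetical sample of product weights in grams
weights = [498.2, 501.1, 499.5, 502.3, 497.8, 500.9, 499.0, 501.7]

mu0 = 500                        # the value stated by H0: mu = 500
n = len(weights)
xbar = statistics.mean(weights)  # sample mean
s = statistics.stdev(weights)    # sample standard deviation (n - 1 divisor)

# t statistic: how many standard errors the sample mean sits from mu0
t = (xbar - mu0) / (s / math.sqrt(n))
print(round(xbar, 4), round(t, 3))
```

A t value near zero, as here, means the sample mean is close to 500 relative to its sampling noise, so the data give little reason to doubt H₀.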

H₀ vs. H₁: How They Work Together

Every null hypothesis has a partner called the alternative hypothesis, written as H₁ (or sometimes Hₐ). The two are mutually exclusive and exhaustive: exactly one of them is true, and they can never both be true at the same time. H₁ captures whatever the researcher suspects is actually going on.

Using the product weight example, H₁ might take a few different forms depending on the research question:

  • Two-sided test: H₁: μ ≠ 500 (the average weight is something other than 500 grams)
  • One-sided test (greater): H₁: μ > 500 (the product is heavier than labeled)
  • One-sided test (less): H₁: μ < 500 (the product is lighter than labeled)

The alternative hypothesis can’t be tested directly. Instead, you test H₀, and if the evidence is strong enough to reject it, you conclude in favor of H₁.
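The choice of H₁ determines how the p-value is computed from the test statistic. A sketch, assuming a z statistic that follows a standard normal distribution under H₀ (the value 1.8 is made up for illustration):

```python
from statistics import NormalDist

std = NormalDist()  # standard normal N(0, 1)
z = 1.8             # hypothetical test statistic, standard normal under H0

p_two_sided = 2 * (1 - std.cdf(abs(z)))  # H1: mu != 500 (either tail)
p_greater = 1 - std.cdf(z)               # H1: mu > 500  (upper tail only)
p_less = std.cdf(z)                      # H1: mu < 500  (lower tail only)

print(round(p_two_sided, 3), round(p_greater, 3), round(p_less, 3))
```

Note that the two-sided p-value is exactly double the upper-tail one, which is why a one-sided test rejects more easily when the effect is in the predicted direction.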

How H₀ Gets Rejected

Before collecting data, a researcher sets a significance level, called alpha (α). This is the threshold for how much risk of a wrong conclusion they’re willing to accept. The most common choice is α = 0.05, meaning a 5% chance of rejecting the null hypothesis when it is actually true. But there’s nothing mathematically sacred about 0.05. It’s a convention that dates back to early statisticians who decided a 1-in-20 chance of being wrong was acceptable for many situations. For higher-stakes questions, a stricter threshold like 0.01 (1% risk) is often more appropriate.

After running the statistical test, you get a p-value. This number tells you how likely you’d be to see your data (or something more extreme) if the null hypothesis were actually true. If the p-value falls below your chosen alpha, you reject H₀. If the p-value is above alpha, you fail to reject H₀.
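The decision rule itself is mechanical once alpha is fixed. A minimal sketch (the p-values passed in are invented for illustration):

```python
def decide(p_value, alpha=0.05):
    # Reject H0 only when the p-value falls below the significance level;
    # otherwise the correct phrasing is "fail to reject", not "accept".
    if p_value < alpha:
        return "reject H0"
    return "fail to reject H0"

print(decide(0.03))  # -> reject H0
print(decide(0.20))  # -> fail to reject H0
```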

Why “Fail to Reject” Instead of “Accept”

This is one of the most common points of confusion. When the data don’t provide enough evidence against H₀, statisticians say you “fail to reject” it rather than “accept” it. The distinction matters because no amount of data can absolutely prove the null hypothesis is true. You might simply not have collected enough data, or your study might not have been sensitive enough to detect a real difference.

Think of it like a courtroom verdict. A jury finds a defendant “not guilty,” which isn’t the same as declaring them innocent. Similarly, failing to reject H₀ means the evidence wasn’t strong enough to overturn the default assumption. It doesn’t confirm that the default assumption is correct.

Two Ways a Decision About H₀ Can Go Wrong

Because statistical testing deals in probabilities rather than certainties, two types of errors are possible. A Type I error happens when you reject H₀ even though it’s actually true. This is a false positive: you conclude there’s an effect when there isn’t one. The alpha level you set before the test directly controls the maximum probability of this error. At α = 0.05, you’re accepting up to a 5% chance of a Type I error.

A Type II error is the opposite. You fail to reject H₀ when it’s actually false, missing a real effect. This is a false positive’s quieter cousin: the false negative. The probability of a Type II error depends on factors like sample size and how large the real effect is. Larger studies with bigger effects are less likely to miss something genuine.

These two error types pull in opposite directions. Making it harder to reject H₀ (using a smaller alpha) reduces Type I errors but increases Type II errors, since you’re demanding stronger evidence and might miss real but subtle effects.
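The tradeoff shows up clearly in a small simulation. The sketch below runs a two-sided z-test many times, once with H₀ true and once with a real 4-gram shift; the sample size, spread, and shift are invented for illustration:

```python
import random
from math import sqrt
from statistics import NormalDist, mean

random.seed(1)
std = NormalDist()               # standard normal N(0, 1)
n, sigma, mu0 = 25, 10.0, 500.0  # invented sample size, spread, H0 mean

def rejects(true_mu, alpha):
    # Draw one sample and run a two-sided z-test of H0: mu = 500.
    sample = [random.gauss(true_mu, sigma) for _ in range(n)]
    z = (mean(sample) - mu0) / (sigma / sqrt(n))
    return 2 * (1 - std.cdf(abs(z))) < alpha

trials = 2000
# Type I rate: H0 is true (mu really is 500) but the test rejects anyway.
type1 = sum(rejects(500.0, 0.05) for _ in range(trials)) / trials
# Type II rate: H0 is false (mu is 504) but the test fails to reject.
type2 = sum(not rejects(504.0, 0.05) for _ in range(trials)) / trials
# Tightening alpha to 0.01 cuts Type I errors but raises the Type II rate.
type2_strict = sum(not rejects(504.0, 0.01) for _ in range(trials)) / trials

print(type1, type2, type2_strict)
```

The observed Type I rate lands near the chosen alpha of 5%, and shrinking alpha to 1% visibly inflates the Type II rate, which is the tension described above.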

Common Notation You’ll See

Depending on the textbook or field, you may encounter slightly different ways of writing the same thing. The null hypothesis appears as H₀, H0, or Ho. The alternative hypothesis shows up as H₁, H1, Ha, or Hₐ. All of these are standard, and the choice is mostly a matter of convention in a given discipline. In every case, the subscript zero (or the letter “o” when subscripts aren’t available) refers to the null, and the subscript one or letter “a” refers to the alternative.