Yes, H0 is the standard symbol for the null hypothesis in statistics. The “H” stands for hypothesis, and the subscript “0” (zero) represents the idea of “nothing” or “no effect.” Whenever you see H0 in a textbook, research paper, or stats class, it refers to the default assumption that there is no difference between groups, no relationship between variables, or no effect from a treatment.
What H0 Actually States
The word “null” means nothing, and that’s exactly what H0 claims: nothing is going on. It’s the starting assumption before any data is analyzed. If you’re testing whether a new medication works, H0 says the medication has no effect. If you’re comparing salaries between two groups, H0 says there’s no difference. If you’re looking at whether height predicts shoe size, H0 says there’s no relationship.
H0 always contains a condition of equality. In mathematical notation, that usually looks like H0: μ1 = μ2 (meaning two group averages are equal) or H0: μ = some specific number. For example, if the average SAT score is known to be 455 and you want to test whether a tutoring program changes that, the null hypothesis would be H0: μ = 455. The claim is that nothing has changed.
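The SAT example can be sketched as a simple significance test. The scores below are invented for illustration, and the standard normal distribution stands in for the more accurate t-distribution so the sketch stays inside Python's standard library:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical SAT scores from students in the tutoring program.
scores = [480, 460, 500, 455, 470, 490, 465, 475, 485, 450,
          495, 460, 470, 480, 455, 465]

mu_0 = 455  # H0: mu = 455 (the tutoring program changed nothing)
n = len(scores)
x_bar = mean(scores)
se = stdev(scores) / sqrt(n)  # standard error of the sample mean

# Test statistic: how many standard errors the sample mean sits from mu_0.
z = (x_bar - mu_0) / se

# Two-sided p-value from the standard normal (a t-distribution would be
# more accurate at this sample size).
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(round(x_bar, 2), round(z, 2), round(p_value, 6))
```

With these made-up scores the sample mean lands well above 455, the p-value is tiny, and H0 would be rejected; with scores hovering near 455 the same code would leave H0 standing.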
How H0 Pairs With H1
Every hypothesis test involves two competing statements: the null hypothesis (H0) and the alternative hypothesis (H1, sometimes written as Ha). The two are mutually exclusive and exhaustive: exactly one of them is true, so if one holds, the other is false.
H1 is the opposite of H0. It’s the claim that something is happening: a difference exists, a treatment works, or a relationship is real. You don’t prove H1 directly. Instead, you collect data and determine whether the evidence is strong enough to reject H0. If you can knock down the null hypothesis, the alternative wins by default. If you can’t, H0 stands. One useful analogy: H0 acts like a punching bag. You assume it’s true, then try to knock it down with your data. You either succeed or you don’t.
How You Decide to Reject H0
The decision to reject or fail to reject H0 comes down to a number called the p-value. The p-value is the probability of getting results at least as extreme as yours if H0 were actually true. A small p-value means your data would be very unlikely under the null hypothesis, which counts as evidence against it.
Before running the test, researchers set a threshold called alpha (α), which is the cutoff for how unlikely the data needs to be before they’ll reject H0. The most common alpha level is 0.05, or 5%. Ronald Fisher, one of the founders of modern statistics, suggested this threshold in the early 20th century, writing that “we shall not often be astray if we draw a conventional line at 0.05.” If your p-value falls below 0.05, you reject H0 and conclude there’s likely a real effect. If it’s above 0.05, you don’t have enough evidence to reject H0.
That 5% threshold isn’t a universal law. It’s a convention that stuck. Some fields use stricter cutoffs like 0.01 or even 0.001 when the stakes are higher.
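The decision rule itself is nothing more than a comparison of the p-value against alpha. A few lines of Python (with hypothetical p-values) make that concrete, including how a stricter cutoff changes the verdict:

```python
def decide(p_value: float, alpha: float = 0.05) -> str:
    """Apply the standard decision rule: reject H0 when p < alpha."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.03))        # below the conventional 0.05 cutoff
print(decide(0.03, 0.01))  # the same evidence fails a stricter 0.01 cutoff
print(decide(0.40))        # nowhere near significant
```

The same p-value of 0.03 rejects H0 at α = 0.05 but not at α = 0.01, which is exactly why fields with higher stakes choose stricter thresholds up front.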
What Happens When You Get It Wrong
Two kinds of mistakes can happen in hypothesis testing, and both are defined in terms of H0.
A Type I error (false positive) happens when you reject H0 even though it’s actually true. You conclude there’s an effect when there isn’t one. The probability of making this mistake, given that H0 is true, equals your alpha level. So at α = 0.05, you accept a 5% chance of a false positive whenever the null hypothesis actually holds.
A Type II error (false negative) happens when you fail to reject H0 even though it’s actually false. A real effect exists, but your data didn’t catch it. The probability of this error is called beta (β). Type II errors often happen when sample sizes are too small to detect a real difference.
Think of it like a courtroom. H0 is “the defendant is innocent.” A Type I error convicts an innocent person. A Type II error lets a guilty person go free. The system is designed to be cautious about Type I errors, just as courts set a high bar for conviction.
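The meaning of the Type I error rate can be checked by simulation: generate many datasets where H0 really is true, run the test on each, and the fraction of (wrong) rejections should hover near alpha. A rough sketch, using a normal approximation rather than an exact t-test:

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

random.seed(42)
alpha = 0.05
n, trials = 30, 2000
rejections = 0

for _ in range(trials):
    # Draw a sample where H0 is true: the population mean really is 0.
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = mean(sample) / (stdev(sample) / sqrt(n))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    if p < alpha:
        rejections += 1  # a Type I error: H0 is true, yet we rejected it

print(rejections / trials)  # should land in the neighborhood of 0.05
```

Every rejection in this simulation is a false positive by construction, because the data were generated with H0 true. The observed rate sits close to (slightly above, since the normal approximation is a bit loose at n = 30) the 5% that α promises.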
Why H0 Is Always the Starting Point
Statistics is built on skepticism. You don’t start by assuming your theory is correct. You start by assuming nothing is happening, then let the data argue otherwise. This is why H0 exists as the default position. It forces researchers to bring strong enough evidence before claiming a discovery.
This framework dates back to the 1920s and 1930s, developed through the separate but related work of Ronald Fisher and the team of Jerzy Neyman and Egon Pearson. Fisher developed significance testing, where you calculate a p-value to measure evidence against H0. Neyman and Pearson formalized hypothesis testing with pre-set error rates for both Type I and Type II errors. Modern statistics blends both approaches into what’s called null hypothesis significance testing (NHST), which is the version taught in most statistics courses today.
One common misconception: H0 doesn’t have to claim the effect is exactly zero, even though it usually does. You could set H0 to state that a difference is no larger than some specific value. But in practice, most null hypotheses are “nil” hypotheses, predicting zero difference or zero correlation.
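A non-nil null is tested the same way; the only change is that the test statistic is centered on the chosen bound rather than on zero. A hedged sketch with made-up data, again using a normal approximation, for the one-sided null H0: μ ≤ 5 (the difference is no larger than 5) against H1: μ > 5:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical per-user differences in task time (seconds) between two designs.
diffs = [4.1, -2.3, 6.0, 1.5, 3.2, -0.8, 5.5, 2.9, 0.4, 3.8, 4.4, 1.1]

bound = 5.0  # H0: mu <= 5 (a non-nil null: "the difference is at most 5")
n = len(diffs)
se = stdev(diffs) / sqrt(n)

# Center the statistic on the bound, not on zero.
z = (mean(diffs) - bound) / se
p_value = 1 - NormalDist().cdf(z)  # one-sided: evidence that mu exceeds 5
print(round(p_value, 4))
```

Here the sample mean sits below the bound, so the one-sided p-value is large and this non-nil H0 survives. Replacing `bound` with 0 recovers the familiar nil hypothesis as a special case.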

