What Is the Null Value in Statistics and P-Values?

The null value in statistics is the specific number that represents “no effect,” “no difference,” or “no relationship” in a hypothesis test. It is the benchmark your data gets compared against to determine whether something meaningful is happening. Depending on the type of analysis, the null value is usually 0 or 1.

How the Null Value Works

Every statistical test starts with a null hypothesis, which is the default assumption that nothing interesting is going on. The null value is the number at the center of that assumption. When you run a test comparing two groups, you’re essentially asking: how far is my observed result from the null value, and is that distance large enough to be convincing?

For example, if you’re testing whether a new medication lowers blood pressure more than a placebo, the null hypothesis says the two treatments produce the same average result. The null value here is 0, because you’re looking at the difference between two group averages, and “no difference” equals zero. Your test then measures how far the actual observed difference sits from zero and calculates the probability of seeing that gap by chance alone.
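The blood-pressure comparison above can be sketched in a few lines. The group names and measurements here are invented for illustration; the point is that the test evaluates the observed difference in means against a null value of 0.

```python
# Hypothetical example: does a new medication lower blood pressure
# more than a placebo? All numbers are made up for illustration.
from scipy import stats

drug    = [128, 131, 125, 122, 130, 127, 124, 126]  # systolic BP, treated group
placebo = [135, 138, 132, 136, 134, 139, 133, 137]  # systolic BP, placebo group

# Null hypothesis: mean(drug) - mean(placebo) == 0 (the null value).
# Welch's t-test measures how far the observed difference sits from 0.
t_stat, p_value = stats.ttest_ind(drug, placebo, equal_var=False)
print(t_stat, p_value)
```

A strongly negative t statistic and a small p-value would indicate the observed difference sits far from the null value of zero, relative to the variability in the data.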

Common Null Values by Test Type

The null value changes depending on what kind of statistic you’re working with. The two most common null values are 0 and 1, and which one applies depends on whether your test measures a difference or a ratio.

  • Difference of means (t-test): The null value is 0. If two group averages are equal, subtracting one from the other gives zero.
  • Correlation (Pearson’s r): The null value is 0. A correlation of zero means no linear relationship between two variables. Pearson’s r ranges from -1 to +1, so zero sits right in the middle, indicating no association.
  • Regression coefficient: The null value is 0. If a predictor variable has no relationship to the outcome, its slope equals zero.
  • Odds ratio or relative risk: The null value is 1, not 0. An odds ratio of 1.0 means the odds are identical in both groups. A relative risk of 1.0 means the risk is the same whether or not someone was exposed. Ratios use 1 as their baseline because dividing two equal numbers gives 1.
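The ratio case can be made concrete with a hypothetical 2x2 table. The counts below are invented; the arithmetic shows why equal odds in both groups land exactly on the null value of 1.

```python
# Sketch: computing an odds ratio from a hypothetical 2x2 table.
# All counts are invented for illustration.
exposed_cases,   exposed_controls   = 30, 70
unexposed_cases, unexposed_controls = 15, 85

odds_exposed   = exposed_cases / exposed_controls      # 30/70
odds_unexposed = unexposed_cases / unexposed_controls  # 15/85

# Dividing two equal odds would give exactly 1.0, the null value;
# any departure from 1.0 signals an association.
odds_ratio = odds_exposed / odds_unexposed
print(odds_ratio)
```

Here the ratio works out to about 2.43, well above the null value of 1, suggesting the odds differ between groups (whether that difference is significant would depend on its confidence interval).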

This distinction trips up a lot of people. A confidence interval for an odds ratio that includes 1.0 means the result is not statistically significant; a confidence interval for a mean difference that includes 0 means exactly the same thing. The logic is identical in both cases; only the benchmark number changes.

The Null Value and P-Values

The p-value is directly tied to the null value. It answers a very specific question: if the null value were the true state of the world, how likely would you be to see data at least as extreme as what you actually collected?

To calculate it, a test statistic measures the distance between your observed result and the null value, scaled by the variability in your data. The farther your result sits from the null value, the smaller the p-value becomes. A small p-value (typically below 0.05) suggests your data is unlikely under the assumption that the null value is correct, which leads you to reject the null hypothesis.
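The "distance scaled by variability" idea can be written out directly. Below is a minimal sketch of a one-sample t statistic against a null value, with made-up data; the manual calculation is cross-checked against scipy's built-in test.

```python
import math
from scipy import stats

sample = [3.4, 2.9, 3.8, 3.1, 3.6, 3.3, 3.7, 3.0]  # made-up measurements
null_value = 3.0  # the hypothesized "nothing interesting" value

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

# Distance of the observed mean from the null value,
# scaled by the variability (standard error) of the data:
t_stat = (mean - null_value) / (sd / math.sqrt(n))

# Two-sided p-value: probability of a result at least this extreme
# if the null value were the true mean.
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)
print(t_stat, p_value)
```

The farther the sample mean drifts from `null_value` relative to the standard error, the larger `t_stat` grows and the smaller `p_value` becomes.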

Consider a concrete example from Penn State’s statistics curriculum. If you’re testing whether a population mean equals 3 (making 3 your null value) and your sample produces a test statistic of 2.5, the resulting p-value is 0.0127. That means there’s only about a 1.3% chance of getting a result this extreme if the true mean really were 3. Since that probability falls below the conventional 5% cutoff, you’d reject the null hypothesis and conclude the mean is likely something other than 3.
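The exact p-value in the Penn State example depends on the reference distribution used there; as a rough sketch, a standard normal (z) reference gives a similar two-sided figure for a test statistic of 2.5:

```python
from scipy import stats

z = 2.5  # test statistic: distance from the null value in standard errors
p_two_sided = 2 * stats.norm.sf(z)  # total area in both tails beyond ±2.5
print(round(p_two_sided, 4))  # ≈ 0.0124, in the same ballpark as 0.0127
```

Either way, the probability falls below the conventional 0.05 cutoff, so the null hypothesis would be rejected.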

Confidence Intervals and the Null Value

Confidence intervals offer another way to evaluate results against the null value, and many statisticians prefer them because they show you the range of plausible values rather than just a yes-or-no verdict.

The rule is straightforward. If a 95% confidence interval contains the null value, the result is not statistically significant at the 5% level. If the interval excludes the null value entirely, the result is significant. According to NIST, a 95% confidence interval includes the null hypothesis value if and only if a hypothesis test at the 5% significance level would fail to reject it. The two methods always agree when you’re using a two-sided test.

This makes confidence intervals especially intuitive for ratio statistics. If you calculate an odds ratio of 1.8 with a 95% confidence interval of 0.9 to 3.2, that interval contains 1.0 (the null value for ratios), so the association is not statistically significant. But if the interval were 1.2 to 3.2, the null value of 1.0 falls outside, and you’d call the result significant.
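The decision rule is simple enough to state as a one-line check. The helper below is illustrative (the function name is my own), applied to the interval examples from the text:

```python
# Minimal sketch of the rule: a result is significant at the 5% level
# exactly when its 95% confidence interval excludes the null value.
def significant(ci_low, ci_high, null_value):
    return not (ci_low <= null_value <= ci_high)

# Odds ratio examples from the text (null value for ratios is 1.0):
print(significant(0.9, 3.2, 1.0))  # False: interval contains 1.0
print(significant(1.2, 3.2, 1.0))  # True: 1.0 falls outside

# Mean-difference case (null value is 0):
print(significant(-0.4, 2.1, 0.0))  # False: interval contains 0
```

The same function works for differences and ratios alike; only the `null_value` argument changes, which is the whole point of the preceding sections.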

When the Null Value Isn’t Zero or One

Most introductory courses teach the null value as 0 or 1, but researchers sometimes choose a different number depending on their question. NIST notes that in some applications, you may want to adopt a new process only if it exceeds the current one by some threshold. In that case, the null hypothesis states that the difference between two groups equals a specific constant rather than zero.

This comes up frequently in clinical trials. A non-inferiority trial, for instance, doesn’t ask whether a new drug is better than the standard treatment. It asks whether the new drug is no worse than the standard by more than a pre-specified margin. The null value in that test is set to the margin itself, not zero. Superiority trials can work similarly, setting the null value at whatever improvement threshold the researchers consider meaningful before the study begins.
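Testing against a nonzero null value looks almost identical in practice; you simply move the benchmark. Here is a hedged sketch of the NIST-style scenario, with invented numbers, where a new process is adopted only if it beats the current one by more than a 2-unit threshold:

```python
from scipy import stats

# Hypothetical: observed improvements of a new process over the current one.
# We only care if the mean improvement exceeds 2 units, so the null value
# is 2 rather than 0. All numbers are made up for illustration.
improvements = [2.8, 3.1, 2.5, 3.4, 2.9, 3.2, 2.7, 3.0]
null_value = 2.0

# One-sided test: is the mean improvement greater than the threshold?
t_stat, p_value = stats.ttest_1samp(improvements, popmean=null_value,
                                    alternative='greater')
print(t_stat, p_value)
```

A small p-value here pushes you away from the benchmark of 2, supporting adoption of the new process; the mechanics are the same as with a null value of 0.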

The core logic stays the same regardless of which number serves as the null value. You pick the value that represents “nothing worth reporting,” collect your data, and then check whether the evidence pushes you convincingly away from that benchmark.