The two hypotheses that can be supported with quantitative data are the null hypothesis and the alternative hypothesis. These two work as a pair in every quantitative study: one proposes that nothing significant is happening, and the other proposes that something is. Researchers collect numerical data and run statistical tests to determine which hypothesis the evidence supports.
The Null Hypothesis
The null hypothesis, written as H₀, states that there is no meaningful difference or relationship between the things being compared. It’s the default assumption, the “nothing is going on here” position. For example, if you’re testing whether a new teaching method improves test scores, the null hypothesis would say the new method produces no difference in scores compared to the old one.
In formal terms, the null hypothesis sets two values equal to each other. A study comparing two groups would express it as: the average of Group A equals the average of Group B. Any observed difference between the groups is assumed to be random chance unless the data provide strong enough evidence otherwise. The null hypothesis is never truly “proven true.” Instead, when the data doesn’t show a strong enough pattern, researchers say they “fail to reject” it, meaning the numbers didn’t provide enough evidence to move away from the default assumption.
The Alternative Hypothesis
The alternative hypothesis, written as H₁ or Hₐ, is the opposite claim. It states that a real difference or relationship does exist. Using the same teaching method example, the alternative hypothesis would say the new method does change test scores, either raising or lowering them compared to the old method.
When statistical analysis produces results unlikely to occur by chance alone, researchers reject the null hypothesis in favor of the alternative. This is where the concept of statistical significance comes in. The conventional threshold is a p-value below 0.05, meaning that if the null hypothesis were actually true, results at least as extreme as those observed would occur less than 5% of the time. Some fields use stricter thresholds, such as 0.01 or even 0.005, to reduce the risk of false positives.
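This decision process can be sketched in a few lines of Python. The sketch below uses scipy’s two-sample t-test on made-up test-score data (the numbers and variable names are illustrative, not from any real study):

```python
# Hypothetical example: comparing test scores under an old and a new
# teaching method. All scores below are made up for illustration.
from scipy import stats

old_method = [72, 75, 68, 80, 74, 71, 77, 69, 73, 76]
new_method = [78, 82, 75, 85, 79, 77, 84, 76, 80, 83]

# H0: mean(old) == mean(new); H1: the means differ (two-sided).
result = stats.ttest_ind(old_method, new_method)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

alpha = 0.05  # conventional significance threshold
if result.pvalue < alpha:
    print("Reject H0 in favor of the alternative hypothesis.")
else:
    print("Fail to reject H0: not enough evidence of a difference.")
```

Note that the code never “accepts” the null hypothesis; when the p-value is above the threshold, it only reports a failure to reject it, mirroring the logic described above.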
Why These Two Require Quantitative Data
Both hypotheses depend on numbers because statistical tests need measurable inputs. You can’t calculate a p-value or a confidence interval from opinions or descriptions alone. The data must come in a form that allows mathematical comparison: test scores, blood pressure readings, temperatures, weights, reaction times, or any variable you can express as a number.
For this to work, the variables in your hypothesis need to be clearly defined and consistently measured. Researchers call this “operationalizing” a variable. Rather than vaguely measuring “health,” for instance, you’d specify that you’re recording body weight in kilograms using the same calibrated scale for every participant, under the same conditions. The more precisely a variable is defined, the more reliable the quantitative comparison becomes.
The measurement scale also matters. Interval data (like temperature in Celsius, where the difference between degrees is consistent but there’s no true zero) and ratio data (like weight or height, where zero means zero and you can multiply and divide values) are the scales best suited for hypothesis testing. These scales allow the full range of statistical operations needed to compare groups and calculate significance.
Directional vs. Non-Directional Hypotheses
The alternative hypothesis can take two forms depending on how specific the prediction is. A non-directional hypothesis simply states that a difference exists without predicting which direction it goes. For example: “The two groups will have different average scores.” A directional hypothesis predicts the specific direction: “Group A will score higher than Group B.”
Non-directional hypotheses use what’s called a two-tailed test, checking for differences in either direction. Directional hypotheses use a one-tailed test, focusing statistical power on detecting a difference in only one direction. Both are supported or rejected using the same quantitative tools, but the choice between them affects how the statistical test is structured and how sensitive it is to detecting a result.
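The difference between the two test structures is easy to see in code. Using scipy’s `alternative` parameter with hypothetical score data (the numbers are made up), the same comparison can be run both ways:

```python
from scipy import stats

# Hypothetical score data for two groups (made up for illustration).
group_a = [78, 82, 75, 85, 79, 77, 84, 76]
group_b = [72, 75, 68, 80, 74, 71, 77, 69]

# Non-directional H1: the means differ (two-tailed test).
two_tailed = stats.ttest_ind(group_a, group_b, alternative="two-sided")

# Directional H1: Group A scores higher than Group B (one-tailed test).
one_tailed = stats.ttest_ind(group_a, group_b, alternative="greater")

print(f"two-tailed p = {two_tailed.pvalue:.4f}")
print(f"one-tailed p  = {one_tailed.pvalue:.4f}")
```

When the observed difference falls in the predicted direction, the one-tailed p-value is half the two-tailed one, which is exactly the extra sensitivity a directional hypothesis buys at the cost of ignoring effects in the other direction.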
How the Data Actually Supports a Hypothesis
Supporting a hypothesis with quantitative data involves more than just collecting numbers. The study needs enough participants or observations to detect a real effect if one exists. This is called statistical power, and the widely accepted standard is 80%, meaning the study has an 80% chance of detecting a true effect of the assumed size. For a two-sided test at the 0.05 level comparing two equal-sized groups with a medium effect size (Cohen’s d = 0.5), reaching 80% power takes roughly 64 participants per group, about 128 in total, and smaller effects require far more.
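One way to see where that sample-size figure comes from is a Monte Carlo sketch: simulate many studies in which a medium effect truly exists, and count how often a t-test at the 0.05 level detects it. The simulation parameters below are assumptions chosen to match the scenario in the text:

```python
# Monte Carlo sketch of statistical power: simulate many two-group
# studies with a true medium effect (Cohen's d = 0.5) and count how
# often a two-sided t-test at alpha = 0.05 detects it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # fixed seed for reproducibility
alpha, d, n_per_group, n_sims = 0.05, 0.5, 64, 2000

detections = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)  # control group
    b = rng.normal(d, 1.0, n_per_group)    # treatment: mean shifted by d
    if stats.ttest_ind(a, b).pvalue < alpha:
        detections += 1

power = detections / n_sims
print(f"Estimated power with n = {n_per_group} per group: {power:.2f}")
```

With 64 participants per group the estimated power lands near the 0.80 standard; rerunning the simulation with smaller groups shows how quickly power falls below it.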
Once data is collected, researchers calculate a test statistic that summarizes how far the observed results fall from what the null hypothesis would predict. That test statistic is then converted into a p-value. If the p-value falls below the chosen threshold (usually 0.05), the null hypothesis is rejected and the alternative is supported. If it doesn’t, the null hypothesis stands.
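The statistic-to-p-value step can be made concrete by computing a pooled two-sample t statistic by hand and then converting it with the t distribution. The data are made up, and the pooled formula assumes equal variances in both groups:

```python
# Sketch: compute a pooled two-sample t statistic by hand, then
# convert it to a two-sided p-value via the t distribution.
import math
from scipy import stats

a = [78, 82, 75, 85, 79, 77, 84, 76]  # hypothetical scores, Group A
b = [72, 75, 68, 80, 74, 71, 77, 69]  # hypothetical scores, Group B

mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)

# Pooled standard error (assumes equal variances in both groups).
df = len(a) + len(b) - 2
pooled_var = ((len(a) - 1) * var_a + (len(b) - 1) * var_b) / df
se = math.sqrt(pooled_var * (1 / len(a) + 1 / len(b)))

# The t statistic measures how far the observed difference falls
# from the zero difference the null hypothesis predicts.
t_stat = (mean_a - mean_b) / se
p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-sided p-value
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The hand computation matches what `scipy.stats.ttest_ind` returns for the same data, which is a useful sanity check on the formula.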
Confidence intervals provide additional context. A 95% confidence interval gives a range of plausible values for the true effect: if the study were repeated many times, about 95% of such intervals would contain it. If the interval doesn’t include zero (for a difference) or one (for a ratio), that aligns with rejecting the null hypothesis. The American Statistical Association has emphasized that decisions shouldn’t rest on a single p-value cutoff alone, and that the size and precision of the observed effect matter just as much as whether the result crosses the 0.05 line.
Correlation and Causation Hypotheses
Quantitative hypotheses can test two distinct types of relationships. A correlational hypothesis proposes that two variables move together, measured by a correlation coefficient that ranges from +1.0 (perfect positive relationship) to -1.0 (perfect negative relationship). A value near zero means no relationship. This tells you whether variables are related but not whether one causes the other.
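A correlational hypothesis can be tested with a Pearson correlation coefficient. The hours-studied and exam-score numbers below are invented for illustration:

```python
# Sketch of a correlational hypothesis: do hours studied and exam
# scores move together? Data are made up for illustration.
from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]
exam_scores = [52, 58, 61, 67, 70, 75, 79, 86]

# H0: no linear relationship (true correlation is zero).
r, p_value = stats.pearsonr(hours_studied, exam_scores)
print(f"r = {r:.3f}, p = {p_value:.4f}")
```

A coefficient near +1.0 with a small p-value supports the correlational hypothesis, but, as the next paragraph explains, it says nothing by itself about whether studying causes the higher scores.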
A causal hypothesis goes further, proposing that changes in one variable directly produce changes in another. Supporting a causal hypothesis requires a controlled study design, typically an experiment where the researcher manipulates the independent variable (the presumed cause) and measures the dependent variable (the presumed effect) while holding everything else constant. Observational data can support a correlational hypothesis, but establishing causation demands a more rigorous setup. In both cases, the null and alternative hypotheses frame the question, and quantitative data provides the answer.

