Evaluation research is neither exclusively qualitative nor exclusively quantitative. It uses both approaches, often within the same study. The methodology depends on what’s being evaluated, what questions need answering, and what resources are available. This flexibility is one of the defining features of evaluation research, and professional evaluators are expected to draw from whatever methods best fit the context.
Why It’s Not One or the Other
Evaluation research isn’t a single method. It’s a purpose: systematically assessing whether a program, policy, intervention, or product is working. Because that purpose can take so many forms, evaluators choose from the full toolkit of research methods. A school district evaluating a new reading program might use test scores (quantitative) alongside classroom observations and teacher interviews (qualitative). A nonprofit measuring the impact of a mentoring program might combine survey data with in-depth participant stories.
The American Evaluation Association’s guiding principles reflect this methodological openness. Their standards call for evaluators to conduct “data-based inquiries that are thorough, methodical, and contextually relevant,” and they describe a culturally competent evaluator as someone who “draws upon a wide range of evaluation theories and methods to design and carry out an evaluation that is optimally matched to the context.” There’s no preference for one paradigm over the other.
What Quantitative Evaluation Looks Like
Quantitative evaluation collects numerical data and uses statistical analysis to measure outcomes. It’s deductive, meaning it typically starts with a hypothesis or specific question and tests it against the numbers. This approach works well for establishing cause-and-effect relationships, testing whether an intervention produced measurable change, and describing the opinions or behaviors of a large population.
Common quantitative tools in evaluation research include pre- and post-intervention surveys, standardized test scores, performance indicators, attendance records, cost-benefit analyses, and program completion rates. The strength of quantitative data is its generalizability: if you survey 2,000 participants in a job training program and find that 74% secured employment within six months, that number carries weight with funders and policymakers.
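Part of why a number like that carries weight is that its precision can be quantified. Here is a minimal sketch in Python, using the hypothetical figures from the example above, of how an evaluator might report that employment rate with a 95% confidence interval:

```python
import math

# Hypothetical figures from the job-training example above
n = 2000            # surveyed participants
employed = 1480     # employed within six months (74%)

p = employed / n
# Normal-approximation 95% confidence interval for a proportion
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{p:.1%} employed (95% CI: {p - margin:.1%} to {p + margin:.1%})")
# -> 74.0% employed (95% CI: 72.1% to 75.9%)
```

At a sample of 2,000, the interval is under four percentage points wide, which is part of what makes this kind of finding persuasive to funders and policymakers.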
Summative evaluations, which judge the overall effectiveness of a program after it’s completed, lean heavily on quantitative methods. They answer questions like “Did it work?” and “By how much?” with concrete metrics.
What Qualitative Evaluation Looks Like
Qualitative evaluation collects narrative data rather than numbers. It explores how and why something works (or doesn’t) by capturing the perspectives, experiences, and interpretations of the people involved. The data comes in the form of interview transcripts, field notes, focus group recordings, and documents rather than spreadsheets.
The most common qualitative methods in evaluation are one-on-one interviews, focus groups, participant observation, ethnography, and archival analysis. Interviews are typically semi-structured, meaning the evaluator has a guide with specific questions but asks them conversationally and can follow unexpected threads. Focus groups are particularly useful for understanding group dynamics, normative practices, and shared experiences with a program or service.
Qualitative methods shine during formative evaluation, the phase where you’re still developing or improving a program. They help you understand how participants experience the program, where the friction points are, and what’s happening in the process that numbers alone can’t capture. If a tutoring program has a 40% dropout rate, qualitative interviews with participants who left will reveal whether the issue is scheduling, quality, accessibility, or something else entirely.
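A common way to make that kind of narrative data actionable is to code the transcripts and tally the codes. Here is a minimal sketch, with entirely hypothetical codes and counts for the tutoring example above:

```python
from collections import Counter

# Hypothetical codes assigned while reading exit-interview transcripts
# from the tutoring example above (one code per reason a participant cited)
coded_reasons = [
    "scheduling", "scheduling", "transportation", "scheduling",
    "session quality", "transportation", "scheduling", "personal",
]

# Tallying the codes shows where the dropout stories cluster
for reason, count in Counter(coded_reasons).most_common():
    print(f"{reason}: {count}")
# -> scheduling: 4, transportation: 2, session quality: 1, personal: 1
```

The counts don’t replace the interviews; they summarize where the stories cluster so program staff know which friction point to tackle first.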
How Mixed Methods Bring Both Together
Mixed-methods evaluation research has become the norm for complex programs precisely because combining quantitative and qualitative data in a single study produces a fuller picture, and stronger conclusions, than either approach alone. The quantitative side tells you what happened and how much. The qualitative side tells you why it happened and what it meant to the people involved.
For example, a survey questionnaire captures a limited number of structured responses. Adding qualitative methods can reveal unanticipated dimensions of the topic that help interpret the quantitative data. If satisfaction survey scores are high but your interviews reveal that participants are only satisfied because they’ve lowered their expectations, the qualitative findings completely reframe the quantitative results.
Evaluators sometimes use these methods simultaneously and sometimes sequentially. In a sequential design, one method informs the next. You might start with exploratory interviews to understand a program’s context, then design a survey based on what you learned, and finally conduct follow-up interviews to explain surprising survey findings. Standard mixed-methods notation captures both order and emphasis: a study led by dominant qualitative work is written QUAL → quan, while a quantitative-led study is written QUAN → qual, with capitals marking the dominant method and the arrow marking the sequence.
How Evaluators Decide Which Approach to Use
The choice between qualitative, quantitative, or mixed methods comes down to several practical factors, not philosophical preference.
- The evaluation question. “How many participants improved?” calls for quantitative methods. “Why did some participants drop out?” calls for qualitative. “Did this program work, and how can we improve it?” calls for both.
- Budget and timeline. Large-scale surveys and household data collection are expensive. One evaluation of a global health initiative opted not to fund specific household surveys “largely due to cost considerations” and relied on secondary data instead. Qualitative studies can be less expensive per data point but more labor-intensive to analyze.
- Stakeholder needs. Funders often want numbers. Program staff often want stories and process insights. A good evaluation design considers who will use the findings and how.
- Program complexity. Simple interventions with clear outcomes may only need quantitative measurement. Complex, multi-site initiatives almost always require mixed methods. Experts in evaluation design suggest investing up to 20% of the evaluation budget in design work alone for complex programs.
Trade-offs are inevitable. Short timelines limit the depth of qualitative fieldwork. Tight budgets restrict sample sizes. One review of evaluation practice identified “overambitious and underfunded terms of reference” as a common problem that forces evaluators into compromises they wouldn’t otherwise make.
Triangulation: Using Both to Check Your Work
One of the most valuable reasons to combine methods is triangulation, where you use different data sources or methods to cross-check your findings. The logic is straightforward: the weaknesses of one method are the strengths of another, and combining them lets you overcome limitations that either method would have on its own.
Triangulation serves two purposes. As verification, comparing results from different methods provides cross-validation: if your survey data and your interview data point to the same conclusion, you can be more confident in that finding. As a check on completeness, qualitative and quantitative data can illuminate complementary aspects of the same phenomenon, giving you a richer understanding than either could alone.
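As a toy illustration of the verification side, the cross-check can be as simple as asking whether two independent sources land in the same place. All data and the tolerance threshold here are hypothetical:

```python
# Hypothetical data: survey scores (1-5) and sentiment codes from transcripts
survey_scores = [4, 5, 4, 3, 5, 4, 4, 5]
interview_codes = ["positive", "positive", "mixed",
                   "positive", "negative", "positive"]

share_satisfied = sum(s >= 4 for s in survey_scores) / len(survey_scores)
share_positive = interview_codes.count("positive") / len(interview_codes)

# A tolerance of 15 percentage points is an arbitrary illustrative choice
if abs(share_satisfied - share_positive) <= 0.15:
    print(f"Converging: {share_satisfied:.0%} vs {share_positive:.0%}")
else:
    print(f"Diverging: {share_satisfied:.0%} vs {share_positive:.0%} -- dig deeper")
```

In this fabricated sample the two sources diverge, which, as discussed next, is a finding in itself rather than a failure.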
When the two types of data contradict each other, that’s informative too. It signals that one set of findings may be incomplete or that the phenomenon is more complex than initially assumed. Rather than being a failure, contradictions often point evaluators toward the most important insights.