What Level of Evidence Is a Mixed Methods Study?

A mixed methods study doesn’t have a single, fixed level of evidence. Its placement in a hierarchy depends on the strength of its quantitative component. In the most widely used frameworks, a mixed methods study can rank anywhere from Level 1 to Level 3, determined by whether its quantitative strand is a randomized controlled trial, a quasi-experimental design, or an observational study.

This surprises many people who expect a simple answer. The reason is that mixed methods research combines qualitative and quantitative data by design, while traditional evidence hierarchies were built to rank quantitative studies alone. The result is a classification system in which a mixed methods study's rank shifts with the design of its quantitative strand rather than sitting at a fixed tier.

How the Quantitative Component Determines Level

The clearest framework for ranking mixed methods studies comes from a widely adopted hierarchy used in evidence-based practice education. It assigns mixed methods studies to one of three tiers based entirely on what kind of quantitative study is embedded within them:

  • Level 1: Mixed methods designs that include a randomized controlled trial as the quantitative component. These sit alongside standalone RCTs and systematic reviews of RCTs at the top of the hierarchy.
  • Level 2: Mixed methods designs paired with quasi-experimental quantitative studies, such as studies that compare groups without random assignment.
  • Level 3: Exploratory, convergent, or multiphasic mixed methods studies, as well as explanatory mixed methods designs paired with nonexperimental quantitative studies like surveys or cohort studies.

In practical terms, if you’re reading a mixed methods study and need to assign it a level, look at the quantitative strand first. A study that randomized participants into treatment and control groups, then added patient interviews to explore their experiences, would be Level 1. A study that surveyed a population and conducted focus groups to interpret the survey findings would be Level 3.
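That lookup rule can be sketched in a few lines of code. This is a minimal illustration with hypothetical function and category names, not an official tool; assigning a real level requires reading the study's methods section.

```python
# Sketch of the three-tier rule: the level of a mixed methods study
# follows the design of its quantitative strand. Function and
# category names here are illustrative only.

def mixed_methods_level(quantitative_design: str) -> int:
    """Map the quantitative strand's design to an evidence level."""
    design = quantitative_design.strip().lower()
    if design == "randomized controlled trial":
        return 1  # MM study embedding an RCT sits with standalone RCTs
    if design == "quasi-experimental":
        return 2  # group comparison without random assignment
    # Nonexperimental designs (surveys, cohort studies) and
    # exploratory/convergent/multiphasic designs fall to Level 3.
    return 3

# The two examples from the text:
print(mixed_methods_level("randomized controlled trial"))  # → 1
print(mixed_methods_level("survey"))                       # → 3
```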

Why Traditional Hierarchies Struggle With Mixed Methods

Evidence hierarchies were originally designed to answer a narrow question: how confident can we be that a treatment causes a specific outcome? Randomized controlled trials sit at the top because randomization is the best tool for isolating cause and effect. Observational studies rank lower because they can’t rule out confounding variables as effectively.

Mixed methods research asks a fundamentally different kind of question. It’s not just measuring whether something works. It’s also exploring why it works, how patients experience it, what barriers exist to implementation, or what the intervention means in a specific cultural context. The qualitative component generates insights that a hierarchy focused purely on causal inference wasn’t built to evaluate.

This creates an awkward fit. When you rate a mixed methods study by its quantitative strand alone, you’re essentially ignoring the qualitative contribution, which may be the most valuable part of the study for certain decisions. A Level 3 mixed methods study with rich qualitative data about patient experiences could be far more useful for designing a real-world health program than a Level 1 RCT that measured only clinical endpoints.

How Qualitative Evidence Gets Its Own Rating

To address this gap, separate tools exist for evaluating the qualitative side of mixed methods research. The most notable is GRADE-CERQual, recommended by Cochrane for assessing qualitative evidence syntheses. While the standard GRADE system rates quantitative evidence on domains like risk of bias and imprecision, CERQual evaluates qualitative findings on four different components: relevance to the review question, methodological limitations, adequacy of the data, and coherence of the findings.
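To show the structure of that assessment, here is an illustrative sketch of the four CERQual components. The real assessment is a structured judgment by reviewers, not an algorithm; the downgrade logic below is my own simplification of how serious concerns in any component lower the overall confidence rating.

```python
# Illustrative sketch of CERQual's four components. The scoring scheme
# (0-2 concern scale, one downgrade per component with serious
# concerns) is a simplification for demonstration, not official
# CERQual procedure.

CERQUAL_COMPONENTS = (
    "methodological limitations",
    "coherence",
    "adequacy of data",
    "relevance",
)

LEVELS = ["high", "moderate", "low", "very low"]

def overall_confidence(concerns: dict[str, int]) -> str:
    """Start at 'high'; step down once per component whose concern
    level is serious (>= 2 on this sketch's 0-2 scale)."""
    downgrades = sum(1 for c in CERQUAL_COMPONENTS if concerns.get(c, 0) >= 2)
    return LEVELS[min(downgrades, len(LEVELS) - 1)]

# No serious concerns in any component:
print(overall_confidence({c: 0 for c in CERQUAL_COMPONENTS}))  # → high
```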

The Joanna Briggs Institute takes yet another approach, maintaining entirely separate evidence hierarchies for different types of questions. Under its “meaningfulness” hierarchy, which addresses how people experience health conditions and interventions, a mixed methods systematic review sits at Level 1. This reflects the reality that for questions about patient experience, acceptability, or feasibility, mixed methods evidence can be the strongest available.

Cochrane’s own handbook recommends that when qualitative and quantitative evidence are combined in a review, each strand should be synthesized separately using appropriate methods before being integrated. This keeps the qualitative evidence from being forced into a framework that doesn’t suit it, while still allowing both types of evidence to inform the final conclusions.

How Quality Is Assessed in Practice

Beyond level-of-evidence rankings, the quality of a specific mixed methods study matters enormously. A poorly designed RCT with qualitative interviews (Level 1 by hierarchy) can produce weaker evidence than a carefully conducted observational mixed methods study (Level 3).

The Mixed Methods Appraisal Tool, known as MMAT, was developed specifically for this purpose. Its 2018 version evaluates five categories of study design: qualitative, randomized controlled, nonrandomized, quantitative descriptive, and mixed methods. For mixed methods studies specifically, the tool assesses whether there’s a clear rationale for using a mixed approach, whether the different components are effectively integrated, and whether the outputs of the integration are adequately interpreted. A mixed methods study where the qualitative and quantitative strands never meaningfully connect scores poorly, regardless of where it falls in a hierarchy.
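To make the appraisal step concrete, the MMAT's mixed methods criteria can be tallied as a simple checklist. The criterion wording below paraphrases the 2018 tool's mixed methods category; the data structure and tallying are my own illustration, and the MMAT itself discourages reducing an appraisal to a single summary score.

```python
# Paraphrased MMAT 2018 mixed methods criteria, tallied as a simple
# checklist. Illustrative sketch only, not an official scoring tool.

MIXED_METHODS_CRITERIA = [
    "Adequate rationale for using a mixed methods design",
    "Components effectively integrated to answer the research question",
    "Outputs of the integration adequately interpreted",
    "Divergences between quantitative and qualitative results addressed",
    "Components adhere to the quality criteria of their own tradition",
]

def appraise(answers: list[bool]) -> str:
    """Report how many criteria a study meets ('yes' answers)."""
    met = sum(answers)
    return f"{met}/{len(MIXED_METHODS_CRITERIA)} criteria met"

# A study whose strands never meaningfully connect fails the
# integration-related criteria regardless of its hierarchy level:
print(appraise([True, False, False, False, True]))  # → 2/5 criteria met
```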

This is worth keeping in mind if you’re evaluating evidence for a paper, a systematic review, or clinical practice. The level number tells you something about the study’s potential for establishing causation, but the appraisal of its actual methodological quality tells you whether that potential was realized.

How Organizations Use Mixed Methods Evidence

Major health organizations increasingly treat mixed methods evidence as essential rather than supplementary. The WHO’s evidence-to-decision framework explicitly calls for mixed methods studies and reviews across multiple criteria: assessing whether patients value certain health outcomes, evaluating human rights and sociocultural acceptability of interventions, examining health equity implications, and determining whether an intervention is feasible within a given health system.

For these types of questions, purely quantitative evidence often falls short. Knowing that a treatment reduces mortality by 15% doesn’t tell you whether communities will accept it, whether health workers can deliver it, or whether it will widen or narrow existing health disparities. Mixed methods research fills those gaps, which is why these frameworks carve out specific roles for it regardless of where it might land in a traditional hierarchy.

The practical takeaway: if someone asks you to assign a single level of evidence to a mixed methods study, look at the quantitative design and use the three-tier system. But recognize that this number captures only part of the study’s contribution. For questions about effectiveness, the quantitative level matters most. For questions about meaning, experience, acceptability, or implementation, the qualitative component may carry equal or greater weight, and tools like CERQual and MMAT give you a more complete picture of the study’s strength.