Is a Scoping Review Qualitative or Quantitative?

A scoping review is neither strictly qualitative nor strictly quantitative. It sits in its own category as a type of evidence synthesis that can include qualitative studies, quantitative studies, theoretical papers, and even other reviews. The method itself is designed to map the breadth of available evidence on a topic rather than to answer a narrow research question using one particular approach.

Why It Doesn’t Fit Either Label

The qualitative-versus-quantitative distinction describes how a study collects and analyzes original data. A qualitative study gathers interviews, observations, or text and looks for themes. A quantitative study collects numerical data and runs statistical analyses. A scoping review does neither of these things. Instead, it pulls together findings from studies that have already been conducted, regardless of their design, and organizes what it finds into a structured overview.

This is what makes the question tricky. A scoping review can pull in randomized controlled trials (quantitative), interview-based studies (qualitative), policy documents, case reports, and grey literature all within the same review. The JBI Manual for Evidence Synthesis, which is the leading methodological guide for this type of work, explicitly describes scoping reviews as including “a diverse range of studies, including theoretical, qualitative, quantitative, and review-based research.” The method is intentionally broad. Its purpose is to map what’s out there, not to pool results statistically or build a single interpretive framework.

How Scoping Reviews Actually Handle Data

If scoping reviews aren’t doing statistical meta-analysis or qualitative thematic analysis in the traditional sense, what are they doing? Mostly, they use what’s called narrative synthesis and basic content analysis. Reviewers extract key details from each included study (things like the population studied, the methods used, the main findings, and the setting) and then organize that information into tables, charts, and written summaries that reveal patterns across the literature.
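To make that charting step concrete, here is a minimal sketch in Python. The field names and example records are invented for illustration; they are not drawn from any particular review or tool:

```python
# A minimal sketch of scoping-review data extraction ("charting").
# Each included source becomes one record with a few charted fields;
# the fields and studies below are illustrative assumptions.
included_studies = [
    {"design": "RCT", "population": "adults", "setting": "hospital"},
    {"design": "RCT", "population": "children", "setting": "community"},
    {"design": "qualitative interview", "population": "nurses", "setting": "community"},
    {"design": "policy document", "population": "n/a", "setting": "national"},
]

def chart(studies, field):
    """Count how many included studies share each value of one charted field."""
    counts = {}
    for study in studies:
        counts[study[field]] = counts.get(study[field], 0) + 1
    return counts

# A descriptive summary of the evidence map, not a statistical analysis.
print(chart(included_studies, "design"))
```

The output is just a frequency table of study designs, which is exactly the kind of "pattern across the literature" a scoping review reports.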

A 2023 cross-sectional analysis of how scoping reviews present their results found that 35 different types of data visualization were used. The most common were simple bar charts, pie charts, and cross-tabulation tables, which together accounted for about 61% of all visualizations. Graphs appeared in roughly two-thirds of scoping reviews that used any visual presentation. These are descriptive tools. They show how many studies addressed a given population, what methods researchers favored, or which geographic regions were represented. They don’t calculate effect sizes or test hypotheses.

The JBI Scoping Review Methodology Group published updated guidance in 2023 specifically addressing how to extract, analyze, and present results. That guidance recommends basic qualitative content analysis as a core technique, meaning reviewers categorize and count the types of evidence they find rather than performing deep interpretive analysis. This sits somewhere between qualitative and quantitative work, which is part of why scoping reviews resist a clean label.
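A toy illustration of that categorize-and-count approach, with invented categories and a deliberately crude keyword classifier (real content analysis would use reviewer judgment, not string matching):

```python
from collections import Counter

# Basic content analysis as categorize-and-count: assign each source
# to an evidence category, then tally. Categories and the keyword
# rules below are illustrative assumptions, not JBI's actual scheme.
CATEGORIES = {
    "trial": "quantitative",
    "survey": "quantitative",
    "interview": "qualitative",
    "ethnography": "qualitative",
    "framework": "theoretical",
}

def categorize(description):
    """Map a free-text study description to an evidence category by keyword."""
    for keyword, category in CATEGORIES.items():
        if keyword in description.lower():
            return category
    return "other"

abstracts = [
    "Randomized controlled trial of X",
    "Semi-structured interview study",
    "National survey of clinicians",
    "A conceptual framework for Y",
]
tally = Counter(categorize(a) for a in abstracts)
print(tally)  # counts by category, not effect sizes or themes
```

Note that the result is a tally, not an interpretation: this is why the approach sits between qualitative and quantitative work.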

How Scoping Reviews Compare to Systematic Reviews

Much of the confusion comes from grouping scoping reviews with systematic reviews, which do tend to fall more clearly on the quantitative or qualitative side. A quantitative systematic review pools numerical results from multiple trials and runs a meta-analysis to produce a single effect estimate. A qualitative systematic review synthesizes findings from interview or observational studies using formal interpretive methods. Both types aim to answer a specific, focused question.

Scoping reviews have a different goal. They map the landscape of evidence on a topic and identify key concepts, research trends, and knowledge gaps. Because of this broader aim, they don’t require a formal critical appraisal of each included study’s quality, which is mandatory in systematic reviews. A published comparison in BMC Medical Research Methodology describes this as a defining difference: systematic reviews assess risk of bias in every included study, while scoping reviews generally do not. Scoping reviews also don’t typically produce practice recommendations the way systematic reviews do. Their output is descriptive, not prescriptive.

Where Scoping Reviews Sit in the Research Landscape

In formal typologies of evidence synthesis, scoping reviews occupy their own category. A classification published in PLoS One lists them separately from quantitative systematic reviews, qualitative evidence syntheses, and mixed-methods reviews. They’re grouped alongside mapping reviews as a distinct approach. If you had to assign a label, “mixed” comes closest, because scoping reviews routinely combine evidence from different research traditions. But even “mixed methods review” is treated as a separate category in most typologies.

The PRISMA Extension for Scoping Reviews (PRISMA-ScR), published in the Annals of Internal Medicine and maintained by the EQUATOR Network, defines scoping reviews as “a type of knowledge synthesis” that follows “a systematic approach to map evidence on a topic and identify main concepts, theories, sources, and knowledge gaps.” Notice the language: knowledge synthesis, not qualitative research or quantitative research. The method borrows systematic search strategies from systematic reviews but applies them with a wider lens and a more flexible approach to what counts as relevant evidence.

What This Means If You’re Writing One

If you’re a student or researcher trying to classify your scoping review for a methods section, proposal, or ethics application, describing it as a form of evidence synthesis is the most accurate framing. You can note that it incorporates both qualitative and quantitative sources without being a mixed-methods study in the traditional sense. The key distinction is that you’re not generating new data. You’re systematically organizing existing evidence to show what research exists, what it covers, and where the gaps are.

Your protocol should follow the PRISMA-ScR checklist and the JBI scoping review guidance, both of which are widely recognized as the methodological standards for this type of work. When presenting results, expect to use descriptive summaries, tables charting the characteristics of included studies, and simple visualizations like bar charts or cross-tabulations rather than statistical tests or deep thematic interpretation.
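As one last illustration, here is a simple cross-tabulation of invented study characteristics, the kind of descriptive table a scoping review's results section typically reports (the data and field pairings are made up for the example):

```python
# Illustrative cross-tabulation of included-study characteristics,
# e.g. design by setting. All data are invented for the sketch.
rows = [
    ("RCT", "hospital"),
    ("RCT", "community"),
    ("qualitative", "community"),
    ("qualitative", "community"),
]

def crosstab(pairs):
    """Build a nested dict counting co-occurrences of two charted fields."""
    table = {}
    for a, b in pairs:
        table.setdefault(a, {}).setdefault(b, 0)
        table[a][b] += 1
    return table

print(crosstab(rows))
# {'RCT': {'hospital': 1, 'community': 1}, 'qualitative': {'community': 2}}
```

Nothing here tests a hypothesis or estimates an effect; the table simply shows where the existing evidence clusters, which is the scoping review's whole job.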