Yes, you can use meta-analysis in a literature review, and doing so often strengthens your work by adding statistical weight to your conclusions. A meta-analysis is the process of statistically combining results from multiple similar studies to produce a single pooled estimate of an effect. It fits naturally inside a broader review when the studies you’re synthesizing share enough common ground to make combining their data meaningful.
That said, not every literature review can or should include a meta-analysis. The method has specific requirements, and using it poorly can undermine the very credibility it’s meant to provide. Here’s what you need to know to decide whether it fits your review and how to do it right.
How Meta-Analysis Relates to Other Review Types
The terminology gets confusing because “literature review,” “systematic review,” and “meta-analysis” overlap but aren’t interchangeable. A traditional (narrative) literature review summarizes and discusses existing research on a topic, often organized thematically. The author selects studies and interprets them, but the process for finding and including those studies isn’t necessarily standardized.
A systematic review is more rigorous. It uses clearly defined, reproducible methods to find all available studies on a specific question, then evaluates their quality. A meta-analysis is the optional statistical layer that can sit inside a systematic review: it takes the numerical results from the included studies and combines them into a single summary estimate. During a systematic review, you evaluate study quality first, then decide whether the data is compatible enough to pool statistically. If the studies use different measurement tools, different timepoints, or report outcomes in incompatible formats, you may need to stop at a narrative synthesis rather than running a meta-analysis.
So meta-analysis is a tool, not a standalone document. You can embed it in a systematic review, and you can reference or incorporate meta-analytic findings in a narrative literature review. What matters is that the underlying methodology is sound.
When Meta-Analysis Works (and When It Doesn’t)
The core requirement is multiple studies measuring roughly the same thing in roughly the same way. If you’re reviewing five randomized trials that all tested the same intervention against a placebo and reported the same outcome measure, pooling their results makes sense. If your studies span different populations, different interventions, and different outcomes, forcing them into a single statistical summary can be misleading.
The formal way researchers assess this is through heterogeneity, which measures how much variation exists between studies beyond what you’d expect from chance alone. The most common metric is I², which ranges from 0% to 100%. The Cochrane Handbook offers a rough guide: 0% to 40% might not be important, 30% to 60% may represent moderate heterogeneity, 50% to 90% may be substantial, and 75% to 100% is considered considerable. These ranges overlap intentionally because context matters. High heterogeneity doesn’t automatically disqualify a meta-analysis, but it does mean you need to investigate why results differ, using techniques like subgroup analyses or statistical models that account for variation between studies.
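To make the I² definition concrete, here is a minimal sketch of how it falls out of Cochran's Q under standard inverse-variance weighting. The study effects and standard errors are made-up numbers for illustration, not data from any real review:

```python
import numpy as np

def i_squared(effects, std_errors):
    """Cochran's Q and the I^2 heterogeneity statistic.

    effects: per-study effect estimates (e.g. log odds ratios)
    std_errors: their standard errors
    """
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(std_errors, dtype=float) ** 2  # inverse-variance weights
    pooled = np.sum(w * y) / np.sum(w)                  # fixed-effect pooled estimate
    q = np.sum(w * (y - pooled) ** 2)                   # Cochran's Q
    df = len(y) - 1
    # I^2: share of total variation beyond what chance (df) would predict
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Hypothetical log odds ratios from five studies (made-up numbers)
q, i2 = i_squared([0.10, 0.25, 0.18, 0.60, 0.15],
                  [0.12, 0.15, 0.10, 0.14, 0.20])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

With these invented inputs the one outlying study pushes I² into the moderate-to-substantial range, which is exactly the situation where you would start probing subgroups rather than reporting the pooled number uncritically.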
Broad inclusion criteria pull in more studies but increase heterogeneity. Narrow criteria keep studies more comparable but may leave you with too few to analyze. Finding the right balance is one of the harder judgment calls in the process.
Why It Strengthens Your Review
In evidence hierarchies used across medicine and the sciences, systematic reviews with meta-analyses sit at the top. The Oxford Centre for Evidence-Based Medicine ranks a systematic review of randomized controlled trials (Level 1A) as the strongest form of therapeutic evidence, above individual trials, cohort studies, case-control studies, and expert opinion. Adding a well-executed meta-analysis to your review elevates it from a summary of what exists to a quantitative answer about what the evidence, taken together, actually shows.
A meta-analysis also reveals things that individual studies can’t. A single trial might be too small to detect a meaningful effect. By pooling data across studies, you gain statistical power. You can also quantify how consistent the evidence is, identify patterns across subgroups, and flag whether smaller studies with dramatic results might be skewing the overall picture.
Reading and Presenting a Forest Plot
The forest plot is the signature visual of a meta-analysis, and understanding it is essential whether you’re creating one or interpreting someone else’s. Each row represents a single study. A box marks that study’s point estimate (its individual result), and a horizontal line extending from the box shows the confidence interval, the range of plausible values. The size of the box reflects how much weight that study carries in the overall analysis; larger studies with more precise results get bigger boxes.
At the bottom, a diamond shape represents the pooled result across all included studies. The center of the diamond is the overall point estimate, and its width shows the 95% confidence interval. A vertical line, usually at 1.0 for ratio measures or 0 for mean differences, marks the line of no effect. If the diamond doesn’t cross that line, the pooled result is statistically significant.
Watching for Publication Bias
One of the biggest threats to a meta-analysis is publication bias: the tendency for studies with positive or dramatic results to get published while studies with null findings sit in file drawers. If your meta-analysis only captures the published winners, the pooled estimate will overstate the true effect.
The most common detection tool is a funnel plot, a scatter plot that maps each study’s effect size against its precision (usually related to sample size). In an unbiased set of studies, the plot should look roughly symmetrical, like an inverted funnel. Asymmetry, particularly a gap where small, negative studies should be, suggests that some results may be missing. Statistical tests can formalize this visual assessment, but no method can definitively prove bias exists. The best practice is to search broadly for unpublished studies, trial registrations, and grey literature to minimize the problem from the start.
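One of the statistical tests alluded to above is Egger's regression, which formalizes the eyeball check: regress each study's standardized effect on its precision, and an intercept far from zero flags asymmetry. The sketch below shows only the intercept calculation (a full test would also need its standard error), and the studies are invented so that small, imprecise ones report inflated effects:

```python
import numpy as np

def egger_intercept(effects, std_errors):
    """Egger's regression intercept, a simple funnel-asymmetry check.

    Regresses the standardized effect (effect / SE) on precision (1 / SE).
    An intercept well away from zero suggests funnel-plot asymmetry.
    """
    se = np.asarray(std_errors, dtype=float)
    standardized = np.asarray(effects, dtype=float) / se
    precision = 1.0 / se
    slope, intercept = np.polyfit(precision, standardized, 1)
    return intercept

# Hypothetical studies: the small ones (large SE) show the biggest effects
intercept = egger_intercept([0.80, 0.70, 0.40, 0.30, 0.25],
                            [0.40, 0.35, 0.15, 0.10, 0.08])
print(f"Egger intercept = {intercept:.2f}")
```

A clearly positive intercept here mirrors the visual gap in the funnel plot; with unbiased, symmetric data the intercept hovers near zero. As the section notes, this is evidence of asymmetry, not proof of publication bias.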
Reporting Standards to Follow
If you include a meta-analysis in your review, PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is the accepted reporting standard. It specifies that you should report the statistical model you used (fixed-effect or random-effects) and explain why you chose it. You need to describe the method for combining studies, such as inverse-variance weighting. You should specify how you assessed heterogeneity, report the summary estimate with its confidence interval, and include heterogeneity statistics like I². Even the abstract should contain the summary estimate and confidence interval if a meta-analysis was performed.
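The fixed-effect versus random-effects choice PRISMA asks you to justify boils down to whether you allow for between-study variance. A common random-effects approach is the DerSimonian-Laird estimator, sketched below with made-up study data; real analyses should use a vetted package (RevMan, metafor) rather than hand-rolled code:

```python
import numpy as np

def pool_random(effects, std_errors):
    """Random-effects pooled estimate via the DerSimonian-Laird tau^2."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_star = 1.0 / (1.0 / w + tau2)                # weights widened by tau^2
    est = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return est, (est - 1.96 * se, est + 1.96 * se), tau2

# Hypothetical log odds ratios with visible heterogeneity (made-up numbers)
est, (lo, hi), tau2 = pool_random([0.10, 0.25, 0.18, 0.60, 0.15],
                                  [0.12, 0.15, 0.10, 0.14, 0.20])
print(f"random-effects pooled = {est:.3f}, tau^2 = {tau2:.4f}")
```

When tau² is zero the two models coincide; when it is positive, the random-effects interval is wider, which is the honest price of heterogeneous evidence and part of why the model choice must be reported and justified.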
Following PRISMA isn’t just good practice. Many journals require it, and reviewers will check for it. The checklist keeps your reporting transparent enough that another researcher could evaluate or replicate your work.
Software That Handles the Analysis
You don’t need to build a meta-analysis from scratch. Several software tools support the full process, from searching and screening studies to extracting data and running the statistical analysis. Covidence, developed in association with Cochrane, is widely used for screening and extraction. DistillerSR and EPPI-Reviewer Web are comprehensive platforms that cover nearly the entire review workflow. RevMan Web, also from Cochrane, is a standard choice for generating forest plots and running pooled analyses. For researchers comfortable with coding, packages in R (like “metafor”) and modules in Stata offer more flexibility for advanced analyses like meta-regression.
A 2022 feature analysis found that DistillerSR, Nested Knowledge, and EPPI-Reviewer Web offered the highest density of review-focused tools among web-based platforms. Covidence remains popular for its clean interface and integration with reference managers. The right choice depends on your budget, team size, and how much of the workflow you want handled in a single platform.
Integrating Quantitative and Qualitative Evidence
Sometimes your literature review covers both quantitative studies (trials, cohort studies) and qualitative research (interviews, case studies). You can still use meta-analysis for the quantitative portion while synthesizing qualitative findings narratively. Several formal methods exist for this kind of integration, including mixed studies reviews, realist reviews, and narrative synthesis frameworks. These approaches combine the statistical precision of pooled quantitative data with the contextual richness of qualitative research, giving readers both the numbers and the story behind them.
The key is to be explicit about which portions of your review use which methods. Readers should be able to see clearly where the meta-analytic results end and the narrative interpretation begins.

