Reviewing a scientific manuscript follows a consistent structure: you read the paper multiple times with increasing depth, evaluate the research question and methods, assess whether the data support the conclusions, and write a structured report with a clear recommendation. Whether you’re reviewing your first paper or your fiftieth, the process becomes more efficient when you approach it systematically.
Start With a Quick First Read
Before diving into details, read the entire manuscript once without taking notes. Your goal is to understand the big picture: what question the authors are trying to answer, how they tried to answer it, and what they claim to have found. This first pass helps you assess whether the paper falls within your expertise. If it doesn’t, or if you have a potential conflict of interest with the authors or their institution, decline the invitation promptly so the editor can find someone else.
During this initial read, you’re essentially performing the same triage that editors do before sending a paper out for review. Editors evaluate whether a paper aligns with the journal’s scope, contributes something meaningfully new, and meets a baseline standard of clarity and methodological soundness. Around two-thirds of manuscripts are declined at the editorial stage alone, often because the research question isn’t novel enough or the methods don’t support the conclusions. As a reviewer, you’re picking up where that filter left off.
Evaluate the Research Question and Framing
On your second, more careful read, start with the introduction and ask: is this question worth asking? A strong manuscript identifies a gap in existing knowledge and explains why filling that gap matters. Check whether the authors engage meaningfully with the existing literature. An introduction with only a handful of references, for instance, may signal that the authors haven’t situated their work in the broader field. The research question should be clearly stated, and the study’s goals should follow logically from that question.
Assess the Methods for Rigor
The methods section is the backbone of any scientific paper, and your job is to determine whether the study was designed well enough for the results to be credible and reproducible. Focus on several core questions. Is the experimental design appropriate for the research question? Are proper controls in place? Is the sample size sufficient and justified? Are the analytical methods suitable for the type of data collected?
Look for whether the authors accounted for relevant variables that could influence results, such as age, sex, or other demographic factors in human studies. Check that inclusion and exclusion criteria are clearly defined, that the timeline of the study makes sense, and that the results would be generalizable to the population the authors claim to be studying. If the methods are outdated or unreliable compared to current best practices, that’s a major concern worth flagging.
You don’t need to be a statistician to catch common problems. If the authors report a finding as statistically significant but drew it from a very small sample, or if they didn’t describe how they handled missing data, note it. If you’re unsure about a specific statistical technique, it’s fine to say so in your review and suggest the editor consult a specialist.
Scrutinize the Data and Figures
Once you’re satisfied the methodology is sound (or have noted where it isn’t), turn your attention to the data itself. Tables, figures, and images are where many problems become visible. Look specifically for:
- Insufficient data: Are there enough data points to support the claims being made?
- Contradictory data: Do numbers in different tables or figures conflict with each other, or with what’s stated in the text?
- Unclear presentation: Are axes labeled, units specified, and error bars explained?
- Confirmatory results only: Does the study simply replicate what’s already well established, without making a strong case for why repetition was needed?
Critical flaws at this level, such as data that directly contradict the authors’ own conclusions, often point toward rejection.
Check Whether Conclusions Match the Evidence
One of the most common problems in manuscripts is overreach: authors claiming more than their data actually show. Read the discussion and conclusion sections with the results fresh in your mind. Do the authors acknowledge limitations? Do they distinguish between what their data demonstrate and what they speculate? Conclusions that contradict or go beyond the statistical or qualitative evidence are a major flaw that affects the paper’s credibility.
Write a Structured Review Report
Most journals expect a review in four parts: a summary, your recommendation, major concerns, and minor concerns.
Summary
Open with a one-sentence description of the paper’s main point, then summarize the key findings and your assessment of the work’s significance. This section shows both the editor and the authors that you understood what the paper is trying to accomplish. It also grounds everything that follows. A good summary is three to five sentences.
Recommendation
State your decision clearly, either as the final sentence of your summary or as a standalone line. Reviewers typically choose from four options: accept as is, accept with minor revisions, major revisions required, or reject. Being explicit here helps the editor interpret the rest of your comments. If you list ten concerns but don’t clarify your overall recommendation, the editor has to guess how serious you consider those issues to be.
Major Concerns
These are problems that affect whether the paper’s central claims are valid. They include arguments that aren’t internally consistent, claims that contradict established understanding without sufficient evidence, and missing experimental or computational data that would be essential to justify the conclusions. Major concerns typically lead to a recommendation of “reject” or “major revisions.” Number each concern separately and explain not just what the problem is, but why it matters and, where possible, how the authors might address it.
Minor Concerns
Minor concerns don’t undermine the paper’s logic, but addressing them would improve its clarity. These include grammatical errors, missing references, unclear wording in the discussion, mislabeled figures, or sections that need more (or less) detail. Minor concerns are almost always included when you’re recommending acceptance. When you’re recommending rejection based on major flaws, you can skip or abbreviate this section.
Give Feedback You’d Want to Receive
The tone of your review matters more than many reviewers realize. Authors are significantly more likely to engage with feedback that opens with something positive, even if your overall recommendation is rejection. A sentence like “I appreciate the effort the authors put into this work” before pivoting to methodological issues sets a collaborative tone rather than an adversarial one.
Be specific and constructive. Instead of writing “the methods are inadequate,” try something like “I’d be grateful if the authors could provide precise details about their randomization process, including allocation concealment and sequence generation.” Give the authors a clear path forward: what needs to change, why it needs to change, and how they might go about it. Sweeping generalizations, such as dismissing an entire methodological approach, aren’t helpful and can reveal bias rather than rigor.
A few things to avoid. Don’t be condescending about writing quality, especially for authors who may not be native English speakers. If grammar issues affect clarity, note the pattern (for example, inconsistent verb tenses) rather than circling individual typos. Don’t pressure authors to cite your own work or that of colleagues. And before you submit, read your review back and imagine how you’d feel receiving it.
Respect Confidentiality and Ethics
Everything about the manuscript you’re reviewing is confidential. You cannot share its contents, discuss it with colleagues outside the review process, or upload it to any external tool or database. This includes running the text through plagiarism checkers or other online platforms on your own.
Evaluate the work impartially, focusing on the content rather than who wrote it. Many journals use anonymous review processes where neither authors nor reviewers know each other’s identities. If you do recognize the authors, disclose any potential conflict of interest to the editor. Even the appearance of bias can undermine the process.
Don’t Use AI to Write Your Review
As of 2025, major publishers including Elsevier explicitly prohibit reviewers from using generative AI tools to assist with scientific evaluation of manuscripts. The reasoning is straightforward: peer review requires critical thinking and domain expertise that current AI tools cannot reliably provide, and there is a real risk of generating incorrect, incomplete, or biased assessments. You are personally responsible and accountable for the content of your review report. AI tools do play a role on the editorial side, where publishers use them to match manuscripts with appropriate reviewers and detect duplicate submissions, but the review itself must be your own work.
Manage Your Time Realistically
Journals typically give reviewers two to four weeks to complete a review, though this varies. Some fast-turnaround journals expect reviews within days, while others allow longer windows. The average time from submission to completed peer review across major publishers runs under 90 days total, which includes the time editors spend finding reviewers and making decisions.
A thorough review of a standard research article takes most experienced reviewers between three and five hours spread across multiple sittings. Reading the paper twice, checking references, examining figures carefully, and writing a detailed report all take time. If you can’t meet the deadline, let the editor know early. A late review delays the entire publication process and leaves authors waiting without information.