Peer review is the quality-control system of science. Before a research paper gets published in an academic journal, it’s evaluated by other experts in the same field who check whether the methods are sound, the conclusions are supported, and the work adds something meaningful. The process typically takes about two months: the median time from submission to first decision is 60 days, though it can range from three weeks to nearly nine months depending on the journal and field.
How a Paper Moves Through Peer Review
The process starts when a researcher submits a manuscript to a journal. An editor first performs what’s called a “desk review,” a quick assessment of whether the paper fits the journal’s scope and meets basic quality standards. Highly selective journals often make this initial screening decision in about three days. If the paper clears this hurdle, the editor sends it out to reviewers. If it doesn’t, the author gets a fast rejection and can try elsewhere.
The editor typically recruits two or three independent reviewers with expertise in the paper’s subject area. These reviewers are unpaid volunteers, usually researchers themselves, who agree to evaluate the manuscript within a set timeframe. Finding willing reviewers is one of the biggest bottlenecks in the system. Once reviewers accept, they go through the paper in detail, assessing the study design, methodology, statistical analysis, and whether the conclusions follow from the data. They also check whether the authors have adequately cited prior work and whether the writing is clear enough for other researchers to reproduce the study.
Each reviewer then writes a report summarizing the paper’s strengths, identifying problems, and recommending a course of action. Good reviews are specific, pointing to exact locations in the manuscript where issues arise, and they maintain a respectful, constructive tone. Reviewers also send confidential comments to the editor that the authors never see, which might include frank opinions about whether the paper deserves publication.
The Four Possible Outcomes
After collecting reviewer reports, the editor makes one of four decisions:
- Accept: The paper is ready for publication as-is. This is rare on the first round.
- Minor revision: The paper needs limited changes, like clarifying ambiguous sections, fixing figure labels, or adjusting citations. This signals that the editors believe the paper is essentially publishable.
- Major revision: Substantial work is needed. Key elements may be missing, the analysis might need to be redone, or large sections require rewriting. The authors resubmit, and the revised version typically goes back to the same reviewers.
- Reject: The paper has fundamental problems that can’t be fixed through revision, such as a flawed study design, or it simply doesn’t offer enough novelty or significance for that particular journal.
Most papers that eventually get published go through at least one round of revision. When authors resubmit a revised manuscript, reviewers check whether their concerns were adequately addressed. This back-and-forth can add weeks or months to the timeline.
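The decision-and-revision cycle described above behaves like a small state machine: two outcomes end the process at that journal, and two loop back through revision and re-review. A minimal sketch of that logic, purely illustrative (the `Decision` enum and `next_step` function are invented here, not part of any real editorial system):

```python
from enum import Enum

class Decision(Enum):
    """The four possible editorial decisions after review."""
    ACCEPT = "accept"
    MINOR_REVISION = "minor revision"
    MAJOR_REVISION = "major revision"
    REJECT = "reject"

def next_step(decision: Decision) -> str:
    """Map an editorial decision to what happens next in the workflow."""
    if decision is Decision.ACCEPT:
        # Rare on the first round; the paper proceeds to publication.
        return "proceed to publication"
    if decision is Decision.REJECT:
        # The process ends at this journal; authors may try elsewhere.
        return "submit elsewhere"
    # Both revision outcomes loop back: the authors revise and resubmit,
    # and the revised manuscript typically returns to the same reviewers.
    return "revise and resubmit for another review round"
```

The key structural point the sketch captures is that minor and major revisions differ in the amount of work required, not in the shape of the workflow: both re-enter the review loop, which is why each round can add weeks or months to the timeline.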
Single-Blind, Double-Blind, and Open Review
Not all peer review works the same way. The biggest difference between models is who knows whose identity.
In single-blind review, the most common format, reviewers know who wrote the paper but authors don’t know who reviewed it. This creates a known vulnerability: reviewers may be influenced by the authors’ reputation or institution. A study published in the Proceedings of the National Academy of Sciences found that single-blind reviewers were more likely to bid on papers from top universities and more likely to recommend acceptance for papers by famous authors or researchers at prestigious institutions, compared with double-blind reviewers evaluating the same work.
In double-blind review, neither side knows the other’s identity. This reduces prestige bias, but it’s not foolproof. In small fields, writing style, research topic, or dataset access can make authorship easy to guess.
Open peer review flips the model entirely. Reviewer reports are published alongside the final paper, sometimes with reviewer names attached. Nature Communications, one of the largest open-access journals, has published peer review files alongside all primary research papers submitted since November 2022. About 70% of their authors had already been voluntarily opting in before it became the default. Reviewers can still remain anonymous unless they choose to sign their comments, but knowing their feedback will be public tends to encourage more thorough and measured critiques.
What Reviewers Actually Evaluate
Reviewers assess each section of a manuscript separately. In the introduction, they check whether the research question is clearly stated and the existing literature is accurately represented. In the methods section, they look for enough detail that another researcher could replicate the study. In the results, they verify that the data actually supports the claims being made. In the discussion, they evaluate whether the authors have been honest about limitations and whether conclusions stay within what the evidence allows.
Beyond the science itself, reviewers check practical details: Are the tables and figures properly labeled? Are the references formatted correctly? Are ethics approvals and conflict-of-interest statements included? They’re also expected to search the existing literature to see whether the paper genuinely adds something new or simply repackages known findings.
Reviewers follow strict ethical guidelines. They must decline assignments where they have a conflict of interest, keep the manuscript confidential, and avoid pushing authors to cite the reviewer’s own work unless genuinely relevant. They should not discuss the manuscript with anyone outside the review process.
Known Weaknesses of the System
Peer review is widely considered essential, but it’s far from perfect. The process is susceptible to several forms of bias. Prestige bias, where papers from well-known researchers or elite institutions receive more favorable treatment, is the best documented. But affiliation bias, gender bias, and nationality bias have all been identified as concerns in the literature.
Speed is another persistent problem. A median of 60 days to first decision means half of all papers wait even longer, and that’s before any revisions. For fast-moving fields like infectious disease, this delay can mean findings are outdated by the time they’re published.
The system also relies on volunteer labor. Reviewers aren’t paid, and as the volume of scientific publishing grows, it becomes harder for editors to find qualified people willing to do the work. This can lead to less experienced reviewers being recruited or reviews being rushed.
How Preprints Fit In
Preprints are complete manuscripts posted on public servers like medRxiv or bioRxiv before undergoing peer review. They allow researchers to share findings immediately rather than waiting months for the review process to play out. During the COVID-19 pandemic, preprints became a primary way that urgent research reached the public and other scientists.
The tradeoff is clear: speed comes at the cost of quality assurance. Preprints lack the formal vetting that peer review provides, which raises concerns about unreliable findings entering public discussion and influencing policy. Studies comparing preprints to their later peer-reviewed versions have found that the review process does meaningfully improve methodological rigor and reporting transparency. Preprint servers themselves have started requiring better disclosure of funding and conflicts of interest, but peer review remains the standard for establishing a paper’s reliability.
Many researchers now treat preprints and peer review as complementary rather than competing. A paper might go up as a preprint for rapid feedback while simultaneously being submitted to a journal for formal review.

