Peer review is the process of having independent experts evaluate a scientist’s work before it gets published in a journal. These experts, who specialize in the same field as the author, assess whether the research is valid, significant, and original. The system serves two purposes: filtering out low-quality or flawed research, and improving papers that are good enough to publish by catching errors and suggesting revisions.
How the Process Works
When a researcher finishes a study, they submit their paper (called a manuscript) to a scientific journal. The journey from submission to publication involves several gatekeepers, each with a distinct role.
First, the journal’s editorial staff runs a basic check. They confirm the manuscript follows formatting guidelines and screen it for plagiarism using detection software. If it passes, the paper goes to the editor-in-chief, who assigns it to an associate editor with relevant expertise. At this point, the editors assess whether the paper even fits the journal’s scope. If it clearly doesn’t meet the bar, they can reject it outright without sending it to reviewers. This is called a “desk rejection.”
Papers that survive that initial screen move to the review stage. The associate editor identifies potential reviewers and sends invitations, typically seeking two independent experts per manuscript. Finding willing reviewers can be a challenge on its own, and editors often have to send multiple invitations before enough people accept. Reviewers are usually working scientists who volunteer their time without pay.
Once reviewers accept, they read the full manuscript, examine the data and methods, and write a detailed assessment. They flag errors, question weak arguments, and suggest improvements. When all reviews are in, the associate editor weighs the feedback alongside their own reading and makes a recommendation to the editor-in-chief. The editor-in-chief then makes the final call: accept the paper as is, ask for revisions (minor or major), or reject it. Authors who receive a revision request revise their paper, addressing each reviewer comment, and the cycle can repeat.
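To make the sequence of gatekeepers concrete, here is a minimal Python sketch of the decision flow described above. Everything in it (the Manuscript fields, the Decision names, the two-review threshold) is illustrative rather than any journal’s actual system:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Decision(Enum):
    DESK_REJECT = auto()
    ACCEPT = auto()
    MINOR_REVISION = auto()
    MAJOR_REVISION = auto()
    REJECT = auto()

@dataclass
class Manuscript:
    title: str
    fits_scope: bool
    passes_checks: bool  # formatting and plagiarism screen
    reviews: list = field(default_factory=list)  # e.g. ["accept", "major revision"]

def editorial_decision(ms: Manuscript, min_reviews: int = 2) -> Decision:
    # 1. Editorial staff screen formatting and plagiarism;
    # 2. editors check scope. Failing either means a desk rejection.
    if not (ms.passes_checks and ms.fits_scope):
        return Decision.DESK_REJECT
    # 3. The associate editor waits until enough independent reviews are in.
    if len(ms.reviews) < min_reviews:
        raise ValueError("still waiting on reviewers")
    # 4. The editor-in-chief makes the final call, informed (but not bound)
    #    by the reviewer recommendations.
    if all(r == "accept" for r in ms.reviews):
        return Decision.ACCEPT
    if all(r == "reject" for r in ms.reviews):
        return Decision.REJECT
    # Mixed verdicts: in practice the editor weighs the reviews and often
    # asks for revisions, which restarts the cycle.
    return Decision.MAJOR_REVISION
```

A real editorial system tracks far more state (reviewer invitations, deadlines, multiple revision rounds), but the control flow has the same shape: screen, check scope, gather reviews, decide.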
Three Models of Peer Review
Not all peer review works the same way. The differences come down to who knows whose identity during the process (summarized as data in the short sketch after this list).
- Single-blind review is the most common model. Reviewers can see who wrote the paper, but authors don’t know who reviewed it. This gives reviewers the freedom to be candid without fear of retaliation, though critics point out it can allow personal or institutional bias to creep in.
- Double-blind review hides identities in both directions. Neither authors nor reviewers know who the other is. Journals like Social Science & Medicine and General Psychiatry use this approach, aiming to reduce bias based on an author’s reputation, gender, or institutional prestige.
- Open peer review takes the opposite approach: both sides know each other’s identity. Journals like the BMJ and BMC Psychology use this model. The idea is that transparency makes reviewers more constructive and accountable. Interestingly, research has shown that dropping anonymity doesn’t significantly change the quality of reviews or the time it takes to complete them.
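Because the three models differ only in who can see whom, the distinction fits in a few lines of data. This is a toy Python table; the tuple layout is an illustrative choice, not a standard encoding:

```python
# Visibility under each model: (reviewer_sees_author, author_sees_reviewer)
REVIEW_MODELS = {
    "single-blind": (True, False),   # reviewer knows the author, not vice versa
    "double-blind": (False, False),  # identities hidden in both directions
    "open":         (True, True),    # both sides are known to each other
}

for model, (reviewer_sees_author, author_sees_reviewer) in REVIEW_MODELS.items():
    print(f"{model:>12}: reviewer sees author={reviewer_sees_author}, "
          f"author sees reviewer={author_sees_reviewer}")
```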
How Long It Takes
Peer review is not fast. Across journals, the median time from submission to a first peer-reviewed decision is about 60 days, or roughly two months. But that’s just the first round. When revisions are requested and re-reviewed, the median time from submission to a final decision stretches to about 198 days, more than six months. After acceptance, it takes a median of another 25 days to appear online.
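Putting those medians together gives a rough end-to-end picture. One caveat: medians don’t strictly add across stages, so the arithmetic below is only a back-of-envelope estimate built from the figures above:

```python
# Median timings reported above, in days.
FIRST_DECISION = 60        # submission -> first peer-reviewed decision
FINAL_DECISION = 198       # submission -> final decision, revisions included
ONLINE_AFTER_ACCEPT = 25   # acceptance -> paper appears online

revision_and_rereview = FINAL_DECISION - FIRST_DECISION       # ~138 days
submission_to_online = FINAL_DECISION + ONLINE_AFTER_ACCEPT   # ~223 days

print(f"Revision rounds consume ~{revision_and_rereview} days")
print(f"Submission to online:   ~{submission_to_online} days "
      f"(~{submission_to_online / 30:.1f} months)")
```

In other words, on these medians the revision cycle takes more than twice as long as the first round of review, and a paper spends over seven months between submission and appearing online.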
Highly selective journals tend to move faster at the front end, reaching a first decision in about 49 days compared to 64 days at less selective journals. They’re quicker to desk-reject papers that don’t fit, which speeds up the overall timeline. Open-access journals also reach final decisions somewhat faster than traditional subscription journals.
What Reviewers and Editors Actually Do
Reviewers and editors have different but complementary jobs. Reviewers are the subject-matter experts. They evaluate whether the study’s methods are sound, whether the conclusions follow from the data, whether the work is original, and whether anything important was overlooked. They write recommendations, but those recommendations are advisory, not binding.
Editors are the decision-makers. They select which reviewers to invite, oversee the fairness of the process, and ultimately decide what happens to the paper. An editor can overrule a reviewer’s recommendation. If two reviewers disagree, the editor breaks the tie. Editors also monitor their pool of reviewers over time, watching for signs of bias or inconsistency. This balance matters: the editor has the final word, but the system depends on having reviewers who provide honest, high-quality assessments.
Known Weaknesses
Peer review is widely considered essential, but it is far from perfect. Richard Smith, a former editor of the BMJ, described it bluntly: “poor at detecting gross defects and almost useless for detecting fraud,” as well as “slow, expensive, highly subjective, something of a lottery, prone to bias, and easily abused.”
One documented problem is that reviewers miss errors. In experiments at the BMJ where major mistakes were deliberately inserted into manuscripts and sent to reviewers, nobody caught all of them. Most reviewers spotted only about a quarter of the planted errors, and some didn’t catch any. Peer review works largely on trust. It assumes the data being presented is real, and it is not designed as a fraud detection system.
Publication bias is another serious issue. Journals have historically favored “positive” studies, ones where an intervention worked, over “negative” studies that found no effect. Many researchers don’t even bother writing up negative results because they know the odds of publication are low. This skews the scientific record, because it means published literature overrepresents treatments and interventions that appear to work.
There’s also the cost in researchers’ time. Reviewers are rarely paid, and the hours they spend evaluating someone else’s work could be spent on their own research. With the volume of scientific publishing growing every year, reviewer fatigue is a real and growing concern.
Preprints and the Changing Landscape
In recent years, preprints have added a new layer to how science gets shared. A preprint is a complete manuscript posted publicly before it goes through peer review. Platforms like bioRxiv and medRxiv let researchers share findings immediately, which became especially visible during the COVID-19 pandemic.
Preprints don’t replace peer review, but they do change its timing. Surveys show that 90% of researchers believe peer review improves the quality of published work. The question is whether that review has to happen at a journal. Some funding organizations now consider a paper that has undergone a rigorous, journal-independent peer review process to be equivalent to a journal-published article. Preprint review platforms can post evaluations a median of 46 days after the paper appears, compared to the roughly 163 to 199 days it takes for traditional journal publication.
The tradeoff is clear: preprints get information out faster, but readers need to understand that a preprint hasn’t yet been vetted by independent experts. That distinction matters most in fields like medicine, where preliminary findings can influence public behavior before the science is fully checked.
AI in the Review Process
Artificial intelligence tools are beginning to play a supporting role in peer review, though they aren’t replacing human reviewers. Some publishers already use AI to help authors match their manuscripts to the most suitable journals based on title, keywords, and abstract. Editors can use AI to help summarize long review reports or flag potential issues in a submission.
These tools are designed to speed up the administrative side of the process, not to make scientific judgments. The core evaluation (deciding whether a study’s design is sound, whether its conclusions hold up, and whether the work advances the field) still depends on human expertise. Journals are actively developing guidelines for when and how AI can be used by authors, reviewers, and editors, recognizing that transparency about AI involvement is essential to maintaining trust in the system.

