Double-blind peer review is a system where neither the reviewers nor the authors know each other’s identities during the evaluation of a research paper. The journal editor is the only person who knows who wrote the paper and who reviewed it. This setup exists to reduce bias and keep the focus on the quality of the research itself.
How the Process Works
When a researcher submits a paper to a journal using double-blind review, the editor removes, or verifies that the authors have removed, all identifying information before sending the manuscript to reviewers. The reviewers read the paper, evaluate its methods and conclusions, and recommend whether it should be published, revised, or rejected. At no point do the reviewers see the authors’ names, university affiliations, or funding sources. Likewise, the authors never learn who reviewed their work.
This is different from single-blind review, which is still the most common model in academic publishing. In single-blind review, the reviewers know exactly who wrote the paper, but the authors don’t know who reviewed it. The third major model, open peer review, removes anonymity entirely: both sides know who the other is, and some journals even publish the reviewer comments alongside the final paper.
Why Anonymity Matters
The core argument for double-blind review is that knowing who wrote a paper changes how people judge it. A large study published in the Proceedings of the National Academy of Sciences found that single-blind reviewing gave a significant advantage to papers written by famous authors and researchers at prestigious institutions. When reviewers could see names and affiliations, they were also more selective about which papers they were willing to evaluate in the first place, bidding on 22% fewer papers than double-blind reviewers did.
Bias doesn’t stop at institutional prestige. Research has documented that biases based on race, sex, and geographic origin affect how manuscripts are evaluated. A paper from a well-known lab at a top university may get the benefit of the doubt on a weak methodology section, while identical work from a lesser-known institution might not. Double-blind review is designed to neutralize these dynamics so that a paper from a small college in rural India gets the same scrutiny and the same fair shot as one from Harvard.
What Authors Must Do to Stay Anonymous
Double-blind review puts real work on the author’s shoulders. You can’t just leave your name off the title page and call it done. Journals that use this system typically require authors to:
- Remove all names and affiliations from every file, including datasets and supplementary materials
- Disguise self-citations by replacing them with generic placeholders like “(Author, Year)” instead of “(Jones, 2022),” and avoiding phrases like “our previous work” or “we demonstrated”
- Strip file metadata from Word documents and PDFs, which can contain the author’s name in the background properties
- Anonymize preregistrations on platforms like the Open Science Framework, if the study was preregistered
- Redact identifying details from acknowledgments, funding disclosures, ethics statements, and data availability sections
This process is more involved than most people expect. Even a careless reference to “our lab’s previous findings on X” can give the game away.
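The file-metadata step in particular is easy to underestimate, because the author’s name hides inside the file rather than in its visible text. As a rough illustration only, here is a minimal, stdlib-only Python sketch that blanks the author fields a Word document carries in its internal `docProps/core.xml` part (a `.docx` is just a ZIP archive). The function name `scrub_docx_author` and the regex-based approach are assumptions for this sketch, not a tool any journal prescribes:

```python
import re
import zipfile

def scrub_docx_author(src_path: str, dst_path: str) -> None:
    """Copy a .docx file, blanking the author fields in docProps/core.xml.

    A .docx is a ZIP archive; Word stores the document's creator and
    last-modified-by names in its core-properties XML part.
    """
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == "docProps/core.xml":
                text = data.decode("utf-8")
                # Empty out the creator and lastModifiedBy elements,
                # leaving the surrounding XML intact.
                for tag in ("dc:creator", "cp:lastModifiedBy"):
                    text = re.sub(rf"(<{tag}[^>]*>).*?(</{tag}>)",
                                  r"\1\2", text, flags=re.DOTALL)
                data = text.encode("utf-8")
            dst.writestr(item.filename, data)
```

In practice, authors more often use Word’s built-in “Inspect Document” tool or a PDF editor’s document-properties dialog for this, and journal checklists usually say so; the point of the sketch is simply that the identifying fields live in the file’s internals, where a title-page check will never find them.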
The Biggest Weakness: It Often Doesn’t Work
The most serious criticism of double-blind review is that true anonymity is hard to achieve in practice. In many scientific fields, researchers can identify who wrote a paper based on the topic, the methodology, the writing style, or the specific datasets used. A study in the American Journal of Neuroradiology found that reviewers correctly identified the authors 90.3% of the time when they suspected they knew who had written the manuscript. In small, specialized fields where only a handful of labs work on a particular problem, blinding is nearly impossible.
This doesn’t mean the process is useless. Even imperfect blinding may reduce the intensity of bias, and it still protects authors in larger fields where guessing is harder. But it does mean that double-blind review is not the airtight safeguard it’s sometimes presented as.
Which Journals Use It
Double-blind review is common in the social sciences and humanities, where many journals have used it for decades. In the natural sciences and medicine, it has been slower to catch on, though adoption is growing. Nature and its family of journals began offering double-blind review as an option in 2015, after a successful pilot at Nature Geoscience and Nature Climate Change that began in 2013. The key word is “option”: on most Nature titles, authors choose whether they want single-blind or double-blind review at the time of submission. The Journal of Bacteriology similarly piloted a double-blind option in recent years.
This optional model is increasingly popular among large publishers. Rather than mandating one system, journals let authors decide how much anonymity they want, recognizing that the benefits vary by field and by individual circumstance. An early-career researcher from a less prominent institution may benefit most from blinding, while an established scientist might prefer single-blind review where their track record can provide context.
The Case for Open Review Instead
Some journals have moved in the opposite direction, toward full transparency rather than more anonymity. The argument is essentially ethical: open review puts authors and reviewers on equal footing and makes everyone more accountable. When a reviewer’s name is attached to their comments, they’re less likely to be dismissive, careless, or unfairly harsh. Editors also become more accountable for their choice of reviewers and how much weight they give to each opinion.
Journals like The BMJ have championed this approach, arguing that sunlight is ultimately a better disinfectant than darkness. Open review also lets readers see the full timeline of submission, revision, and acceptance, offering a window into how the final published version came to be. The tradeoff is that reviewers may soften legitimate criticism to avoid conflict, and junior researchers may hesitate to critique senior colleagues by name.
There is no consensus on which model is best. Each addresses a different failure mode of peer review: double-blind targets bias from reputation and identity, while open review targets bias from anonymity and lack of accountability. Many journals are experimenting with hybrid approaches, and the landscape continues to shift as publishers weigh the evidence on what actually produces the fairest, most rigorous evaluations.

