A “good” impact factor for a medical journal depends entirely on the specialty, but as a general benchmark, an impact factor above 5.0 is considered strong in most medical fields, and anything above 20 puts a journal among the elite. The number that counts as impressive in surgery would be unremarkable in oncology or general medicine, so context matters more than any single threshold.
How Impact Factor Is Calculated
The impact factor is a simple ratio: take the number of citations a journal’s articles received in a given year, then divide by the number of articles that journal published in the previous two years. A journal that published 200 articles over two years and received 1,000 citations to those articles in the following year would have an impact factor of 5.0. The metric dates to Eugene Garfield’s work in the mid-20th century; Clarivate now publishes it in the Journal Citation Reports, and it remains the most widely recognized measure of journal influence.
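The ratio described above can be sketched in a few lines of Python (the function name and argument names are mine, chosen for clarity):

```python
def impact_factor(citations_this_year: int, articles_prev_two_years: int) -> float:
    """Journal impact factor: citations received in a given year to articles
    published in the previous two years, divided by the count of those articles."""
    return citations_this_year / articles_prev_two_years

# Worked example from the text: 1,000 citations to 200 articles
print(impact_factor(1000, 200))  # 5.0
```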
Because it’s a two-year average, the impact factor rewards journals that publish work attracting quick citations. Fields where research builds rapidly on recent findings (like immunology or infectious disease) naturally produce higher impact factors than fields where citations accumulate more slowly.
What the Numbers Look Like Across Medicine
The top-tier general medical journals operate in a completely different range than specialty journals. In the 2021 Journal Citation Reports, The New England Journal of Medicine reached an impact factor of 176.1, while The Lancet hit 202.7. These numbers are extreme outliers, inflated partly by a surge in COVID-era citations. No one publishing in a specialty field should measure themselves against these figures.
For most medical specialties, a rough guide looks like this:
- Above 30: Top-tier specialty journals, leaders in their field
- 10 to 30: Highly respected journals that regularly publish influential research
- 5 to 10: Solid, well-regarded journals where strong work routinely appears
- 2 to 5: Respectable mid-range journals, perfectly appropriate for good research
- Below 2: Smaller or more niche journals, not necessarily low quality but with a narrower reach
These ranges shift dramatically by field. Surgical subspecialties tend to have lower impact factors across the board because citation pools are smaller. A vascular surgery journal with an impact factor of 4 might represent the top of its field, while the same number in oncology would place a journal well below the leaders. Comparing impact factors across specialties without adjusting for field size is one of the most common mistakes researchers make.
Quartile Rankings Offer Better Context
Because raw impact factors vary so much between fields, quartile rankings often give you a clearer picture. Both Clarivate’s Journal Citation Reports and the SCImago Journal Rank system sort journals within their subject category and assign them to quartiles: Q1 (top 25%), Q2 (25th to 50th percentile), Q3, and Q4. A Q1 journal in orthopedic surgery might have an impact factor of 4, while a Q1 journal in cell biology might need an impact factor above 15 to earn the same ranking.
If you’re evaluating where to submit a paper or assessing whether a source is credible, the quartile ranking within the relevant category is more informative than the raw number. The two major ranking systems use different citation databases and slightly different calculation methods, so their rankings aren’t directly comparable, but either one gives useful context.
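The quartile logic is straightforward to sketch: rank the journals within one subject category by impact factor and label each by which quarter of the list it falls in. The journal names and figures below are hypothetical, and real ranking systems apply additional tie-breaking and normalization rules:

```python
def assign_quartiles(journals: dict[str, float]) -> dict[str, str]:
    """Sort one subject category by impact factor (descending) and label
    each journal with its quartile: Q1 = top 25%, Q4 = bottom 25%."""
    ranked = sorted(journals, key=journals.get, reverse=True)
    n = len(ranked)
    labels = {}
    for rank, name in enumerate(ranked):  # rank 0 is the highest impact factor
        labels[name] = f"Q{min(4, rank * 4 // n + 1)}"
    return labels

# Hypothetical orthopedic-surgery category
category = {"Journal A": 4.1, "Journal B": 2.8, "Journal C": 1.9, "Journal D": 0.9}
print(assign_quartiles(category))
# {'Journal A': 'Q1', 'Journal B': 'Q2', 'Journal C': 'Q3', 'Journal D': 'Q4'}
```

Note that a raw impact factor of 4.1 earns Q1 here only because of where it sits relative to its own category, which is exactly the point of quartile rankings.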
Why Impact Factor Is an Imperfect Measure
The impact factor measures journal-level averages, not the quality of any individual paper. A journal with an impact factor of 40 still publishes articles that receive zero citations, and a journal with an impact factor of 3 can publish a paper that reshapes its field. The distribution of citations within a journal is heavily skewed: a small number of highly cited articles often drive most of the journal’s impact factor, while the majority of articles perform below that average.
Several other problems are well documented. Review articles tend to attract far more citations than original research, so journals that publish many reviews can inflate their scores. Editorials, news pieces, and other non-research content can also influence citation patterns. Some journals have been caught encouraging authors to cite other articles from the same journal, artificially boosting their numbers. Clarivate itself acknowledges these limitations and has stated there is no substitute for informed peer review when evaluating quality.
The inability to normalize across fields with different citation cultures is another core weakness. Research on citation patterns in surgery found stark variability between subspecialties: the correlation between impact factor and other influence metrics was extremely strong in vascular surgery (r = 0.95) but noticeably weaker in plastic surgery (r = 0.77). In some surgical subspecialties, different metrics actively contradicted each other.
Other Metrics Worth Checking
If you’re trying to gauge a journal’s real influence, looking at multiple metrics gives a fuller picture. The Eigenfactor works similarly to impact factor but adds a layer of sophistication: it weights citations based on the prestige of the citing journal. A citation from a top-tier journal counts more than a citation from a low-ranked one, which helps address concerns about self-citation and citation gaming.
The h-index, originally designed to measure an individual researcher’s output, can also be applied at the journal level. It captures both productivity and citation impact in a single number. A journal with an h-index of 50 has published at least 50 articles that have each been cited at least 50 times. Altmetrics take a completely different approach, tracking how research is discussed on social media, covered in news outlets, and referenced in policy documents. This captures a type of real-world influence that citation counts miss entirely, though research shows the correlation between altmetric scores and traditional citations varies widely by specialty.
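The h-index definition above translates directly into code: sort an article list by citation count and find the largest h where the h-th article still has at least h citations. The citation counts below are invented for illustration:

```python
def journal_h_index(citation_counts: list[int]) -> int:
    """Largest h such that the journal has h articles each cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for position, cites in enumerate(counts, start=1):
        if cites >= position:
            h = position  # the top `position` articles all have >= position citations
        else:
            break
    return h

# Hypothetical citation counts for a small journal's articles
print(journal_h_index([10, 9, 7, 5, 4, 2]))  # 4
```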
Practical Guidance for Choosing a Journal
If you’re a researcher deciding where to submit, the impact factor is one data point among several. Start by identifying journals in your specialty’s Q1 or Q2 that regularly publish work similar to yours. A journal’s scope, review timeline, audience, and open-access policies all matter as much as, or more than, its impact factor for determining whether your paper will reach the right readers.
If you’re a student, clinician, or patient evaluating whether a published finding is trustworthy, the journal’s impact factor tells you something about its general standing but nothing definitive about the specific paper. A well-designed study in a Q2 journal can be more reliable than a poorly designed one in a Q1 journal. Look at the study itself: its sample size, methodology, and whether its findings have been replicated. The journal’s reputation is the starting filter, not the final verdict.