A journal’s impact factor is the average number of citations its recent papers receive, calculated over a two-year window. It tells you how often articles from that journal are cited by other researchers, on average, which serves as a rough proxy for the journal’s influence in its field. But the number on its own is almost meaningless without context, and misreading it is common. Here’s how to actually use it.
How the Impact Factor Is Calculated
The formula is straightforward. Take all the citations received in a given year that point to articles published in that journal during the previous two years. Divide that by the total number of articles the journal published in those same two years. If a journal published 200 articles in 2022 and 2023, and those articles collectively received 1,000 citations in 2024, the journal’s 2024 impact factor is 5.0.
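For concreteness, here is the same arithmetic as a few lines of Python; the figures simply mirror the worked example above and are illustrative, not real data.

```python
# Minimal sketch of the impact factor arithmetic described in the text.

def impact_factor(citations_this_year: int, items_prev_two_years: int) -> float:
    """Citations received in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items published in those two years."""
    return citations_this_year / items_prev_two_years

# 200 articles published across 2022-2023, cited 1,000 times in 2024
print(impact_factor(1_000, 200))  # -> 5.0
```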
Clarivate, the company behind the metric, publishes impact factors annually through its Journal Citation Reports platform. The data comes from the Web of Science database, which covers roughly 11,000 journals. As of 2023, Clarivate extended impact factors to journals in the Emerging Sources Citation Index and the Arts and Humanities Citation Index, so the metric now encompasses all journals in the Web of Science Core Collection.
Why Raw Numbers Are Misleading Across Fields
The single biggest mistake people make is comparing impact factors between disciplines. A score of 3.0 might place a journal in the top tier of mathematics but barely in the middle of the pack for molecular biology. Citation cultures vary enormously: biomedical researchers cite more papers per article, publish more frequently, and work in larger teams than, say, researchers in pure mathematics or the humanities. These habits inflate citation counts in some fields and suppress them in others.
This means an impact factor only makes sense when you compare it to other journals in the same subject category. Clarivate assigns each journal to one or more categories and ranks them accordingly. The useful comparison is never “Is 4.0 a good impact factor?” but rather “Where does 4.0 place this journal among others in its specific category?”
Use Percentiles and Quartiles, Not Raw Scores
Clarivate provides two tools that make cross-field comparison possible: quartile rankings and the Journal Impact Factor Percentile. The quartile ranking (Q1, Q2, Q3, Q4) tells you which quarter of its category a journal falls into, with Q1 being the top 25%. The percentile transforms a journal’s rank into a value from 0 to 100, making it easy to compare journals across completely different disciplines. A 90th-percentile engineering journal and a 90th-percentile physics journal are roughly equivalent in standing within their fields, even if their raw impact factors look nothing alike.
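As a rough illustration, the sketch below converts a within-category rank into a percentile using a common midpoint convention, (N - R + 0.5) / N; the ranks and category sizes are invented, and Clarivate’s exact formula may differ in detail.

```python
# Hypothetical rank-to-percentile conversion. R is a journal's rank within
# its category (1 = most cited), N is the number of journals in the category.
# The midpoint convention (N - R + 0.5) / N is an assumption for illustration.

def category_percentile(rank: int, category_size: int) -> float:
    return (category_size - rank + 0.5) / category_size * 100

# Journals in different fields with very different raw impact factors can
# still land at roughly the same percentile within their own categories:
print(round(category_percentile(rank=5, category_size=50), 1))    # engineering journal -> 91.0
print(round(category_percentile(rank=22, category_size=220), 1))  # physics journal     -> 90.2
```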
When you’re evaluating a journal, look up its quartile or percentile in its category rather than fixating on the number itself. If a journal sits in Q1 for its field, it’s among the most-cited in that discipline. If it’s in Q3 or Q4, its articles are, on average, cited less often than those of most competing journals.
A Journal’s Score Doesn’t Describe Individual Papers
This is the most important thing to understand: the impact factor is a journal-level average, and citation distributions within any journal are heavily skewed. A small fraction of highly cited papers pull the average up while most articles receive far fewer citations than the impact factor would suggest. An impact factor of 10 doesn’t mean each paper gets roughly 10 citations; it means a few papers might get 50 or 100 while the majority get 2 or 3.
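A toy example makes the skew concrete: a few heavily cited papers can pull the journal-level mean far above what the median paper receives. The citation counts below are invented.

```python
# 3 blockbuster papers and 47 modestly cited ones
citations = [120, 85, 40] + [3] * 47

mean = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]

print(f"journal-level average (impact-factor-like): {mean:.1f}")  # ~7.7
print(f"citations received by the median paper:     {median}")    # 3
```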
This skewness is why the San Francisco Declaration on Research Assessment, signed by thousands of researchers and institutions worldwide, specifically recommends against using journal impact factors as a proxy for the quality of individual articles. The declaration states plainly: do not use journal-based metrics to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions. The scientific content of a paper matters far more than the impact factor of the journal it appeared in.
An article published in a lower-impact journal can easily outperform one in a prestigious journal in terms of real-world citations and influence. If you want to evaluate a specific paper, look at how many times that paper has been cited, not the journal’s average.
How Journals Can Game the Number
The impact factor’s susceptibility to manipulation is well documented. Several editorial strategies can artificially inflate scores. Journals may publish a high volume of review articles, which tend to contain more references and generate more citations than original research. Editors can use editorials and letters to extensively cite the journal’s own back catalog. Citations to these short pieces count in the numerator of the impact factor formula, but the pieces themselves may not count in the denominator, creating an easy path to inflation.
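A small hypothetical calculation shows how this numerator/denominator asymmetry plays out.

```python
# Invented numbers: citations to editorials and letters are added to the
# numerator, but those items themselves never enter the denominator.

research_articles = 200        # counted in the denominator
citations_to_articles = 1_000  # counted in the numerator
editorials_and_letters = 30    # published, but excluded from the denominator
citations_to_editorials = 150  # still counted in the numerator

without_front_matter = citations_to_articles / research_articles
with_front_matter = (citations_to_articles + citations_to_editorials) / research_articles

print(f"{without_front_matter:.2f}")  # 5.00
print(f"{with_front_matter:.2f}")     # 5.75
```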
More troubling, some editors pressure authors to add unnecessary citations to recent articles from their own journal as a condition of acceptance. Clarivate monitors for excessive self-citation and has suppressed journals from its rankings for this practice, but the incentives remain strong enough that it continues to occur. When you see a journal with a suspiciously high self-citation rate, treat its impact factor with extra skepticism.
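If you can get per-journal citation counts (for example from a Journal Citation Reports export), a crude screen like the one below can flag journals worth a closer look. The threshold and the figures are arbitrary, chosen only for illustration.

```python
# Self-citation rate: citations a journal receives from its own articles,
# as a share of all citations it receives. Numbers are invented.

def self_citation_rate(self_citations: int, total_citations: int) -> float:
    return self_citations / total_citations

journals = {
    "Journal A": (120, 2_400),  # (self-citations, total citations)
    "Journal B": (900, 2_100),
}

for name, (self_cites, total) in journals.items():
    rate = self_citation_rate(self_cites, total)
    flag = "  <- look closer" if rate > 0.25 else ""
    print(f"{name}: {rate:.0%} self-citation{flag}")
```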
How CiteScore Differs From the Impact Factor
Elsevier’s CiteScore is the main alternative metric, and the differences are worth knowing. CiteScore draws on the Scopus database (which indexes roughly 22,800 journals compared to Web of Science’s 11,000) and uses a longer citation window: originally three years with all document types in the denominator, and since Elsevier’s 2020 methodology update a four-year window that counts peer-reviewed document types in both numerator and denominator. The impact factor’s denominator only counts certain “citable items,” and exactly how Clarivate decides what qualifies has historically been opaque. These differences in window length and in what counts mean the two metrics can produce noticeably different rankings for the same journal.
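The sketch below uses invented numbers to show why the two formulas can land on quite different values for the same journal.

```python
# Impact-factor-style: citations in one year to items from the previous two
# years, divided by the "citable items" from those two years.
if_style = 1_000 / 200

# CiteScore-style: citations over a multi-year window to documents from that
# same window, divided by the number of those documents. The wider window and
# the different rules about which documents count change both numbers.
citescore_style = 2_400 / 600

print(f"impact-factor-style score: {if_style:.1f}")         # 5.0
print(f"CiteScore-style score:     {citescore_style:.1f}")  # 4.0
```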
CiteScore’s broader journal coverage means it captures publications that don’t appear in Web of Science at all, which can be particularly relevant in fields where important work appears in regional or specialized journals. Neither metric is inherently superior. They measure similar things with slightly different methods, and checking both gives you a more complete picture.
The H-Index Measures Something Different
If you’re evaluating a researcher rather than a journal, the h-index is a more appropriate metric. Proposed by physicist Jorge Hirsch in 2005, it captures both productivity and citation impact in a single number. A researcher with an h-index of 20 has published at least 20 papers that have each been cited at least 20 times. It rewards sustained, well-cited output rather than a single viral paper.
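Computing an h-index from a list of per-paper citation counts takes only a few lines; the counts below are invented.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([50, 30, 22, 21, 20, 4, 3, 1]))  # -> 5 (five papers with >= 5 citations)
```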
Journals also have h-indexes, but the same cross-discipline warning applies. An h-index of 80 in biology is not equivalent to an h-index of 80 in geosciences. The simplest way to evaluate a researcher’s output, beyond any single metric, is to look at the number of publications alongside their individual citation counts, both of which are visible on Google Scholar profiles.
A Practical Checklist for Reading Impact Factors
- Check the category ranking first. Look at the journal’s quartile (Q1 through Q4) or percentile within its subject category, not the raw number.
- Never compare across fields. A 2.0 in mathematics and a 2.0 in oncology represent completely different levels of standing.
- Don’t judge individual papers by it. Citation distributions are skewed, so the journal average tells you little about any single article.
- Look at self-citation rates. Journals with unusually high self-citation percentages may have inflated scores.
- Cross-reference with CiteScore. If the two metrics tell very different stories, dig into why. Differences in what counts as a citable item can explain the gap.
- Use the official source. Verified impact factors are published in Clarivate’s Journal Citation Reports. Numbers listed on journal websites are sometimes outdated or incorrect.

