The h-index (sometimes called the h-factor) is a number that measures a researcher’s publication impact by combining how many papers they’ve published with how often those papers have been cited by other scientists. Physicist Jorge Hirsch proposed it in 2005, and it has since become one of the most widely used metrics in academia for evaluating research output.
How the H-Index Works
The core idea is simple: a researcher has an h-index of h if h of their papers have each been cited at least h times. So if you have an h-index of 7, that means you’ve published at least 7 papers, and each of those 7 papers has been cited at least 7 times by other researchers. Your remaining papers may have fewer than 7 citations each.
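The definition above can be sketched in a few lines of Python (a minimal illustration; the function name is mine):

```python
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts."""
    cites = sorted(citations, reverse=True)
    h = 0
    # h is the largest rank i (1-based, sorted descending) where the
    # i-th most-cited paper still has at least i citations
    for i, count in enumerate(cites, start=1):
        if count >= i:
            h = i
        else:
            break
    return h

# Eight papers; seven of them have 7+ citations, so h = 7
print(h_index([30, 12, 9, 8, 7, 7, 7, 2]))  # → 7
```

Note that the eighth paper, with only 2 citations, contributes nothing to the score, which is exactly the "remaining papers may have fewer citations" clause in the definition.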
This design rewards both quantity and quality. Publishing hundreds of papers that nobody reads won’t raise your score. Neither will having a single blockbuster paper with thousands of citations while the rest go unnoticed. The h-index captures sustained, meaningful output.
What Counts as a Good Score
Hirsch himself suggested some rough benchmarks for careers in research. An h-index of 10 to 12 could be a reasonable guideline for tenure decisions at major research universities. A value of 18 might correspond to a full professorship. An h-index of 45 or higher could signal the kind of career that qualifies for membership in the U.S. National Academy of Sciences.
At the very top, Hirsch calculated the h-index of Nobel Prize winners in physics and found that 84% of them had an h-index of at least 30. These numbers vary by field, though, so they work best as loose reference points rather than rigid cutoffs. A biomedical researcher and a mathematician with identical talent and work ethic will end up with very different scores simply because citation practices differ between disciplines.
Where You Can Look It Up
Three major platforms calculate h-index scores: Google Scholar, Scopus, and Web of Science. Your score will differ depending on which one you check. Google Scholar datasets contain roughly twice as many publications and citations as Scopus datasets, because Google Scholar crawls a much wider range of sources, including online repositories like arXiv that Scopus doesn't index. The trade-off: Google Scholar's broader coverage comes with noisier data, while Scopus is more selective and curated.
If you’re comparing researchers, use the same platform for both. A Google Scholar h-index of 25 and a Scopus h-index of 25 don’t represent the same thing.
Known Limitations
The h-index has real blind spots. It’s cumulative, meaning it can only go up over time. This inherently favors senior researchers over early-career scientists who may be doing equally impactful work but haven’t had decades for citations to accumulate. A postdoc five years into their career is structurally unable to compete with a professor who has been publishing for thirty years.
Cross-field comparisons are unreliable. Fields like biomedicine generate far more citations per paper than fields like mathematics or computer science. Comparing an immunologist’s h-index to a number theorist’s tells you almost nothing about relative quality.
The metric also ignores where a researcher falls on the author list. In many scientific fields, teams of 10 or 20 co-authors are common. A researcher who contributed a minor analysis to a highly cited paper gets the same h-index benefit as the lead scientist who designed the entire study.
Gaming the System With Self-Citations
Researchers can inflate their h-index through strategic self-citation, and simulations have shown the effect is significant. The strategy works like this: instead of citing your own past work at random, you specifically cite the papers sitting just below your current h-index threshold, pushing them over the line. One simulation found that over 20 years, this strategy could carry a researcher's h-index to 19 where random self-citation would have produced 14.
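The targeting logic can be sketched as follows (the function names and the greedy pick-one-paper framing are my illustration, not the simulation study's actual model):

```python
def h_index(citations):
    """h-index from a list of per-paper citation counts."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, count in enumerate(cites, start=1):
        if count >= i:
            h = i
        else:
            break
    return h

def strategic_target(citations):
    """Index of the paper a strategic self-citer would cite next:
    the paper sitting closest below the next threshold (h + 1 citations)."""
    h = h_index(citations)
    # Only papers with h or fewer citations can help raise the index to h + 1
    candidates = [i for i, c in enumerate(citations) if c <= h]
    if not candidates:
        return None
    # Target whichever candidate needs the fewest extra citations
    return max(candidates, key=lambda i: citations[i])

papers = [10, 8, 5, 4, 4, 2]      # h = 4
print(strategic_target(papers))   # → 3 (a paper with 4 citations, one short of 5)
```

A random self-citation would usually land on a paper that is either already above the threshold or hopelessly below it, which is why the targeted strategy pulls ahead over time.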
This isn’t a fringe concern. Studies estimate that up to 36% of all citations are self-citations, meaning the potential for inflating bibliometric indicators is enormous. The manipulation is most effective for less productive researchers who attract fewer citations from others, since highly cited researchers already have plenty of organic citations pushing their papers above threshold. Statisticians have developed detection tools (like the q-index) that can flag suspicious self-citation patterns, but these aren’t routinely applied in hiring or promotion decisions.
Alternative Metrics
Several variations attempt to fix the h-index’s shortcomings:
- g-index: Proposed in 2006, this gives more weight to highly cited papers. Your g-index is the largest number g such that your top g papers have accumulated at least g-squared citations in total. So a g-index of 10 means your top 10 papers together have at least 100 citations. This helps distinguish two researchers who share the same h-index but differ in how influential their best work is.
- i10-index: Introduced by Google Scholar in 2011, this simply counts how many of your publications have 10 or more citations. It’s easy to calculate and easy to understand, though it’s a blunt instrument compared to the h-index.
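Both definitions above translate directly into code (a minimal sketch; function names are mine):

```python
def g_index(citations):
    """Largest g such that the top g papers have at least g*g citations combined."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, count in enumerate(cites, start=1):
        total += count
        if total >= i * i:  # cumulative citations of top i papers vs i squared
            g = i
    return g

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

papers = [120, 40, 12, 10, 6, 4, 2]
print(g_index(papers), i10_index(papers))  # → 7 4
```

Running all three metrics on the same record shows why they diverge: a couple of blockbuster papers lift the g-index well above the h-index, while the i10-index stays flat.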
Neither has displaced the h-index as the default metric, but looking at all three together gives a more complete picture than any single number.
Other Meanings of “H Factor”
If you weren’t searching for academic metrics, “H factor” also refers to the Honesty-Humility dimension in the HEXACO personality model. This is a six-factor framework used in psychology research where the “H” measures traits like sincerity, fairness, and modesty. It’s distinct from the more familiar Big Five personality model and has become an active area of research in organizational psychology and behavioral science.

