What Is an H-Index in Research? How It’s Calculated

The h-index is a single number that measures both how much a researcher has published and how often their work gets cited by other researchers. A scientist with an h-index of 20 has published at least 20 papers that have each been cited at least 20 times. Introduced by physicist Jorge Hirsch in 2005, it has become one of the most widely used tools for evaluating research impact in academia.

How the H-Index Is Calculated

The logic is straightforward: rank all of a researcher’s publications from most cited to least cited, then move down the list until a paper’s citation count falls below its rank. The h-index is the largest rank h at which the paper still has at least h citations.

Say you’ve published 25 papers. Your most-cited paper has 80 citations, the next has 52, and so on down the list. Your 12th most-cited paper has 14 citations, and your 13th most-cited paper has 11 citations. Since paper number 13 has fewer than 13 citations, your h-index is 12. It doesn’t matter that your top paper has 80 citations or that you have 13 more papers below the cutoff. The h-index only cares about that crossing point.
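The crossing-point logic above is easy to express in code. Here is a minimal Python sketch; the citation list mirrors the example (the values between the named papers are made up purely for illustration):

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank          # this paper still clears its rank
        else:
            break             # citations fell below rank: crossing point found
    return h

# 25 papers: the 12th has 14 citations, the 13th has 11
cites = [80, 52, 48, 40, 35, 31, 28, 25, 22, 19, 16, 14, 11] + [8] * 12
print(h_index(cites))  # 12
```

Note that the top paper’s 80 citations never enter the calculation directly; only the crossing point matters, exactly as described above.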

This design rewards consistent, impactful output. A researcher who publishes one blockbuster paper and nothing else will have an h-index of 1. Someone who publishes hundreds of papers nobody reads will also score low. The metric captures the sweet spot: a body of work that other scientists actually find useful enough to reference.

What Counts as a Good H-Index

There’s no universal “good” number because the h-index depends heavily on career stage, field, and whether someone works primarily in research or clinical practice. That said, some general patterns from studies of academic faculty give useful reference points.

Hirsch himself suggested that an h-index of 10 to 12 might be a reasonable benchmark for tenure at major research universities, 18 could indicate readiness for a full professorship, and 45 or higher could signal eligibility for the U.S. National Academy of Sciences. A study at the University of Alabama at Birmingham’s Department of Surgery found a median h-index of 6 at hiring, 11 at promotion to associate professor, and 17 at promotion to full professor. A broader analysis across 14 disciplines in North American medical schools found assistant professors typically ranged from 2 to 5, associate professors from 6 to 10, and full professors from 12 to 24.

At the very top, about 84% of Nobel Prize-winning physicists had an h-index of at least 30. But these numbers shift dramatically between fields. Physicists and biomedical researchers tend to have higher h-indexes because their fields cite more frequently and publish more papers. A social scientist or mathematician with an h-index of 15 may be just as accomplished, relative to their peers, as a biologist with an h-index of 40.

Why Your H-Index Differs Across Platforms

If you check your h-index on Google Scholar, Scopus, and Web of Science, you’ll likely get three different numbers. This isn’t an error. Each platform indexes different sources and applies different rules about what counts.

Google Scholar casts the widest net, indexing conference papers, theses, preprints, book chapters, and documents from a broad range of sources. Scopus focuses on peer-reviewed journals it has specifically vetted for quality. Web of Science is similarly selective. Because Google Scholar picks up more documents and more citations, it tends to report higher h-indexes than the other two. Whether a platform includes self-citations also plays a role, since counting your own references to your work can nudge the number up.

None of these platforms is “wrong,” but it’s important to compare h-indexes pulled from the same database. An h-index of 15 on Scopus and an h-index of 22 on Google Scholar for the same person aren’t contradictory. They’re just measuring slightly different pools of scholarly activity.

How Universities and Funders Use It

The h-index has become a common shorthand in hiring committees, promotion reviews, and grant applications. Universities often use it to compare candidates for faculty positions, since it provides a quick snapshot of research productivity. Funding agencies look at it when assessing whether a researcher has a track record of impactful work.

That said, there’s no official cutoff for any of these decisions. Some universities still rely on total publication counts or total citation numbers instead. Most institutions treat the h-index as one factor among many, alongside teaching evaluations, recommendation letters, the prestige of the journals a candidate publishes in, and the significance of specific discoveries. The metric works best as a starting comparison tool, not as a final verdict on a researcher’s value.

Known Limitations

The h-index has several well-documented blind spots that are worth understanding, especially if your career is being evaluated by it.

It penalizes early-career researchers. Because the h-index can only grow over time as papers accumulate citations, a brilliant postdoc will almost always have a lower h-index than a mediocre senior professor simply because of career length. Hirsch acknowledged this and proposed dividing the h-index by the number of years since a researcher’s first publication, but this adjusted version (called the m-parameter) hasn’t caught on as widely.
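Hirsch’s age adjustment is just the h-index divided by academic age. A small sketch (the function name and the example numbers are illustrative, not from any citation platform):

```python
def m_parameter(h, first_pub_year, current_year):
    """Hirsch's m: h-index divided by years since first publication."""
    years = current_year - first_pub_year
    return h / years if years > 0 else float(h)

# A researcher with h = 30 over a 25-year career has m = 1.2,
# comparable to a postdoc reaching h = 6 in 5 years
print(round(m_parameter(30, 2000, 2025), 2))  # 1.2
```

Hirsch suggested that m around 1 indicates a successful scientist, which is part of why the raw h-index so strongly favors long careers.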

It ignores the magnitude of highly cited work. Once a paper crosses the threshold into your top h papers, additional citations don’t raise your score. A researcher with one paper cited 5,000 times gets no more credit for it than if it had been cited 20 times, as long as it’s already above the cutoff. This means a scientist who produced a field-defining landmark paper looks the same, by this metric, as one whose top papers are merely well-cited.

Field comparisons are unreliable. Different disciplines have different publishing cultures. A biomedical researcher might publish 5 to 10 papers a year with dozens of co-authors, while a mathematician might publish 1 to 2 papers a year with one or two collaborators. Comparing their h-indexes head to head is meaningless.

Self-citation can be gamed. While self-citations have a smaller effect on the h-index than on raw citation counts, a researcher can still strategically cite their own borderline papers to push them over the h-index threshold. This is difficult to police and creates perverse incentives.

Alternative Metrics Worth Knowing

Several variations have been developed to address the h-index’s shortcomings. The g-index, proposed by Leo Egghe in 2006, rewards highly cited papers that the h-index ignores. It works by ranking papers by citation count and finding the largest number g where the top g papers have collectively received at least g-squared citations. A g-index of 10 means your top 10 papers have at least 100 total citations between them. This gives more weight to standout publications.

Google Scholar introduced the i10-index in 2011, which simply counts how many of your papers have been cited 10 or more times. It’s easy to understand at a glance and useful as a rough productivity gauge, though it doesn’t capture citation depth the way the h-index or g-index does.
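Because the i10-index is just a threshold count, its implementation is a one-liner (the helper name is illustrative, not a Google Scholar API):

```python
def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# Four of these six papers clear the 10-citation bar
print(i10_index([80, 52, 14, 11, 9, 3]))  # 4
```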

No single metric tells the full story of a researcher’s impact. The h-index remains the most widely recognized because it balances simplicity with informativeness, but it works best when paired with other indicators and a qualitative understanding of someone’s contributions to their field.