The h-index and i10-index are two numbers that measure a researcher’s publication impact based on how often their papers get cited by other researchers. The h-index, introduced in 2005, balances productivity with influence. The i10-index, created by Google Scholar, takes a simpler approach by counting only papers that have crossed a specific citation threshold. Both appear on Google Scholar profiles and are widely used in academia to evaluate research output.
How the H-Index Works
Jorge Hirsch, a physics professor at the University of California, San Diego, proposed the h-index in 2005. The concept is elegant: a researcher has an h-index of h if h of their papers have each been cited at least h times. So a researcher with an h-index of 20 has published at least 20 papers that have each received 20 or more citations. The rest of their papers have no more than 20 citations each.
What makes this metric useful is that it rewards both quantity and quality simultaneously. Publishing 500 papers that nobody cites won't raise your h-index, and a single paper cited 10,000 times can only ever give you an h-index of 1. You need a sustained body of work that other researchers actually engage with. The h-index can only rise one point at a time, and each new point requires another paper to cross a higher threshold, so it climbs more slowly the higher it gets.
To calculate it yourself, list your publications from most cited to least cited. Move down the list until you reach the first paper whose rank exceeds its citation count; your h-index is the rank just before that point. If your top five papers have 45, 30, 18, 7, and 3 citations, your h-index is 4: four papers have at least 4 citations each, but the fifth has only 3.
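Assuming your citation counts are available as a simple list of numbers, the procedure above translates into a few lines of Python (the function name `h_index` is ours, chosen for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts from most cited to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank          # this paper still clears the threshold
        else:
            break             # ranks only grow, counts only shrink
    return h

print(h_index([45, 30, 18, 7, 3]))  # 4, matching the worked example above
```

The early `break` is safe because the list is sorted: once a paper's rank exceeds its citation count, every later paper fails the test too.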
How the I10-Index Works
The i10-index is far simpler: it counts the number of your publications that have received 10 or more citations. That’s it. If you have 15 papers and 8 of them have been cited at least 10 times, your i10-index is 8. Google Scholar created this metric and it appears exclusively on Google Scholar profiles.
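Because the rule is just a threshold count, the sketch is nearly a one-liner (again, the function name and the sample numbers are ours):

```python
def i10_index(citations):
    """Count publications with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# 15 papers, 8 of which have 10 or more citations, as in the example above.
papers = [120, 75, 40, 33, 21, 15, 12, 10, 9, 8, 6, 4, 2, 1, 0]
print(i10_index(papers))  # 8
```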
The appeal is transparency. Anyone can glance at an i10-index and immediately understand what it means without needing to grasp the recursive logic of the h-index. It gives a quick sense of how many papers a researcher has produced that gained meaningful traction in their field. The threshold of 10 citations is somewhat arbitrary, but it serves as a rough filter separating papers that attracted real engagement from those that were barely noticed.
Where You’ll Find Each Metric
Google Scholar calculates and displays both the h-index and the i10-index on researcher profile pages, along with total citation counts. Scopus and Web of Science, the other two major academic databases, calculate and display the h-index but not the i10-index, which remains exclusive to Google Scholar.
These databases don’t always agree on the numbers. Google Scholar tends to produce higher citation counts than Web of Science or Scopus because it casts a wider net, indexing conference papers, theses, preprints, and other gray literature that the more curated databases exclude. A researcher might have an h-index of 25 on Google Scholar but only 18 on Scopus for the same body of work. This makes it important to compare researchers using the same database rather than mixing sources.
What Counts as a “Good” H-Index
There is no universal benchmark because the h-index varies enormously by field, career stage, and publication culture. A biomedical researcher will typically accumulate citations faster than a mathematician or a historian, simply because biomedical papers tend to have longer reference lists and larger research communities. An h-index of 15 might be excellent for an early-career social scientist but unremarkable for a mid-career immunologist.
Career length matters just as much. The h-index correlates strongly with what researchers call “scientific age,” meaning the number of years since a researcher’s first publication. A professor with 30 years of active research has had far more time to accumulate citations than someone five years out of graduate school. Comparing the two on h-index alone is misleading. As a very rough guideline, an h-index that roughly matches or exceeds the number of years you’ve been publishing suggests a steady, impactful research career.
Limitations Worth Knowing
Both metrics have real blind spots. The h-index doesn’t account for the number of authors on a paper. A researcher who is the sole author on 20 highly cited papers gets the same h-index as someone who contributed minimally to 20 large team projects. It also ignores author position, so it can’t distinguish a lead researcher from a minor contributor listed in the middle of a 40-person author list.
Self-citation is another issue. Researchers can inflate their h-index by citing their own previous work in every new paper. While some self-citation is natural and appropriate, the h-index doesn’t filter it out. The metric also has a built-in ceiling problem: once a paper is counted within the h-index core, additional citations to that paper don’t improve the score. A researcher with one paper cited 5,000 times gets no more credit from the h-index than if that paper had been cited 50 times, as long as it already fell within the h-core.
The i10-index has its own weaknesses. It treats a paper with 10 citations exactly the same as one with 10,000 citations. It also tells you nothing about the distribution of those citations across your work. Two researchers could both have an i10-index of 12, but one might have 12 papers hovering around 10 to 15 citations while the other has 12 papers with hundreds of citations each.
How These Metrics Are Used in Practice
Review boards, institutions, and funding agencies increasingly rely on bibliometric indicators like the h-index when evaluating researchers for grants, promotions, and tenure decisions. These metrics are perceived as more objective than peer review and can be calculated with far less time and effort. There is evidence that citation-based indicators often align with expert peer judgment, which has encouraged their adoption.
That said, the bibliometrics research community consistently recommends using these indicators as a complement to informed peer review, not a replacement. No single number captures the full picture of a researcher’s contributions. Teaching, mentorship, clinical work, community engagement, and the broader significance of research questions all matter but leave no trace in citation metrics. Hiring committees and grant reviewers who rely solely on the h-index risk overlooking researchers doing important but less citation-friendly work.
Related Metrics That Fill the Gaps
Several alternative indices have been developed to address the h-index’s shortcomings. The g-index, introduced by Leo Egghe in 2006, gives more weight to highly cited papers. It is defined as the largest number g such that your top g papers have received at least g² citations in total. Because it credits citations that exceed the h-index threshold rather than ignoring them, the g-index is never lower than the h-index for the same researcher.
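Under the same list-of-citation-counts assumption as before, the definition translates directly into code (`g_index` is our name; this sketch follows the common convention of capping g at the number of papers):

```python
def g_index(citations):
    """Largest g such that the top g papers have at least g*g
    citations in total, capped at the number of papers."""
    ranked = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, count in enumerate(ranked, start=1):
        running_total += count
        if running_total >= rank * rank:
            g = rank
    return g

# The worked example from earlier: h-index 4, but g-index 5,
# because 45 + 30 + 18 + 7 + 3 = 103 >= 5 * 5.
print(g_index([45, 30, 18, 7, 3]))  # 5
```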
The e-index, proposed by Chun-Ting Zhang in 2009, specifically targets the excess citations that the h-index ignores. It captures the citation surplus above and beyond what the h-index accounts for, making it useful for identifying researchers whose top papers are cited far more heavily than their h-index suggests. It works best for established researchers with substantial citation records and is less informative for those early in their careers.
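Zhang defined the e-index as the square root of the h-core's excess citations: e = √(S − h²), where S is the total citations received by the h papers in the core. Assuming that formula, a sketch (function name ours, reusing the h-index logic from above):

```python
import math

def e_index(citations):
    """Square root of the h-core citations beyond the h*h that the
    h-index itself accounts for (Zhang, 2009)."""
    ranked = sorted(citations, reverse=True)
    # h-index: count of ranks whose paper still meets the threshold.
    h = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
    excess = sum(ranked[:h]) - h * h
    return math.sqrt(excess)

# Worked example: h = 4, core citations 45 + 30 + 18 + 7 = 100,
# so e = sqrt(100 - 16) = sqrt(84), about 9.2.
print(round(e_index([45, 30, 18, 7, 3]), 2))
```

Two researchers with the same h-index can have very different e-indices, which is exactly the "ceiling problem" described earlier.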
The m-index attempts to address the career-length bias by dividing the h-index by the number of years since a researcher’s first publication. This produces a per-year rate that makes it easier to compare researchers at different career stages, though it can overvalue researchers who published a burst of impactful work early and then slowed down.
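The arithmetic here is simple enough that a sketch mostly serves to pin down the definition (the function name and sample values are ours):

```python
def m_index(h, years_since_first_publication):
    """h-index divided by scientific age in years."""
    return h / years_since_first_publication

# Two hypothetical researchers with the same h-index of 20:
print(m_index(20, 10))  # 2.0 -- ten years into a career
print(m_index(20, 25))  # 0.8 -- twenty-five years in
```

On this per-year view the younger researcher looks considerably more productive, even though both have identical h-indices.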

