In science, “impact” refers to the measurable influence that research has on other scientists, on society, or on the economy. It can mean something as narrow as how often a published paper gets cited by other researchers, or as broad as whether a discovery eventually changes medical treatments, government policy, or everyday life. The word carries different meanings depending on context, and understanding those layers matters because impact increasingly determines which scientists get hired, promoted, and funded.
Academic Impact: Influence Within Science
The most common use of “impact” in science refers to academic impact, which is how much a piece of research influences other researchers. The primary currency here is citations. When a scientist publishes a paper and other scientists reference it in their own work, each reference counts as a citation. A paper with hundreds of citations has shaped a field. A paper with zero citations was, for practical purposes, ignored.
This citation-based view of impact drives most of the evaluation systems in academia. Hiring committees, grant agencies, and university rankings all rely heavily on citation data to judge the value of research. That reliance has made impact metrics one of the most powerful forces in modern academic life, influencing everything from who gets tenure to which journals attract the best submissions.
The Journal Impact Factor
The most well-known impact metric is the Journal Impact Factor, or JIF. It measures journals, not individual papers. The calculation is straightforward: take the number of citations a journal received in a given year to articles it published in the two previous years, then divide by the total number of articles published in those two years. A journal with an impact factor of 10 means its articles from the previous two years were cited, on average, 10 times each during the year being measured.
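To make the arithmetic concrete, here is a minimal sketch in Python; the function name and the journal figures are invented for illustration, not drawn from any official source:

```python
def journal_impact_factor(citations, citable_items):
    """Journal Impact Factor for a given census year.

    citations: citations received in the census year to articles
        the journal published in the two preceding years.
    citable_items: number of articles the journal published in
        those two preceding years.
    """
    return citations / citable_items

# Hypothetical journal: 5,000 citations in 2024 to the 500 articles
# it published in 2022-2023 gives a 2024 impact factor of 10.0.
print(journal_impact_factor(5000, 500))  # 10.0
```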
Publishing in a high-impact journal (like Nature or The Lancet) signals prestige, while publishing in a low-impact journal may carry less weight on a CV. Clarivate, the company that calculates the JIF, releases updated rankings annually through its Journal Citation Reports. Starting in 2024, it unified rankings across subject categories, eliminating the separate rankings that previously existed for journals indexed in multiple fields.
The JIF is useful as a rough gauge of a journal’s visibility, but it has a well-known flaw: it describes the average, not any single paper. A journal with a high impact factor might contain a few massively cited papers pulling the average up, alongside many papers that are rarely cited at all. Publishing in a prestigious journal does not guarantee that your specific paper will be influential.
Measuring an Individual Scientist’s Impact
For individual researchers, the most widely used metric is the h-index. A scientist has an h-index of, say, 7 if they have published at least 7 papers that have each been cited at least 7 times, and 7 is the largest number for which that holds. It rewards both productivity (publishing many papers) and influence (those papers being cited frequently).
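The definition translates directly into code. Here is a minimal sketch, assuming the citation counts for each paper are already in hand (the function name is illustrative):

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    # Walk down the ranked list; paper at rank i must have >= i citations.
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times yield an h-index of 4,
# because 4 papers have at least 4 citations but not 5 papers have at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```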
What counts as a “good” h-index depends heavily on career stage. Jorge Hirsch, the physicist who invented the metric, suggested that after 20 years of active research, an h-index of 20 is good, 40 is outstanding, and 60 is exceptional. He also noted that roughly 84% of Nobel Prize-winning physicists had an h-index of at least 30. In practical terms for academic careers, an h-index around 3 to 5 is typical for assistant professors, 8 to 12 for associate professors, and 15 to 20 for full professors.
Like the JIF, the h-index has limitations. It favors researchers who have been publishing for decades, since citations accumulate over time. It also varies enormously between fields. A biomedical researcher will naturally accumulate more citations than a mathematician, simply because biomedical papers tend to have longer reference lists and larger research communities.
Societal Impact: Change Beyond Academia
Academic citations only capture influence within the research world. Increasingly, governments and funding bodies want to know whether science makes a difference in people’s lives. This broader concept is called societal impact, and it’s harder to quantify.
The UK’s Research Excellence Framework, one of the most developed systems for evaluating societal impact, judges research on two dimensions: reach and significance. Reach looks at the extent and diversity of people affected by the research, not just population size but the proportion of realistic potential beneficiaries. Significance looks at whether the research actually improved outcomes for those people in a meaningful way. A clinical study that changes treatment guidelines for a common disease, for example, would score highly on both.
Societal impact can take many forms. Research might influence legislation, reshape clinical practice, improve agricultural yields, or change public behavior during a health crisis. A study on vaccine communication, for instance, might appear in news stories and policy briefs long before it accumulates academic citations. These pathways from lab to real world are what translational science tries to accelerate, turning observations from the laboratory, clinic, and community into diagnostics, treatments, and behavioral changes that benefit health and society.
Economic Impact
A related but distinct question is whether scientific research generates economic returns. Governments that fund billions of dollars in research naturally want to know whether the investment pays off. Economic impact analysis tries to answer this by assigning monetary values to both the costs of research and the benefits it produces.
Return on investment is the most common framework, but it is deceptively difficult to apply to basic science. One approach relates a company’s or industry’s output to its investment in research and development. Another uses stock market valuations to estimate how financial markets price a firm’s knowledge assets. At the national level, economists have tried to link total factor productivity (a measure of how efficiently a country turns inputs into outputs) to different types of government research spending.
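The textbook formula itself is trivial, as the sketch below shows (all numbers are made up for illustration). The arithmetic is not where the difficulty lies; the hard part is deciding which downstream benefits to monetize and how much of them to attribute to the original research:

```python
def simple_roi(benefits, costs):
    """Basic return-on-investment ratio: net benefit per dollar spent.

    For basic science, the inputs are the contested part: 'benefits'
    requires monetizing diffuse, long-delayed outcomes and attributing
    them to a specific research investment.
    """
    return (benefits - costs) / costs

# Hypothetical program: $150M in monetized benefits from $100M of funding.
print(simple_roi(150e6, 100e6))  # 0.5, i.e. a 50% return
```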
The fundamental challenge is that scientific impact rarely follows a straight line. A discovery in particle physics might enable a medical imaging technology decades later through a chain of innovations no one predicted at the time. These diffuse, interconnected pathways make it nearly impossible to assign a clean rate of return to any single research investment. Treating publicly funded science purely as an economic input, where the goal is to maximize financial returns, misrepresents what most government-funded research is designed to do.
Altmetrics: Tracking Attention Online
Traditional citation metrics miss a lot. They don’t capture when a paper is discussed on social media, covered by journalists, downloaded thousands of times, or referenced in a government policy document. Altmetrics fill this gap by tracking non-citation indicators of attention and engagement across online sources.
Altmetrics platforms monitor mentions in news stories, blog posts, policy documents, social media posts, and reference managers. Many journal websites display an “Altmetric donut,” a color-coded badge showing where a paper has received attention. Usage metrics like views and downloads indicate how many people are reading the work, while mention metrics track references in news, blogs, or policy briefs.
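Commercial providers compute their attention scores with proprietary, source-weighted formulas, so the sketch below is purely hypothetical; the weights are invented for illustration. It shows only the general idea of rolling per-source mention counts into a single number:

```python
# Hypothetical weights: real providers (e.g. Altmetric) use their own
# proprietary weightings, which these numbers do not reproduce.
SOURCE_WEIGHTS = {
    "news": 8,
    "blog": 5,
    "policy_document": 3,
    "social_media": 1,
}

def attention_score(mentions):
    """Combine per-source mention counts into one weighted score.

    mentions: dict mapping a source type to its mention count,
              e.g. {"news": 2, "social_media": 40}.
    """
    return sum(SOURCE_WEIGHTS.get(source, 0) * count
               for source, count in mentions.items())

# Two news stories, one blog post, and forty social media mentions.
print(attention_score({"news": 2, "blog": 1, "social_media": 40}))  # 61
```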
Altmetrics are especially useful for capturing early-stage and public-facing impact. A paper that shapes national health policy or goes viral on social media may have enormous real-world influence that citation counts alone would miss entirely.
The Push to Reform Impact Measurement
The dominance of citation-based metrics has generated serious backlash. The San Francisco Declaration on Research Assessment, known as DORA, is the most prominent reform effort. Its core recommendation is blunt: do not use journal-based metrics like the impact factor as a stand-in for the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, and funding decisions.
DORA calls on funding agencies to be explicit about their evaluation criteria and to emphasize, especially for early-career scientists, that the scientific content of a paper matters far more than the impact factor of the journal it appeared in. It asks researchers serving on hiring and tenure committees to base their assessments on what the science actually says, not on where it was published or how many times it was cited.
The concern driving these reforms is that when careers hinge on metrics, scientists optimize for the metrics rather than for good science. This can incentivize flashy, publishable results over careful, reproducible work. It can push researchers toward hot topics and away from important but niche questions. Impact metrics have become a driving force in modern academic life, and many scientists and institutions are now reckoning with whether that force is pointed in the right direction.