Research metrics are quantitative tools used to assess the performance, visibility, and influence of research outputs such as journals, articles, and individual researchers. They help scholars, institutions, and funders understand how research is being used, cited, and discussed within a field, and inform decisions about publishing strategies, promotion, collaboration, and funding.
Broadly, research metrics can be grouped into three main categories: author‑level, journal‑level, and article‑level metrics. Each operates at a different scale and answers different questions about research impact, so they should be interpreted together rather than in isolation.
Journal‑level metrics assess the collective impact, reach, or prestige of a journal as a venue for scholarly communication. They are based on aggregated citation patterns to all citable items in the journal over a defined time window and are intended to describe the journal's influence, not the quality of individual articles or authors.
Tracking at the journal level is useful for comparing publication venues within a field, deciding where to submit work, and gauging a journal's standing relative to its peers.
Examples of journal‑level metrics include Journal Impact Factor (JIF), Journal Citation Indicator (JCI), CiteScore, SNIP (Source Normalized Impact per Paper), and SJR (SCImago Journal Rank).
The Impact Factor is calculated by dividing the number of citations received in the assessment year to articles published in the previous two years by the total number of citable items published in those two years. An Impact Factor of 1 means that, on average, articles published in the previous two years each received 1 citation during the assessment year; an Impact Factor of 2.5 means those articles received an average of 2.5 citations each. Only citations from Web of Science–indexed journals are counted in the calculation.
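As a minimal sketch with hypothetical numbers (the official figures are computed by Clarivate from Web of Science data), the two-year Impact Factor calculation looks like this:

```python
def impact_factor(citations_in_year: int, items_prev_two_years: int) -> float:
    """Two-year Journal Impact Factor.

    citations_in_year: citations received in the assessment year
        to items the journal published in the previous two years.
    items_prev_two_years: citable items published in those two years.
    """
    return citations_in_year / items_prev_two_years

# Hypothetical example: 500 citations in 2024 to the 200 articles
# published in 2022-2023 gives an Impact Factor of 2.5.
print(impact_factor(500, 200))  # 2.5
```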
The Journal Citation Indicator (JCI) is a field-normalized metric designed to represent the average citation impact of scholarly articles and reviews published by a journal over a recent three-year period. Because it accounts for differences in citation practices across disciplines, it is useful for comparing the influence of journals from different research fields. A JCI value of 1.0 means the journal's published papers received citations equal to the average within its subject category; values above 1.0 indicate higher-than-average citation impact, while values below 1.0 represent lower-than-average performance.
Three factors are taken into account when calculating the JCI: the field of study, the document type, and the age of the article. The underlying citation data come from the Web of Science.
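The exact normalization is proprietary to Clarivate, but the underlying idea can be sketched as follows: each paper's citation count is divided by the expected citations for comparable papers (same field, document type, and publication year), and the journal-level value is the mean of these ratios. All values below are hypothetical.

```python
# Sketch of a field-normalized journal indicator in the spirit of the JCI.
# "expected" holds the average citations of comparable documents
# (same field, document type, publication year); values are hypothetical.
papers = [
    {"citations": 12, "expected": 6.0},  # cited at twice the field average
    {"citations": 3,  "expected": 6.0},  # cited at half the field average
    {"citations": 6,  "expected": 6.0},  # cited exactly at the field average
]

# Each paper's normalized impact is actual / expected citations;
# the journal-level value is the mean across papers.
jci_like = sum(p["citations"] / p["expected"] for p in papers) / len(papers)
print(round(jci_like, 3))  # 1.167 -> above the category average of 1.0
```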
CiteScore is a journal-level metric provided by Elsevier that uses Scopus data to indicate the mean citation rate for documents published in a journal over a four-year period. If a journal's CiteScore is 5, documents published in the journal during the past four years have received an average of 5 citations each.
The calculation uses the following formula:
CiteScore = Total citations in the last 4 years to documents published in the journal / Total number of documents published in the journal over the past 4 years
This indicator reflects broad citation impact, incorporates multiple peer-reviewed publication types, and is updated annually (with monthly tracking available) for transparency and comparability.
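A minimal sketch of this formula with hypothetical numbers (the official values are computed by Elsevier from Scopus data):

```python
def citescore(citations_4y: int, documents_4y: int) -> float:
    """CiteScore: citations received in a four-year window to documents
    published in that same window, divided by the number of documents.
    """
    return citations_4y / documents_4y

# Hypothetical example: 1000 citations to 200 documents published
# over 2021-2024 gives a CiteScore of 5.0, i.e. an average of
# 5 citations per document.
print(citescore(1000, 200))  # 5.0
```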
SNIP (Source Normalized Impact per Paper) is a journal-level metric developed by Henk Moed and his team to provide a fair comparison of citation impact across different subject fields. It is calculated using Scopus data and corrects for differences in citation practices between disciplines by weighting each citation according to the field's typical citation rate.
The SNIP value for a journal is determined by dividing the number of citations received in the current year to its papers published over the prior three years by the total number of papers published in those three years. Each citation is further weighted based on the citation potential of the field, so a citation in a subject with fewer total citations carries more value than one in a field with frequent citations.
This approach allows SNIP to measure the actual citations received relative to the citations expected for a journal's subject field, enabling meaningful comparisons between journals from disciplines with different citation behaviors. SNIP is especially useful for cross-disciplinary analysis and for identifying journals performing strongly in their specific areas.
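A simplified sketch of the two SNIP steps, using hypothetical numbers; the actual calculation defines a field's citation potential more precisely from the journal's citing environment in Scopus:

```python
# Step 1: raw impact per paper: citations received this year to papers
# published in the prior three years, divided by the number of those papers.
citations_to_last_3y = 450
papers_last_3y = 300
raw_impact = citations_to_last_3y / papers_last_3y  # 1.5

# Step 2: divide by the field's citation potential, i.e. how heavily the
# journal's subject field tends to cite relative to the database average.
# A low-citing field has a potential below 1, which raises the score, so
# a citation in a sparsely citing field carries more weight.
field_citation_potential = 0.75  # hypothetical, relative to database average

snip = raw_impact / field_citation_potential
print(round(snip, 2))  # 2.0 -> the same citations count for more here
```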
SCImago Journal Rank (SJR) is a journal-level metric that measures the scientific influence of scholarly journals by considering both the number of citations received and the prestige of the citing journals. SJR applies a weighted approach, where citations coming from highly ranked (prestigious) journals contribute more to the score than citations from less influential journals.
This prestige-weighted approach allows the SJR to reflect both the volume of citations and the significance of the citing sources, offering a normalized indicator valuable for comparing journals from different disciplines.
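The actual SJR algorithm is an iterative, PageRank-style computation over the Scopus citation network with additional damping and normalization steps; the toy sketch below, with a hypothetical three-journal network, illustrates only the core idea that prestige flows through citations:

```python
# Minimal PageRank-style sketch of prestige weighting: a citation
# transfers a share of the citing journal's own prestige, so citations
# from prestigious journals count for more.
import numpy as np

# cites[i][j] = citations from journal i to journal j (hypothetical).
cites = np.array([
    [0, 4, 1],
    [2, 0, 3],
    [1, 1, 0],
], dtype=float)

# Each journal splits its prestige across the journals it cites,
# in proportion to its citation counts.
transfer = cites / cites.sum(axis=1, keepdims=True)

prestige = np.ones(3) / 3          # start with equal prestige
for _ in range(50):                # iterate until prestige stabilizes
    prestige = transfer.T @ prestige

print(np.round(prestige, 3))       # journals cited by prestigious peers rank higher
```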
| Subject Area | Journal | JCR Impact Factor | CiteScore | SNIP | JCI | SJR |
|---|---|---|---|---|---|---|
| Medicine | CA: A Cancer Journal for Clinicians | 232.4 | 1154.2 | 201.167 | 120.891 | 45.004 |
| Business, Management and Accounting | International Journal of Information Management | 27.5 | 46.9 | 32.636 | 6.26 | 7.382 |
| Social Science | Review of Educational Research | 15.4 | 20.7 | 7.382 | 3.084 | 4.7 |
| Economics, Econometrics | Quarterly Journal of Economics | 12.7 | 21.9 | 9.325 | 4.535 | 5.995 |
| Psychology | Annual Review of Psychology | 29.4 | 58.1 | 11.936 | 5.321 | 3.786 |
Each metric provides a different perspective on journal influence and impact: the JIF and CiteScore report raw citation averages over two- and four-year windows respectively, the JCI and SNIP normalize for differences in citation practices between fields, and the SJR weights citations by the prestige of the citing journal.
Article‑level metrics focus on the impact and engagement associated with a specific publication, such as a single journal article, chapter, or preprint. They may include traditional citation counts as well as usage data (views, downloads) and alternative indicators of online attention (mentions in news, policy documents, blogs, and social media).
Tracking metrics at this level helps to demonstrate the impact of an individual work independently of the venue in which it appeared, and to capture forms of engagement, such as downloads and online mentions, that citation counts alone would miss.
Examples of article‑level metrics include field‑weighted citation impact (FWCI) and suites such as PlumX metrics, which aggregate citations, usage, captures, mentions, and social media activity for individual items.
Using author‑, journal‑, and article‑level metrics together provides a more nuanced picture of research influence across different levels of scholarly communication.
Field-Weighted Citation Impact (FWCI) is a metric that shows how well cited a document is in comparison to similar documents globally. FWCI accounts for key variables like the year of publication, document type, and disciplines associated with the document's source, making it a normalized indicator across different fields.
The FWCI for a document is calculated as:
FWCI = Total citations received by the document / Average citations received by all similar documents (same field, document type, year) in a three-year window
This field normalization allows fair comparison across disciplines, because each document is measured against its own field's baseline rather than a global average, adjusting for differences in citation behavior and output volume. FWCI is widely used to benchmark research performance and is available for articles, journals, authors, and institutions indexed in Scopus.
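A minimal sketch of this formula with hypothetical numbers (Scopus computes the expected-citation baselines from its own data):

```python
def fwci(citations: int, expected_citations: float) -> float:
    """Field-Weighted Citation Impact: actual citations divided by the
    average citations of similar documents (same field, document type,
    and publication year) over the same window. 1.0 = world average.
    """
    return citations / expected_citations

# Hypothetical example: a paper with 18 citations, where similar papers
# average 12 citations, has an FWCI of 1.5 (50% above expectation).
print(fwci(18, 12.0))  # 1.5
```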
Citation benchmarking is a process or metric that demonstrates how the citations received by a specific document compare against the average number of citations received by similar documents within the same field, document type, and publication year. This approach enables an objective assessment of research impact by accounting for differences in citation behaviors across disciplines and document types.
A citation benchmarking score or visualization typically allows researchers to see if their work is cited more, less, or about the same as comparable research outputs. Values above the average mean the document is performing well relative to its peers, while values below the average indicate fewer citations than expected given the context.
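As an illustration of the idea, the hypothetical sketch below places a document's citation count within the distribution of its peers as a percentile:

```python
# Sketch of citation benchmarking: rank a document's citation count
# against comparable documents (same field, document type, publication
# year). All counts below are hypothetical.
peer_citations = [0, 1, 2, 2, 3, 4, 5, 7, 9, 15]  # comparable documents
my_citations = 7

# Percentile: share of peers the document matches or outperforms.
rank = sum(c <= my_citations for c in peer_citations)
percentile = 100 * rank / len(peer_citations)
print(f"{percentile:.0f}th percentile")  # 80th percentile -> above average
```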
PlumX Metrics organizes research metrics into five main categories to provide a comprehensive view of how research outputs are engaged with and cited in the scholarly and broader community:

- Citations: traditional citation counts from scholarly indexes, along with citations in sources such as policy and clinical documents.
- Usage: signals that the output is being read, such as views, downloads, and clicks.
- Captures: indications that someone intends to return to the work, such as bookmarks, favorites, and reference-manager saves.
- Mentions: engagement in news articles, blog posts, comments, and Wikipedia references.
- Social Media: shares, likes, and other activity on social platforms.
These categories together offer a multidimensional view of research performance, from traditional scholarly impact to real-world engagement and visibility.
Altmetrics are alternative metrics designed to measure the impact of scholarly research outputs based on their online activity and attention beyond traditional scholarly citations. Unlike citation counts, h-index, or journal impact factor—which reflect academic interest and accumulate impact over several years—altmetrics provide a real-time, immediate view on how and where research is being discussed, shared, and engaged with online.
Altmetrics track mentions, shares, and activity from sources such as social media (Twitter, Facebook), blogs, news media, Wikipedia, and download/view statistics, as well as online reference managers like Mendeley and Zotero. These metrics highlight the social interest and broader engagement generated by a research output, helping researchers understand its reach and influence within both academic and public spheres.
Altmetrics are meant to complement—not replace—traditional citation-based metrics, offering a parallel measure of impact. While traditional bibliometrics grow slowly as citations accumulate over time, altmetrics reveal when and where research is being discussed immediately upon publication. This makes altmetrics especially useful for tracking early and societal impact, providing timely feedback on research dissemination and public interest.
Learn about altmetrics:
https://youtu.be/M6XawJ7-880

For appointment, tenure, and promotion in academic and research institutions, several research metrics and scholarly achievements, typically including publication record, citation counts, and the h-index, are considered essential.
These factors collectively contribute to decisions on appointments and promotions, reflecting both academic excellence and broader research impact.
For grant renewal, research metrics help demonstrate the measurable outcomes and broader impact of the funded work, providing evidence to justify continued or additional funding.
Effectively combining these research metrics supports a persuasive grant renewal case, demonstrating how the funded work benefited both the academic community and broader society.