Research Metrics

Central Tribal University of Andhra Pradesh

Introduction to Research Metrics

Research metrics are quantitative tools used to assess the performance, visibility, and influence of research outputs such as journals, articles, and individual researchers. They help scholars, institutions, and funders understand how research is being used, cited, and discussed within a field, and inform decisions about publishing strategies, promotion, collaboration, and funding.

Broadly, research metrics can be grouped into three main categories: author‑level, journal‑level, and article‑level metrics. Each operates at a different scale and answers different questions about research impact, so they should be interpreted together rather than in isolation.

(i) What are author-level metrics? Why track at this level?

Author‑level metrics measure the aggregate impact and productivity of an individual researcher across their body of work, rather than for a single paper or journal. They are usually derived from citation counts to an author's publications and aim to summarize a career‑level footprint in the scholarly literature.

Tracking metrics at this level can help:

  • demonstrate research impact for hiring, tenure, promotion, and grant applications;
  • compare an author's output and influence within a discipline over time (with appropriate caution across fields);
  • identify potential collaborators or experts in a particular area of research.

Common author‑level indicators include the h‑index, g‑index, and related variants, which combine information about productivity (number of publications) and influence (citations received).

h-index

The h-index was introduced in 2005 by physicist Jorge Hirsch of the University of California, San Diego, as an author-level metric to measure both the productivity and citation impact of a researcher's publications. The h-index of a researcher is defined as the maximum number h such that the researcher has published h papers, each of which has been cited at least h times.

For example, if a scientist's h-index is 10, they have 10 papers that have each been cited at least 10 times. The h-index is widely used to assess research output because it combines the quantity and quality of publications and is less influenced by outliers, such as a few highly cited papers or a large number of low-impact publications. It is now the most common index for measuring the productivity and impact of scientists and researchers across disciplines, though it should preferably be used for comparisons within the same field, because citation practices differ between disciplines.

Calculation of h-index

Article Rank | Citation Count | Meets h-index?
1  | 33 | Yes
2  | 30 | Yes
3  | 20 | Yes
4  | 15 | Yes
5  | 7  | Yes
6  | 6  | Yes
7  | 5  | No
8  | 5  | No
9  | 4  | No
10 | 3  | No

The h-index in this case is 6, as the first 6 articles each have at least 6 citations.
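
To make the procedure concrete, here is a minimal Python sketch (illustrative only, not an official tool) that computes the h-index from a list of citation counts, using the figures from the table above:

    def h_index(citations):
        """Return the largest h such that h papers have at least h citations each."""
        # Rank papers by citation count, highest first
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank   # this paper still meets the threshold
            else:
                break      # all remaining papers have fewer citations
        return h

    # Citation counts from the table above
    print(h_index([33, 30, 20, 15, 7, 6, 5, 5, 4, 3]))  # prints 6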

Advantages of H-Index

  • Single Index for Impact & Productivity: The h-index combines both the quantity (number of publications) and quality (citations received) of an author's work into one score, providing a balanced measure of research performance.
  • Focus on Consistency: It emphasizes sustained academic influence rather than isolated instances of highly cited papers, rewarding a consistent record of impactful publications over time.
  • Well Accepted in Academia: The metric is extensively used by universities, grant committees, and funding agencies for promotions, tenure, and career advancement decisions, reflecting its broad acceptance within the scholarly community.
  • Field-Appropriate Evaluation: The h-index supports meaningful comparisons within disciplines by considering both productivity and influence, making it suitable for evaluating author performance in subject-specific contexts.

Disadvantages of H-Index

  • Career of Author: The h-index is biased towards senior researchers, as it can only increase over time and tends to favor those with longer careers, putting early-career researchers at a disadvantage regardless of the quality of their work.
  • Subject of the Author: It does not account for differences in citation and publication practices among disciplines, making cross-field comparisons problematic and sometimes misleading, as citation volumes vary widely between subjects.
  • Authorship Position: The h-index does not differentiate the author's contribution or position (last, first, corresponding) in multi-authored publications, treating all authors equally regardless of their role.
  • Most Cited Papers: Once a paper is included among the top h papers, any further citations to that paper do not affect the index, meaning exceptionally high-impact papers do not increase a scholar's h-index beyond the threshold.

G-Index

The G-index was introduced by Leo Egghe in 2006 as an improvement on the h-index, aiming to better reflect the citation performance of highly cited papers within an author's total output.

How G-Index Is Calculated

  1. Organize all articles in descending order by number of citations.
  2. The G-index is the highest number g such that the sum of citations for the top g articles is at least g².

In formula terms, if the top g articles together have at least g² citations, then g is the G-index for that set.

Example Calculation

Paper | Cited by | Cumulative Citations
1  | 34 | 34
2  | 18 | 52
3  | 16 | 68
4  | 14 | 82
5  | 13 | 95
6  | 10 | 105
7  | 9  | 114
8  | 9  | 123
9  | 8  | 131
10 | 8  | 139
11 | 7  | 146
12 | 7  | 153
13 | 3  | 156

g = 12 is the highest rank such that the top g papers have at least g² citations: the top 12 papers have 153 citations, and 153 ≥ 12² = 144. At rank 13, the cumulative total is only 156 citations, which falls short of 13² = 169.
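
The G-index can be sketched in Python in the same way; the citation counts below are those from the table above:

    def g_index(citations):
        """Return the largest g such that the top g papers together
        have at least g**2 citations."""
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, cites in enumerate(ranked, start=1):
            total += cites            # cumulative citations of the top `rank` papers
            if total >= rank ** 2:
                g = rank
        return g

    # Citation counts from the table above
    papers = [34, 18, 16, 14, 13, 10, 9, 9, 8, 8, 7, 7, 3]
    print(g_index(papers))  # prints 12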

(ii) What are journal-level metrics? Why track at this level?

Journal‑level metrics assess the collective impact, reach, or prestige of a journal as a venue for scholarly communication. They are based on aggregated citation patterns to all citable items in the journal over a defined time window and are intended to describe the journal's influence, not the quality of individual articles or authors.

Tracking at the journal level is useful to:

  • compare journals within a field when deciding where to submit manuscripts;
  • understand a journal's standing, reach, and readership over time;
  • support collection development and evaluation decisions in libraries and institutions.

Examples of journal‑level metrics include Journal Impact Factor (JIF), Journal Citation Indicator (JCI), CiteScore, SNIP (Source Normalized Impact per Paper), and SJR (SCImago Journal Rank).

JCR Impact Factor

The Impact Factor is calculated by dividing the number of citations received in the assessment year by the total number of articles published in the previous two years. An Impact Factor of 1 means that articles published in the previous two years were cited, on average, once each in the assessment year; an Impact Factor of 2.5 means they were cited 2.5 times each on average. Only citations from journals indexed in the Web of Science are used in the calculation.

Calculation of 2025 IF of a journal:

  • A = Total number of citations received in 2025 by articles published in the previous two years (2023 and 2024).
  • B = Total number of articles published in 2023 and 2024.
  • C = A/B = JCR Impact Factor for 2025.

Example: The Indian Journal of Labour Economics

  • A = 112 citations received in 2025
  • B = 128 articles published in 2023 (59 articles) and 2024 (69 articles)
  • C = A/B = 112/128 = 0.875 (Impact Factor 2025).
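
In code, the same arithmetic looks like this (a minimal Python sketch using the figures from the example above):

    def impact_factor(citations_in_year, articles_prev_two_years):
        """JIF = citations received in year Y / articles published in Y-1 and Y-2."""
        return citations_in_year / articles_prev_two_years

    # The Indian Journal of Labour Economics example
    print(impact_factor(112, 59 + 69))  # 112 / 128 = 0.875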

Advantages of JCR Impact Factor

  • Globally Recognized: JCR Impact Factor is accepted and referenced worldwide across disciplines for journal evaluation, institutional ranking, and research assessment.
  • Citations from Prestigious Journals: The metric is based on citation data from reputable, peer-reviewed journals, making it a trusted indicator of journal influence within scholarly communities.
  • Long History (since 1975): JCR has tracked journal citation performance since 1975, providing a robust historical data set and enabling longitudinal analysis of journal impact and scientific trends.
  • Effective Quality Measure: The impact factor is used as a proxy for journal quality, as it reflects how frequently articles are cited—journals with higher impact factors are generally considered more authoritative in their field.
  • Allows Within-Field Comparison: JCR categorizes journals by discipline and supplies quartile/percentile rankings, making it possible to benchmark a journal's influence against others in the same subject area for fair, in-field comparison.

Disadvantages of JCR Impact Factor

  • Lack of Subject Normalization: JCR Impact Factor does not adequately adjust for disciplinary differences or citation cultures, making direct cross-field comparisons unreliable and potentially misleading.
  • Non-disclosure of Data/Transparency Issues: The underlying data used to calculate impact factors are not made publicly available, leading to criticism regarding a lack of transparency and reproducibility of the metric.
  • Favours Prestigious Journals: Impact Factor tends to favor established, high-prestige journals, which may receive disproportionate citations, marginalizing smaller or niche journals and non-English-language publications.
  • Citations from Web of Science Core Collection: Calculations rely solely on citations within the Web of Science core database, potentially limiting inclusivity and underrepresenting research from regions, languages, or disciplines not well-covered in WOS.
  • Unclear Definition of "Citable" Items: Ambiguities persist regarding which items (articles, reviews, editorials) count as "citable," leading to inconsistent or manipulated calculations.
  • No Option to Verify Data and Analysis: Users and institutions cannot independently verify JCR calculations, making the metric a "black box" that relies on trust rather than demonstrable accuracy.

Journal Citation Indicator (JCI)

The Journal Citation Indicator (JCI) is a field-normalized metric designed to represent the average citation impact of scholarly articles and reviews published by a journal over a recent three-year period. Because it accounts for differences in citation practices across disciplines, it can be used to compare the influence of journals from different research fields. A JCI value of 1.0 means the journal's published papers received citations equal to the average within its subject category; values above 1.0 indicate higher-than-average citation impact, while values below 1.0 represent lower-than-average performance.

Three factors are considered in the calculation of the JCI: field of study, document type, and article age. The underlying citation data come from the Web of Science.
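
Clarivate does not publish an open implementation of the JCI, but conceptually it is the average of each paper's normalized citation impact. The hypothetical Python sketch below illustrates that idea with made-up numbers:

    def jci(papers):
        """papers: list of (actual_citations, expected_citations) pairs, where
        `expected_citations` is the average for documents of the same field,
        document type, and publication year in the underlying database."""
        normalized = [actual / expected for actual, expected in papers]
        return sum(normalized) / len(normalized)

    # Hypothetical journal: three papers, each compared with its category average
    print(round(jci([(12, 10), (5, 10), (20, 10)]), 2))  # 1.23 -> above-average impact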

CiteScore

CiteScore is a journal-level metric provided by Elsevier, calculated from Scopus data, that indicates the mean citation rate for documents published in a journal over a four-year period. If a journal's CiteScore is 5, documents published in the journal during the past four years have received an average of 5 citations each.

The calculation uses the following formula:

CiteScore = Total citations in the last 4 years to documents published in the journal / Total number of documents published in the journal over the past 4 years

This indicator reflects broad citation impact, incorporates multiple peer-reviewed publication types, and is updated annually (with monthly tracking available) for transparency and comparability.
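
As a simple illustration, the CiteScore formula can be written as a short Python function; the journal figures below are hypothetical:

    def citescore(citations_4y, documents_4y):
        """CiteScore = citations in years Y-3..Y to documents published in
        Y-3..Y, divided by the number of those documents."""
        return citations_4y / documents_4y

    # Hypothetical journal: 1500 citations to 300 documents over four years
    print(citescore(1500, 300))  # 5.0 -> each document cited 5 times on average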

SNIP (Source Normalized Impact per Paper)

SNIP (Source Normalized Impact per Paper) is a journal-level metric developed by Henk Moed and his team to provide a fair comparison of citation impact across different subject fields. It is calculated using Scopus data and corrects for differences in citation practices between disciplines by weighting each citation according to the field's typical citation rate.

The SNIP value for a journal is determined by dividing the number of citations received in the current year to its papers published over the prior three years by the total number of papers published in those three years. Each citation is further weighted based on the citation potential of the field, so a citation in a subject with fewer total citations carries more value than one in a field with frequent citations.

This approach allows SNIP to measure the actual citations received relative to the citations expected for a journal's subject field, enabling meaningful comparisons between journals from disciplines with different citation behaviors. SNIP is especially useful for cross-disciplinary analysis and for identifying journals performing strongly in their specific areas.
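
The published SNIP method involves a more elaborate "database citation potential" calculation; the Python sketch below is a deliberately simplified illustration of the idea, using hypothetical figures:

    def snip(citations_current_year, papers_prior_3y, field_citation_potential):
        """Simplified SNIP: raw impact per paper divided by the field's
        typical citation rate (its citation potential)."""
        raw_impact_per_paper = citations_current_year / papers_prior_3y
        return raw_impact_per_paper / field_citation_potential

    # Hypothetical: 2 citations per paper in a field where ~1.6 is typical
    print(round(snip(400, 200, 1.6), 2))  # 1.25 -> above the field norm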

SJR - SCImago Journal Rank

SCImago Journal Rank (SJR) is a journal-level metric that measures the scientific influence of scholarly journals by considering both the number of citations received and the prestige of the citing journals. SJR applies a weighted approach, where citations coming from highly ranked (prestigious) journals contribute more to the score than citations from less influential journals.

The SJR calculation is as follows:

  1. Determine the average number of weighted citations received by a journal in a given year (where weighting is based on the prestige of the citing serial and subject field).
  2. Divide this value by the total number of documents published by the journal in the previous three years.

This formula allows the SJR to reflect both the volume of citations and the significance of citing sources, offering a normalized indicator valuable for comparing journals from different disciplines.
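
The actual SJR is computed iteratively over the entire Scopus citation network, in a PageRank-like fashion; the toy Python sketch below illustrates only the two steps above, with hypothetical prestige weights:

    def sjr_like(weighted_citations, documents_prior_3y):
        """Step 1 input: citations already weighted by citing-journal prestige.
        Step 2: normalize by documents published in the prior three years."""
        return sum(weighted_citations) / documents_prior_3y

    # Hypothetical: three citations carrying prestige weights 2.0, 1.0, and 0.5,
    # to a journal that published 10 documents in the prior three years
    print(sjr_like([2.0, 1.0, 0.5], 10))  # 0.35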

Journal Metrics Comparison Table (2024)

Subject Area | Journal | JCR Impact Factor | CiteScore | SNIP | JCI | SJR
Medicine | CA: A Cancer Journal for Clinicians | 232.4 | 1154.2 | 201.167 | 120.891 | 45.004
Business, Management and Accounting | International Journal of Information Management | 27.5 | 46.9 | 32.636 | 6.26 | 7.382
Social Science | Review of Educational Research | 15.4 | 20.7 | 7.382 | 3.084 | 4.7
Economics, Econometrics | Quarterly Journal of Economics | 12.7 | 21.9 | 9.325 | 4.535 | 5.995
Psychology | Annual Review of Psychology | 29.4 | 58.1 | 11.936 | 5.321 | 3.786

Each metric provides a different perspective on journal influence and impact:

  • JCR Impact Factor: Calculates citations in the current year to articles published in the previous two years.
  • CiteScore: Measures citations over the last four years relative to documents published.
  • SNIP: Weighs citations according to subject area norms.
  • JCI: Field-normalized mean citation rate for articles over three years.
  • SJR: Weights citations by prestige of the citing journal and normalizes by output.

(iii) What are article-level metrics? Why track at this level?

Article‑level metrics focus on the impact and engagement associated with a specific publication, such as a single journal article, chapter, or preprint. They may include traditional citation counts as well as usage data (views, downloads) and alternative indicators of online attention (mentions in news, policy documents, blogs, and social media).

Tracking metrics at this level helps to:

  • capture the reach and influence of individual studies, including early attention before citations accumulate;
  • reveal how readers are discovering, sharing, and discussing a particular work across platforms;
  • complement journal‑ and author‑level measures with a more granular view of impact.

Examples of article‑level metrics include field‑weighted citation impact (FWCI) and suites such as PlumX metrics, which aggregate citations, usage, captures, mentions, and social media activity for individual items.

Using author‑, journal‑, and article‑level metrics together provides a more nuanced picture of research influence across different levels of scholarly communication.

Field-Weighted Citation Impact (FWCI)

Field-Weighted Citation Impact (FWCI) is a metric that shows how well cited a document is in comparison to similar documents globally. FWCI accounts for key variables like the year of publication, document type, and disciplines associated with the document's source, making it a normalized indicator across different fields.

The FWCI for a document is calculated as:

FWCI = Total citations received by the document / Average citations received by all similar documents (same field, document type, year) in a three-year window

  • A value of 1.00 means the document is cited just as often as expected according to the global average.
  • Greater than 1.00 means the document is cited more often than expected (e.g., 1.48 is 48% higher than average).
  • Less than 1.00 means the document is cited less than expected.

This field-normalization allows fair comparison across disciplines, as each field contributes equally, eliminating differences in citation behavior and researcher output. FWCI is widely used to benchmark research performance and is available for articles, journals, authors, and institutions indexed by Scopus.
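
As an illustration, the FWCI formula can be written as a one-line Python function; the figures below are hypothetical, chosen to reproduce the 1.48 example mentioned above:

    def fwci(citations, expected_citations_for_similar_docs):
        """FWCI = citations to the document / world-average citations for
        documents of the same field, type, and year (three-year window)."""
        return citations / expected_citations_for_similar_docs

    # Hypothetical article: 37 citations where similar documents average 25
    print(fwci(37, 25))  # 1.48 -> cited 48% more often than expected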

Citation Benchmarking

Citation benchmarking is a process or metric that demonstrates how the citations received by a specific document compare against the average number of citations received by similar documents within the same field, document type, and publication year. This approach enables an objective assessment of research impact by accounting for differences in citation behaviors across disciplines and document types.

A citation benchmarking score or visualization typically allows researchers to see if their work is cited more, less, or about the same as comparable research outputs. Values above the average mean the document is performing well relative to its peers, while values below the average indicate fewer citations than expected given the context.

PlumX Metrics

PlumX Metrics organizes research metrics into five main categories to provide a comprehensive view of how research outputs are engaged with and cited in the scholarly and broader community. Here are the five categories and what each measures:

The Five PlumX Metric Categories

  1. Citations: Tracks the number of times research is cited in scholarly databases such as Web of Science (WoS) and Scopus, as well as in policy documents, clinical guidelines, and patents.
  2. Usage: Measures interactions such as clicks, downloads, views, and library holdings. This shows how often research is accessed, read, or otherwise used, offering insight into practical engagement beyond citations.
  3. Captures: Counts bookmarks, code forks, favorites, readers, and watchers. Captures indicate interest in returning to or reusing the work and may predict future citations or use.
  4. Mentions: Includes blog posts, comments, reviews, Wikipedia references, and coverage in news media. Mentions demonstrate broader outreach and public engagement, showing how research is discussed outside academia.
  5. Social media: Tracks tweets, Facebook likes, shares, comments, and other social network activity. Social media metrics show the online buzz and promotional reach of research among diverse audiences.

These categories together offer a multidimensional view of research performance, from traditional scholarly impact to real-world engagement and visibility.

Altmetrics = alternative metrics

Altmetrics are alternative metrics designed to measure the impact of scholarly research outputs based on their online activity and attention beyond traditional scholarly citations. Unlike citation counts, h-index, or journal impact factor—which reflect academic interest and accumulate impact over several years—altmetrics provide a real-time, immediate view on how and where research is being discussed, shared, and engaged with online.

Altmetrics track mentions, shares, and activity from sources such as social media (Twitter, Facebook), blogs, news media, Wikipedia, and download/view statistics, as well as online reference managers like Mendeley and Zotero. These metrics highlight the social interest and broader engagement generated by a research output, helping researchers understand its reach and influence within both academic and public spheres.

Altmetrics are meant to complement—not replace—traditional citation-based metrics, offering a parallel measure of impact. While traditional bibliometrics grow slowly as citations accumulate over time, altmetrics reveal when and where research is being discussed immediately upon publication. This makes altmetrics especially useful for tracking early and societal impact, providing timely feedback on research dissemination and public interest.

Learn about altmetrics:

https://youtu.be/M6XawJ7-880

Uses of research metrics

  • Decisions on publication, guiding where to submit research based on journal prestige and impact.
  • Appointments to academic and research positions at universities and institutes.
  • Promotions, including advancement from lower to higher ranks in academic and research roles.
  • Selection for postdoctoral fellowships at reputable institutions.
  • Attainment of tenure and career promotions.
  • Success in applying for research grants.
  • Building professional reputation within both the academic community and the broader society.

How to use research metrics

Appointment / Promotion

For appointment and promotion in academic and research institutions, several research metrics and scholarly achievements are considered essential:

  • Papers published in SCI (Science Citation Index) or SSCI (Social Science Citation Index)-indexed journals: These demonstrate the quality and recognition of research in reputable databases.
  • Authorship position, such as first or corresponding author: Indicates leadership and contribution to the research project.
  • Journal impact indicators (Impact Factor, SNIP, SJR, CiteScore): Used to assess the prestige and influence of journals where research is published.
  • Receipt of research grants: Highlights the researcher's capacity to attract funding and resources for research.
  • Author impact metrics (H-index, G-index): Measure productivity and impact based on publication and citation data.
  • Citations in policy documents: Reflects real-world influence and policy relevance of research.
  • Citations in newspapers and media: Demonstrates societal and public impact.

These factors collectively contribute to decisions on appointments and promotions, reflecting both academic excellence and broader research impact.

Tenure & Promotion

For tenure appointment and promotion in academic and research institutions, several research metrics and scholarly achievements are considered essential:

  • Total publications in Scopus/Web of Science: Quantify your scholarly output and show consistent productivity over time. Highlight your career publication count and trends in top journals indexed in recognized databases.
  • Authorship position: Specify your role (e.g., first, corresponding, senior author) to contextualize your research contributions. In some disciplines, first or senior authorship carries greater weight.
  • Collaborations: Demonstrate diverse or international collaboration patterns by indicating co-authors' countries/institutions. This reflects interdisciplinary and global engagement.
  • Journal impact factor, SNIP, SJR, CiteScore: List the impact metrics of journals you publish in to show the prestige and visibility of your research. Use these to support claims about the quality of your research outlets.
  • Funding agency: Document grant support as evidence of research competitiveness and external recognition.
  • Citations, h-index, g-index, PlumX metrics: Provide quantitative data on how often your work is cited, your overall influence (h-index, g-index), and broader attention indicators (PlumX, altmetrics). These demonstrate both scholarly and societal impact.
  • Subject category: Indicate your main research areas, aligning them with institutional priorities or subject benchmarks.
  • Highly cited papers: Identify your most influential publications using citation thresholds or "highly cited" lists in Scopus/Web of Science.

Applying Metrics in Grant Proposals

  • Publication Relevance to Proposal: Highlight key publications that directly support, justify, or lay the foundation for your proposed research topic.
  • Indexed in Science Citation Index/SSCI: Emphasize work published in journals indexed in trusted platforms like the Science Citation Index or Social Science Citation Index, as these signal quality and peer recognition.
  • Authorship Position (First/Corresponding): Clarify your role in significant publications; leading or corresponding authorship is often seen as evidence of substantial intellectual contribution and project leadership.
  • Citations Received: Document the number of citations to your key publications, which illustrates the influence and uptake of your research within the scholarly community.
  • Citations in Policy Documents: If your work has informed or been cited in policy documents, mention this to demonstrate direct societal or practical impact, which funders increasingly value.
  • Gaps in Research (Justification): Use synthesis of current literature, including your publications, to identify unmet needs or gaps that your proposal addresses, enhancing the rationale for funding.
  • PlumX Metrics: Utilize PlumX metrics to provide additional evidence for engagement and broader impact, such as usage, downloads, social media attention, or policy mentions; these demonstrate that your research reaches beyond citations alone.

Grant Renewal

For grant renewal, research metrics help demonstrate the measurable outcomes and broader impact of the funded work, providing evidence to justify continued or additional funding.

Important Metrics for Grant Renewal

  • Number of Publications (WoS/Scopus) After Grant: Document new peer-reviewed articles indexed in Web of Science or Scopus that directly result from the grant funding.
  • Number of Citations Received: Show scholarly impact by reporting citations to these publications, as tracked by Scopus, Web of Science, or Google Scholar.
  • Impact Factor of Journals: List the impact factors of journals in which the work was published, as these are traditional indicators of journal quality.
  • Mendeley Readership: Report the number of people who have saved or read the articles in Mendeley, showing engagement from the academic community.

Social Media and Policy Engagement

  • Tweets or Mentions: Track if the research was shared or discussed on Twitter, which can indicate broader community or disciplinary interest.
  • Citations in Policy Documents: Cite instances where work influenced policies, guidelines, or government reports—demonstrating societal or practical impact.
  • Citations in Newspapers/Media: Provide examples of mainstream media coverage that brought research findings to a wider public audience.
  • Blog/Facebook Mentions: Highlight engagement and discussion of research in blogs or social media comments to show outreach and dissemination beyond traditional academia.

Best Practices

  • Present quantitative evidence (publication counts, citation metrics, journal impact factors) alongside qualitative context, explaining the significance of coverage in policy, media, and social platforms.
  • Use tracking tools, such as Altmetric and PlumX, that capture online engagement (Mendeley, blogs, Twitter) and provide visual summaries for grant renewal reports.
  • Align all metrics directly with the objectives and outcomes promised in the original grant, proving that deliverables were met and impact was achieved or exceeded.
  • Whenever possible, indicate growth and expansion in reach—such as increasing citations, readership, or new forms of influence—since the last funding period.

Effectively combining these research metrics supports a persuasive grant renewal case, demonstrating how the funded work benefited both the academic community and broader society.
