Research Impact

Research impact refers to the demonstration of the reach and influence of a scholar's work, using a combination of qualitative and quantitative measures.
Journal-level metrics aim to measure a publication's impact using citation analysis. Historically, journal metrics were used to help librarians decide which publications should be included in their collections. Use of these metrics has expanded over time to include broader purposes in academia, such as helping authors decide where to submit their work for publication.

Key Considerations

  • Journal-based metrics apply to the place that an output is published rather than the merits and reliability of the output itself.
  • Journal-level metrics underrepresent new and emerging fields of research, since newer journals have had less time to accumulate citations than longer-established ones.

Measurements Explained

What is the Journal Impact Factor?

The Journal Impact Factor only applies to journals indexed in the Science Citation Index Expanded and/or Social Sciences Citation Index by Clarivate Analytics. The Journal Impact Factor is a measure reflecting the annual average (mean) number of citations to recent articles published in that journal. An essay by the Institute for Scientific Information (ISI) states: “The JCR provides quantitative tools for ranking, evaluating, categorizing, and comparing journals. The impact factor is one of these; it is a measure of the frequency with which the ‘average article’ in a journal has been cited in a particular year or period. The annual JCR impact factor is a ratio between citations and recent citable items published.”

Journal Impact Factor - The Metrics Toolkit by The Metrics Toolkit Editorial Board, licensed under a CC BY 4.0


Journal Impact Factor - Calculation

The following is the calculation used to create the Journal Impact Factor. 

In any given year, the two-year journal impact factor is the ratio between the number of citations received in that year for publications in that journal that were published in the two preceding years and the total number of "citable items" published in that journal during the two preceding years:

 

IF(y) = Citations(y) / (Publications(y−1) + Publications(y−2))

 

For example, Nature had an impact factor of 41.577 in 2017: articles published in Nature in 2015 and 2016 received 74,090 citations in 2017, and the journal published 1,782 citable items over those two years, so IF(2017) = 74,090 / 1,782 = 41.577.

 

Impact Factor by Wikipedia licensed under a Creative Commons BY SA License
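The two-year calculation can be sketched in a few lines of Python. The function name is our own; the figures are the 2017 Nature numbers from the Wikipedia example cited above.

```python
def two_year_impact_factor(citations, citable_items):
    """Journal Impact Factor for year y: citations received in y to items
    published in y-1 and y-2, divided by the number of citable items
    published in those two years."""
    return citations / citable_items

# Nature, 2017: 74,090 citations in 2017 to the 1,782 citable items
# published in 2015 and 2016.
print(round(two_year_impact_factor(74090, 1782), 3))  # → 41.577
```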


Journal Impact Factor - Source

Journal Citation Reports 

  • Provides the impact factor, immediacy index, Eigenfactor metrics, and other citation data for approximately 12,000 scholarly and technical journals and conference proceedings from more than 3,300 publishers in over 60 countries in the Science Citation Index Expanded and Social Science Citation Index in the Web of Science Core Collection. Journals listed exclusively in Arts and Humanities Citation Index are not included.
  • Clarivate - Journal Citation Reports Training

Limitations 

For additional limitations, see Journal Impact Factor - The Metrics Toolkit and Introduction to Research Methods - Journal Metrics.

What is CiteScore?

The CiteScore of an academic journal reflects the yearly average number of citations to recent articles published in that journal. It is produced by Elsevier, based on the citations recorded in the Scopus database. CiteScore metrics provide data to help you measure the citation impact for journals, book series, conference proceedings and trade journals.


CiteScore - Calculation

The CiteScore for the current year is the number of citations received by a journal in that year for documents published in the journal in the previous three years, divided by the number of documents indexed in Scopus that were published in those same three years.
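The same ratio can be sketched in Python. This is a simplified illustration with made-up document counts; the real CiteScore is computed by Elsevier from Scopus data.

```python
def citescore(citations, docs_per_year):
    """CiteScore: citations received this year to documents published in
    the previous three years, divided by the count of those documents."""
    return citations / sum(docs_per_year)

# Hypothetical journal: 12,000 citations in the current year to the
# documents published over the three preceding years (900, 950, 1,000).
print(round(citescore(12000, [900, 950, 1000]), 2))  # → 4.21
```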


CiteScore Metrics - Source

Scopus - CiteScore

  • Scopus is a comprehensive abstract and citation database with enriched data and linked scholarly literature across various disciplines, including science, technology, medicine, social science, arts and humanities. CiteScore is powered by Scopus with active titles from 7000+ publishers across 333 disciplines.
  • Scopus Metrics Training Video

Limitations
  • CiteScore is biased against journals that publish a lot of front matter (i.e., material such as news items, editorials, and letters that appears alongside research articles), because all documents count in its denominator.
  • Many high-impact-factor journals perform poorly in CiteScore for this reason. For example, top medical journals, such as The New England Journal of Medicine and The Lancet, and general multidisciplinary science journals, such as Nature and Science, rank well below mid-tier competitors.
  • By contrast, because the Journal Impact Factor excludes most front matter from its denominator while still counting citations to it, journals that produce more front matter tend to have a somewhat higher Impact Factor.

Davis, P. (2016, December 12). CiteScore–Flawed but still a game changer [Blog post]. The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2016/12/12/citescore-flawed-but-still-a-game-changer/

What are SCImago Metrics?

The SCImago Journal & Country Rank is a publicly available portal that includes the journals and country scientific indicators developed from the information contained in the Scopus database (Elsevier B.V.). More Information


SCImago Metrics - Calculation 

A journal's SJR indicator is a numeric value representing the average number of weighted citations received during a selected year per document published in that journal during the previous three years, as indexed by Scopus. Higher SJR indicator values are meant to indicate greater journal prestige.

 
 
SCImago Journal Rank by Wikipedia licensed under a Creative Commons BY SA License
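The official SJR algorithm is an iterative, PageRank-style computation over the entire Scopus citation network. The Python sketch below is our own simplified illustration with invented weights; it shows only the core idea of averaging prestige-weighted citations over recent documents.

```python
def simple_sjr(weighted_citations, docs_prev_three_years):
    """Simplified SJR-style value: each citation carries a weight equal
    to the prestige of the citing journal; the weighted total is averaged
    over the documents published in the previous three years."""
    return sum(weighted_citations) / docs_prev_three_years

# Four citations, weighted by hypothetical prestige scores of the citing
# journals, averaged over 10 recent documents.
print(simple_sjr([1.25, 0.75, 2.5, 0.5], 10))  # → 0.5
```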

SCImago Metrics - Source

  • SCImago Journal & Country Rank, based on the Scopus database (Elsevier B.V.)

Limitations
  • May be skewed by citation outliers (e.g., a single article may receive the vast majority of citations)
  • May penalize interdisciplinary works (i.e., citations from within the journal's co-citation network are weighted higher than those from journals outside of the co-citation network)
  • Not all citations are "good" citations (e.g., Article A may cite Article B to reject Article B's findings)
  • A "good" SJR differs by field
  • Includes some self-citation (but has a percentage cap, which helps mediate gamification, to an extent)
  • Does not take into account social impact (e.g., an article trending on Twitter)
  • Complex, and, due to large dataset, difficult to replicate
  • Like all impact metrics, vulnerable to gamification (e.g., journal citation cartels)

Wilson, P., & Adamus, T. Impact Metrics: SCImago Journal Rank (SJR). Ebling Library, University of Wisconsin-Madison, Health Sciences.

What is the Eigenfactor?

The Eigenfactor came out of the Eigenfactor Project, a bibliometric research project launched in 2008 by Professor Carl Bergstrom and his laboratory at the University of Washington.

Eigenfactor Score:

  • Counts citations to journals in both the sciences and social sciences.
  • Eliminates self-citations. Every reference from one article in a journal to another article from the same journal is discounted.
  • Weights each reference according to a stochastic measure of the amount of time researchers spend reading the journal.

Eigenfactor scores are scaled so that the sum of the Eigenfactor scores of all journals listed in Thomson Reuters’ Journal Citation Reports (JCR) is 100. 

The Eigenfactor uses Thomson Reuters Web of Science citation data.


Eigenfactor - Calculation

The Eigenfactor Score is based on the number of citations received by articles in a journal, weighted by the rank of the journals in which the citations appear, calculated over the previous 5 years.
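Conceptually, this weighting is an eigenvector computation on the journal citation network, similar to PageRank. The self-contained Python sketch below uses a toy three-journal network with invented citation counts (not the official algorithm) to show how iterating the weighted citation flow yields scores that sum to 100.

```python
# cites[j][i] = citations from journal j to journal i; self-citations
# are zero, since the Eigenfactor method discards them.
cites = [
    [0, 3, 1],
    [4, 0, 2],
    [2, 1, 0],
]
n = len(cites)

# Normalize each citing journal so it distributes one unit of influence.
out_totals = [sum(row) for row in cites]
M = [[cites[j][i] / out_totals[j] for i in range(n)] for j in range(n)]

# Power iteration: influence flows along citations and is weighted by
# the rank of the citing journal, the core idea of the Eigenfactor Score.
v = [1 / n] * n
for _ in range(200):
    v = [sum(M[j][i] * v[j] for j in range(n)) for i in range(n)]

scores = [100 * x / sum(v) for x in v]  # scaled so all journals sum to 100
print([round(s, 1) for s in scores])
```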


Eigenfactor - Source

Eigenfactor

  • Source data is from Web of Science, with a 6-month delay
  • Article Influence® Score (AI): a measure of the average influence of each of a journal's articles over the first five years after publication.
  • Eigenfactor and AI scores are available in Journal Citation Reports, calculated over the previous 5 years
  • Ranked lists of journals in the same Field Categories as Journal Citation Reports (e.g. History, Forestry, Pathology), based on Eigenfactor® Score. 
  • Methodology
  • About Eigenfactor

Limitations
  • Eigenfactors and total citations to journals correlate very strongly (R=0.95), meaning that journals which receive a lot of citations tend to also be those that receive a high Eigenfactor. (Davis, P. (2008) Eigenfactor - The Scholarly Kitchen, The Scholarly Kitchen)
  • The concepts of popularity (as measured by total citation counts) and prestige (as measured by a weighting mechanism) appear to provide very similar information. (Davis, P. (2008) Eigenfactor - The Scholarly Kitchen, The Scholarly Kitchen)

What is SNIP?

A key indicator offered by CWTS Journal Indicators is the SNIP indicator, where SNIP stands for source normalized impact per paper. This indicator measures the average citation impact of the publications of a journal. SNIP corrects for differences in citation practices between scientific fields, allowing for more accurate between-field comparisons of citation impact. CWTS Journal Indicators also provides stability intervals that indicate the reliability of the SNIP value of a journal. (CWTS Methodology)


SNIP - Calculation

The source normalized impact per publication (SNIP) is calculated as a journal's raw impact per publication (IPP) divided by the citation potential of the journal's subject field.

The difference with IPP is that in the case of SNIP, citations are normalized in order to correct for differences in citation practices between scientific fields. Essentially, the longer the reference list of a citing publication, the lower the value of a citation originating from that publication.

A detailed explanation is offered in the paper, Some Modifications to the SNIP Journal Impact Indicator.
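As a rough illustration only (the official CWTS procedure is more involved), the field normalization can be sketched in Python by estimating a field's citation potential from the reference-list lengths of the papers that cite the journal. The function name and figures are hypothetical.

```python
def snip_like(citations, publications, citing_ref_counts):
    """Illustrative SNIP-style value: raw impact per publication divided
    by an estimated citation potential (the mean reference-list length of
    the publications citing the journal). Not the official CWTS algorithm."""
    ipp = citations / publications
    citation_potential = sum(citing_ref_counts) / len(citing_ref_counts)
    return ipp / citation_potential

# Two journals with identical raw impact (IPP = 3.0): citations from a
# short-reference field (e.g., mathematics) count more than citations
# from a long-reference field (e.g., biomedicine).
print(snip_like(300, 100, [10] * 50))  # field averaging 10 references
print(snip_like(300, 100, [40] * 50))  # field averaging 40 references
```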


SNIP - Source

Scopus / Science Direct

  • You can access SciVal with the same username and password you use for other Elsevier products (such as ScienceDirect or Scopus). If you do not have access to other Elsevier products, you will need to register first. To learn how to use SciVal, review the SciVal Support Center FAQs

Limitations
  • SNIP does not distinguish between ordinary research articles and review articles. Review articles tend to be cited substantially more frequently than ordinary research articles, so journals that publish many review articles tend to have higher SNIP values than journals that publish mainly ordinary research articles (CWTS Methodology).
  • Some journals may try to increase their citation impact by increasing their number of self citations, sometimes in questionable ways (e.g., coercive citing). SNIP does not correct for this. However, the percentage of self citations of a journal is reported as a separate indicator (CWTS Methodology).
  • SNIP is less reliable for small journals with only a limited number of publications than for larger journals  (CWTS Methodology).
  • SNIP is sensitive to ‘outliers’, that is, these indicators may sometimes be strongly influenced by one or a few very highly cited publications. It is therefore important to take into consideration not only the value of the indicator but also the width of the stability interval  (CWTS Methodology).
  • Additional limitations of SNIP are discussed in Mingers, J. (2014). Problems with the SNIP indicator. Journal of Informetrics, 8(4), 890–894. https://doi.org/10.1016/j.joi.2014.09.004