Journal Citation Reports 2024: some reflections



For more than 50 years, Journal Citation Reports (JCR) has provided a measure of the quality or standing of academic journals. That measure, the Journal Impact Factor (JIF), reflects the extent to which papers published in a given journal are referred to, or cited, in other papers, including papers published in that same journal. Put simply, it is how often, on average, papers published in a given journal are cited. More precisely, the Journal Impact Factor of Journal A for a given year is the ratio of (1) the number of citations received in that year by the papers Journal A published in the two preceding years to (2) the number of papers Journal A published in those two years.
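To make the arithmetic concrete, here is a minimal sketch in Python; the journal, the figures, and the variable names are all invented for illustration and are not drawn from JCR.

```python
# Hypothetical 2023 Impact Factor calculation for an invented "Journal A".
# All figures below are made up for the example.

citations_2023_to_2021_papers = 150  # citations in 2023 to papers published in 2021
citations_2023_to_2022_papers = 210  # citations in 2023 to papers published in 2022
papers_published_2021 = 80           # papers Journal A published in 2021
papers_published_2022 = 100          # papers Journal A published in 2022

jif_2023 = (citations_2023_to_2021_papers + citations_2023_to_2022_papers) / (
    papers_published_2021 + papers_published_2022
)

print(f"2023 Journal Impact Factor: {jif_2023:.1f}")  # 360 / 180 = 2.0
```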

In 2024, Clarivate Analytics, the publisher of JCR, introduced a major change that makes it simpler to assess the standing of a journal in its own field. Earlier, JCR featured several editions, each covering a major division of knowledge, namely the physical and biological sciences, the social sciences, the arts and humanities, and technology, the corresponding editions being (1) Science Citation Index Expanded, (2) Social Science Citation Index, (3) Arts and Humanities Citation Index, and (4) Emerging Sources Citation Index. Each edition then had its own categories. The major change in 2024 is that the categories – 254 in all – have been unified across the editions. Simply put, this makes it easier to see a journal’s standing among all the journals in the same category: for example, psychiatry is a category, and all the journals in that field are ranked by their Impact Factor. Such a listing helps scholars to choose among a number of journals in a particular category.
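As a rough illustration of what such a within-category listing looks like, here is a sketch that ranks a handful of invented journals by equally invented Impact Factors; none of the names or values comes from JCR.

```python
# Rank hypothetical journals in one category by Impact Factor,
# highest first, as a JCR category listing does.
# All journal names and JIF values are invented.

journals = {
    "Fictional Psychiatric Review": 9.8,
    "Journal of Hypothetical Psychiatry": 6.2,
    "Annals of Imaginary Psychiatry": 3.4,
    "Psychiatry Example Reports": 1.1,
}

ranked = sorted(journals.items(), key=lambda item: item[1], reverse=True)

for rank, (name, jif) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {jif:.1f}")
```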

The previous year had already brought one more simplification: since 2023, the Journal Impact Factor has been reported to one decimal place instead of three. The change was made “to encourage users to consider other indicators and descriptive data in the JCR when comparing journals”.
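One practical consequence, illustrated in this small sketch with invented values, is that journals whose three-decimal JIFs differ can now share the same reported figure, which is presumably part of the nudge towards other indicators.

```python
# Invented three-decimal JIFs for two hypothetical journals.
jif_precise = {"Journal X": 2.437, "Journal Y": 2.361}

# Reported to one decimal place, as JCR has done since 2023, the two tie.
for name, value in jif_precise.items():
    print(f"{name}: {value:.1f}")  # both print 2.4
```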

Although the ‘unified categories’ approach mentioned above makes it easier to assess the standing of a journal, one important question remains unanswered: Is the Journal Impact Factor a valid measure?

Is Journal Impact Factor a valid measure of journal quality?

Reducing the whole matter of measuring a journal’s quality to a single number may be going too far, but it is worth remembering that having that number at all – irrespective of its magnitude – is itself a mark of quality, because Clarivate Analytics follows a rigorous selection process. For a journal to qualify, it must meet 24 criteria, ranging from the very basic, such as whether the journal has an ISSN (international standard serial number), a unique title, and a clearly identified publisher with a valid contact address, to the more advanced, such as scholarly content and clarity of language, the composition of the journal’s editorial board and the standing of its members, and the citations gathered by papers published in that journal. These 24 criteria, which can be subjective to some extent, are supplemented with four metrics.

Shortcomings of Journal Impact Factor

Reducing any complex phenomenon to a single numerical metric will always have its pitfalls. “When a measure becomes a target, it ceases to be a good measure” is the pithy rendering – often referred to as Goodhart’s Law – by Strathern1 of what Goodhart wrote2: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” In the case of the JIF, its shortcomings were pointed out3 early on by Eugene Garfield himself, who proposed and developed the very concept of using citations as a measure of quality. The metric is also open to manipulation, or gaming: Falagas and Alexiou listed ten ways4 in which JIFs can be manipulated, and Heathers and Grimes showed in a case study how the JIF of the British Journal of Sports Medicine rose steadily from about 1.0 in 2001 to 13.8 in 2021. Other shortcomings, listed below, are inherent in the metric itself because it operates in the sphere of academic publishing and is limited to journals; these need to be kept in mind before using the JIF as the sole criterion of merit.
• Reviews typically attract more citations than papers reporting original research; therefore, journals that publish only reviews have high impact factors.
• Papers describing methods are cited more often than papers that present original research that may have used those methods.
• At times, a paper is cited more often simply because it contains a major flaw or error.
• Multidisciplinary journals typically have higher impact factors than specialty journals.
• Journals in the sciences and engineering typically have higher JIFs than journals in the fine arts, humanities, and education.5

Conclusion

Journal Citation Reports and JIFs are useful tools, but, as with any tool, they need to be used with skill, discretion, and judgment6, and they can only be as good as their users. To my mind, DORA, the San Francisco Declaration on Research Assessment, still has the last word, and its first recommendation says it all: “Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”

References

1 Strathern M. 1997. ‘Improving ratings’: audit in the British University system. European Review 5: 305–321.

2 Goodhart C A E. 1984. Problems of monetary management: the UK experience, pp. 91–121 in Monetary Theory and Practice. London: Palgrave. 282 pp.

3 Garfield E. 1955. Citation indexes for science: a new dimension in documentation through association of ideas. Science 122: 108–111.

4 Falagas M E and Alexiou V G. 2008. The top-ten in journal impact factor manipulation. Archivum Immunologiae et Therapiae Experimentalis 56: 223–226.

5 Galbraith Q, Butterfield A C, and Cardon C. 2023. Judging journals: how impact factor and other metrics differ across disciplines. College & Research Libraries 84 (6): 26 pp. Available at <https://crl.acrl.org/index.php/crl/article/view/26097/34019>.

6 Joshi Y. 2014. Evaluating scientists scientifically. Current Science 107: 1363–1364.

Author

Yateendra Joshi

Communicator, Published Author, BELS-certified editor with Diplomate status.

