The ICMJE recommendations on AI: Advice for authors and peer reviewers
Unless you’ve been living under a rock for the past couple of years, you know by now that generative AI, which includes large language models (LLMs) like ChatGPT, is being widely used across academia. These tools have the potential to make many aspects of research easier and faster, from recruiting participants (Lu et al., 2024) to drafting the final journal article.
But generative AI is still in its infancy, comparatively speaking, and there’s a lot we don’t know yet about how much we can safely rely on it. That’s why, as academics lap up new technologies, journals and publishers are caught up in an intense debate about how much AI use is permissible and how they can safeguard the scientific community, as well as the general public, from malicious or incompetent AI use.
Among the various bodies in scholarly publishing, the International Committee of Medical Journal Editors (ICMJE) stands out for the role it plays in publication ethics. The ICMJE is a group of general medical journal editors and representatives of related organizations who work together to enhance the quality of medical science and its reporting. The ICMJE guidelines, formally known as the Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals, are followed by numerous medical journals, ranging from leaders in the field to new specialty journals.
It’s little wonder, then, that the ICMJE has been among the first to put together guidelines and principles on the use of AI in conducting and reporting research. Let’s dive into the ICMJE’s key recommendations:
1. AI use should be disclosed
The 2023 update to the ICMJE guidelines requires authors to disclose the use of AI in the cover letter, acknowledgements section, or methods section, as appropriate. More specifically, any use of AI for writing, editing, or proofreading the paper should be described in the acknowledgements section, while any use of AI to collect and analyze data or to create figures should be reported in the methods section.
Essentially, while the ICMJE doesn’t ban the use of AI technologies, it requires complete and transparent reporting of how these technologies have been used in the study and related research manuscript.
2. AI is not an author
No AI tool can fulfill one of the ICMJE’s basic criteria of authorship: taking responsibility for the accuracy, integrity, and originality of the paper’s contents. Therefore, no AI tool, such as ChatGPT, can be listed as an author, even if it has been used extensively in the study or paper. The ICMJE also cautions that AI-generated output could be incorrect, incomplete, or biased, and warns authors to carefully review all AI output, no matter how authoritative it sounds. It’s also necessary to check any AI-generated text or images for plagiarism.
Along similar lines, AI tools cannot be listed as sources in the reference list, because AI is not considered an authoritative source of scientific information.
3. AI should be used cautiously for manuscript evaluation
The January 2024 update to the ICMJE guidelines states that “Editors should be aware that using AI technology in the processing of manuscripts may violate confidentiality.” It’s therefore essential that peer reviewers who want to use AI tools to facilitate their review obtain permission from the journal editor beforehand. Uploading an entire manuscript to an unauthorized platform or AI software could be considered a breach of confidentiality.
Moreover, just as authors have to be careful about the accuracy and objectivity of AI-generated text, peer reviewers too need to be aware that AI-generated peer review comments could be inaccurate, flawed, or biased.
Bottom line
AI technologies can be tremendously useful in the scientific process, but they’re not foolproof. When a journal published an AI-generated image of a rat with an absurdly large penis and testicles, it attracted ridicule from scientists and non-scientists alike. Use AI cautiously if you want to avoid becoming the laughingstock of the scientific community.
Peer reviewers, too, are waking up to the dangers of unrestricted AI use: a recent Times Higher Education global survey shows that peer reviewers are deeply distrustful of ChatGPT and similar tools. As far back as June 2023, the US NIH banned the use of generative AI tools in peer review.
So, whether you’re using AI to draft your own manuscript or review another’s work, remember that no tool can be a substitute for your own knowledge, experience, and expertise.