Peer review and research integrity in the age of artificial intelligence 



Advancements in artificial intelligence (AI) technologies have made life easier for many, including those in the research community. Researchers are increasingly relying on the power of AI in different aspects of their work, from planning experiments to academic writing. AI can also assist at several stages of peer review, from selecting suitable experts for review to checking the statistical power of data. As AI slowly becomes ubiquitous, publishers and journals are honing their guidelines and policies around the use of AI in various scholarly publishing workflows and processes. While there are different schools of thought on this topic, the use of AI is gaining traction within the publishing ecosystem, and its potential role in peer review is increasingly in the spotlight.

The growing strain on publishing workflows and peer review 

Recent years have seen exponential growth in the number of published papers. According to a study, about 1.92 million articles were published in 2016; by 2022, this number had risen to 2.82 million. Such a high volume of scientific papers strains the system, including journal editorial staff as well as reviewers, who are tasked with screening and assessing them. This is further evidenced by a report estimating that reviewers spend more than 15 million hours every year re-reviewing manuscripts that were previously rejected and resubmitted to another journal. 

To increase the efficiency of peer review and reduce the burden on reviewers, some journals have started using automated tools. Such methods can help the journal editors triage papers based on quality or suitability for the journal, decreasing the load on reviewers.  

Exploring the potential of AI in the manuscript review process 

1. Initial screening 

Several journals employ automated tools for initial manuscript screening: for instance, to detect plagiarism, check compliance with formatting guidelines, flag grammatical and language-related errors, and even match the manuscript with the most suitable reviewers. Implementing AI-assisted processes in the pre-peer review stages can help detect, early on, manuscripts that may not meet the journal's requirements, and consequently reduce the load on the peer review system. 

IOP Publishing offers authors access to Paperpal Preflight during article submission, which helps check for common manuscript errors, allowing authors to identify and fix these issues before submission. 

2. Assessing manuscript quality 

Experts believe that AI-based tools can flag low-quality studies, opening the door to partially automated peer review; some journals also employ such tools to summarize manuscript content. 

For instance, Statcheck and StatReviewer are software tools that evaluate the statistics reported in a manuscript. Journals that use these tools include Psychological Science, the Canadian Journal of Human Sexuality, and PsychOpen.  
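The core idea behind tools like Statcheck is simple: recompute the p-value from the test statistic reported in the manuscript and flag any mismatch with the p-value the authors report. The snippet below is a minimal sketch of that principle for a two-tailed z-test, using only the Python standard library; it is an illustration under simplified assumptions, not Statcheck's actual implementation (which parses APA-style statistics for many test types).

```python
import math

def recompute_p(z: float) -> float:
    """Two-tailed p-value for a z statistic, via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

def is_consistent(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Flag a reported p-value that disagrees with the one recomputed from z."""
    return abs(recompute_p(z) - reported_p) <= tol

# z = 1.96 corresponds to p of roughly 0.05 (two-tailed), so a reported
# p of 0.05 is consistent, while a reported p of 0.20 would be flagged.
```

A screening pipeline would run such checks over every statistic extracted from a manuscript and surface only the inconsistencies to a human reviewer.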

3. Tackling bias in peer review 

Peer review is also under scrutiny for reflecting the inherent biases of academia. Reviewer bias, whether implicit or explicit, can stem from various factors such as an author's nationality, language, affiliation, or prior work. Bias can also affect the perceived quality of a manuscript's language, influenced by geographical factors; all of these can hinder objective evaluation of a manuscript. 

Funding bodies in China have reportedly employed an automated tool to review grants in order to reduce reviewer bias. 

4. Flagging image manipulations 

Jana Christopher, an image integrity analyst, has made some striking observations about the state of image integrity in academic publishing. In her extensive work screening accepted manuscripts prior to publication across multiple journals, she found that the percentage of manuscripts flagged for image-related issues fluctuates between 20% and 35%. Alarmingly, the rate at which acceptances of these manuscripts are ultimately rescinded can reach as high as 8%. These figures underscore ongoing concerns about the integrity and accuracy of images in scholarly work.  

Considering the vast volume of submissions journals handle and the complexity of the numerous figures in each paper, meticulously verifying every image and identifying all errors is a challenging task. Responsible use of specialized image integrity tools can help enhance the review process and maintain image integrity. The Science family of journals has officially announced the implementation of AI technology to identify manipulated images in their publications. 
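One common building block in automated image screening is a perceptual hash, which lets a tool flag near-duplicate figures (a frequent form of image reuse) without requiring pixel-exact matches. The sketch below implements a simple average hash over grayscale pixel grids; the tools journals actually deploy use far more sophisticated forensics, so treat this purely as an illustration of the matching principle.

```python
def average_hash(pixels):
    """Hash a 2D grid of grayscale values: 1 if a pixel is above the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes; a small distance
    suggests the images are near-duplicates worth manual inspection."""
    return sum(a != b for a, b in zip(h1, h2))
```

Because the hash compares each pixel only to the image's own mean, small brightness tweaks barely change it, while structurally different figures produce distant hashes.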

Role of AI in peer review to enhance research integrity 

Adhering to ethical principles and maintaining responsible standards is non-negotiable in scientific research. Scientists are required to be honest, accurate, and objective while conducting and reporting their work. Maintaining research integrity ensures that scientific findings are reproducible and trustworthy and benefit society. 

In an age where publishing peer-reviewed research is the basis for promotions, grants, and career advancement, there have been instances of researchers resorting to unethical practices to publish their work. These include plagiarism, data fabrication and falsification, and undisclosed conflicts of interest, including reviewers' personal or financial interests that may bias their decisions. Such fraudulent practices not only mislead future research based on the reported results, but also diminish the public's trust in science and the scientific process. 

Often, reviewers are overwhelmed by the sheer volume of papers they receive for assessment and are rarely compensated for the time spent reviewing these submissions. Given this, researchers suggest that AI-based tools can potentially help enhance scientific integrity. Some tools help identify potential plagiarism in the text, while machine-learning algorithms can also help reviewers and journal editors identify inconsistencies in data, and even detect image manipulation or anomalies in statistics. 
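Text-overlap detection, the core of most plagiarism checkers, often starts from something as simple as comparing sets of word n-grams. The snippet below sketches that idea with Jaccard similarity over word trigrams; production tools combine this with vast reference corpora and more robust matching (paraphrase detection, citation-aware exclusions), so this is only meant to illustrate the principle.

```python
def word_shingles(text: str, n: int = 3) -> set:
    """Set of word n-grams ('shingles') from a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two texts' shingle sets:
    1.0 = identical shingles, 0.0 = no overlap."""
    sa, sb = word_shingles(a), word_shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
```

A checker would compute this score between a submission and each candidate source, flagging pairs above a threshold for an editor to examine.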

Collaborative publishing systems can enhance transparency, helping uphold research integrity. While policies regarding the use of AI in the peer review process are still evolving, journals and funding organizations have adopted differing guidelines for incorporating AI in peer review. Funding agencies such as the National Institutes of Health and the Australian Research Council have banned the use of AI in the peer review process. Sage allows editors to use AI to identify suitable reviewers but discourages its use anywhere else in the peer review process.  

 

Ethical considerations in using AI for peer review 

Despite support for the use of AI-based tools in peer review, the scholarly community remains divided on the topic. The major concern is that implementing AI in this process raises a unique set of ethical challenges. For instance, it is still unclear to what extent AI systems can avoid human biases. In one notable example, female researchers fared worse in grant approvals after an online system was introduced to manage applications. This prompted calls for the grant-reviewing body to undo the revamp and return to face-to-face grant approval meetings. 

Experts have also questioned the reliability of some of the AI-based tools used in manuscript assessment. Journal guidelines acknowledge that using AI technology in the processing of manuscripts may violate confidentiality when manuscripts are uploaded into AI tools. Furthermore, the ICMJE guidelines explicitly state that reviewers must seek permission from the editors if they intend to use AI for peer review. 

The future of AI in peer review 

Emerging AI technologies have the potential to reshape the landscape of peer review. However, the increasing use of AI-based tools for peer review also underscores the need for balance between innovation and responsibility, which requires ethical oversight. At present, no evidence suggests that AI alone is sufficient for peer review, but some believe this may change as these systems continue to evolve. 

Some experts are of the opinion that, going forward, AI-based tools in peer review should be used as a supplement to human judgment, not a replacement. There have been calls for researchers and reviewers to undergo continuous training programs to ensure effective and ethical use of AI. Additionally, clear ethical frameworks need to be put in place for AI to be used responsibly in scientific publishing and peer review. Overall, the scholarly publishing community's ability to leverage the powers of AI ethically, in combination with the expertise of human reviewers, has the potential to create a rigorous and reliable scientific publishing process. 


Published on: Sep 25, 2024

She's a biologist turned freelance science journalist from India, with a passion for communicating science where it intersects with society.
