Academic Writing and AI: Do’s and Don’ts for Researchers


Researchers everywhere seem to be using ChatGPT and other AI tools. Almost every month, there’s a new debate around using AI for academic writing, especially research paper creation, with researchers, authors, journal editors, publishers, research society leaders, and other stakeholders voicing their views.

[To be precise: when we use “AI” in this article, we’re referring to generative AI, that is, tools that can create audio, visual, or textual output closely mimicking what human beings can produce.]

In a 2023 survey of UK-based researchers by Watermeyer et al., over 70% of the respondents felt that AI tools were changing the way they work, and over 80% anticipated using AI more in the future. In December 2023, David Maslach, associate professor at Florida State University, wrote a blogpost for Harvard Business Publishing claiming that generative AI can “supercharge” research, while cautioning that “While AI can excel in certain tasks, it still cannot replicate the passion and individuality that motivate educators; however, what it can do is help spark our genius.” In January 2024, Nature published a correspondence piece that pointedly asked, “Does generative AI help academics to do more or less?”

More recently, a May 2024 survey by Oxford University Press showed that while over two-thirds of its sample of 2,000 researchers had experienced the benefits of using AI, around 90% wanted guidance on how to use it.

Today, we’re going to look at using AI specifically for academic writing, and at how to responsibly leverage LLMs in creating a research paper. Below are the do’s and don’ts we recommend all researchers follow:

DO Confirm You’re Allowed to Use AI for Academic Writing

Many journals and universities have outlined policies around what kind of AI-generated content they’re okay with. The University of North Carolina at Chapel Hill, for example, warns students that “when your instructors authorize the use of generative AI tools, they will likely assume that these tools may help you think and write—not think or write for you.” They recommend using generative AI mainly for brainstorming, generating outlines, summarizing longer texts, and refining language. Journals like Science have detailed guidelines on both using and disclosing the use of AI, warning authors that “Editors may decline to move forward with manuscripts if AI is used inappropriately.”

Before you use any generative AI in your research paper or thesis, make sure that you’re actually allowed to.

DO Choose the Right AI Tool for Academic Writing

The one LLM to rule them all—at least for non-scientific audiences—is ChatGPT, which has repeatedly been shown to be unsuitable for academic writing. As early as July 2023, just months after ChatGPT’s launch, a Nature podcast discussed how the tool could help a researcher write a paper in just an hour, but with obvious quality implications. Kacena et al. (2024) found that ChatGPT 4.0 produced an article in which up to 70% of the references cited were fake.

What does this mean for you as a researcher? If you want to use AI to improve the quality of your academic writing, you must choose a tool that is actually suited for this kind of writing. Paperpal, for example, has been trained on millions of scholarly articles, rather than just anything on the internet, and gives users a free summary report of 30+ language and technical checks to evaluate if their manuscript is journal-ready.

Mainstream LLMs like ChatGPT can’t always meet the academic writing standards that journals and universities expect. Choose your AI tool carefully.

DO Carefully Review AI Output for Academic Writing

No AI tool can take responsibility for the accuracy and integrity of its output the way a human writer can. As a responsible scientist, you have to review any AI-generated text you use in your research paper and modify any AI-generated content you don’t agree with.

AI is great, but it can’t replace your critical thinking skills and domain knowledge. Treat any AI tool for academic writing like a handy assistant, not the lead author!

DON’T Assume That AI is Better at Academic Writing Than You

AI-powered tools are great when you’re a non-native English speaker or a novice researcher trying to get the tone and style right for an academic paper, or when you’re pressed for time and need help with routine academic communications, like emails to the journal editor. But AI is no substitute for your knowledge of the field or your critical thinking skills. And it may not even be up to date: GPT-3.5, the model originally behind the free version of ChatGPT, has a “knowledge cutoff date” of September 2021, meaning the tool is simply unaware of any events, including research papers published, after that cutoff.

DON’T Use AI-Generated Images in Academic Writing

Remember how Frontiers in Cell and Developmental Biology was ridiculed for an article containing an absurd AI-generated image of a rat? Even the mainstream media was abuzz. Besides the notorious rat image, the article contained other AI-generated images that scientists found equally silly: images that purported to illustrate scientific processes or pathways but looked more like donuts with sprinkles or pizzas with psychedelic toppings.

The flaws of AI-generated images are known not just to scientists: the general public, too, has been talking about the “racially diverse Nazi soldiers” or the bananas that apparently can’t exist singly. The Atlantic summarizes the problem: “Generative AI is not built to honestly mirror reality, no matter what its creators say.”

Publishers like Elsevier and Taylor & Francis have, as of July 2024, imposed an outright ban on AI-generated images (excepting a few cases of actual research on AI image generation). Here’s where it’s smarter NOT to use AI: create your images the old-fashioned way, and use a reputable academic figure preparation service if needed.

DON’T List AI as an Author in Academic Writing

Since May 2023, the International Committee of Medical Journal Editors has firmly forbidden listing ChatGPT or any other AI tool as an author of a scientific paper. The reason? A tool cannot fulfill one of the basic tenets of authorship: taking responsibility for the accuracy and integrity of the contents of a paper. This stance has been adopted by numerous journals, ranging from BMJ to the Asian Journal of Environment & Ecology.

Author

Marisha Fonseca

An editor at heart and perfectionist by disposition, providing solutions for journals, publishers, and universities in areas like alt-text writing and publication consultancy.
