Many research papers published across various academic journals were likely written, at least in part, by artificial intelligence, according to a new report.
An investigation by 404 Media, a tech journalism site, found that AI-generated papers are being published in academic journals, raising questions about the impact of AI-powered tools on academia as a whole.
The report cited Google Scholar, a journal database: searching it for phrases such as “As of my last knowledge update” and “I don’t have access to real-time data,” two stock phrases commonly found in AI chatbot responses, returns more than 100 studies.
404 Media reported that AI-generated papers had been passed off as original work not only in low-quality academic journals but also in some reputable ones.
This highlighted not only the growing prevalence of AI but also longstanding issues with quality control, acceptance standards, and pay-to-play business models in academic publishing, the report noted.
At least one paper was so blatantly copy-pasted from a chatbot that the authors who submitted it to the respected chemistry journal Surfaces and Interfaces didn’t even remove the chatbot’s introduction, and the paper survived peer review into publication.
The report revealed that Bellingcat researcher Kolina Koltai posted a screenshot of the paper titled “The three-dimensional porous mesh structure of Cu-based metal-organic-framework – aramid cellulose separator enhances the electrochemical performance of lithium metal anode batteries.”
The paper’s introduction opened with the phrase “Certainly, here is a possible introduction for your topic,” the kind of polite preamble a chatbot produces in response to a prompt.
When Futurism ran a similar search on Google Scholar, most of the results either listed the OpenAI chatbot as a “co-author” or examined its limitations as a research and writing tool. However, Futurism also found numerous instances in which academics appeared to have lazily used AI to generate text.
The existence of these papers, particularly in reputable journals, suggests that AI has permeated academia to a greater extent than previously realized.
The report suggested that until journals begin enforcing stricter standards, AI-generated content will continue to undermine the integrity of what are meant to be esteemed intellectual institutions.
A 2023 survey of scientists conducted by Nature found that 1,600 respondents, or around 30 per cent of those polled, admitted to using AI tools to help them write manuscripts.
And while phrases like “As an AI algorithm” are dead giveaways of a sentence’s large language model origin, many subtler uses of the technology are harder to root out.
Detection models used to identify AI-generated text have proven frustratingly inadequate.