The Era of AI Writing and Reviewing Papers: Is Scientific Publishing "Evolving" or "Collapsing"? How Will "Trust in Science" Change?

1. AI Penetrates the "Entrance" of Research

The topic of generative AI often focuses on the research itself (hypothesis generation and experimental design). However, the impact of AI infiltrating the "entrance" that supports the trust in science—namely, the processes of paper writing, peer review, editing, and publishing—will gradually become more significant.


An article from Undark depicts this change from both the "writing" and "peer review" perspectives. Mohamad Hosseini, an AI ethics researcher who has handled submitted manuscripts as an editor, has seen a fair number that are clearly AI-generated and read unnaturally. Overuse of dashes, leaps in logic, and disjointed writing are among the "telltale signs" known in the field. But as AI quality continues to improve, this intuition may eventually become obsolete; that sense of crisis is the article's starting point.


2. Increasing "Capable Writers": AI as Writing Support

The first area where generative AI proves its value is in "writing tasks" such as summarizing literature, drafting, translating, and refining English. For non-English speakers, the language barrier can be a disadvantage in disseminating research findings. The article introduces how AI can help bridge this gap, enabling more researchers to compete on the international stage.


In fact, researchers' use of AI is already visible in the statistics. A Nature survey of approximately 5,000 researchers worldwide found that a certain percentage use AI for drafting, translating, summarizing, and editing. Another large-scale analysis of biomedical abstracts estimates that by 2024 a substantial share may have passed through a language model, based on the rising frequency of phrases suspected to be AI-derived.


The important point here is not so much "AI writing papers" itself, but rather that the reduction in "writing task costs" leads to an increase in research output. As the speed of writing increases, submissions will rise, and the number of manuscripts reaching editorial offices will become overwhelming. Balancing quality and quantity will become increasingly challenging.


3. Hallucinations, Plagiarism, Fabrication: Amplified Risks Behind Convenience

AI's weaknesses are well known: hallucinations (plausible falsehoods), misattribution of citations, and presenting non-existent evidence. The article further emphasizes serious issues unique to academic publishing—plagiarism and the proliferation of "paper mills." AI can rapidly generate well-structured text from scratch, lowering the barrier to misconduct. While data fabrication existed before AI, AI provides "mass production" and "speed."


What is frightening is that misconduct does not necessarily arise from malice. Researchers may lean on AI as a convenient crutch and inadvertently mix in misinformation or plagiarism. The more natural the output appears, the more readers and writers alike are swayed by its "plausibility."


4. The Temptation to Use AI in Peer Review: Labor Shortages and Expectations of Fairness

The next central topic is peer review. The shortage of peer reviewers is said to have worsened after the pandemic, with editors facing declined or unanswered review requests. This has fueled the expectation that AI might broaden the reviewer pool. The article quotes Roy Perlis, editor-in-chief of JAMA+AI, on AI's potential to lighten researchers' burdens and increase participation in review.


Furthermore, there is the illusion that AI peer review is "neutral." If it can distance itself from biases towards specific schools of thought, networks, or hypotheses, it might actually enhance fairness. However, the article quickly cautions that AI, trained on past publication data, may reproduce historical biases (favoring renowned researchers, prestigious institutions, and central countries). In fact, multiple studies have shown a tendency for AI to favor high-status institutions and prominent authors.


What emerges here is the fact that AI is not "neutral" but rather "an average of the past." If past publication culture was biased, AI peer review risks "automating" that bias.


5. Establishing Rules: The Reality of Permission, Prohibition, and Disclosure

So how are publishers and academic journals responding? According to the article, many major journals provide guidance on the use of generative AI, prohibiting uses that could lead to research misconduct, while conditionally allowing language editing and analytical assistance. PLOS requires the disclosure of the tool name used, the method of use, the evaluation of output validity, and the scope of impact.


Regarding peer review, confidentiality becomes the main issue. Inputting unpublished manuscripts into external AI services raises concerns about information leaks. Therefore, some major publishers ask reviewers not to upload unpublished manuscripts to generative AI, as mentioned in the article.


Additionally, there is a growing trend of not recognizing AI as a co-author and of disallowing AI-generated or AI-modified images. Ultimately, unless responsibility rests squarely with humans, the verifiability of science is undermined. The article strongly asserts the principle that "ultimately, it is the human authors who are responsible for every word and number in the paper."


6. Detection is Not Omnipotent: The Game of Cat and Mouse Begins

The idea of simply detecting AI-generated content is appealing, but the article is sober on this point. Detection tools have limitations, and generators and detectors evolve in tandem. As stylistic quirks disappear, logic becomes coherent, and citations look "plausible," distinguishing human from machine becomes ever harder. Moreover, over-reliance on detection invites another kind of unfairness, such as non-English speakers falling under suspicion merely for using AI for proofreading.


In other words, academic publishing can neither fully prohibit nor fully embrace the use of AI. The practical path is an operational mix of (1) transparency (disclosure), (2) confidentiality (input restrictions), (3) human oversight (clear assignment of responsibility), and (4) anti-fraud measures (strengthened editorial and review processes).


7. Reactions on Social Media: Optimism and Pessimism Accelerate Simultaneously

Reactions on social media, not only to this article but to "AI x academic publishing" in general, are noticeably polarized.


The pessimistic viewpoint is simple: "Publishing is not ready," "Misconduct will increase," "Trust will be broken." In communities monitoring retractions and research misconduct, there is strong caution against an "AI-driven flood of junk papers," with repeated tones that the publishing infrastructure cannot keep up.


The optimistic viewpoint holds that new workflows can be created and that peer review and publishing have room for redesign. For example, arXiv stakeholders have discussed how scientific publishing could incorporate new tools and methods in the generative-AI era, with the discussion proceeding on the premise that change is coming.


And as a pragmatic stance, much of the discussion centers on what is acceptable and what should be disclosed. Posts sharing Nature's survey reveal differences in temperature within the research community and conditional debates over permissible AI use, indicating demand for case-by-case operation in the gray zone rather than black-and-white rules.


Summarizing the atmosphere on social media:

  • Convenience is not denied (especially for writing, summarizing, and translating)

  • But there is strong anxiety about an increase in misconduct, fabrication, and junk papers

  • The solution sought is "rules + supervision + transparency" rather than prohibition

The very issues depicted in the Undark article are being repeated in different words.

8. Future Focus: "Who Bears Responsibility for What?"

That AI will transform scientific publishing is, at this point, a foregone conclusion. The question is how it will change.


If AI accelerates the mass production of papers and misconduct, and trivializes peer review, the trust in science will erode. Conversely, if AI narrows language gaps, supplements the shortage of reviewers, and streamlines the editorial process, science will become more open.


The turning point is not technology but governance: disclosure, confidentiality, clear assignment of responsibility, and human oversight. As the article concludes, AI is pressing us to reassess every stage of the publishing process. The debate should not be over whether to introduce AI, but over whether the system is designed so that humans can bear responsibility.



Source URL