Rising Scholars

How Artificial Intelligence is Sneaking into Peer Review

Created by Somefun Dolapo Oluwaseyi | Jun. 30, 2025 | Opinion, Peer review, Artificial Intelligence


My co-authors and I recently submitted a paper, waited months, and finally got the reviewer comments back. At first glance, everything seemed normal to me, but one of the co-authors was quick to spot that something felt off.

The critiques were oddly vague, the phrasing just a little too smooth, and some of the suggestions felt like they were pulled from a generic "how to review a paper" template. Worst of all? A few points completely missed the nuance of our work, as if the reviewer had skimmed the abstract and let an AI fill in the blanks.

Yes, peer review is a key part of academic publishing, but it can be demanding. As an academic, I've been on both sides of this process. Reviewing takes hours, it's usually unpaid, and the sheer volume of submissions these days is overwhelming. So, yeah, if I were drowning in manuscripts, I might also be tempted to toss a draft into ChatGPT for a little help.

But here’s what worries me: What happens when AI-generated feedback becomes the norm? Will we end up with reviews that sound authoritative but lack real insight? Will the human expertise that makes peer review valuable get diluted by bot-generated clichés?

I’m not against using AI to assist in reviewing, but if we’re not careful, we might end up with a system where nobody’s really reading anymore. And if that happens, what’s the point of peer review at all?

 

Does AI Improve the Peer Review Process?

Some AI tools have the potential to improve the peer review process by enhancing efficiency and accuracy:

  • Language polisher: helping non-native English speakers, or any reviewer seeking clarity, refine their comments so that feedback is articulate and constructive.
  • Summarizer and organizer: distilling lengthy manuscripts into key points so reviewers can quickly grasp core arguments and structure their evaluations.
  • Deeper analysis: highlighting potential methodological flaws or weaknesses that reviewers may not have initially considered.
  • Draft feedback advisor: suggesting ways to phrase critiques more constructively.
  • Consistency checker: scanning a review to ensure criticisms and recommendations remain logically aligned throughout.

By supporting these tasks, AI tools look like productivity boosters, which makes them attractive to peer reviewers.

 

Significant Drawbacks

While AI offers potential benefits for peer review, integrating it into the process introduces serious, often underestimated risks that threaten the very integrity it seeks to uphold:

  • Confidentiality catastrophe: submitting confidential, unpublished manuscripts to third-party AI platforms is a serious violation of intellectual property rights and trust. The text may be used or leaked through AI training data without approval, which could jeopardize an author's publication or patent.
  • Hallucination hazard: AI can generate plausible-sounding but entirely fictitious citations, data, or flaws. A reviewer relying on these could unjustly reject work based on non-existent errors, with devastating consequences.
  • Illusion of objectivity: AI outputs can appear extremely confident, concealing their inherent biases and limitations and possibly persuading reviewers to disregard their own expert judgment.
  • Nuance nullifier: AI struggles with the contextual subtlety, methodological peculiarities, and experience-based interpretation that characterize deep scientific critique. As a result, it may overlook important flaws or mistakenly flag genuine innovation as error.
  • Excessive dependence: AI-generated summaries or critiques can encourage superficial evaluation in place of thorough reading.
  • Homogenized criticism: widespread AI use risks producing formulaic, standardized feedback that suppresses the variety of human viewpoints essential to rigorous peer review.

These risks call for careful consideration and clear precautions.

 

Responsible Integration, Not Replacement

AI isn't inherently evil, and its potential to support peer review shouldn't be dismissed outright. However, its current integration poses significant challenges. So, how do we navigate them?

  • Journal Policies Must Evolve: Journals need clear, enforceable policies. Blanket bans are likely unenforceable, but strict guidelines are essential. These must explicitly prohibit uploading manuscripts to third-party AI platforms, mandate disclosure of any AI assistance used (even for language polishing), and emphasize that intellectual judgment must remain solely the reviewer's.
  • Reviewer Transparency & Training: Reviewers should be transparent with editors about how they use AI tools (e.g., "Used AI for language polishing of my comments"). Training programs should highlight the risks (hallucinations, bias, confidentiality) and promote ethical use.
  • AI Designed for Secure Peer Review: Developing secure, locally run AI tools specifically designed for peer review, operating in strict journal-controlled environments without external data transmission, could mitigate confidentiality risks for tasks like grammar checks or internal consistency reviews. This is complex but necessary.
  • Human Judgment is Non-Negotiable: The core message must be unwavering: AI should be a tool to augment the human reviewer, never to replace their critical thinking, expertise, and ethical responsibility. The final assessment and substantive critique must always originate from the human expert.

 

Conclusion: The Soul of Review Remains Human

While the pressure on peer review makes AI's help tempting, its use must be extremely cautious and limited. AI might offer minor benefits for basic tasks like language polishing if used transparently. However, the current, often hidden, use of powerful AI for substantive critique, summarization, and idea generation is deeply problematic. Peer review is fundamentally a human process relying on expert judgment and ethical commitment; AI cannot replace its core essence. We must prioritize security and ethics, ensuring AI remains a tightly controlled tool under constant human oversight, to protect the integrity of scientific publishing.

 

Thumbnail and in-text image: Philip Oroni on Unsplash 
