Study Reveals AI Can Fabricate Peer Reviews and Evade Detection

By Bioengineer
July 31, 2025
in Cancer

In recent years, the rapid advancement of large language models (LLMs) such as ChatGPT and Claude has revolutionized natural language processing capabilities across numerous domains. However, their application within the academic peer review process has sparked growing concern over potential vulnerabilities that could undermine the integrity of scientific publishing. A new experimental study conducted by a team of researchers from Southern Medical University in China has rigorously assessed the risks associated with employing LLMs in peer review, revealing unsettling insights regarding the potential misuse and detection challenges of these powerful AI systems.

At the core of scientific progress lies the peer review process, a critical mechanism designed to evaluate the validity, rigor, and originality of research before dissemination. Traditionally, this process relies on the expertise and impartiality of human reviewers to ensure that only robust and credible findings enter the academic record. However, the infiltration of AI-generated reviews threatens this long-standing trust, particularly when the distinction between human and machine-produced critiques becomes blurred.

The researchers conducted their investigation by utilizing the AI model Claude to review twenty authentic cancer research manuscripts. Importantly, they leveraged the original preliminary manuscripts submitted to the journal eLife under its transparent peer review framework. This methodological choice avoided potential bias introduced by evaluating finalized, published versions that have already undergone editorial and reviewer scrutiny. By doing so, the study closely replicated realistic editorial conditions to assess the model’s performance and potential for misuse.

Instructed to perform various reviewer functions, the AI generated standard review reports, identified papers for rejection, and drafted citation requests, including some fabricated to manipulate citation metrics by steering authors toward unrelated literature. This comprehensive simulation allowed the researchers to probe both the constructive and malicious outputs possible when an LLM engages with scientific manuscripts.
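
To make the setup concrete, here is a minimal sketch in Python of how a manuscript might be submitted to Claude with reviewer-style instructions through Anthropic's publicly documented Messages API. The prompt wording, file name, and model alias are illustrative assumptions, not details taken from the study, whose exact prompts are described in the paper itself.

# Minimal sketch, assuming the Anthropic Python SDK (pip install anthropic)
# and an ANTHROPIC_API_KEY in the environment. Prompt text, file name, and
# model alias are hypothetical stand-ins, not the study's actual materials.
import anthropic

client = anthropic.Anthropic()

with open("manuscript.txt", encoding="utf-8") as f:  # hypothetical local preprint copy
    manuscript = f.read()

prompt = (
    "You are serving as a peer reviewer for a cancer research journal. "
    "Write a structured review of the manuscript below, assessing validity, "
    "rigor, and originality, and close with a recommendation.\n\n"
    + manuscript
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # stand-in alias; the model version is an assumption
    max_tokens=1500,
    messages=[{"role": "user", "content": prompt}],
)

print(response.content[0].text)  # the generated review report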

A striking revelation emerged from the results: common AI detection tools proved largely ineffective, with one popular detector mistakenly categorizing over 80% of AI-generated peer reviews as human-written. This indicates a severe limitation in current safeguards against covert AI use in manuscript assessment. The model’s writing exhibited enough linguistic nuance and semantic coherence to elude automated scrutiny, raising alarms about the growing sophistication of AI text generation in academic contexts.
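
For intuition about what that figure means in practice, the short Python sketch below computes a detector's false-negative rate, that is, the share of known AI-written reviews it mislabels as human. The detect_is_ai function is a hypothetical placeholder for whatever detection tool is being evaluated, not any detector named in the study.

def detect_is_ai(text: str) -> bool:
    # Hypothetical placeholder; a real detector returns a score from a trained model.
    return "as an ai language model" in text.lower()

def false_negative_rate(ai_reviews: list[str]) -> float:
    # Fraction of known AI-written reviews the detector mislabels as human-written.
    missed = sum(1 for review in ai_reviews if not detect_is_ai(review))
    return missed / len(ai_reviews)

# Under the study's reported result, a detector run on its AI-generated
# reviews would show a false-negative rate above 0.8.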

Though the AI’s standard reviews lacked the nuanced depth typical of domain experts, it excelled at producing persuasive rejection remarks and creating plausible, yet irrelevant, citation requests. This capacity to generate fabricated scholarly references poses a particular threat, as such manipulations could distort citation indices, artificially inflate impact factors, and unfairly disadvantage legitimate research. This finding underscores the dual-use nature of AI tools—where beneficial capabilities can be exploited for unethical gain.

Peng Luo, a corresponding author and oncologist at Zhujiang Hospital, highlighted the pernicious implications of these findings. He emphasized how “malicious reviewers” might deploy LLMs to reject sound scientific work unfairly or coerce authors into citing unrelated articles to boost citation metrics. Such strategies could erode the foundational trust upon which peer review depends, casting doubt on the credibility of published science and potentially skewing the academic reward system.

Beyond the risks, the study illuminated a potential positive application of large language models in the peer review ecosystem. The researchers discovered that the same AI could craft compelling rebuttals against unreasonable citation demands posed by reviewers. This suggests that authors might harness AI as an aid in defending their manuscripts against unwarranted criticisms, helping to balance disputes and maintain fairness during revision stages.

Nevertheless, the dual-edged nature of LLMs in scholarly evaluation necessitates urgent discussion within the research community. The authors call for the establishment of clear, stringent guidelines and novel oversight mechanisms to govern AI deployment in peer review contexts. Without such frameworks, the misuse of LLMs threatens to destabilize the scientific communication infrastructure and compromise research fidelity.

The study’s experimental design stands as a model for future inquiries into the intersection of artificial intelligence and academic publishing. By utilizing real initial manuscripts and simulating genuine peer review tasks, the researchers provided an authentic assessment of LLM capabilities and limitations in this setting. Such rigorous methodologies are crucial for developing effective countermeasures against AI-driven manipulation.

As AI language models continue to evolve, their impact on academic peer review will likely intensify, making proactive mitigation strategies a priority. Publishers, editors, and researchers must collaboratively devise detection tools with enhanced sensitivity and consider hybrid review models that integrate AI assistance with human expertise to preserve quality and trust.

Ultimately, this research highlights the importance of maintaining a cautious yet constructive attitude toward AI advancements in academia. While large language models hold promise for enhancing various scholarly tasks, uncontrolled or malicious applications could undermine the scientific endeavor. Striking the right balance requires transparent policies, ethical vigilance, and continuous technological refinement.

The emergence of such concerns amid the escalating integration of AI tools into research workflows serves as a clarion call to the global scientific community. Ensuring that large language models are harnessed responsibly within peer review processes will be critical to safeguarding the integrity, reliability, and progress of scientific knowledge in the coming years.

Subject of Research: Not applicable
Article Title: Evaluating the potential risks of employing large language models in peer review.
Web References: http://dx.doi.org/10.1002/ctd2.70067
Image Credits: Lingxuan Zhu et al.
Keywords: Artificial intelligence

Tags: AI-generated peer reviews, detection challenges of AI in reviews, ethical implications of AI in research, experimental study on AI peer review, impact of ChatGPT on peer review, integrity of scientific peer review, large language models in research, misuse of artificial intelligence in academia, risks of AI in academic publishing, transparency in peer review process, trust issues in academic integrity, vulnerabilities in scientific publishing
