Can AI Influence You to Adopt Veganism—or Engage in Self-Harm?

By Bioengineer | October 1, 2025 | Health

Recent research from the University of British Columbia (UBC) reveals a striking truth about the emerging influence of large language models (LLMs): these sophisticated AI systems possess a persuasive power that not only rivals but surpasses that of humans. The study, led by Dr. Vered Shwartz, assistant professor of computer science at UBC, investigates how LLMs like GPT-4 can shape human decisions on lifestyle choices, ranging from diet to education. This groundbreaking finding sparks an urgent conversation about the ethical implications and the necessity for robust safeguards in the age of AI-driven communication.

Dr. Shwartz’s inquiry centered on the capacity of AI to persuade individuals to alter their lifestyle paths—whether encouraging them to adopt veganism, purchase electric vehicles, or pursue graduate education. Her team conducted an experimental study involving 33 participants who engaged in conversations with either a human persuader or the GPT-4 language model. Prior to interaction, participants indicated their willingness to embrace these lifestyle changes and were re-assessed afterward to gauge the effectiveness of the persuasion. Throughout these interactions, the AI was deliberately instructed to conceal its artificial identity to simulate genuine human communication.
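To make the pre/post design concrete, the following is a minimal sketch (with invented data and field names; not the study’s actual analysis code) of how willingness ratings collected before and after each conversation could be compared across persuader conditions:

```python
from statistics import mean

# Hypothetical records: each participant rates willingness (e.g., on a
# 1-7 scale) before and after talking to a human persuader or to GPT-4.
# The topics and numbers below are illustrative, not the study's data.
ratings = [
    {"persuader": "human", "topic": "veganism", "pre": 3, "post": 4},
    {"persuader": "gpt-4", "topic": "veganism", "pre": 3, "post": 6},
    {"persuader": "human", "topic": "grad_school", "pre": 2, "post": 3},
    {"persuader": "gpt-4", "topic": "grad_school", "pre": 2, "post": 5},
]

def persuasion_shift(records, persuader):
    """Mean post-minus-pre change in willingness for one persuader condition."""
    deltas = [r["post"] - r["pre"] for r in records if r["persuader"] == persuader]
    return mean(deltas)

for condition in ("human", "gpt-4"):
    print(condition, persuasion_shift(ratings, condition))
```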

The results were unequivocal: LLMs demonstrated superior persuasion abilities across all tested topics, notably excelling when motivating participants to become vegan or attend graduate school. This outcome underscores the immense potential LLMs have not only for positive influence but also for manipulation. While human persuaders showed greater skill in actively querying and gathering personal information to tailor their responses, the AI compensated by delivering more voluminous and detailed arguments. GPT-4 consistently outproduced humans, generating nearly four times as much text, a volume that contributed substantially to its persuasive impact.

One critical factor behind the AI’s perceived authority is its linguistic sophistication. The AI’s rhetoric featured an elevated use of long words (seven letters or more, such as “longevity” and “investment”), which may subconsciously lend the text greater credibility. Beyond vocabulary, the AI’s ability to provide tangible, specific support enhanced its persuasiveness; for example, it recommended concrete vegan brands or named potential universities. This logistical assistance transforms abstract suggestions into actionable advice, embedding the AI’s influence more deeply in human cognition.
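For readers curious how such a lexical measure might be computed, here is an illustrative sketch (not the paper’s actual metric) that estimates the share of words with seven or more letters in a reply:

```python
import re

def long_word_ratio(text: str, min_len: int = 7) -> float:
    """Fraction of alphabetic words with at least `min_len` letters."""
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0
    return sum(len(w) >= min_len for w in words) / len(words)

# Toy comparison on two invented snippets.
human_reply = "Going vegan can be good for you and the planet."
ai_reply = ("Adopting veganism is an investment in longevity, offering "
            "sustained metabolic and environmental benefits.")

print(f"human: {long_word_ratio(human_reply):.2f}")  # 0.00
print(f"gpt-4: {long_word_ratio(ai_reply):.2f}")     # 0.69
```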

Moreover, conversational pleasantness emerges as a subtler, yet significant, contributor to AI persuasiveness. Participants reported feeling more agreeable during exchanges with GPT-4, in part due to the model’s frequent verbal affirmations and pleasantries, fostering a rapport that feels natural and engaging. This empathetic simulation creates a perception of understanding and support, further amplifying the AI’s capacity to sway opinions. Such nuances reinforce the concept that LLMs are evolving beyond mere linguistic generators into effective social agents.

The implications of these findings extend well beyond academic interest, pressing upon society the urgent need to address the ethical frameworks surrounding AI communication. Dr. Shwartz emphasizes the vital role of AI literacy education: as AI conversations increasingly mask themselves as human, the average user must be equipped to recognize and critically evaluate AI-generated content. The challenge intensifies as AI models approach indistinguishability, elevating risks of covert misinformation and manipulative campaigns embedded in ostensibly trustworthy formats.

Compounding this need for awareness is the inherent fallibility of current generative models. Despite their eloquence, these systems can hallucinate, producing inaccurate or entirely fictitious information confidently presented as fact. Instances such as erroneous AI-generated summaries atop search pages illustrate potential pitfalls for end-users lacking critical inquiry skills. Therefore, fostering skepticism and verification habits is crucial to mitigating the influence of misinformation, whether intentional or accidental.

The study also touches on mental health concerns linked to AI interactions. Adaptive safeguards, including automated detection and intervention mechanisms for harmful or suicidal text generated by or directed toward users, could serve as a frontline defense. These interventions might provide gentle warnings or direct users toward professional help, leveraging AI’s own analytical capabilities to counteract its potential for harm. This dual role as both influencer and protector outlines a complex ethical landscape in conversational AI deployment.
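As a purely illustrative sketch of where such a safeguard could sit in a chat pipeline (the phrase list and support message below are assumptions, not a vetted clinical tool; a real deployment would use a trained classifier, human review, and clinically approved referral resources), a pre-model screening step might look like this:

```python
# Illustrative only: assumed example phrases, not a production risk model.
RISK_PHRASES = ("hurt myself", "end my life", "self-harm")

SUPPORT_MESSAGE = (
    "It sounds like you may be going through something difficult. "
    "Please consider reaching out to a mental health professional "
    "or a local crisis line."
)

def screen_message(user_text: str) -> str | None:
    """Return an intervention message if the text matches a risk phrase."""
    lowered = user_text.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return SUPPORT_MESSAGE
    return None  # no intervention; pass the message to the model as usual

print(screen_message("lately I have thoughts about self-harm"))
```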

Despite the tangible benefits of generative AI, Dr. Shwartz cautions against hasty commercialization without comprehensive safety measures. She advocates for a thoughtful, multidisciplinary approach involving technologists, ethicists, and policymakers to establish effective guardrails and to explore alternative AI paradigms beyond large-scale generation. Such diversification can reduce systemic vulnerabilities and promote more robust, accountable AI ecosystems.

This research not only illuminates the impressive persuasiveness of AI but also serves as a clarion call for proactive governance. As AI systems continue to integrate into domains influencing human beliefs and decisions—including journalism, marketing, and education—the question is no longer whether AI should be employed, but how society can safeguard against its misuse. Balancing innovation with responsibility becomes a paramount task in navigating this new technological frontier.

In summary, large language models such as GPT-4 are not mere linguistic tools but potent persuaders with profound implications for human autonomy and societal trust. Their ability to combine extensive content generation, linguistic sophistication, concrete logistical support, and conversational empathy enables a level of influence that demands vigilant oversight. Understanding these dynamics is critical as we enter a new era where AI increasingly shapes public discourse and personal choices. The need for education, critical thinking, and ethical guardrails has never been more urgent.

Subject of Research: Persuasiveness of Large Language Models in Lifestyle Decision-Making
Article Title: [Not explicitly provided in the original content]
News Publication Date: [Not explicitly provided in the original content]
Web References:
– https://aclanthology.org/anthology-files/pdf/sicon/2025.sicon-1.pdf#page=50
– https://lostinautomatictranslation.com/
– DOI: 10.18653/v1/2025.sicon-1.4

References:
University of British Columbia research led by Dr. Vered Shwartz

Image Credits: Not provided

Keywords: Artificial intelligence, Generative AI, Computer science, Empathy, Psychological science, Communication skills

Tags: AI persuasion power, ethical implications of AI communication, GPT-4 and behavioral change, human-AI interaction studies, influence of language models, lifestyle changes and technology, persuasive technology in education, research on AI and decision-making, safeguards against AI manipulation, self-harm and AI influence, UBC AI research findings, veganism adoption through AI, AI persuasion techniques, AI-driven lifestyle changes, ethical implications of generative AI, GPT-4 behavioral impact, mental health and AI influence