Persuading Voters Through Human-AI Dialogues

By Bioengineer
December 10, 2025
in Technology
Reading Time: 4 mins read

Concerns about the intersection of artificial intelligence and democratic processes have reached a fever pitch in recent years. As large language models evolve in sophistication, their capacity to generate persuasive political discourse raises urgent questions about the influence such technologies could wield over voter attitudes and electoral outcomes. In a groundbreaking study published in Nature, researchers Haoyu Lin, Gabriela Czarnek, Brian Lewis, and colleagues present compelling evidence from pre-registered experiments that demonstrate how AI-driven dialogues can substantially sway political preferences across three distinct national contexts—the United States, Canada, and Poland.

This research comes amid a political landscape increasingly fraught with misinformation, disinformation, and the weaponization of social media platforms. Positioning AI models as political advocates, the researchers engaged participants in interactive conversations in which the AI supported one of the leading candidates in upcoming elections. The study then measured the change in candidate favorability among voters following these AI-facilitated exchanges, thereby quantifying the persuasive power of generative AI in the electoral arena.

The experiments spanned three critical electoral events: the 2024 U.S. presidential election, the 2025 Canadian federal election, and the 2025 Polish presidential election. Within these culturally and politically diverse environments, participants were randomly assigned to chat with a model that persistently championed one of the two main contenders. These dialogues were not mere scripted monologues but dynamic interactions designed to mirror genuine campaign conversations, creating an immersive experience close to real-world political discussion.

Significantly, the results unveiled treatment effects on voter preferences that dwarfed those typically seen in traditional campaign efforts like video advertisements. Previous studies have documented relatively modest shifts in voter sentiment driven by political ads, which often struggle to penetrate entrenched biases or mobilize undecided voters. However, in this AI-mediated scenario, the magnitude of persuasion was markedly heightened, demonstrating a formidable new vector for influence with potential to reshape campaigning strategies.

Beyond candidate preference, the study also explored AI’s capacity to alter attitudes toward policy issues, notably spotlighting Massachusetts voters’ support for a ballot measure legalizing psychedelics. Here, the AI’s persuasive dialogue yielded large effect sizes, underscoring the technology’s applicability not only in candidate electioneering but also in referendum campaigns where issue awareness and understanding are pivotal.

A notable aspect of this research lies in decoding the strategies employed by AI to change minds. Contrary to fears that AI might deploy sinister psychological manipulation tactics or sophisticated coercion techniques, the findings suggest that large language models primarily rely on presenting relevant facts and evidence. This factual and evidentiary approach to persuasion, while ethically clearer, is not without pitfalls. The study uncovered disparities in the accuracy of claims made by models endorsing different political ideologies, with those advocating right-leaning candidates more prone to disseminating inaccuracies.

Such uneven accuracy raises critical concerns about AI’s role as an unregulated source of political information. The propagation of erroneous claims, even if unintentional, can distort electoral debates and entrench polarization. These findings highlight an urgent need for frameworks to audit, verify, and regulate AI-generated political content, ensuring that the democratizing promise of AI does not devolve into a tool of manipulation and division.

The study’s experimental design leveraged random assignment and control conditions, enabling robust causal inference. This methodological rigor adds weight to the assertion that AI dialogues can exert genuine, measurable influence on voter attitudes, rather than merely reinforcing pre-existing beliefs. The interactive nature of the chatbot conversations elicited higher engagement and attention from participants compared to passive media formats, potentially accounting for the amplified persuasive impact.
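To make the logic of that design concrete, the following is a minimal, illustrative sketch in Python of how a difference-in-means estimate of an average treatment effect can be computed under random assignment. The simulated data, variable names, and effect size here are hypothetical and are not drawn from the authors' dataset or analysis code.

```python
# Illustrative sketch only: difference-in-means estimate of the average
# treatment effect (ATE) in a randomized experiment of the kind described
# above. All data below are simulated for demonstration purposes.
import numpy as np

rng = np.random.default_rng(0)

n = 1_000
# Random assignment: 1 = chatted with the candidate-advocating AI, 0 = control.
treated = rng.integers(0, 2, size=n)

# Hypothetical post-conversation candidate favorability on a 0-100 scale,
# simulated with a modest positive treatment effect for illustration.
favorability = rng.normal(50, 15, size=n) + 4.0 * treated

# Under random assignment, the simple difference in group means is an
# unbiased estimator of the ATE.
ate = favorability[treated == 1].mean() - favorability[treated == 0].mean()

# Normal-approximation standard error for the difference in means.
se = np.sqrt(
    favorability[treated == 1].var(ddof=1) / (treated == 1).sum()
    + favorability[treated == 0].var(ddof=1) / (treated == 0).sum()
)

print(f"Estimated ATE: {ate:.2f} points (SE ~ {se:.2f})")
```

The published study relied on the authors' own pre-registered analysis rather than this toy calculation; the point of the sketch is simply that random assignment is what allows a comparison of treated and control groups to carry causal weight.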

Considering the rapid democratization of AI tools, the implications for electoral integrity are profound. Political campaigns, advocacy groups, and even foreign actors might harness similar AI models to tailor messaging at scale, micro-targeting voters with personalized narratives. This raises the specter of an arms race in AI-assisted political persuasion, where transparency diminishes and autonomous systems dominate discourse.

The authors’ work offers a sobering glimpse into the near future of democratic communication, where human–artificial intelligence dialogues could redefine how citizens form political judgments. The blending of technology and voter engagement could enhance democratic participation if carefully managed, but unchecked, it risks undermining trust in elections, fomenting misinformation, and exacerbating social fragmentation.

In light of these insights, policymakers, technology developers, and civil society must collaborate urgently to establish ethical guidelines, technical safeguards, and accountability mechanisms. Addressing the asymmetrical dissemination of misinformation and ensuring equitable access to accurate political content generated by AI will be essential to preserving democratic norms.

This pioneering research thereby serves as both an empirical demonstration and a clarion call. It elucidates the potent capabilities of generative AI to influence political attitudes in real-world settings and underscores the steadfast vigilance required to harness this technology responsibly within democratic frameworks.

The authors notably collaborate across multiple disciplines, integrating political science, machine learning, and communication theory, reflecting the interdisciplinary nature of tackling AI’s societal repercussions. As generative AI continues its exponential advancement, understanding its nuanced effects on political psychology and public opinion will be pivotal in shaping the trajectory of future elections worldwide.

Subject of Research:
The study investigates the influence of large language models—specifically AI-driven dialogue systems—on voter attitudes and preferences during political elections, examining the potential of generative AI to serve as a tool for political persuasion.

Article Title:
Persuading voters using human–artificial intelligence dialogues

Article References:
Lin, H., Czarnek, G., Lewis, B. et al. Persuading voters using human–artificial intelligence dialogues. Nature (2025). https://doi.org/10.1038/s41586-025-09771-9

Image Credits:
AI Generated

DOI:
https://doi.org/10.1038/s41586-025-09771-9

Keywords:
Artificial intelligence, political persuasion, generative AI, voter behavior, election influence, large language models, misinformation, democratic processes, political psychology

Tags: AI and the future of democracy, AI influence on electoral outcomes, AI-assisted political campaigning, changing political preferences through AI, cross-national election studies, ethical concerns of AI in politics, generative AI in democratic processes, human-AI political dialogue, impact of AI on voter behavior, interactive AI voter engagement, misinformation in political discourse, persuasive technology in elections, AI ethics in democracy, AI persuasion in elections, voter behavior manipulation