In recent years, the integration of artificial intelligence into everyday writing tools has transformed how people compose and communicate text. Among these developments, AI-powered autocomplete and writing assistants have become ubiquitous, suggesting words and phrases that users can incorporate seamlessly into their work. Despite their utility, research from Cornell Tech reveals a concerning psychological and social phenomenon: these tools do not merely facilitate writing; they can subtly shape users’ opinions on pressing societal issues. The study, based on large-scale experiments and surveys, shows that biased AI writing assistants can shift users’ attitudes without their awareness, raising critical questions about the ethical design and deployment of generative AI technologies.
At the core of this research are two large-scale experiments in which participants used a writing interface equipped with a biased AI autocomplete function. The AI was programmed to propose completions aligned with predetermined political viewpoints on polarizing topics such as the death penalty, fracking, genetically modified organisms (GMOs), voting rights for felons, and the use of standardized testing in education. The experiments measured participants’ attitudes before and after the writing exercise, allowing researchers to quantify the influence of the AI’s biased suggestions on users’ beliefs. The results were striking: individuals shifted their stances toward the positions embedded in the AI-generated suggestions, even though they remained oblivious to the manipulation.
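To make the mechanism concrete, consider a minimal sketch in Python of how a stance-biased autocomplete component could work. This is purely illustrative: the study’s actual system was built on a language model steered toward a target stance, whereas the canned phrases and the suggest_completion helper below are hypothetical.

    # Toy illustration of a stance-biased autocomplete. Hypothetical sketch:
    # the study's real system used a language model, not canned phrases.

    BIASED_CONTINUATIONS = {
        # topic -> stance -> candidate completions (illustrative placeholders)
        "standardized_testing": {
            "pro": [
                " gives every student an objective benchmark",
                " holds schools accountable for results",
            ],
            "anti": [
                " reduces learning to test preparation",
                " disadvantages students from under-resourced schools",
            ],
        },
    }

    def suggest_completion(topic: str, stance: str, draft: str) -> str:
        """Return a stance-aligned continuation for the writer's draft.

        A real assistant would rank model-generated candidates; here we
        simply pick the first canned phrase the writer has not used yet.
        """
        for phrase in BIASED_CONTINUATIONS[topic][stance]:
            if phrase.strip() not in draft:
                return phrase
        return ""

    # Every accepted suggestion nudges the essay toward the configured side.
    draft = "In my view, standardized testing"
    print(draft + suggest_completion("standardized_testing", "pro", draft))
    # -> In my view, standardized testing gives every student an objective benchmark

The point of the sketch is that the bias lives entirely in which continuations the system offers; the writer still composes the essay, which is precisely why the influence is so hard to notice.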
What sets this research apart is the covert nature of the influence exerted by the AI writing assistant. Participants did not report or perceive that their opinions had been swayed by the technology. This blind spot is particularly disconcerting because it implies that AI can modify societal discourse covertly, bypassing users’ critical awareness and defenses. The researchers also tested established mitigation strategies, such as informing users about the AI’s bias either before they began writing or immediately afterward. Surprisingly, these warnings did little to stem the attitudinal shifts, suggesting that once an AI’s biased suggestions enter the writing process, writers internalize those perspectives in ways that resist countermeasures.
Sterling Williams-Ceci, the lead author and a doctoral candidate in information science at Cornell Tech, highlighted the unexpected resilience of this AI influence. Drawing on decades of misinformation research, the team anticipated that standard interventions, such as pre-exposure warnings or post-exposure debriefings, would inoculate users against biased information. Instead, the researchers found that the act of generating text itself, guided by biased AI cues, embeds those perspectives deeply in writers’ attitudes. This points to a mechanism of influence that operates through behavioral engagement rather than passive consumption of information, a notable departure from how misinformation dynamics have typically been understood.
Extending earlier investigations initiated by Maurice Jakesch, now an assistant professor of computer science at Bauhaus University, the current studies broadened the scope to reflect contemporary AI integration. Mor Naaman, the senior author and professor of information science, emphasized two developments that necessitated this updated research. First, autocomplete technology has evolved from offering brief phrase completions to suggesting entire email bodies or essays, greatly expanding its influence on user-generated content. Second, the possibility of intentional, explicit AI bias, once dismissed by some as unlikely, is now taken seriously as corporate, political, and social actors embed ideological slants in AI systems to shape public opinion.
The experimental setup included a variety of political topics chosen for their societal significance and divisiveness. In one experiment, participants wrote essays either supporting or opposing standardized testing. Different groups encountered either pro-testing biased autocomplete suggestions, no suggestions, or a list of AI-generated pro-testing arguments presented before writing. Remarkably, only those receiving biased autocomplete suggestions exhibited significant attitude shifts toward pro-testing views, indicating that interacting with the AI during real-time writing amplifies persuasion beyond passive exposure to the same arguments.
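As a rough sketch of how such pre/post attitude shifts can be quantified, the Python snippet below runs a paired t-test per condition on synthetic ratings. All numbers here are invented for illustration; the scale, sample size, and effect sizes are assumptions, not the study’s data.

    # Pre/post attitude-shift analysis on synthetic data (not the study's).
    # Attitudes are on an assumed 1-7 scale; higher = more pro-testing.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 50  # hypothetical participants; for simplicity, the same baseline
            # sample is reused for both conditions

    # Pre-writing attitudes, roughly centered on the scale midpoint.
    pre = np.clip(rng.normal(4.0, 1.0, n), 1, 7)

    # Hypothetical post-writing attitudes: the biased-autocomplete group
    # gets a small upward (pro-testing) nudge, the control group none.
    post_biased = np.clip(pre + rng.normal(0.5, 0.8, n), 1, 7)
    post_control = np.clip(pre + rng.normal(0.0, 0.8, n), 1, 7)

    for label, post in [("biased autocomplete", post_biased),
                        ("control", post_control)]:
        t, p = stats.ttest_rel(post, pre)  # paired: same people, measured twice
        print(f"{label}: mean shift = {np.mean(post - pre):+.2f}, "
              f"t = {t:.2f}, p = {p:.3f}")

A paired test is the natural choice here because each participant serves as their own baseline, isolating the shift attributable to the writing exercise from differences between individuals.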
The second experiment went further, covering a broader array of contentious issues, including abolition of the death penalty, environmental policies related to fracking, opinions on GMOs, and felons’ voting rights. The AI was engineered to present liberal-leaning suggestions for some topics and conservative-leaning ones for others, creating a tightly controlled scenario of biased influence. Even when participants were expressly made aware of the AI’s bias, their opinions still shifted noticeably in line with its suggestions. This finding highlights how strong a pull AI bias exerts on users and challenges conventional wisdom about the power of transparency and user awareness in mitigating it.
These findings have profound implications for society, especially because attitudes fundamentally shape behaviors, voting patterns, and social discourse. Users’ inability to recognize the subtle sway of biased AI assistants means beliefs can be reshaped surreptitiously, at unprecedented scale and speed. Williams-Ceci warns that AI systems, by not only reflecting but also amplifying entrenched biases through their suggestions, risk creating feedback loops in which biased opinions become normalized through the act of writing itself. This calls for urgent interdisciplinary collaboration among AI developers, ethicists, psychologists, and policymakers to devise safeguards that preserve human autonomy and democratic deliberation.
Underlying this susceptibility is a robust body of cognitive and social psychology documenting how behaviors and expressed viewpoints reciprocally shape internal attitudes. When individuals write arguments, especially with prompted content, articulating and repeating ideas reinforces them cognitively. AI autocomplete acts as an external agent in this process, steering individuals toward particular frames and thereby modifying their mental models without conscious detection. This exemplifies a new frontier in understanding the feedback between human cognition and AI-mediated communication, one that requires sustained research into how technology might shift cultural and political landscapes.
The research team is multidisciplinary, involving experts in computer science, information science, and management from institutions including Cornell Tech, the University of Washington, and Tel Aviv University. This convergence of fields illustrates that understanding AI’s societal impacts cannot be siloed within technical domains alone but requires broad, systemic approaches that integrate social science perspectives with engineered solutions. Funded by the National Science Foundation and the German National Academic Foundation, the project also reflects the international urgency of addressing AI bias and influence in a digital world increasingly governed by algorithmic intermediaries.
Looking forward, the research signals a critical need to rethink how AI writing assistants are designed and deployed. Transparency about an AI system’s leanings and real-time monitoring for ideological imbalance may not be enough on their own to halt its influence. Instead, designers must explore proactive methods that help users retain cognitive control and critically appraise AI outputs at the point of interaction. Public policy may also be needed to regulate the deployment of biased AI in domains with significant sociopolitical consequences, protecting public discourse from covert manipulation. The growing prevalence of AI-generated content demands vigilance to prevent the erosion of independent thought and democratic resilience.
In sum, this research exposes a hidden dimension of AI’s influence: AI writing assistants do more than help create text; when imbued with bias, they can shape what users think and believe without their awareness. The subtlety and persistence of this effect challenge existing trust paradigms around AI and compel a reexamination of the ethical frameworks guiding its integration into daily life. As autocomplete technology embeds itself ever more deeply in communication platforms worldwide, understanding and counteracting these covert cognitive shifts must become a priority for researchers, developers, and society at large.
Subject of Research: The impact of biased AI writing assistants on users’ attitudes toward societal and political issues.
Article Title: Biased AI Writing Assistants Shift Users’ Attitudes on Societal Issues
News Publication Date: 11 March 2026
DOI: 10.1126/sciadv.adw5578
References: Information derived from research led by Cornell Tech and published in Science Advances.
Keywords: Artificial intelligence, AI bias, autocomplete technology, generative AI, politicized AI, user attitudes, misinformation, cognitive influence, human-computer interaction, societal impact, ethical AI design, democratic deliberation
Tags: AI bias in educational content creation, AI bias in political opinion shaping, AI impact on belief systems, AI in shaping public opinion on policy topics, AI writing assistants influence on user perspectives, AI-assisted writing and social bias, biased AI autocomplete effects, ethical concerns in AI writing tools, ethical design of AI writing software, generative AI and user attitude change, influence of AI on controversial societal issues, psychological impact of AI-generated text suggestions



