Check the comments section of many social media and digital news platforms, and you’re likely to find a cesspool of insults, threats and even harassment. In fact, a Pew Research Center survey found that 41% of American adults have personally experienced online harassment, and one in five adults say they’ve been harassed online for their political views.
But researchers at BYU and Duke University say derisive online conversations don’t have to be the norm. A joint study from the two universities found that artificial intelligence can be used to improve conversation quality and promote civil dialogue online.
Using a custom online platform built by BYU undergraduate Vin Howe, the researchers paired participants with opposing viewpoints in an online chat and asked them to discuss a highly polarizing topic in American politics: gun control. During the conversation, one user would intermittently receive a prompt from an AI tool suggesting a rephrasing of their message to make it more polite or friendly, without altering its content. Participants were free to accept, modify or dismiss the suggestion. When the chat concluded, participants completed a survey assessing the quality of the conversation.
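The study's assistant was a large language model prompted to propose friendlier restatements. As a rough illustration of that loop, here is a minimal Python sketch; the function names, prompt wording, and accept/modify/dismiss handling below are assumptions for illustration, not the study's actual implementation.

```python
# Minimal sketch of the chat intervention described above. Everything
# here is illustrative: `call_llm`, the prompt wording, and the
# accept/modify/dismiss flow stand in for the study's actual system.

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-model call (e.g., a hosted API or a
    local open-weights model); returns a canned reply so the sketch runs."""
    return "I hear where you're coming from, though I see it differently."

def rephrase_politely(message: str) -> str:
    """Ask the model to soften tone while keeping the message's content,
    mirroring the paper's constraint that suggestions not change meaning."""
    prompt = (
        "Rephrase the following chat message to be more polite and "
        "friendly without changing its substantive content:\n\n" + message
    )
    return call_llm(prompt)

def send_with_intervention(message: str, decision: str, edited: str | None = None) -> str:
    """Offer an AI rephrasing; the sender may accept it, send an edited
    version, or dismiss it and send the original, as in the experiment."""
    suggestion = rephrase_politely(message)
    if decision == "accept":
        return suggestion
    if decision == "modify" and edited is not None:
        return edited
    return message  # "dismiss": the original message goes through unchanged

if __name__ == "__main__":
    draft = "That's a ridiculous take on gun control."
    print(send_with_intervention(draft, decision="accept"))
```

In the experiment itself, this decision point appeared only intermittently inside a live chat interface, rather than on every message.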
Over 1,500 individuals participated in the experiment, accepting a total of 2,742 AI-generated rephrasings. The results revealed a promising shift in the dynamics of online interaction: chat partners of individuals who accepted one or more AI rephrasing suggestions reported significantly higher conversation quality and, remarkably, greater willingness to listen to the perspectives of their political opponents.
“We found the more often the rephrasings were used, the more likely participants were to feel like the conversation wasn’t divisive and that they felt heard and understood,” said BYU computer science professor David Wingate, a co-author on the study who is helping launch BYU’s degree in computer science with an emphasis in machine learning this fall.
Importantly, AI-assisted rephrasings didn’t alter the content of the conversations, nor did they change the viewpoints of the participants, said Wingate, who noted that AI chat assistance is vastly different from persuasive AI, which is dangerous and ethically fraught. “But helping people have productive and courteous conversations is one positive outcome of AI.”
The implications of the research are far-reaching: it offers a scalable way to counter the toxic online culture that has plagued the internet for years. Unlike traditional methods, such as professional training sessions led by expert moderators, which are limited in scope and availability, AI intervention can be broadly implemented across digital channels.
Properly deployed, AI could help transform online platforms into constructive forums where individuals of differing backgrounds and opinions discuss current issues with empathy and respect. Ultimately, this research shows that AI technology, when thoughtfully integrated, can play a pivotal role in shaping a more positive online landscape.
“My hope is that we’ll continue to have more BYU students build pro-social applications like this and that BYU can become a leader in demonstrating ethical ways of using machine learning,” said Wingate. “In a world that is dominated by information, we need students who can go out and wrangle the world’s information in positive and socially productive ways.”
The study was recently published in the scientific journal PNAS by Wingate and BYU professors Lisa Argyle, Ethan Busby and Josh Gubler, as well as professor Chris Bail from Duke University. Former BYU graduate students Chris Rytting and Taylor Sorensen also co-authored the study.
Journal: Proceedings of the National Academy of Sciences
DOI: 10.1073/pnas.2311627120
Method of Research: Experimental study
Subject of Research: People
Article Title: Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale
Article Publication Date: 3-Oct-2023