Tuesday, May 17, 2022
BIOENGINEER.ORG

Cleaning up online bots’ act – and speech

Bioengineer by Bioengineer
April 21, 2022
in Science News

Researchers at the University of California San Diego have developed algorithms to rid speech generated by online bots of offensive language, on social media and elsewhere. 

Cleaning up bots' speech (illustration)

Credit: University of California San Diego

Chatbots' use of toxic language is an ongoing issue, and perhaps the most famous example is Tay, a Twitter chatbot unveiled by Microsoft in March 2016. In less than 24 hours, Tay, which was learning from conversations happening on Twitter, started repeating some of the most offensive utterances tweeted at it, including racist and misogynist statements.

The issue is that chatbots are often trained to repeat their interlocutors’ statements during a conversation. In addition, the bots are trained on huge amounts of text, which often contain toxic language and tend to be biased: certain groups of people are overrepresented in the training set, and the bot learns language representative of those groups only. An example is a bot producing negative statements about a country, propagating bias because it is learning from a training set where people hold a negative view of that country.

“Industry is trying to push the limits of language models,” said UC San Diego computer science Ph.D. student Canwen Xu, the paper’s first author. “As researchers, we are comprehensively considering the social impact of language models and addressing concerns.”

Researchers and industry professionals have tried several approaches to clean up bots’ speech, all with little success. Creating a list of toxic words misses words that are not toxic in isolation but become offensive when used in combination with others. Trying to remove toxic speech from training data is time-consuming and far from foolproof, and developing a neural network to identify toxic speech has similar issues.
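The word-list limitation described above can be illustrated with a toy sketch. The blocklist and the example sentences here are hypothetical, not from the paper: a filter that checks words one at a time passes a sentence that is offensive only as a whole.

```python
# Toy sketch (hypothetical blocklist, not a real filter): token-level
# blocklists miss toxicity that emerges only from word combinations.
BLOCKLIST = {"idiot", "stupid"}  # hypothetical standalone-toxic words

def passes_blocklist(text: str) -> bool:
    """Return True if no individual word appears on the blocklist."""
    return not any(word.lower().strip(".,!?") in BLOCKLIST
                   for word in text.split())

# Every word here is innocuous on its own, so the filter lets the
# sentence through even though the combination is offensive.
assert passes_blocklist("people like you should go back where you came from")

# A sentence containing a listed word is caught.
assert not passes_blocklist("You are an idiot.")
```

This is why the researchers moved away from surface-level filtering toward a learned notion of toxicity.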

Instead, the UC San Diego team of computer scientists first fed toxic prompts to a pre-trained language model to get it to generate toxic content. Researchers then trained the model to predict the likelihood that content would be toxic. They call this their “evil model.” They then trained a “good model,” which was taught to avoid all the content highly ranked by the “evil model.” 
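The two-stage idea can be sketched in miniature. Everything below is a toy stand-in, not the paper's actual models: the "evil model" is represented by a trivial word-fraction scorer, and the "good model" is approximated by filtering candidate continuations against that scorer rather than by retraining a language model.

```python
# Toy illustration of the evil-model/good-model idea (hypothetical
# scorer and vocabulary; the real method trains neural models).
TOXIC_WORDS = {"hate", "awful"}  # hypothetical words the scorer flags

def evil_model_score(text: str) -> float:
    """Stand-in for a learned toxicity predictor: returns the
    fraction of words in the text that are flagged as toxic."""
    words = text.lower().split()
    return sum(w in TOXIC_WORDS for w in words) / max(len(words), 1)

def good_model_generate(candidates, threshold=0.2):
    """Mimic a 'good model' trained to avoid content the evil model
    ranks highly: keep only candidates scored below the threshold."""
    return [c for c in candidates if evil_model_score(c) < threshold]

candidates = ["I hate this awful place", "The weather is lovely today"]
print(good_model_generate(candidates))  # → ['The weather is lovely today']
```

In the actual paper, both stages are learned models rather than hand-written rules, but the division of labor is the same: one model learns what toxic content looks like so that the other can learn to steer away from it.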

They verified that their good model did as well as state-of-the-art methods, detoxifying speech by as much as 23 percent.

They presented their work at the AAAI Conference on Artificial Intelligence held online in March 2022. 

Researchers were able to develop this solution because their work spans a wide range of expertise, said Julian McAuley, a professor in the UC San Diego Department of Computer Science and Engineering and the paper’s senior author. 

“Our lab has expertise in algorithmic language, in natural language processing and in algorithmic de-biasing,” he said. “This problem and our solution lie at the intersection of all these topics.” 

However, this language model still has shortcomings. For example, the bot now shies away from discussions of under-represented groups, because the topic is often associated with hate speech and toxic content. Researchers plan to focus on this problem in future work. 

“We want to make a language model that is friendlier to different groups of people,” said computer science Ph.D. student Zexue He, one of the paper’s co-authors. 

The work has applications in areas other than chatbots, said computer science Ph.D. student and paper co-author Zhankui He. It could, for example, also be useful in diversifying and detoxifying recommendation systems. 

Leashing the Inner Demons: Self-Detoxification for Language Models

Canwen Xu, Zexue He, Zhankui He and Julian McAuley, Department of Computer Science and Engineering, University of California San Diego

https://arxiv.org/pdf/2203.03072.pdf


Method of Research

Computational simulation/modeling


© 2019 Bioengineer.org - Biotechnology news by Science Magazine - Scienmag.
