
Online toxicity can only be countered by humans and machines working together, according to Concordia researchers

By Bioengineer · February 28, 2024 · Science News

Ketra Schmitt and Arezo Bodaghi. Credit: Concordia University

Wading through the staggering amount of social media content being produced every second to find the nastiest bits is no task for humans alone.

Even with the newest deep-learning tools at their disposal, the employees who identify and review problematic posts can be overwhelmed and often traumatized by what they encounter every day. Gig-working annotators, who analyze and label data to help improve machine learning, can be paid pennies per unit of work.

In a new Concordia-led paper published in IEEE Technology and Society Magazine, researchers argue that supporting these human workers is essential and requires a constant re-evaluation of the techniques and tools they use to identify toxic content.

The authors examine social, policy and technical approaches to automatic toxicity detection and consider their shortcomings while also proposing potential solutions.

“We want to know how well current moderating techniques, which involve both machine learning and human annotators of toxic language, are working,” says Ketra Schmitt, one of the paper’s co-authors and an associate professor with the Centre for Engineering in Society at the Gina Cody School of Engineering and Computer Science.

She believes that human contributions will remain essential to moderation. While existing automated toxicity detection methods can and will improve, none is without error. Human decision-makers are essential to review decisions.

“Moderation efforts would be futile without machine learning because the volume is so enormous. But lost in the hype around artificial intelligence (AI) is the basic fact that machine learning requires a human annotator to work. We cannot remove either humans or the AI.”
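
The division of labor Schmitt describes is often implemented as confidence-thresholded triage: the model acts on its own only when it is sure, and everything in between is deferred to a person. A minimal Python sketch of that pattern follows; the thresholds, scores and function names are hypothetical illustrations, not the paper's implementation.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        post_id: str
        action: str      # "remove", "keep", or "human_review"
        toxicity: float  # model score in [0, 1], from any classifier

    def triage(post_id: str, toxicity: float,
               remove_above: float = 0.95, keep_below: float = 0.20) -> Decision:
        """Act automatically only at high confidence; defer the rest to people."""
        if toxicity >= remove_above:
            return Decision(post_id, "remove", toxicity)
        if toxicity <= keep_below:
            return Decision(post_id, "keep", toxicity)
        # The gray zone is exactly where human judgment remains essential.
        return Decision(post_id, "human_review", toxicity)

    print(triage("a1", 0.99))  # high confidence: removed automatically
    print(triage("a2", 0.60))  # uncertain: routed to a human reviewer

Widening the gap between the two thresholds shifts work from the model to people; narrowing it does the reverse. That dial is the human-machine balance the authors argue can never be turned all the way to either side.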

Arezo Bodaghi is a research assistant at the Concordia Institute for Information Systems Engineering and the paper’s lead author. “We cannot simply rely on the current evaluation metrics found in machine and deep learning to identify toxic content,” Bodaghi adds. “We need them to be more accurate and multilingual as well.

“We also need them to be very fast, but when machine-learning techniques are fast, they can lose accuracy. There is a trade-off to be made.”
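
One way to make Bodaghi's point about evaluation concrete: a single aggregate score can hide large per-language gaps, so metrics are better reported per language. The sketch below uses scikit-learn on fabricated labels; the data, the language split and the apparent gap are illustrative only, not results from the paper.

    from sklearn.metrics import precision_recall_fscore_support

    # (language, true_label, predicted_label); 1 = toxic, 0 = not toxic.
    # Fabricated: the classifier looks flawless in English but misses
    # most toxic posts in Farsi, which an aggregate score would mask.
    records = [
        ("en", 1, 1), ("en", 0, 0), ("en", 1, 1), ("en", 0, 0),
        ("fa", 1, 0), ("fa", 1, 0), ("fa", 0, 0), ("fa", 1, 1),
    ]

    for lang in ("en", "fa"):
        y_true = [t for l, t, _ in records if l == lang]
        y_pred = [p for l, _, p in records if l == lang]
        p, r, f1, _ = precision_recall_fscore_support(
            y_true, y_pred, average="binary", zero_division=0)
        print(f"{lang}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")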

Broader input from diverse groups will help machine-learning tools become as inclusive and bias-free as possible. This includes recruiting workers who are non-English speakers and who come from underrepresented groups such as LGBTQ2S+ and racialized communities. Their contributions can help improve the large language models and data sets used by machine-learning tools.

Keeping the online world social

The researchers offer several concrete steps companies can take to improve toxicity detection.

First and foremost is improving working conditions for annotators. Many companies pay them by the unit of work rather than by the hour. Furthermore, these tasks are easily offshored to workers who earn far less than their North American or European counterparts, so companies can wind up paying less than a dollar an hour. And little in the way of mental-health support is offered, even though these employees are front-line bulwarks against some of the most horrifying online content.

Companies can also deliberately build platform cultures that prioritize kindness, care and mutual respect, in contrast to platforms such as Gab, 4chan, 8chan and Truth Social that celebrate toxicity.

Improving algorithmic approaches would also help large language models make fewer misidentification errors and better distinguish context and language.
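
One concrete version of that improvement is classifying a reply together with its parent comment, since identical words can be banter in one thread and harassment in another. A sketch of the context-packing pattern, with a toy keyword scorer standing in for a real model; both functions are hypothetical and only the wiring is the point.

    def score_toxicity(text: str) -> float:
        """Toy stand-in for a trained classifier (hypothetical); score in [0, 1]."""
        return 0.9 if "pathetic" in text.lower() else 0.1

    def score_with_context(parent: str, reply: str) -> float:
        # Pack the parent turn into the input so the classifier can condition
        # on the thread rather than judging the reply in isolation.
        return score_toxicity(f"[parent] {parent} [reply] {reply}")

    # A real context-aware model receiving this packed input could score the
    # same reply differently across threads; the toy scorer just shows how
    # the input is assembled.
    print(score_with_context("Nice try at the gym today!", "That was pathetic."))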

Finally, corporate culture at the platform level has an impact at the user level.

When ownership deprioritizes or even eliminates user trust and safety teams, for instance, the effects can be felt company-wide and risk damaging morale and user experience.

“Recent events in the industry show why it is so important to have human workers who are respected, supported, paid decently and have some safety to make their own judgements,” Schmitt concludes.

Benjamin Fung of McGill University’s School of Information Studies also contributed to this study.

Read the cited paper: “Technological Solutions to Online Toxicity: Potential and Pitfalls”



Journal: IEEE Technology and Society Magazine
DOI: 10.1109/MTS.2023.3340235
Method of Research: Systematic review
Subject of Research: People
Article Title: Technological Solutions to Online Toxicity: Potential and Pitfalls
Article Publication Date: 1-Dec-2023
COI Statement: None
