BIOENGINEER.ORG

Malicious content exploits pathways between platforms to thrive online, subvert moderation

By Bioengineer | September 6, 2025 | Health

New research demonstrates that stopping the spread of harmful content will require inter-platform action

WASHINGTON (June 15, 2021)–Malicious COVID-19 online content — including racist content, disinformation and misinformation — thrives and spreads online by bypassing the moderation efforts of individual social media platforms, according to new research published in the journal Scientific Reports. By mapping online hate clusters across six major social media platforms, researchers at the George Washington University show how malicious content exploits pathways between platforms, highlighting the need for social media companies to rethink and adjust their content moderation policies.

Led by Neil Johnson, a professor of physics at GW, the research team set out to understand how and why malicious content thrives so well online despite significant moderation efforts, and how it can be stopped. The team used a combination of machine learning and network data science to investigate how online hate communities sharpened COVID-19 as a weapon and used current events to draw in new followers.

“Until now, slowing the spread of malicious content online has been like playing a game of whack-a-mole, because a map of the online hate multiverse did not exist,” Johnson, who is also a researcher at the GW Institute for Data, Democracy & Politics, said. “You cannot win a battle if you don’t have a map of the battlefield. In our study, we laid out a first-of-its-kind map of this battlefield. Whether you’re looking at traditional hate topics, such as anti-Semitism or anti-Asian racism surrounding COVID-19, the battlefield map is the same. And it is this map of links within and between platforms that is the missing piece in understanding how we can slow or stop the spread of online hate content.”

The researchers began by mapping how hate clusters interconnect to spread their narratives across social media platforms. Focusing on six platforms — Facebook, VKontakte, Instagram, Gab, Telegram and 4Chan — the team started with a given hate cluster and looked outward to find a second cluster that was strongly connected to the original. They found the strongest connections were VKontakte into Telegram (40.83% of cross-platform connections), Telegram into 4Chan (11.09%), and Gab into 4Chan (10.90%).
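
The connection shares described above can be computed from a directed edge list in a few lines of Python. The edge list below is a tiny hypothetical stand-in for the study's real map, which was built from millions of links; only the counting logic is illustrated here:

```python
from collections import Counter

# Hypothetical links between hate clusters on different platforms; each
# edge is (source platform, target platform). Illustrative data only --
# it is not the study's dataset.
cross_platform_links = [
    ("VKontakte", "Telegram"), ("VKontakte", "Telegram"),
    ("VKontakte", "Telegram"), ("VKontakte", "Telegram"),
    ("Telegram", "4Chan"),
    ("Gab", "4Chan"),
    ("Facebook", "Instagram"),
    ("Instagram", "Facebook"),
    ("Telegram", "Gab"),
    ("4Chan", "Gab"),
]

def link_shares(edges):
    """Fraction of all cross-platform connections per directed pair."""
    counts = Counter(edges)
    total = len(edges)
    return {pair: count / total for pair, count in counts.items()}

shares = link_shares(cross_platform_links)
for (src, dst), share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{src} -> {dst}: {share:.1%}")
```

On real data, the same tally over every observed inter-platform link is what yields figures such as the 40.83% share for VKontakte into Telegram.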

The researchers then turned their attention to identifying malicious content related to COVID-19. They found that the coherence of COVID-19 discussion increased rapidly in the early phases of the pandemic, with hate clusters forming narratives and cohering around COVID-19 topics and misinformation. To subvert moderation efforts by social media platforms, groups sending hate messages used several adaptation strategies to regroup on other platforms and/or reenter a platform, the researchers found. For example, clusters frequently change their names to avoid detection by moderators’ algorithms, such as rewriting “vaccine” as “va$$ine.” Similarly, anti-Semitic and anti-LGBTQ clusters simply add strings of 1’s or A’s before their names.
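
The renaming tactics above show why literal keyword filters fail. A toy normalizer, built only around the article's own examples (“vaccine” mutated to “va$$ine,” names padded with runs of 1’s or A’s), can sketch the counter-move; the substitution table and watchlist here are illustrative assumptions, not any platform's actual moderation rules:

```python
import re

# Inverts the article's "va$$ine" mutation; a real system would carry a
# much larger substitution table (assumption, not a platform's rules).
SUBSTITUTIONS = str.maketrans({"$": "c"})
BANNED_TERMS = {"vaccine"}  # hypothetical stand-in watchlist

def normalize(name: str) -> str:
    name = name.lower()
    name = re.sub(r"^(?:1+|a+)", "", name)  # strip 1/A padding prefixes
    return name.translate(SUBSTITUTIONS)

def is_flagged(name: str) -> bool:
    """True if any banned term appears in the normalized name."""
    return any(term in normalize(name) for term in BANNED_TERMS)

print(is_flagged("va$$ine"))     # True: mutated spelling still caught
print(is_flagged("111Vaccine"))  # True: padded name still caught
print(is_flagged("vitamins"))    # False
```

Each new evasion trick requires another normalization rule, which is exactly the whack-a-mole dynamic Johnson describes.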

Based on their findings, the team suggests several ways for social media platforms to slow the spread of malicious content:

  • Artificially lengthen the pathways that malicious content needs to take between clusters, increasing the chances of its detection by moderators and delaying the spread of time-sensitive material such as weaponized COVID-19 misinformation and violent content.
  • Control the size of an online hate cluster’s support base by placing a cap on the size of clusters.
  • Introduce non-malicious, mainstream content in order to effectively dilute a cluster’s focus.
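
The first suggestion, lengthening inter-cluster pathways, can be illustrated with a breadth-first search over a small platform graph: removing a direct bridge forces content onto a longer route, buying moderators time. The graph below is hypothetical, not the study's map:

```python
from collections import deque

# Hypothetical undirected graph of platform-to-platform pathways.
graph = {
    "VKontakte": {"Telegram", "Facebook"},
    "Telegram": {"VKontakte", "4Chan"},
    "Facebook": {"VKontakte", "Instagram"},
    "Instagram": {"Facebook", "Gab"},
    "Gab": {"Instagram", "4Chan"},
    "4Chan": {"Telegram", "Gab"},
}

def shortest_path_length(g, start, goal):
    """Breadth-first search; returns hop count, or None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nbr in g[node] - seen:
            seen.add(nbr)
            queue.append((nbr, dist + 1))
    return None

def without_edge(g, a, b):
    """Copy of the graph with the bridge between a and b removed."""
    g2 = {k: set(v) for k, v in g.items()}
    g2[a].discard(b)
    g2[b].discard(a)
    return g2

print(shortest_path_length(graph, "VKontakte", "4Chan"))  # 2 hops
pruned = without_edge(graph, "Telegram", "4Chan")
print(shortest_path_length(pruned, "VKontakte", "4Chan"))  # 4 hops
```

Doubling the hop count does not block the content, but each extra hop is another point where detection can intervene before time-sensitive material lands.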

“Our study demonstrates a similarity between the spread of online hate and the spread of a virus,” Yonatan Lupu, an associate professor of political science at GW and co-author on the paper, said. “Individual social media platforms have had difficulty controlling the spread of online hate, which mirrors the difficulty individual countries around the world have had in stopping the spread of the COVID-19 virus.”

Going forward, Johnson and his team are already using their map and its mathematical modeling to analyze other forms of malicious content — including the weaponization of COVID-19 vaccines in which certain countries are attempting to manipulate mainstream sentiment for nationalistic gains. They are also examining the extent to which single actors, including foreign governments, may play a more influential or controlling role in this space than others.

###

Media Contact
Timothy Pierce
[email protected]

Related Journal Article

http://dx.doi.org/10.1038/s41598-021-89467

Tags: Algorithms/Models, Infectious/Emerging Diseases, Internet, Political Science, Public Health, Researchers/Scientists/Awards, Technology/Engineering/Computer Science


Bioengineer.org © Copyright 2023 All Rights Reserved.
