Speech deepfakes frequently fool humans, even after training on how to detect them

By Bioengineer | August 2, 2023 | Science News

In a study involving more than 500 people, participants correctly identified speech deepfakes only 73 percent of the time, and efforts to train participants to detect deepfakes had minimal effects. Kimberly Mai and colleagues at University College London, UK, presented these findings in the open-access journal PLOS ONE on August 2, 2023.

Image: “Warning: Humans cannot reliably detect speech deepfakes.” Credit: Adrian Swancar, Unsplash, CC0 (https://creativecommons.org/publicdomain/zero/1.0/)

Speech deepfakes are synthetic voices produced by machine-learning models. Deepfakes may resemble a specific real person’s voice, or they may be unique. Tools for making speech deepfakes have recently improved, raising concerns about security threats. For instance, they have already been used to trick bankers into authorizing fraudulent money transfers. Research on detecting speech deepfakes has primarily focused on automated, machine-learning detection systems, but few studies have addressed humans’ detection abilities.

Therefore, Mai and colleagues asked 529 people to complete an online activity that involved identifying speech deepfakes among multiple audio clips of both real human voices and deepfakes. The study was run in both English and Mandarin, and some participants were provided with examples of speech deepfakes to help train their detection skills.

Participants correctly identified deepfakes 73 percent of the time. Training participants to recognize deepfakes helped only slightly. Because participants were aware that some of the clips would be deepfakes—and because the researchers did not use the most advanced speech synthesis technology—people in real-world scenarios would likely perform worse than the study participants.

English and Mandarin speakers showed similar detection rates, though when asked to describe the speech features they used for detection, English speakers more often referenced breathing, while Mandarin speakers more often referenced cadence, pacing between words, and fluency.

The researchers also found that individual participants’ detection capabilities were worse than those of top-performing automated detectors. However, when responses were aggregated at the crowd level, participants performed about as well as automated detectors and were better at handling unknown conditions for which those detectors may not have been directly trained.
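
As a rough illustration of why crowd-level aggregation can outperform any single listener, the sketch below (not part of the study) simulates listeners who each label a clip correctly 73 percent of the time, matching the reported individual rate, and combines their answers by simple majority vote. The crowd size, the number of clips, and the assumption that listeners err independently are illustrative choices, not details from the paper.

```python
# Minimal sketch, assuming independent listeners with the 73% per-listener
# accuracy reported in the article. Crowd size, clip count, and the
# majority-vote rule are illustrative assumptions, not from the study.
import random

PER_LISTENER_ACCURACY = 0.73   # reported individual detection rate
CROWD_SIZE = 25                # assumed number of listeners per clip
NUM_CLIPS = 10_000             # simulated audio clips

random.seed(0)

correct_individual = 0
correct_crowd = 0

for _ in range(NUM_CLIPS):
    # Each listener independently labels the clip correctly with p = 0.73.
    votes = [random.random() < PER_LISTENER_ACCURACY for _ in range(CROWD_SIZE)]
    correct_individual += votes[0]                   # a single listener alone
    correct_crowd += sum(votes) > CROWD_SIZE / 2     # simple majority vote

print(f"Single listener accuracy: {correct_individual / NUM_CLIPS:.1%}")
print(f"Majority-vote accuracy:   {correct_crowd / NUM_CLIPS:.1%}")
```

Under these assumptions the majority vote is correct far more often than any one listener, which is the intuition behind the crowdsourcing mitigation the researchers discuss below.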

Speech deepfakes are likely only to become more difficult to detect. Given their findings, the researchers conclude that training people to detect speech deepfakes is unrealistic and that efforts should focus on improving automated detectors. In the meantime, they suggest, crowdsourcing evaluations of potential deepfake speech is a reasonable mitigation.

The authors add: “The study finds that humans could only detect speech deepfakes 73% of the time, and performance was the same in English and Mandarin.”

#####

In your coverage please use this URL to provide access to the freely available article in PLOS ONE: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0285333

Citation: Mai KT, Bray S, Davies T, Griffin LD (2023) Warning: Humans cannot reliably detect speech deepfakes. PLoS ONE 18(5): e0285333. https://doi.org/10.1371/journal.pone.0285333

Author Countries: UK

Funding: KM and SB are supported by the Dawes Centre for Future Crime (https://www.ucl.ac.uk/future-crime/). KM is supported by EPSRC under grant EP/R513143/1 (https://www.ukri.org/councils/epsrc). SB is supported by EPSRC under grant EP/S022503/1. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.



Journal: PLoS ONE

DOI: 10.1371/journal.pone.0285333

Method of Research: Experimental study

Subject of Research: People

Article Title: Warning: Humans cannot reliably detect speech deepfakes

Article Publication Date: 2-Aug-2023

COI Statement: The authors have declared that no competing interests exist.
