Machine learning improves human speech recognition


[Image: Overview of the human speech recognition model. Credit: Jana Roßbach]

WASHINGTON, March 1, 2022 — Hearing loss is a rapidly growing area of scientific research as the number of baby boomers dealing with hearing loss continues to increase as they age.

To understand how hearing loss affects people, researchers study their ability to recognize speech. Recognizing speech is more difficult when there is reverberation, hearing impairment, or significant background noise, such as traffic or multiple competing talkers.

As a result, hearing aid algorithms are often used to improve human speech recognition. To evaluate such algorithms, researchers perform experiments that determine the signal-to-noise ratio at which a specific proportion of words (commonly 50%) is recognized. These tests, however, are time- and cost-intensive.
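
A common way to obtain that threshold is to fit a psychometric function to recognition scores measured at several signal-to-noise ratios and read off the 50% point. The sketch below illustrates the idea with a generic logistic fit and made-up scores; it is only an illustration, not the measurement procedure used in the study.

```python
# Minimal sketch: estimate the speech reception threshold (SRT), i.e. the SNR
# at which 50% of words are recognized, by fitting a logistic psychometric
# function to measured recognition scores. All numbers below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr_db, srt, slope):
    """Logistic psychometric function: recognition rate as a function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - srt)))

# Hypothetical measurements: SNR conditions (dB) and word recognition rates.
snr = np.array([-12.0, -9.0, -6.0, -3.0, 0.0, 3.0])
score = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.97])

# Fit the curve and read off the SNR at which half of the words are recognized.
(srt_est, slope_est), _ = curve_fit(psychometric, snr, score, p0=[-5.0, 0.5])
print(f"Estimated speech reception threshold: {srt_est:.1f} dB SNR")
```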

In The Journal of the Acoustical Society of America, published by the Acoustical Society of America through AIP Publishing, researchers from Germany explore a human speech recognition model based on machine learning and deep neural networks.

“The novelty of our model is that it provides good predictions for hearing-impaired listeners for noise types with very different complexity and shows both low errors and high correlations with the measured data,” said author Jana Roßbach, from Carl von Ossietzky University.

The researchers calculated how many words per sentence a listener understands using automatic speech recognition (ASR). Most people are familiar with ASR through speech recognition tools like Alexa and Siri.
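
As a rough illustration of that scoring step, the sketch below compares a hypothetical ASR transcript against a reference sentence and counts the recognized words. The simple word-match rule is assumed purely for illustration and is not the scoring procedure of the paper.

```python
# Minimal sketch: count how many words of a reference sentence appear in an
# ASR transcript. The set-membership rule is a simplified, illustrative stand-in
# for proper per-word scoring (it ignores word order and repeated words).
def words_correct(reference: str, recognized: str) -> tuple[int, int]:
    """Return (reference words found in the ASR output, total reference words)."""
    ref_words = reference.lower().split()
    rec_words = set(recognized.lower().split())
    hits = sum(1 for w in ref_words if w in rec_words)
    return hits, len(ref_words)

# Hypothetical reference sentence and ASR output for one listening condition.
hits, total = words_correct(
    reference="the boat sailed across the bay",
    recognized="the goat sailed across a bay",
)
print(f"{hits}/{total} words recognized ({100 * hits / total:.0f}%)")
```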

The study included eight normal-hearing and 20 hearing-impaired listeners, who were exposed to a variety of complex noises that masked the speech. The hearing-impaired listeners were categorized into three groups with different levels of age-related hearing loss.

The model allowed the researchers to predict the speech recognition performance of hearing-impaired listeners with different degrees of hearing loss for a variety of noise maskers of increasing complexity in temporal modulation and similarity to real speech. Each listener's individual hearing loss could be taken into account.

“We were most surprised that the predictions worked well for all noise types. We expected the model to have problems when using a single competing talker. However, that was not the case,” said Roßbach.

The model makes predictions for single-ear (monaural) hearing. Going forward, the researchers will develop a binaural model, since speech understanding also depends on hearing with both ears.

In addition to predicting speech intelligibility, the model could potentially be used to predict listening effort or speech quality, as these topics are closely related.

###

The article “A model of speech recognition for hearing-impaired listeners based on deep learning” is authored by Jana Roßbach, Birger Kollmeier, and Bernd T. Meyer. The article will appear in The Journal of the Acoustical Society of America on March 1, 2022 (DOI: 10.1121/10.0009411). After that date, it can be accessed at https://aip.scitation.org/doi/full/10.1121/10.0009411.

ABOUT THE JOURNAL

The Journal of the Acoustical Society of America (JASA) is published on behalf of the Acoustical Society of America. Since 1929, the journal has been the leading source of theoretical and experimental research results in the broad interdisciplinary subject of sound. JASA serves physical scientists, life scientists, engineers, psychologists, physiologists, architects, musicians, and speech communication specialists. See https://asa.scitation.org/journal/jas.

ABOUT ACOUSTICAL SOCIETY OF AMERICA

The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

###



Journal: The Journal of the Acoustical Society of America
DOI: 10.1121/10.0009411
Article Title: A model of speech recognition for hearing-impaired listeners based on deep learning
Article Publication Date: 1-Mar-2022
