
Cognitive neuroscience could pave the way for emotionally intelligent robots

By Bioengineer | April 28, 2021 | Health

Researchers propose a novel auditory-perception-based feature for extracting emotions from human speech using neural networks

Image credit: Masashi Unoki

Ishikawa, Japan – Human beings have the ability to recognize emotions in others, but the same cannot be said for robots. Although perfectly capable of communicating with humans through speech, robots and virtual agents are only good at processing logical instructions, which greatly restricts human-robot interaction (HRI). Consequently, a great deal of research in HRI is about emotion recognition from speech. But first, how do we describe emotions?

Categorical emotions such as happiness, sadness, and anger are well understood by us but can be hard for robots to register. Researchers have therefore focused on “dimensional emotions,” which describe the gradual emotional transitions that occur in natural speech. “Continuous dimensional emotion can help a robot capture the time dynamics of a speaker’s emotional state and accordingly adjust its manner of interaction and content in real time,” explains Prof. Masashi Unoki from the Japan Advanced Institute of Science and Technology (JAIST), who works on speech recognition and processing.

Studies have shown that an auditory perception model simulating the workings of the human ear can generate what are called “temporal modulation cues,” which faithfully capture the time dynamics of dimensional emotions. Neural networks can then be employed to extract features from these cues that reflect these time dynamics. However, owing to the complexity and variety of auditory perception models, this feature extraction step turns out to be quite challenging.
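As a purely illustrative aside (not the authors' implementation), one common way to obtain such temporal modulation cues is to filter the envelope of each frequency channel of a cochleagram-like representation along the time axis with a bank of modulation filters. In the Python sketch below, the cochleagram input, the frame rate, and the modulation band edges are all assumed placeholder values.

    # Minimal sketch (not the authors' code): temporal modulation cues obtained by
    # filtering each frequency channel of a cochleagram along the time axis.
    # The cochleagram itself would normally come from an auditory front end
    # (e.g., a gammatone filterbank); here it is just a placeholder array.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def modulation_filtered_cochleagram(cochleagram, frame_rate, band_hz):
        """Keep one modulation-frequency band of the channel envelopes.

        cochleagram : array of shape (n_channels, n_frames), non-negative envelopes
        frame_rate  : frames per second of the cochleagram (assumed, e.g., 100)
        band_hz     : (low, high) modulation-frequency band in Hz
        """
        low, high = band_hz
        if low <= 0:  # treat the lowest band as a low-pass band
            sos = butter(2, high, btype="lowpass", fs=frame_rate, output="sos")
        else:
            sos = butter(2, [low, high], btype="bandpass", fs=frame_rate, output="sos")
        # Filter along the time axis (axis=1) for every frequency channel at once.
        return sosfiltfilt(sos, cochleagram, axis=1)

    # Placeholder cochleagram: 64 channels, 5 seconds at 100 frames per second.
    coch = np.abs(np.random.default_rng(0).normal(size=(64, 500)))
    bands = [(0, 2), (2, 4), (4, 8), (8, 16)]  # assumed modulation bands
    cues = [modulation_filtered_cochleagram(coch, 100, b) for b in bands]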

In a new study published in Neural Networks, Prof. Unoki and his colleagues, including Zhichao Peng from Tianjin University, China (who led the study), Jianwu Dang from Pengcheng Laboratory, China, and Prof. Masato Akagi from JAIST, took inspiration from a recent finding in cognitive neuroscience suggesting that our brain forms multiple representations of natural sounds, at different spectral (i.e., frequency) and temporal resolutions, through a combined analysis of spectral-temporal modulations. Accordingly, they proposed a novel feature called the multi-resolution modulation-filtered cochleagram (MMCG), which combines four modulation-filtered cochleagrams (time-frequency representations of the input sound) at different resolutions to capture temporal and contextual modulation cues. To account for the diversity of these cochleagrams, the researchers designed a parallel architecture of long short-term memory (LSTM) networks that models the time variations of the multi-resolution signals, and they carried out extensive experiments on two datasets of spontaneous speech.
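To make the parallel architecture concrete, here is a minimal, hypothetical sketch in Python/PyTorch: each modulation-filtered cochleagram at a given resolution feeds its own LSTM branch, the branches' final hidden states are concatenated, and a linear layer regresses the two dimensional-emotion values (e.g., valence and arousal). The layer sizes, channel count, and output dimensions are assumptions for illustration, not the settings reported in the paper.

    # Hypothetical sketch of a parallel LSTM over multi-resolution cochleagrams
    # (illustrative only; sizes and inputs are assumptions, not the paper's).
    import torch
    import torch.nn as nn

    class ParallelLSTMEmotionRegressor(nn.Module):
        def __init__(self, n_channels=64, hidden=128, n_resolutions=4, n_outputs=2):
            super().__init__()
            # One LSTM branch per modulation-filtered cochleagram resolution.
            self.branches = nn.ModuleList([
                nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
                for _ in range(n_resolutions)
            ])
            # Concatenated final hidden states -> valence/arousal regression.
            self.head = nn.Linear(hidden * n_resolutions, n_outputs)

        def forward(self, cochleagrams):
            # cochleagrams: list of tensors, each (batch, time, n_channels),
            # one per temporal resolution.
            finals = []
            for branch, x in zip(self.branches, cochleagrams):
                _, (h_n, _) = branch(x)   # h_n: (1, batch, hidden)
                finals.append(h_n[-1])    # final hidden state of this branch
            return self.head(torch.cat(finals, dim=-1))

    # Usage with random placeholder inputs: batch of 8, 500 frames, 64 channels.
    model = ParallelLSTMEmotionRegressor()
    inputs = [torch.randn(8, 500, 64) for _ in range(4)]
    valence_arousal = model(inputs)  # shape: (8, 2)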

The results were encouraging. The researchers found that the MMCG feature delivered significantly better emotion recognition performance than traditional acoustic-based features and other auditory-based features on both datasets. Furthermore, the parallel LSTM network predicted dimensional emotions more accurately than a plain LSTM-based approach.

Prof. Unoki is thrilled and plans to improve on the MMCG feature in future research. “Our next goal is to analyze the robustness against environmental noise sources and investigate our feature for other tasks, such as categorical emotion recognition, speech separation, and voice activity detection,” he concludes.

It looks like it may not be too long before emotionally intelligent robots become a reality!

###

Reference

Title of original paper: “Multi-resolution modulation-filtered cochleagram feature for LSTM-based dimensional emotion recognition from speech”

Journal: Neural Networks

DOI: 10.1016/j.neunet.2021.03.027

About Japan Advanced Institute of Science and Technology, Japan

Founded in 1990 in Ishikawa prefecture, the Japan Advanced Institute of Science and Technology (JAIST) was the first independent national graduate school in Japan. Now, after 30 years of steady progress, JAIST has become one of Japan’s top-ranking universities. JAIST has multiple satellite campuses and strives to foster capable leaders with a state-of-the-art education system where diversity is key; about 40% of its alumni are international students. The university has a unique style of graduate education based on a carefully designed coursework-oriented curriculum to ensure that its students have a solid foundation on which to carry out cutting-edge research. JAIST also works closely with both local and overseas communities by promoting industry-academia collaborative research.

About Professor Masashi Unoki from Japan Advanced Institute of Science and Technology, Japan

Masashi Unoki is a Professor at the School of Information Science at the Japan Advanced Institute of Science and Technology (JAIST), where he received his M.S. and Ph.D. degrees in 1996 and 1999, respectively. His main research interests lie in auditory-motivated signal processing and the modeling of auditory systems. Prof. Unoki received the Sato Prize from the Acoustical Society of Japan (ASJ) in 1999, 2010, and 2013 for an Outstanding Paper, and the Yamashita Taro “Young Researcher” Prize from the Yamashita Taro Research Foundation in 2005. As a senior researcher, he has 420 publications to his credit, with over 2000 citations.

Funding information

The study was funded by a Grant-in-Aid for Innovative Areas (no. 18H05004) from MEXT, Japan, and was partially supported by the Research Foundation of the Education Bureau of Hunan Province, China (grant no. 18A414).

Media Contact
Zhichao PENG
[email protected]

Related Journal Article

http://dx.doi.org/10.1016/j.neunet.2021.03.027

Tags: Hearing/Speech, Language/Linguistics/Speech, Medicine/Health