
Hazumi datasets for dialogue systems that recognize human sentiment released

By Bioengineer | October 22, 2020 | Science News

Credit: Osaka University

A group of researchers from Osaka University has released Hazumi, a set of multimodal human-system dialogue corpora, on the Informatics Research Data Repository of the National Institute of Informatics (NII). Currently, two datasets in Japanese are available for research and development (R&D) of multimodal spoken dialogue systems (SDS), in which an artificial intelligence (AI) system speaks while recognizing a user’s state from multimodal features, including verbal content, facial expressions, body and head motion, and acoustic features.

Most current communication robots and applications respond using only speech-to-text conversion (automatic speech recognition), whereas humans speak while also recognizing their conversational partner’s nonverbal features; a variety of social signals are used in human conversation. Although human-system spoken dialogue data and human annotations of them are necessary for R&D of such AI systems, few datasets have been released because they contain personal information such as facial images.

This group has been engaged in the development of a multimodal dialogue system that predicts users’ sentiment using Social Signal Processing (SSP) techniques and reinforcement learning. They built multimodal computational models of sentiment labels annotated per exchange of human-system dialogue using SSP techniques, which are used for the modeling, analysis, and synthesis of social signals in human-machine interaction. These models analyze sensor data with machine learning algorithms to understand a human’s social behavior, predicting states such as the user’s interest in the current topic.

The group collected data from 59 participants as they talked with a system operated using the Wizard of Oz (WoZ) method, in which a virtual agent is manipulated by a hidden operator called a “Wizard.” Each participant’s recording lasted about 15 minutes. The group has now released the data collected in these studies.

The released datasets include both videos of dialogues between the WoZ system and a user (participant) and annotations of those dialogues. Annotations are applied to every exchange (i.e., a pair of a system utterance and a user utterance) by third-party observers and can serve as reference labels for building a system that adapts to a user’s multimodal behavior.

Each exchange is annotated by observers with three scores: (1) the user’s interest in the current topic (0–2), (2) the user’s sentiment (1–7), and (3) topic continuance (1–7), denoting whether the system should continue the current topic or change it. The dataset also includes sentiment labels given by the users themselves. The group obtained consent from the participants for the use of their facial images, following procedures approved by the research ethics committee.
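To make the annotation scheme concrete, the per-exchange labels described above could be modeled roughly as follows. This is a hypothetical sketch only: the field names, record layout, and file format of the actual Hazumi release may differ.

```python
from dataclasses import dataclass


@dataclass
class ExchangeAnnotation:
    """One annotated exchange: a system utterance paired with a user utterance.

    Field names are illustrative, not the actual Hazumi schema.
    """
    system_utterance: str
    user_utterance: str
    interest: int           # 0-2: user's interest in the current topic
    sentiment: int          # 1-7: sentiment label given by observers
    topic_continuance: int  # 1-7: should the system keep or change the topic?
    self_sentiment: int     # 1-7: sentiment reported by the user themselves

    def __post_init__(self):
        # Enforce the score ranges described in the release.
        assert 0 <= self.interest <= 2
        assert 1 <= self.sentiment <= 7
        assert 1 <= self.topic_continuance <= 7
        assert 1 <= self.self_sentiment <= 7


# Example record (invented dialogue, for illustration only)
ex = ExchangeAnnotation(
    system_utterance="Do you like traveling?",
    user_utterance="Yes, I went to Kyoto last month.",
    interest=2,
    sentiment=6,
    topic_continuance=7,
    self_sentiment=6,
)
```

A record like this, one per exchange, is the kind of reference label a learning-based dialogue system could be trained against.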

Lead author Professor Komatani says, “AI, especially dialogue systems by which computers talk with various people, requires the ability to respond to user utterances while recognizing the user’s state. Hazumi, in which dialogue system research and SSP are merged, will help the R&D of multimodal dialogue systems and serve as a shared R&D infrastructure for such systems.”

###

The multimodal dialogue corpus (Hazumi) in the Informatics Research Data Repository of the National Institute of Informatics was published at http://doi.org/10.32130/rdata.4.1

About Osaka University

Osaka University was founded in 1931 as one of the seven imperial universities of Japan and is now one of Japan’s leading comprehensive universities with a broad disciplinary spectrum. This strength is coupled with a singular drive for innovation that extends throughout the scientific process, from fundamental research to the creation of applied technology with positive economic impacts. Its commitment to innovation has been recognized in Japan and around the world, being named Japan’s most innovative university in 2015 (Reuters 2015 Top 100) and one of the most innovative institutions in the world in 2017 (Innovative Universities and the Nature Index Innovation 2017). Now, Osaka University is leveraging its role as a Designated National University Corporation selected by the Ministry of Education, Culture, Sports, Science and Technology to contribute to innovation for human welfare, sustainable development of society, and social transformation.

Website: https://resou.osaka-u.ac.jp/en/top

Media Contact
Saori Obayashi
[email protected]

Original Source

https://resou.osaka-u.ac.jp/en/research/2020/20201020_3

Related Journal Article

http://dx.doi.org/10.32130/rdata.4.1

Tags: Computer Science, Multimedia/Networking/Interface Design, Robotry/Artificial Intelligence, Technology/Engineering/Computer Science
