
Machine learning, from you

October 31, 2022

Many of the computer systems people interact with daily require internal models of certain aspects of the world in order to work. These systems have to be trained, often by learning to recognize objects from video or image data. That data frequently contains superfluous content which reduces model accuracy. Researchers have now found a way to incorporate natural hand gestures into the teaching process, so that users can more easily teach machines about objects, and the machines can also learn more effectively.

Image: LookHere. Credit: ©2022 Yatani and Zhou

You’ve probably heard the term machine learning, but are you familiar with machine teaching? Machine learning is what happens behind the scenes when a computer uses input data to form models that can later perform useful functions. Machine teaching is the somewhat less explored side of the process: how the computer gets its input data in the first place. In the case of visual systems, such as ones that can recognize objects, people need to show objects to a computer so it can learn about them. But the ways this is typically done have drawbacks that researchers from the University of Tokyo’s Interactive Intelligent Systems Laboratory sought to address.

“In a typical object training scenario, people can hold an object up to a camera and move it around so a computer can analyze it from all angles to build up a model,” said graduate student Zhongyi Zhou. “However, machines lack our evolved ability to isolate objects from their environments, so the models they make can inadvertently include unnecessary information from the backgrounds of the training images. This often means users must spend time refining the generated models, which can be a rather technical and time-consuming task. We thought there must be a way of doing this that’s better for both users and computers, and with our new system, LookHere, I believe we have found it.”
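
To make the conventional workflow Zhou describes concrete, here is a minimal sketch of such a capture loop, assuming a webcam and the OpenCV library. The code is illustrative only and is not taken from LookHere.

```python
import cv2

# Conventional "machine teaching" capture: record the object from many angles.
# Every pixel, background clutter included, ends up in the training data.
cap = cv2.VideoCapture(0)   # open the default webcam
frames = []
while len(frames) < 200:    # collect ~200 views while the user rotates the object
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)    # background is captured too, which can hurt the model
cap.release()
```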

Zhou, working with Associate Professor Koji Yatani, created LookHere to address two fundamental problems in machine teaching: teaching efficiency, or how to minimize the time and technical knowledge required of users; and learning efficiency, or how to ensure better learning data for machines to create models from. LookHere achieves both by doing something novel and surprisingly intuitive: it incorporates users’ hand gestures into the way an image is processed before the machine adds it to its model, drawing on a gesture dataset called HuTics. For example, a user can point to or present an object to the camera in a way that emphasizes its significance relative to the other elements in the scene, exactly how people might show objects to each other. By trimming extraneous details and emphasizing what’s actually important in the image, this gives the computer better input data for its models.
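
To illustrate the kind of gesture-driven emphasis described above, the sketch below weights a frame by proximity to detected hand pixels before it is stored as training data. It assumes NumPy and a hand-probability map from some hand-segmentation model; the function and its simple Gaussian weighting are hypothetical illustrations, not LookHere’s actual method.

```python
import numpy as np

def emphasize_presented_object(frame: np.ndarray, hand_prob: np.ndarray,
                               sigma: float = 60.0) -> np.ndarray:
    """Suppress background pixels using hand cues (illustrative sketch).

    frame: HxWx3 uint8 image; hand_prob: HxW probabilities in [0, 1] from
    some hand-segmentation model (assumed to exist).
    """
    ys, xs = np.nonzero(hand_prob > 0.5)
    if xs.size == 0:
        return frame                   # no hands detected: leave the frame unchanged
    cy, cx = ys.mean(), xs.mean()      # hand centroid as a crude proxy for the object
    h, w = hand_prob.shape
    yy, xx = np.mgrid[0:h, 0:w]
    attn = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    attn = np.maximum(attn, 0.1)       # keep a little context everywhere
    return (frame.astype(np.float32) * attn[..., None]).astype(np.uint8)
```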

“The idea is quite straightforward, but the implementation was very challenging,” said Zhou. “Everyone is different and there is no standard set of hand gestures. So we first collected 2,040 example videos of 170 people presenting objects to the camera, forming the HuTics dataset. These videos were annotated to mark which parts of each image were the object and which were just the person’s hands. LookHere was trained with HuTics and, compared to other object recognition approaches, can better determine which parts of an incoming image should be used to build its models. To make it as accessible as possible, users can work with LookHere from their smartphones, with the actual processing done on remote servers. We also released our source code and data set so that others can build upon it if they wish.”
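
The quote suggests each HuTics video comes with per-pixel annotations separating hands from the presented object. A minimal sketch of such a per-frame record, and of turning it into a training pair for an object-region predictor, might look like the following; the class and field names are illustrative assumptions, as the released dataset’s exact format is not described here.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AnnotatedFrame:
    """One frame from a HuTics-style presentation video (illustrative schema)."""
    image: np.ndarray        # HxWx3 uint8 RGB frame
    hand_mask: np.ndarray    # HxW bool: pixels annotated as the presenter's hands
    object_mask: np.ndarray  # HxW bool: pixels annotated as the presented object

def to_training_pair(frame: AnnotatedFrame) -> tuple[np.ndarray, np.ndarray]:
    """Build (input, target) for a model that predicts object regions
    from the image plus hand cues."""
    # Stack the hand mask as a fourth channel so the network sees gesture cues.
    hand_channel = frame.hand_mask.astype(np.float32)[..., None]
    x = np.concatenate([frame.image.astype(np.float32) / 255.0, hand_channel], axis=-1)
    y = frame.object_mask.astype(np.float32)   # per-pixel object label
    return x, y
```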

Factoring in the reduced demand on users’ time, Zhou and Yatani found that LookHere can build models up to 14 times faster than some existing systems. At present, LookHere deals with teaching machines about physical objects and uses exclusively visual input. But in principle, the concept could be expanded to other kinds of input data, such as sound or scientific measurements, and models made from that data would benefit from similar improvements in accuracy too.

###

Journal article: Zhongyi Zhou and Koji Yatani. “Gesture-aware Interactive Machine Teaching with In-situ Object Annotations”, 35th Annual ACM Symposium on User Interface Software and Technology (UIST ’22). https://doi.org/10.1145/3526113.3545648

Funding: Part of this research was supported by the NII CRIS collaborative research program jointly managed by NII CRIS and LINE Corporation. Zhongyi Zhou is also supported by the Co-Design Future Society Fellowship.

Useful links:
Graduate School of Engineering – https://www.t.u-tokyo.ac.jp/en/soe

Interactive Intelligent Systems Laboratory – https://iis-lab.org/

Research contact:
Associate Professor Koji Yatani
Graduate School of Engineering, The University of Tokyo,
7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
[email protected]

Press contact:
Mr. Rohan Mehra
Public Relations Group, The University of Tokyo,
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan
[email protected]

About the University of Tokyo
The University of Tokyo is Japan’s leading university and one of the world’s top research universities. The vast research output of some 6,000 researchers is published in the world’s top journals across the arts and sciences. Our vibrant student body of around 15,000 undergraduate and 15,000 graduate students includes over 4,000 international students. Find out more at www.u-tokyo.ac.jp/en/ or follow us on Twitter at @UTokyo_News_en.



DOI: 10.1145/3526113.3545648
Method of Research: Experimental study
Subject of Research: People
Article Title: Gesture-aware Interactive Machine Teaching with In-situ Object Annotations
Article Publication Date: 31-Oct-2022
