
Can artificial intelligence tell a polar bear from a can opener?

By Bioengineer
January 7, 2019

UCLA psychologists’ experiments demonstrate severe limitations of ‘deep learning’ machines

Image credit: PLOS Computational Biology/Rubylane.com


How smart is the form of artificial intelligence known as deep learning, and how closely do these computer networks mimic the human brain? They have improved greatly in recent years, but still have a long way to go, a team of UCLA cognitive psychologists reports in the journal PLOS Computational Biology.

Supporters have expressed enthusiasm for using these networks to perform many individual tasks, and even whole jobs, traditionally done by people. However, the five experiments in this study showed that the networks are easy to fool, and that their method of identifying objects with computer vision differs substantially from human vision.

“The machines have severe limitations that we need to understand,” said Philip Kellman, a UCLA distinguished professor of psychology and a senior author of the study. “We’re saying, ‘Wait, not so fast.'”

Machine vision, he said, has drawbacks. In the first experiment, the psychologists showed one of the best deep learning networks, called VGG-19, color images of animals and objects. The images had been altered. For example, the surface of a golf ball was displayed on a teapot; zebra stripes were placed on a camel; and the pattern of a blue and red argyle sock was shown on an elephant. VGG-19 ranked its top choices and chose the correct item as its first choice for only five of 40 objects.
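
For readers curious what "ranking its top choices" looks like in practice, the sketch below loads an ImageNet-pretrained VGG-19 and prints its top five guesses for a single image. PyTorch/torchvision and the file name are illustrative assumptions; the article does not say which tooling the authors used.

```python
# Minimal sketch, assuming torchvision's packaged ImageNet weights
# (not necessarily the authors' exact setup); the file name is hypothetical.
import torch
from torchvision import models
from PIL import Image

weights = models.VGG19_Weights.IMAGENET1K_V1
model = models.vgg19(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop and normalize as VGG-19 expects

img = Image.open("textured_teapot.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension: (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)  # 1,000 class probabilities

top5 = torch.topk(probs, k=5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx]}: {p.item():.2%}")
```

On a texture-altered image like the golf-ball-surfaced teapot, a probe of this kind is what yields rankings such as "golf ball" first with "teapot" nowhere among the top choices.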

“We can fool these artificial systems pretty easily,” said co-author Hongjing Lu, a UCLA professor of psychology. “Their learning mechanisms are much less sophisticated than the human mind.”

VGG-19 thought there was a 0 percent chance that the elephant was an elephant and only a 0.41 percent chance the teapot was a teapot. Its first choice for the teapot was a golf ball, which shows that the artificial intelligence network looks at the texture of an object more so than its shape, said lead author Nicholas Baker, a UCLA psychology graduate student.

“It’s absolutely reasonable for the golf ball to come up, but alarming that the teapot doesn’t come up anywhere among the choices,” Kellman said. “It’s not picking up shape.”

Humans identify objects primarily from their shape, Kellman said. The researchers suspected the computer networks were using a different method.

In the second experiment, the psychologists showed images of glass figurines to VGG-19 and to a second deep learning network, called AlexNet. VGG-19 performed better on all the experiments in which both networks were tested. Both networks were trained to recognize objects using an image database called ImageNet.

However, both networks did poorly with the glass figurines: neither VGG-19 nor AlexNet correctly identified them as its first choice. An elephant figurine was ranked with almost a 0 percent chance of being an elephant by both networks. Most of the top responses puzzled the researchers, such as VGG-19's choice of "website" for "goose" and "can opener" for "polar bear." On average, AlexNet ranked the correct answer 328th out of 1,000 choices.
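
A figure like "ranked 328th out of 1,000" is simply the position of the correct label when the network's 1,000 class probabilities are sorted from most to least likely. Continuing the sketch above, where `probs` is the softmax output and `true_idx` is the index of the correct class:

```python
def label_rank(probs: torch.Tensor, true_idx: int) -> int:
    """1-based rank of the correct class among all 1,000 predictions."""
    order = torch.argsort(probs, descending=True)  # best guess first
    return int((order == true_idx).nonzero().item()) + 1
```

A perfect top-1 prediction returns 1; averaging this rank over the figurine images is what produced AlexNet's reported 328.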

“The machines make very different errors from humans,” Lu said.

In the third experiment, the researchers showed 40 drawings outlined in black, with images in white, to both VGG-19 and AlexNet. These first three experiments were meant to discover whether the devices identified objects by their shape.

The networks again did a poor job of identifying such items as a butterfly, an airplane and a banana.

The goal of the experiments was not to trick the networks, but to learn whether they identify objects in a similar way to humans, or in a different manner, said co-author Gennady Erlikhman, a UCLA postdoctoral scholar in psychology.

In the fourth experiment, the researchers showed both networks 40 images, this time in solid black.

With the black images, the networks did better, producing the correct object label among their top five choices for about 50 percent of the objects. VGG-19, for example, ranked an abacus with a 99.99 percent chance of being an abacus and a cannon with a 61 percent chance of being a cannon. In contrast, VGG-19 and AlexNet each thought there was less than a 1 percent chance that a white hammer (outlined in black) was a hammer.

The researchers think the networks did much better with the black objects because the items lack what Kellman calls “internal contours” — edges that confuse the machines.

In the fifth experiment, the researchers scrambled the images to make them more difficult to recognize, but preserved pieces of the objects. They selected six images that the VGG-19 network had originally gotten right and scrambled them. Humans found the scrambled versions hard to recognize, but VGG-19 got five of the six right and was close on the sixth.
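
The article does not describe the exact scrambling procedure, but one simple piece-preserving scramble is to cut the image into a grid of tiles and shuffle them, as in this purely illustrative sketch:

```python
# Illustrative only: shuffle a grid of tiles, preserving local pieces of
# the object while destroying its overall shape. This is one plausible
# scrambling scheme, not necessarily the one used in the study.
import random
from PIL import Image

def scramble(img: Image.Image, grid: int = 4, seed: int = 0) -> Image.Image:
    w, h = img.size
    tw, th = w // grid, h // grid  # tile width and height
    boxes = [(c * tw, r * th, (c + 1) * tw, (r + 1) * th)
             for r in range(grid) for c in range(grid)]
    tiles = [img.crop(b) for b in boxes]
    random.Random(seed).shuffle(tiles)
    out = Image.new(img.mode, (tw * grid, th * grid))
    for box, tile in zip(boxes, tiles):
        out.paste(tile, (box[0], box[1]))
    return out
```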

As part of the fifth experiment, the researchers tested UCLA undergraduate students, in addition to VGG-19. Ten students were shown objects in black silhouettes — some scrambled to be difficult to recognize and some unscrambled, some objects for just one second, and some for as long as the students wanted to view them. The students correctly identified 92 percent of the unscrambled objects and 23 percent of the scrambled ones with just one second to view them. When the students could see the silhouettes for as long as they wanted, they correctly identified 97 percent of the unscrambled objects and 37 percent of the scrambled objects.

What conclusions do the psychologists draw?

Humans see the entire object, while the artificial intelligence networks identify fragments of the object.

“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

There are dozens of deep learning machines, and the researchers think their findings apply broadly to these devices.

###

The research was supported in part by a grant from the National Science Foundation.

Media Contact
Stuart Wolpert
[email protected]
310-206-0511

Tags: Computer Science, Computer Theory, Information Management/Tracking Systems, Learning/Literacy/Reading, Neurobiology, Robotry/Artificial Intelligence, Technology/Engineering/Computer Science
