BIOENGINEER.ORG

New test reveals AI still lacks common sense

By Bioengineer | November 18, 2020 | Science News

Despite advances in natural language processing, AI still doesn’t have the common sense to understand human language, finds a new USC study.

Image credit: Adriana Sanchez.

Natural language processing (NLP) has taken great strides recently, but how much does AI understand of what it reads? Less than we thought, according to researchers at USC's Department of Computer Science. In a recent paper, Assistant Professor Xiang Ren and PhD student Yuchen Lin found that, despite these advances, AI still doesn't have the common sense needed to generate plausible sentences.

“Current machine text-generation models can write an article that may be convincing to many humans, but they’re basically mimicking what they have seen in the training phase,” said Lin. “Our goal in this paper is to study the problem of whether current state-of-the-art text-generation models can write sentences to describe natural scenarios in our everyday lives.”

Understanding scenarios in daily life

Specifically, Ren and Lin tested the models’ ability to reason and showed there is a large gap between current text generation models and human performance. Given a set of common nouns and verbs, state-of-the-art NLP computer models were tasked with creating believable sentences describing an everyday scenario. While the models generated grammatically correct sentences, they were often logically incoherent.

For instance, here’s one example sentence generated by a state-of-the-art model using the words “dog, frisbee, throw, catch”:

“Two dogs are throwing frisbees at each other.”

The test is based on the assumption that coherent ideas (in this case, "a person throws a frisbee and a dog catches it") can't be generated without a deeper awareness of common-sense concepts. In other words, common sense is more than just a correct understanding of language; it means you don't have to explain everything in a conversation. This is a fundamental challenge in the goal of developing generalizable AI, and beyond academia, it's relevant for consumers, too.

Without such an understanding of language, chatbots and voice assistants built on these state-of-the-art natural-language models are vulnerable to failure. The same understanding is crucial if robots are to become more present in human environments. After all, if you ask a robot for hot milk, you expect it to know you want a cup of milk, not the whole carton.

“We also show that if a generation model performs better on our test, it can also benefit other applications that need commonsense reasoning, such as robotic learning,” said Lin. “Robots need to understand natural scenarios in our daily life before they make reasonable actions to interact with people.”

Joining Lin and Ren on the paper are USC’s Wangchunshu Zhou, Ming Shen, Pei Zhou; Chandra Bhagavatula from the Allen Institute of Artificial Intelligence; and Yejin Choi from the Allen Institute of Artificial Intelligence and Paul G. Allen School of Computer Science & Engineering, University of Washington.

The common sense test

Common-sense reasoning, or the ability to make inferences using basic knowledge about the world (like the fact that dogs cannot throw frisbees to each other) has resisted AI researchers' efforts for decades. State-of-the-art deep-learning models can now reach around 90% accuracy on existing benchmarks, so it would seem that NLP has gotten close to its goal.

But Ren, an expert in natural language processing, and Lin, his student, were not convinced by this statistic. In their paper, published in the Findings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) on Nov. 16, they challenge the effectiveness of the benchmark and, therefore, the level of progress the field has actually made.

“Humans acquire the ability to compose sentences by learning to understand and use common concepts that they recognize in their surrounding environment,” said Lin.

“Acquiring this ability is regarded as a major milestone in human development. But we wanted to test if machines can really acquire such generative commonsense reasoning ability.”

To evaluate different machine models, the pair developed a constrained text-generation task called CommonGen, which can be used as a benchmark to test the generative common sense of machines. The researchers presented a dataset consisting of 35,141 concepts associated with 77,449 sentences. They found that even the best-performing model achieved an accuracy rate of only 31.6%, versus 63.5% for humans.
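
To make the task concrete, here is a toy Python sketch of a concept-coverage check, a simplified stand-in for CommonGen-style evaluation. The real benchmark uses richer metrics and proper lemmatization; the prefix matching below is an assumption made purely for illustration:

```python
def concept_coverage(concepts, sentence):
    """Fraction of required concepts that appear in the generated sentence.

    A concept counts as used if any word in the sentence starts with it,
    a crude stand-in for lemmatization ("throwing" matches "throw").
    """
    words = sentence.lower().replace(".", "").split()
    hits = sum(1 for c in concepts if any(w.startswith(c) for w in words))
    return hits / len(concepts)

concepts = ["dog", "frisbee", "throw", "catch"]
human = "A person throws a frisbee and a dog catches it."
model = "Two dogs are throwing frisbees at each other."

print(concept_coverage(concepts, human))  # 1.0 -- all four concepts used
print(concept_coverage(concepts, model))  # 0.75 -- "catch" is missing
```

Notably, the implausible model sentence still covers three of the four concepts, which is exactly why coverage alone cannot measure common sense: the benchmark also needs judgments of whether the described scenario is plausible.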

“We were surprised that the models cannot recall the simple commonsense knowledge that ‘a human throwing a frisbee’ should be much more reasonable than a dog doing it,” said Lin. “We find even the strongest model, called the T5, after training with a large dataset, can still make silly mistakes.”

It seems, said the researchers, that previous tests have not sufficiently challenged the models on their common sense abilities, instead mimicking what they have seen in the training phase.

“Previous studies have primarily focused on discriminative common sense,” said Ren. “They test machines with multi-choice questions, where the search space for the machine is small–usually four or five candidates.”

For instance, a typical setting for discriminative common-sense testing is a multiple-choice question answering task, for example: “Where do adults use glue sticks?” A: classroom B: office C: desk drawer.

The answer here, of course, is “B: office.” Even computers can figure this out without much trouble. In contrast, a generative setting is more open-ended, such as the CommonGen task, where a model is asked to generate a natural sentence from given concepts.
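
The gap between the two settings comes down to search-space size, which a small sketch can make concrete. The scoring function below is hypothetical, standing in for a trained model's plausibility estimate:

```python
candidates = ["classroom", "office", "desk drawer"]

def score(answer):
    # Stand-in for a model's plausibility score; a real system would
    # query a trained language model here.
    return {"classroom": 0.2, "office": 0.7, "desk drawer": 0.1}[answer]

# Discriminative setting: pick the best of a handful of given options.
best = max(candidates, key=score)
print(best)  # office

# Generative setting: even a tiny 10-word vocabulary admits 10**5
# possible 5-word sequences -- the search space grows combinatorially.
vocab_size, length = 10, 5
print(vocab_size ** length)  # 100000
```

With only a few candidates, a model can succeed by elimination; generating a coherent sentence from scratch offers no such shortcut.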

Ren explains: “With extensive model training, it is very easy to have a good performance on those tasks. Unlike those discriminative commonsense reasoning tasks, our proposed test focuses on the generative aspect of machine common sense.”

Ren and Lin hope the data set will serve as a new benchmark to benefit future research about introducing common sense to natural language generation. In fact, they even have a leaderboard depicting scores achieved by the various popular models to help other researchers determine their viability for future projects.

“Robots need to understand natural scenarios in our daily life before they make reasonable actions to interact with people,” said Lin.

“By introducing common sense and other domain-specific knowledge to machines, I believe that one day we can see AI agents such as Samantha in the movie Her that generate natural responses and interact with our lives.”

###

Media Contact
Amy Blumenthal
[email protected]

Tags: Algorithms/Models, Computer Science, Computer Theory, Language/Linguistics/Speech, Mathematics/Statistics, Robotics/Artificial Intelligence, Technology/Engineering/Computer Science, Theory/Design
Bioengineer.org © Copyright 2023 All Rights Reserved.