
Verbal nonsense reveals limitations of AI chatbots

By Bioengineer | September 14, 2023 | Science News

NEW YORK – The era of artificial-intelligence chatbots that seem to understand and use language the way we humans do has begun. Under the hood, these chatbots use large language models, a particular kind of neural network. But a new study shows that large language models remain vulnerable to mistaking nonsense for natural language. To a team of researchers at Columbia University, it’s a flaw that might point toward ways to improve chatbot performance and help reveal how humans process language. 

[Image: Chatbot Nonsense Test. Credit: Columbia University’s Zuckerman Institute]


In a paper published online today in Nature Machine Intelligence, the scientists describe how they challenged nine different language models with hundreds of pairs of sentences. For each pair, people who participated in the study picked which of the two sentences they thought was more natural, meaning that it was more likely to be read or heard in everyday life. The researchers then tested the models to see if they would rate each sentence pair the same way the humans had. 
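
As a rough illustration of the idea (the paper specifies the actual models and scoring procedures), the sketch below asks which of two sentences GPT-2 assigns higher total probability, using the Hugging Face transformers library. The model choice, scoring rule, and example sentences here are illustrative assumptions, not the study's materials.

```python
# Illustrative sketch only, not the study's procedure: score each sentence
# by its total log-probability under GPT-2 and treat the higher-scoring
# sentence as the model's "more natural" choice.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability GPT-2 assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy
        # over the predicted tokens; multiplying by their count recovers
        # the negative total log-likelihood.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

# Invented example pair, in the spirit of the study's stimuli.
pair = ("The cat slept on the warm windowsill.",
        "The windowsill slept on the warm cat.")
print("Model prefers:", max(pair, key=sentence_log_prob))
```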


In head-to-head tests, more sophisticated AIs based on what researchers refer to as transformer neural networks tended to perform better than simpler recurrent neural network models and statistical models that just tally the frequency of word pairs found on the internet or in online databases. But all the models made mistakes, sometimes choosing sentences that sound like nonsense to a human ear. 
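
The "word pair" baselines mentioned above are essentially bigram-count models. A toy version (the corpus and test sentences are invented for the demo) makes the idea concrete:

```python
# Toy bigram baseline: a sentence scores higher the more often its adjacent
# word pairs appear in a reference corpus. Corpus is made up for the demo.
from collections import Counter
from itertools import pairwise  # Python 3.10+

corpus = "the cat sat on the mat while the dog sat on the rug".split()
bigram_counts = Counter(pairwise(corpus))

def bigram_score(sentence: str) -> int:
    """Sum of corpus counts over the sentence's adjacent word pairs."""
    return sum(bigram_counts[p] for p in pairwise(sentence.lower().split()))

print(bigram_score("the cat sat on the rug"))  # familiar pairs, high score
print(bigram_score("rug the on sat cat the"))  # scrambled order, low score
```

Counts like these capture local word co-occurrence but nothing about meaning, which helps explain why such baselines trailed the neural models in the comparison.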


“That some of the large language models perform as well as they do suggests that they capture something important that the simpler models are missing,” said Nikolaus Kriegeskorte, PhD, a principal investigator at Columbia’s Zuckerman Institute and a coauthor on the paper. “That even the best models we studied still can be fooled by nonsense sentences shows that their computations are missing something about the way humans process language.”

Consider the following sentence pair, which both the human participants and the AIs assessed in the study:

That is the narrative we have been sold.

This is the week you have been dying.

People given these sentences in the study judged the first sentence as more likely to be encountered than the second. But according to BERT, one of the better models, the second sentence is more natural. GPT-2, perhaps the most widely known model, correctly identified the first sentence as more natural, matching the human judgments.
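
One plausible source of such disagreements (our reading, not a claim from the paper) is that the two architectures are scored differently: GPT-2 assigns a true left-to-right probability, while a masked model like BERT is commonly scored with a pseudo-log-likelihood, masking and predicting one token at a time. Below is a minimal sketch of that masked-model scoring; the checkpoint choice (bert-base-uncased) and scoring rule are our assumptions, and the study's exact procedure is specified in the paper.

```python
# Minimal sketch of pseudo-log-likelihood scoring for a masked language
# model; not necessarily the scoring procedure used in the study.
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of each token's log-probability given all the other tokens."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id   # hide one token at a time
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

for s in ("That is the narrative we have been sold.",
          "This is the week you have been dying."):
    print(f"{pseudo_log_likelihood(s):8.2f}  {s}")
```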


“Every model exhibited blind spots, labeling some sentences as meaningful that human participants thought were gibberish,” said senior author Christopher Baldassano, PhD, an assistant professor of psychology at Columbia. “That should give us pause about the extent to which we want AI systems making important decisions, at least for now.” 


The good but imperfect performance of many models is one of the study results that most intrigues Dr. Kriegeskorte. “Understanding why that gap exists and why some models outperform others can drive progress with language models,” he said. 


Another key question for the research team is whether the computations in AI chatbots can inspire new scientific questions and hypotheses that could guide neuroscientists toward a better understanding of human brains. Might the ways these chatbots work point to something about the circuitry of our brains?


Further analysis of the strengths and flaws of various chatbots and their underlying algorithms could help answer that question.


“Ultimately, we are interested in understanding how people think,” said Tal Golan, PhD, the paper’s corresponding author, who this year moved from a postdoctoral position at Columbia’s Zuckerman Institute to set up his own lab at Ben-Gurion University of the Negev in Israel. “These AI tools are increasingly powerful, but they process language differently from the way we do. Comparing their language understanding to ours gives us a new approach to thinking about how we think.”


###


The paper, “Testing the limits of natural language models for predicting human language judgements,” was published online in Nature Machine Intelligence on September 14, 2023. Its authors are Tal Golan, Matthew Siegelman, Nikolaus Kriegeskorte and Christopher Baldassano.




Journal: Nature Machine Intelligence
DOI: 10.1038/s42256-023-00718-1
Method of Research: Data/statistical analysis
Subject of Research: Not applicable
Article Title: Testing the limits of natural language models for predicting human language judgements
Article Publication Date: 14-Sep-2023
COI Statement: The authors declare no competing interests.
