BIOENGINEER.ORG

New insights into how the human brain solves complex decision-making problems

By Bioengineer | January 30, 2020 | Biology

[Image credit: KAIST]


A new study on meta reinforcement learning algorithms sheds light on how the human brain learns to adapt to complexity and uncertainty when learning and making decisions. The research team, led by Professor Sang Wan Lee at KAIST jointly with John O'Doherty at Caltech, discovered both a computational and a neural mechanism for human meta reinforcement learning, opening up the possibility of porting key elements of human intelligence into artificial intelligence algorithms. The study also offers a glimpse of how scientists might ultimately use computational models to reverse-engineer human reinforcement learning.

This work was published on December 16, 2019 in the journal Nature Communications under the title "Task complexity interacts with state-space uncertainty in the arbitration between model-based and model-free learning."

Human reinforcement learning is an inherently complex and dynamic process involving goal setting, strategy choice, action selection, strategy modification, cognitive resource allocation, and more. This is a very challenging problem for humans to solve owing to the rapidly changing and multifaceted environments in which humans have to operate. To make matters worse, humans often need to make important decisions rapidly, before they have had the opportunity to collect much information, unlike the case when deep learning methods are used to model learning and decision-making in artificial intelligence applications.

To solve this problem, the research team first used a technique called 'reinforcement learning theory-based experiment design' to optimize the three variables of the two-stage Markov decision task: goal, task complexity, and task uncertainty. This experimental design technique allowed the team not only to control for confounding factors, but also to create a situation similar to what occurs in actual human problem solving.
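
As a rough illustration of the kind of task involved, here is a minimal sketch of a two-stage Markov decision task in Python. The specific structure below (transition probabilities, number of second-stage states, reward rule) is assumed for illustration only and is not the exact design used in the study.

```python
import random

class TwoStageTask:
    """Minimal sketch of a two-stage Markov decision task.

    Stage 1: choose an action; a (possibly uncertain) transition leads
    to one of several second-stage states. Stage 2: choose again and
    receive a reward depending on whether the goal state is reached.
    Here, the transition probability stands in for 'state-space
    uncertainty' and the number of second-stage states for 'task
    complexity'; both mappings are assumptions for this sketch.
    """

    def __init__(self, n_second_states=2, p_common=0.9, goal_state=0, seed=None):
        self.n = n_second_states
        self.p_common = p_common      # high p_common -> low uncertainty
        self.goal_state = goal_state
        self.rng = random.Random(seed)

    def step_stage1(self, action):
        # Each first-stage action has a 'common' second-stage state;
        # with probability 1 - p_common a rare transition occurs.
        common = action % self.n
        if self.rng.random() < self.p_common:
            return common
        return self.rng.randrange(self.n)

    def step_stage2(self, state, action):
        # Reward 1.0 only when the chosen action leads to the goal state.
        return 1.0 if (state + action) % self.n == self.goal_state else 0.0
```

With `p_common` close to 1 the task is nearly deterministic, which is the regime where planning (model-based control) pays off most.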

Second, the team used a technique called 'model-based neuroimaging analysis.' Based on the acquired behavioral and fMRI data, more than 100 different types of meta reinforcement learning algorithms were pitted against each other to find the computational model that best explained both the behavioral and the neural data. Third, for more rigorous verification, the team applied an analytical method called 'parameter recovery analysis,' which involves high-precision behavioral profiling of both human subjects and computational models.
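
The general idea behind parameter recovery can be illustrated with a toy example, which is not the task or the model from the study: simulate choices from a simple model-free learner with a known learning rate, then check that maximum-likelihood fitting recovers a value near it. All names and parameters here are hypothetical.

```python
import math
import random

def simulate(alpha, beta, p_reward=(0.8, 0.2), n_trials=500, seed=0):
    """Generate choices from a simple model-free (Q-learning) agent on a
    two-armed bandit; these serve as the 'ground truth' data."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    data = []
    for _ in range(n_trials):
        # Softmax choice between the two arms.
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        a = 1 if rng.random() < p1 else 0
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        data.append((a, r))
        q[a] += alpha * (r - q[a])   # prediction-error update
    return data

def log_likelihood(alpha, beta, data):
    """Log-likelihood of the observed choices under candidate parameters."""
    q = [0.0, 0.0]
    ll = 0.0
    for a, r in data:
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        ll += math.log(p1 if a == 1 else 1.0 - p1)
        q[a] += alpha * (r - q[a])
    return ll

# Recover the learning rate by grid search, with beta held fixed.
data = simulate(alpha=0.3, beta=5.0)
grid = [i / 20 for i in range(1, 20)]   # candidate alphas 0.05 .. 0.95
best_alpha = max(grid, key=lambda a: log_likelihood(a, 5.0, data))
```

If the model is well specified, `best_alpha` lands near the generating value; the paper's analysis applies the same logic at much higher precision and to far richer models.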

In this way, the team was able to accurately identify a computational model of meta reinforcement learning, ensuring not only that the model’s apparent behavior is similar to that of humans, but also that the model solves the problem in the same way as humans do.

The team found that people tended to increase planning-based reinforcement learning (called model-based control) in response to increasing task complexity. However, they resorted to a simpler, more resource-efficient strategy (called model-free control) when both uncertainty and task complexity were high. This suggests that task uncertainty and task complexity interact during the meta control of reinforcement learning. Computational fMRI analyses revealed that task complexity interacts with neural representations of the reliability of the learning strategies in the inferior prefrontal cortex.
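
The qualitative pattern of this finding can be sketched as a toy arbitration rule. The functional form below is an illustration of reliability-based arbitration in general, not the model fitted in the paper, and all names and coefficients are assumptions.

```python
def arbitration_weight(rel_mb, rel_mf, complexity, uncertainty):
    """Toy arbitration between model-based (MB) and model-free (MF) control.

    Mirrors the qualitative finding: higher complexity pushes control
    toward MB planning, but when uncertainty is also high, control
    shifts back toward the cheaper MF strategy. Inputs are assumed to
    lie in [0, 1]; the 0.2 coefficients are arbitrary.
    """
    # Baseline: trust whichever strategy is currently more reliable.
    w_mb = rel_mb / (rel_mb + rel_mf)
    # Complexity favours planning when uncertainty is low...
    w_mb += 0.2 * complexity * (1.0 - uncertainty)
    # ...but penalises it when uncertainty is high.
    w_mb -= 0.2 * complexity * uncertainty
    return min(1.0, max(0.0, w_mb))

def combined_value(q_mb, q_mf, w_mb):
    # Mix the two value estimates according to the arbitration weight.
    return w_mb * q_mb + (1.0 - w_mb) * q_mf
```

With equal reliabilities, high complexity alone raises the model-based weight, while high complexity plus high uncertainty lowers it, matching the direction of the reported behavioral effect.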

These findings significantly advance our understanding of the computations implemented in the inferior prefrontal cortex during meta reinforcement learning, and provide insight into the more general question of how the brain resolves uncertainty and complexity in a dynamically changing environment. Identifying the key computational variables that drive prefrontal meta reinforcement learning can also inform our understanding of how this process might break down in certain psychiatric disorders such as depression and OCD. Furthermore, a computational understanding of how this process can sometimes shift toward increased model-free control can provide insight into why task performance may break down under conditions of high cognitive load.

Professor Lee said, “This study will be of enormous interest to researchers in both the artificial intelligence and human/computer interaction fields since this holds significant potential for applying core insights gleaned into how human intelligence works with AI algorithms.”


This work was funded by the National Institute on Drug Abuse, the National Research Foundation of Korea, the Ministry of Science and ICT, and the Samsung Research Funding Center of Samsung Electronics.

Media Contact
Younghye Cho
[email protected]
82-423-502-294

Original Source

https://www.kaist.ac.kr/_prog/_board/?code=ed_news&mode=V&no=107681&upr_ntt_no=107681&site_dvs_cd=en&menu_dvs_cd=0601

Related Journal Article

http://dx.doi.org/10.1038/s41467-019-13632-1

Tags: Biology, Biotechnology, Cell Biology

Bioengineer.org © Copyright 2023 All Rights Reserved.
