
Researchers demonstrate ‘mind-reading’ brain-decoding tech

By Bioengineer
October 23, 2017
in Biology
Credit: Purdue University image/Erin Easterling

WEST LAFAYETTE, Ind. – Researchers have demonstrated how to decode what the human brain is seeing by using artificial intelligence to interpret fMRI scans from people watching videos, representing a sort of mind-reading technology.

The advance could aid efforts to improve artificial intelligence and lead to new insights into brain function. Critical to the research is a type of algorithm called a convolutional neural network, which has been instrumental in enabling computers and smartphones to recognize faces and objects.
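As a rough illustration of the object-recognition networks described here, the sketch below runs a single image through a pretrained convolutional network. The choice of AlexNet and the torchvision library is an assumption made for illustration; the article does not specify which network or toolkit the Purdue team used.

```python
# Illustrative sketch only: classify one image with a pretrained CNN.
# AlexNet and torchvision are assumptions, not details from the article.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

cnn = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

img = Image.open("frame.jpg").convert("RGB")   # hypothetical video frame
x = preprocess(img).unsqueeze(0)               # shape (1, 3, 224, 224)
with torch.no_grad():
    logits = cnn(x)                            # scores over 1000 ImageNet categories
print("predicted category index:", int(logits.argmax(dim=1)))
```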

"That type of network has made an enormous impact in the field of computer vision in recent years," said Zhongming Liu, an assistant professor in Purdue University's Weldon School of Biomedical Engineering and School of Electrical and Computer Engineering. "Our technique uses the neural network to understand what you are seeing."

Convolutional neural networks, a form of "deep-learning" algorithm, have been used to study how the brain processes static images and other visual stimuli. However, the new findings represent the first time such an approach has been used to see how the brain processes movies of natural scenes, a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings, said doctoral student Haiguang Wen.

He is lead author of a new research paper appearing online Oct. 20 in the journal Cerebral Cortex. A YouTube video is available at https://youtu.be/Qh5_uMGXl1g.

The researchers acquired 11.5 hours of fMRI data from each of three female subjects watching 972 video clips, including clips showing people or animals in action and nature scenes. First, the data were used to train the convolutional neural network model to predict activity in the brain's visual cortex while the subjects were watching the videos. The researchers then used the model to decode fMRI data from the subjects and reconstruct the videos, even clips the model had not been trained on.
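A minimal sketch of the first, encoding step might look like the following: CNN features extracted from the movie frames are mapped to each voxel's measured response with a regularized linear regression. The file names, the ridge-regression choice, and the preprocessing described in the comments are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch of a voxel-wise encoding model: predict fMRI responses from CNN features
# extracted from the movie frames. All names and choices here are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

# Assumed inputs (hypothetical files):
#   cnn_features : (n_timepoints, n_features) CNN-layer activations, averaged within
#                  each 2-second fMRI frame and convolved with a hemodynamic response
#   bold         : (n_timepoints, n_voxels) fMRI responses in visual cortex
cnn_features = np.load("train_features.npy")
bold = np.load("train_bold.npy")

encoder = Ridge(alpha=1.0)          # one linear map per voxel (multi-output regression)
encoder.fit(cnn_features, bold)

# Evaluate on held-out movie clips: correlate predicted and measured responses per voxel
test_features = np.load("test_features.npy")
test_bold = np.load("test_bold.npy")
pred = encoder.predict(test_features)
r = [np.corrcoef(pred[:, v], test_bold[:, v])[0, 1] for v in range(test_bold.shape[1])]
print("median prediction accuracy:", np.median(r))
```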

The model was able to accurately decode the fMRI data into specific image categories. Actual video images were then presented side-by-side with the computer's interpretation of what the person's brain saw based on fMRI data.

"For example, a water animal, the moon, a turtle, a person, a bird in flight," Wen said. "I think what is a unique aspect of this work is that we are doing the decoding nearly in real time, as the subjects are watching the video. We scan the brain every two seconds, and the model rebuilds the visual experience as it occurs."

The researchers were able to figure out how certain locations in the brain were associated with specific information a person was seeing. "Neuroscience is trying to map which parts of the brain are responsible for specific functionality," Wen said. "This is a landmark goal of neuroscience. I think what we report in this paper moves us closer to achieving that goal. A scene with a car moving in front of a building is dissected into pieces of information by the brain: one location in the brain may represent the car; another location may represent the building.

"Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain's visual cortex. By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene."
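One way to realize the per-location visualization Wen describes, assuming an encoding model like the sketch above, is to back-propagate a single voxel's predicted response through the CNN and adjust the input pixels to maximize it. This is an illustrative reconstruction of the idea, not the authors' exact procedure.

```python
# Sketch: find the pixel pattern that most strongly drives one voxel, assuming a
# fitted linear readout (voxel_weights) on CNN features. Everything here is
# illustrative; it is not the authors' published visualization method.
import torch
from torchvision import models

cnn = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).features.eval()
voxel_weights = torch.load("voxel_weights.pt")   # hypothetical: (n_features,) for one voxel

x = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    feats = cnn(x).flatten(1)                        # CNN features of the current image
    voxel_response = (feats @ voxel_weights).squeeze()  # predicted response of the voxel
    (-voxel_response).backward()                     # gradient ascent on the response
    optimizer.step()

# x now approximates the pixel pattern that most strongly drives that brain location
```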

The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called cross-subject encoding and decoding. This finding is important because it demonstrates the potential for broad applications of such models to study brain function, even for people with visual deficits.
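A minimal sketch of cross-subject decoding, under the assumption that the two subjects' data have been aligned to a shared voxel space, is to fit a decoder on one subject and evaluate it directly on the other. File names are hypothetical.

```python
# Sketch of cross-subject decoding: a decoder fitted on one subject's fMRI data is
# applied directly to another subject's data (assumes a shared, aligned voxel space).
import numpy as np
from sklearn.linear_model import LogisticRegression

bold_a = np.load("subject1_bold.npy")       # (n_frames, n_voxels), training subject
labels_a = np.load("subject1_labels.npy")
bold_b = np.load("subject2_bold.npy")       # same voxel grid, different person
labels_b = np.load("subject2_labels.npy")

decoder = LogisticRegression(max_iter=1000).fit(bold_a, labels_a)
print("cross-subject accuracy:", decoder.score(bold_b, labels_b))
```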

"We think we are entering a new era of machine intelligence and neuroscience where research is focusing on the intersection of these two important fields," Liu said. "Our mission in general is to advance artificial intelligence using brain-inspired concepts. In turn, we want to use artificial intelligence to help us understand the brain. So, we think this is a good strategy to help advance both fields in a way that otherwise would not be accomplished if we approached them separately."

###

A complete list of co-authors is available in the abstract. The research has been funded by the National Institute of Mental Health. The work is affiliated with the Purdue Institute for Integrative Neuroscience. Data reported in this paper have also been made publicly available on the website of the Laboratory of Integrated Brain Imaging.

Source: Zhongming Liu, 765-496-1872, [email protected]

IMAGE CAPTION:

From left, doctoral student Haiguang Wen, assistant professor Zhongming Liu and former graduate student Junxing Shi, review fMRI data of brain scans. The work aims to improve artificial intelligence and lead to new insights into brain function. (Purdue University image/Erin Easterling) A publication-quality photo is available at https://news.uns.purdue.edu/images/2017/liu-braindecode.jpg

IMAGE CAPTION:

From left, doctoral student Haiguang Wen and former graduate student Junxing Shi prepare to test graduate student Kuan Han. (Purdue University image/Erin Easterling)

A publication-quality photo is available at https://news.uns.purdue.edu/images/2017/liu-braindecode2.jpg

ABSTRACT

Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision

Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Jiayue Cao, Zhongming Liu*

Weldon School of Biomedical Engineering, School of Electrical and Computer Engineering, and Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, Indiana 47906, USA

*Correspondence: Zhongming Liu, 765-496-1872, [email protected]

Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision.

Media Contact

Emil Venere
[email protected]
765-494-4709
@PurdueUnivNews

http://www.purdue.edu/

Related Journal Article

http://dx.doi.org/10.1093/cercor/bhx268
