Researchers incorporate computer vision and uncertainty into AI for robotic prosthetics

By Bioengineer | May 27, 2020 | Science News

Image credit: Edgar Lobaton

Researchers have developed new software that can be integrated with existing hardware to enable people using robotic prosthetics or exoskeletons to walk in a safer, more natural manner on different types of terrain. The new framework incorporates computer vision into prosthetic leg control, and includes robust artificial intelligence (AI) algorithms that allow the software to better account for uncertainty.

“Lower-limb robotic prosthetics need to execute different behaviors based on the terrain users are walking on,” says Edgar Lobaton, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “The framework we’ve created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making.”

The researchers focused on distinguishing between six different terrains that require adjustments in a robotic prosthetic’s behavior: tile, brick, concrete, grass, “upstairs” and “downstairs.”

“If the degree of uncertainty is too high, the AI isn’t forced to make a questionable decision – it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode,” says Boxuan Zhong, lead author of the paper and a recent Ph.D. graduate from NC State.
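As an illustration of this gating logic, here is a minimal sketch in Python. It assumes a softmax classifier over the six terrain classes and a hypothetical entropy threshold; the paper's actual decision rule and cutoff are not described in the article.

```python
import numpy as np

TERRAINS = ["tile", "brick", "concrete", "grass", "upstairs", "downstairs"]
ENTROPY_THRESHOLD = 1.0  # hypothetical cutoff; the paper's value is not given here

def entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a predictive distribution (higher = more uncertain)."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def decide(probs: np.ndarray) -> str:
    """Act on the terrain prediction only when uncertainty is low enough;
    otherwise fall back to a conservative 'safe' mode."""
    if entropy(probs) > ENTROPY_THRESHOLD:
        return "safe_mode"  # uncertainty too high: don't force a questionable decision
    return TERRAINS[int(np.argmax(probs))]

# A confident prediction acts; an ambiguous one defaults to safe mode.
print(decide(np.array([0.90, 0.02, 0.02, 0.02, 0.02, 0.02])))  # -> "tile"
print(decide(np.array([0.20, 0.20, 0.15, 0.15, 0.15, 0.15])))  # -> "safe_mode"
```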

The new “environmental context” framework incorporates both hardware and software elements. The researchers designed the framework for use with any lower-limb robotic exoskeleton or robotic prosthetic device, but with one additional piece of hardware: a camera. In their study, the researchers used cameras worn on eyeglasses and cameras mounted on the lower-limb prosthesis itself. The researchers evaluated how the AI was able to make use of computer vision data from both types of camera, separately and when used together.

“Incorporating computer vision into control software for wearable robotics is an exciting new area of research,” says Helen Huang, a co-author of the paper. “We found that using both cameras worked well, but required a great deal of computing power and may be cost prohibitive. However, we also found that using only the camera mounted on the lower limb worked pretty well – particularly for near-term predictions, such as what the terrain would be like for the next step or two.” Huang is the Jackson Family Distinguished Professor of Biomedical Engineering in the Joint Department of Biomedical Engineering at NC State and the University of North Carolina at Chapel Hill.

The most significant advance, however, is to the AI itself.

“We came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty in a way that allows the system to incorporate uncertainty into its decision making,” Lobaton says. “This is certainly relevant for robotic prosthetics, but our work here could be applied to any type of deep-learning system.”
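The article does not detail the method itself. As one common way a deep-learning system can quantify predictive uncertainty, the sketch below uses Monte Carlo dropout, averaging several stochastic forward passes; this is an illustrative stand-in, not necessarily the authors' technique, and the toy architecture is likewise assumed.

```python
import torch
import torch.nn as nn

# Illustrative classifier; the authors' architecture is not described in the article.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 6),  # six terrain classes
)

def mc_dropout_predict(x: torch.Tensor, n_samples: int = 30):
    """Run multiple stochastic forward passes with dropout active and
    return the mean class probabilities plus their spread (uncertainty)."""
    model.train()  # keep dropout layers sampling at inference time
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_samples)
        ])
    return probs.mean(dim=0), probs.std(dim=0)

mean_p, std_p = mc_dropout_predict(torch.randn(1, 128))
print(mean_p, std_p)  # a high standard deviation signals low confidence
```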

To train the AI system, the researchers fitted able-bodied individuals with the cameras and had them walk through a variety of indoor and outdoor environments. The researchers then did a proof-of-concept evaluation by having a person with lower-limb amputation wear the cameras while traversing the same environments.

“We found that the model can be appropriately transferred so the system can operate with subjects from different populations,” Lobaton says. “That means that the AI worked well even though it was trained by one group of people and used by somebody different.”

However, the new framework has not yet been tested in a robotic device.

“We are excited to incorporate the framework into the control system for working robotic prosthetics – that’s the next step,” Huang says.

“And we’re also planning to work on ways to make the system more efficient, in terms of requiring less visual data input and less data processing,” says Zhong.

###

The paper, “Environmental Context Prediction for Lower Limb Prostheses with Uncertainty Quantification,” is published in IEEE Transactions on Automation Science and Engineering. The paper was co-authored by Rafael da Silva, a Ph.D. student at NC State, and Minhan Li, a Ph.D. student in the Joint Department of Biomedical Engineering.

The work was done with support from the National Science Foundation under grants 1552828, 1563454 and 1926998.

Media Contact
Matt Shipman
[email protected]

Original Source

https://news.ncsu.edu/2020/05/prosthetics-computer-vision-uncertainty/

Related Journal Article

http://dx.doi.org/10.1109/TASE.2020.2993399

Tags: Algorithms/Models, Biomedical/Environmental/Chemical Engineering, Computer Science, Electrical Engineering/Electronics, Medicine/Health, Multimedia/Networking/Interface Design, Rehabilitation/Prosthetics/Plastic Surgery, Robotry/Artificial Intelligence, Software Engineering, Technology/Engineering/Computer Science
