
CMU method makes more data available for training self-driving cars

By Bioengineer
June 17, 2020
in Science News

Additional data boosts accuracy of tracking other cars, pedestrians

Image credit: Carnegie Mellon University

PITTSBURGH–For safety’s sake, a self-driving car must accurately track the movement of pedestrians, bicycles and other vehicles around it. Training those tracking systems may now be more effective thanks to a new method developed at Carnegie Mellon University.

Generally speaking, the more road and traffic data available for training tracking systems, the better the results. And the CMU researchers have found a way to unlock a mountain of autonomous driving data for this purpose.

“Our method is much more robust than previous methods because we can train on much larger datasets,” said Himangi Mittal, a research intern working with David Held, assistant professor in CMU’s Robotics Institute.

Most autonomous vehicles navigate primarily using lidar, a laser sensor that generates 3D information about the world surrounding the car. This 3D information isn't a set of images, but a cloud of points. One way the vehicle makes sense of this data is through a technique known as scene flow, which involves calculating the speed and trajectory of each 3D point. Groups of points moving together are interpreted via scene flow as vehicles, pedestrians or other moving objects.
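
As a rough illustration of that idea, and not the CMU method itself, the short Python sketch below treats scene flow as a per-point velocity field over a lidar point cloud and warps the cloud forward in time. The names (warp_point_cloud, flow_pred) and the 0.1-second sweep interval are assumptions made for the example.

import numpy as np

def warp_point_cloud(points, flow, dt=0.1):
    """Advance each 3D lidar point along its estimated flow vector.

    points : (N, 3) array of x, y, z coordinates at time t
    flow   : (N, 3) array of estimated per-point velocities (m/s)
    dt     : assumed time between lidar sweeps, in seconds
    """
    return points + flow * dt

# Toy usage: 1,000 random points; in practice a learned model would
# predict `flow_pred` from two consecutive lidar sweeps.
points_t = np.random.rand(1000, 3) * 50.0
flow_pred = np.zeros((1000, 3))
flow_pred[:, 0] = 10.0                      # e.g. everything moving 10 m/s along x
points_t1_pred = warp_point_cloud(points_t, flow_pred)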

In the past, state-of-the-art methods for training such a system have required the use of labeled datasets — sensor data that has been annotated to track each 3D point over time. Manually labeling these datasets is laborious and expensive, so, not surprisingly, little labeled data exists. As a result, scene flow training is instead often performed with simulated data, which is less effective, and then fine-tuned with the small amount of labeled real-world data that exists.

Mittal, Held and robotics Ph.D. student Brian Okorn took a different approach, using unlabeled data to perform scene flow training. Because unlabeled data is relatively easy to generate by mounting a lidar on a car and driving around, there’s no shortage of it.

The key to their approach was to develop a way for the system to detect its own errors in scene flow. At each instant, the system tries to predict where each 3D point is going and how fast it’s moving. In the next instant, it measures the distance between the point’s predicted location and the actual location of the point nearest that predicted location. This distance forms one type of error to be minimized.

The system then reverses the process, starting from the predicted point location and working backward to map to where the point originated. It then measures the distance between that backward-mapped position and the point's actual origin; that distance forms the second type of error.

The system then works to correct those errors.
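
To make those two error signals concrete, here is a minimal NumPy sketch, assuming the flow is represented as a per-point displacement between consecutive lidar sweeps. The function names and the brute-force nearest-neighbor search are illustrative assumptions, not the researchers' implementation.

import numpy as np

def nearest_neighbor_error(predicted, target):
    """First error: mean distance from each forward-warped point to the
    nearest actual point in the next sweep."""
    # Brute-force pairwise distances; fine for a small illustrative example.
    d = np.linalg.norm(predicted[:, None, :] - target[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def cycle_consistency_error(points_t, flow_fwd, flow_bwd):
    """Second error: warp points forward, map them back with the reverse
    flow, and measure how far they land from where they started."""
    forward = points_t + flow_fwd
    returned = forward + flow_bwd
    return np.linalg.norm(returned - points_t, axis=-1).mean()

# Toy usage with random sweeps and placeholder flows; a scene-flow model
# would predict flow_fwd and flow_bwd, and training would minimize `loss`.
p_t = np.random.rand(500, 3) * 50.0    # lidar sweep at time t
p_t1 = np.random.rand(600, 3) * 50.0   # lidar sweep at time t + 1
flow_fwd = np.zeros_like(p_t)          # assumed forward flow (placeholder)
flow_bwd = np.zeros_like(p_t)          # assumed reverse flow (placeholder)
loss = nearest_neighbor_error(p_t + flow_fwd, p_t1) + \
       cycle_consistency_error(p_t, flow_fwd, flow_bwd)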

“It turns out that to eliminate both of those errors, the system actually needs to learn to do the right thing, without ever being told what the right thing is,” Held said.

As convoluted as that might sound, Okorn found that it worked well. The researchers calculated that scene flow accuracy using a training set of synthetic data was only 25%. When the synthetically trained system was fine-tuned with a small amount of real-world labeled data, accuracy increased to 31%. When they added a large amount of unlabeled data to train the system using their approach, scene flow accuracy jumped to 46%.

The research team presented their method at the Computer Vision and Pattern Recognition (CVPR) conference, which was held virtually June 14-19. The CMU Argo AI Center for Autonomous Vehicle Research supported this research, with additional support from a NASA Space Technology Research Fellowship.

###

Media Contact
Byron Spice
[email protected]

Tags: Computer Science, Robotry/Artificial Intelligence, Technology/Engineering/Computer Science, Vehicles
