BIOENGINEER.ORG

Machine learning at speed

By Bioengineer
April 12, 2021, in Science News
Image credit: © 2021 KAUST; Anastasia Serin.

Inserting lightweight optimization code in high-speed network devices has enabled a KAUST-led collaboration to increase the speed of machine learning on parallelized computing systems five-fold.

This “in-network aggregation” technology, developed with researchers and systems architects at Intel, Microsoft and the University of Washington, can provide dramatic speed improvements using readily available programmable network hardware.

The fundamental benefit of artificial intelligence (AI) that gives it so much power to “understand” and interact with the world is the machine-learning step, in which the model is trained using large sets of labeled training data. The more data the AI is trained on, the better the model is likely to perform when exposed to new inputs.

The recent burst of AI applications is largely due to better machine learning and the use of larger models and more diverse datasets. Performing the machine-learning computations, however, is an enormously taxing task that increasingly relies on large arrays of computers running the learning algorithm in parallel.

“How to train deep-learning models at a large scale is a very challenging problem,” says Marco Canini from the KAUST research team. “The AI models can consist of billions of parameters, and we can use hundreds of processors that need to work efficiently in parallel. In such systems, communication among processors during incremental model updates easily becomes a major performance bottleneck.”
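To see why this bottleneck matters, a back-of-envelope calculation (with illustrative numbers, not figures from the article) shows how much traffic a single synchronization step can generate when every worker exchanges a full copy of the gradient through one aggregation point:

```python
# Illustrative arithmetic only: assumed model size, precision, and
# worker count, not measurements from the KAUST study.
params = 1_000_000_000        # a billion-parameter model
bytes_per_param = 4           # 32-bit floating-point gradients
workers = 100

# Naive parameter-server exchange: each worker uploads its gradient
# and downloads the aggregate, so the server's link carries
# 2 * workers full copies of the model per training step.
per_step_bytes = 2 * workers * params * bytes_per_param
print(per_step_bytes / 1e12)  # -> 0.8 (terabytes per step)
```

At thousands of steps per training run, shaving even part of this exchange off the critical path translates directly into wall-clock speedup.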

The team found a potential solution in new network technology developed by Barefoot Networks, a division of Intel.

“We use Barefoot Networks’ new programmable dataplane networking hardware to offload part of the work performed during distributed machine-learning training,” explains Amedeo Sapio, a KAUST alumnus who has since joined the Barefoot Networks team at Intel. “Using this programmable networking hardware to do more than just move data means that we can perform computations along the network paths.”

The key innovation of the team’s SwitchML platform is to allow the network hardware to perform the data aggregation task at each synchronization step during the model update phase of the machine-learning process. Not only does this offload part of the computational load, it also significantly reduces the amount of data transmission.
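The aggregation idea can be sketched in a few lines. This is a toy model of the concept only, not the SwitchML implementation (which runs in switch hardware): the "switch" receives one gradient per worker, sums them element-wise, and multicasts a single aggregated result back, so each worker's link carries its own gradient just once in each direction regardless of how many peers participate.

```python
def switch_aggregate(worker_grads):
    """Toy in-network aggregation: sum the workers' gradient vectors
    element-wise inside the 'network', then multicast one aggregated
    copy back to every worker (assumed function name, for illustration)."""
    agg = [sum(vals) for vals in zip(*worker_grads)]  # element-wise sum
    return [list(agg) for _ in worker_grads]          # multicast copy per worker

# Four workers, each holding a constant gradient vector of length 8.
grads = [[w] * 8 for w in (0, 1, 2, 3)]
results = switch_aggregate(grads)
print(results[0])  # -> [6, 6, 6, 6, 6, 6, 6, 6]
```

The data-reduction benefit follows from the fan-in: with n workers, a host-based aggregator must receive n full gradients on one link, whereas the switch sums them as they pass through it.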

“Although the programmable switch dataplane can do operations very quickly, the operations it can do are limited,” says Canini. “So our solution had to be simple enough for the hardware and yet flexible enough to solve challenges such as limited onboard memory capacity. SwitchML addresses this challenge by co-designing the communication network and the distributed training algorithm, achieving an acceleration of up to 5.5 times compared to the state-of-the-art approach.”
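One way to picture working within a switch's small onboard memory (a hedged sketch with assumed names, not SwitchML's actual protocol) is to stream a long gradient through a fixed pool of aggregation "slots", with only a few chunks in flight at any moment and slots recycled as chunks complete:

```python
def streaming_aggregate(worker_grads, num_slots=2, chunk=4):
    """Toy streaming aggregation: only num_slots chunks are 'in flight'
    at once, mirroring a switch's limited onboard memory."""
    n = len(worker_grads[0])
    result = [0] * n
    for start in range(0, n, chunk * num_slots):
        # Fill the slot pool, aggregate each slot's chunk, then recycle.
        for s in range(num_slots):
            lo = start + s * chunk
            hi = min(lo + chunk, n)
            for i in range(lo, hi):
                result[i] = sum(g[i] for g in worker_grads)
    return result

# Three workers with constant gradients 1, 2, 3 over 10 elements.
grads = [[w] * 10 for w in (1, 2, 3)]
print(streaming_aggregate(grads))  # -> [6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
```

The co-design point is that the training algorithm must tolerate this chunked, pipelined exchange so that the hardware's memory limits never stall the workers.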


Media Contact
Michael Cusack
[email protected]

Original Source

https://discovery.kaust.edu.sa/en/article/1077/machine-learning-at-speed

Tags: Algorithms/Models, Calculations/Problem-Solving, Computer Science, Multimedia/Networking/Interface Design, Software Engineering, Technology/Engineering/Computer Science

Bioengineer.org © Copyright 2023 All Rights Reserved.