
Scientists voice concerns, call for transparency and reproducibility in AI research

By Bioengineer | October 14, 2020


Image credit: The Princess Margaret Cancer Foundation

TORONTO, CANADA — International scientists are challenging their colleagues to make Artificial Intelligence (AI) research more transparent and reproducible to accelerate the impact of their findings for cancer patients.

In an article published in Nature on October 14, 2020, scientists at Princess Margaret Cancer Centre, the University of Toronto, Stanford University, Johns Hopkins, the Harvard School of Public Health, the Massachusetts Institute of Technology, and other institutions challenge scientific journals to hold computational researchers to higher standards of transparency, and they call on their colleagues to share their code, models, and computational environments in publications.

“Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main finding to learn from,” says Dr. Benjamin Haibe-Kains, Senior Scientist at Princess Margaret Cancer Centre and first author of the article. “But in computational research, it’s not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress.”

The authors voiced their concern about the lack of transparency and reproducibility in AI research after a Google Health study by McKinney et al., published in a prominent scientific journal in January 2020, claimed that an AI system could outperform human radiologists in both robustness and speed for breast cancer screening. The study made waves in the scientific community and created a buzz with the public, with headlines appearing in BBC News, CBC, and CNBC.

A closer examination raised some concerns: the study lacked a sufficient description of its methods, including its code and models. That lack of transparency prevented researchers from learning exactly how the model works and how they could apply it at their own institutions.

“On paper and in theory, the McKinney et al. study is beautiful,” says Dr. Haibe-Kains. “But if we can’t learn from it, then it has little to no scientific value.”

According to Dr. Haibe-Kains, who is jointly appointed as Associate Professor in Medical Biophysics at the University of Toronto and is an affiliate of the Vector Institute for Artificial Intelligence, this is just one example of a problematic pattern in computational research.

“Researchers are more incentivized to publish their findings than to spend the time and resources ensuring their study can be replicated,” explains Dr. Haibe-Kains. “Journals are vulnerable to the ‘hype’ of AI and may lower the standards for accepting papers that don’t include all the materials required to make the study reproducible–often in contradiction to their own guidelines.”

This can slow the translation of AI models into clinical settings. Researchers cannot learn how a model works or replicate it in a thoughtful way, and in some cases this could lead to unwarranted clinical trials, because a model that works for one group of patients or at one institution may not be appropriate for another.

In the article, titled “Transparency and reproducibility in artificial intelligence,” the authors point to numerous frameworks and platforms that allow safe and effective sharing, and they argue that upholding the three pillars of open science makes AI research more transparent and reproducible: sharing data, sharing computer code, and sharing predictive models.
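For readers who want a concrete picture, the following is a minimal, hypothetical sketch (not taken from the Nature article) of what sharing those three artifacts can look like for a simple Python model; the dataset, file names, and scikit-learn estimator are illustrative assumptions only.

```python
# Hypothetical sketch of packaging the three shareable artifacts:
# data reference, trained model, and pinned computational environment.
import json
import subprocess

import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# 1. Data: use a citable, accessible dataset (a toy stand-in here).
X, y = load_breast_cancer(return_X_y=True)

# 2. Model: train and persist the fitted estimator so others can rerun it.
model = LogisticRegression(max_iter=5000).fit(X, y)
joblib.dump(model, "model.joblib")

# 3. Environment: pin package versions so the computation can be recreated.
freeze = subprocess.run(["pip", "freeze"], capture_output=True, text=True).stdout
with open("requirements.txt", "w") as f:
    f.write(freeze)

# A small provenance record ties the shared artifacts together.
with open("provenance.json", "w") as f:
    json.dump(
        {
            "dataset": "scikit-learn breast_cancer (illustrative stand-in)",
            "model_file": "model.joblib",
            "environment_file": "requirements.txt",
        },
        f,
        indent=2,
    )
```

In a real study, the data entry would point to the actual cohort or its access procedure, and the pinned environment and saved model are what allow reviewers at other institutions to rerun the analysis exactly as published.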

“We have high hopes for the utility of AI for our cancer patients,” says Dr. Haibe-Kains. “Sharing and building upon our discoveries–that’s real scientific impact.”

###

Competing Interests: Michael M. Hoffman received a GPU Grant from Nvidia. Benjamin Haibe-Kains is a scientific advisor for Altis Labs. Chris McIntosh holds an equity position in Bridge7Oncology and receives royalties from RaySearch Laboratories.

About Princess Margaret Cancer Centre:

Princess Margaret Cancer Centre has achieved an international reputation as a global leader in the fight against cancer and in delivering personalized cancer medicine. The Princess Margaret, one of the top five international cancer research centres, is a member of the University Health Network, which also includes Toronto General Hospital, Toronto Western Hospital, Toronto Rehabilitation Institute and the Michener Institute for Education at UHN. All are research hospitals affiliated with the University of Toronto. For more information: http://www.theprincessmargaret.ca

Media Contact
Katie Sullivan
[email protected]

Tags: Algorithms/Models, Biomechanics/Biophysics, Cancer, Clinical Trials, Health Care, Health Care Systems/Services, Medical/Scientific Ethics, Systems/Chaos/Pattern Formation/Complexity, Technology/Engineering/Computer Science