How artificial intelligence can explain its decisions

By Bioengineer | September 2, 2022 | Biology

Artificial intelligence (AI) can be trained to recognise whether a tissue image contains a tumour. However, exactly how it makes its decision has remained a mystery until now. A team from the Research Center for Protein Diagnostics (PRODI) at Ruhr-Universität Bochum is developing a new approach that will render an AI’s decision transparent and thus trustworthy. The researchers led by Professor Axel Mosig describe the approach in the journal Medical Image Analysis, published online on 24 August 2022.

[Image: Bochum researchers. Credit: RUB, Marquard]

For the study, bioinformatics scientist Axel Mosig collaborated with Professor Andrea Tannapfel, head of the Institute of Pathology, oncologist Professor Anke Reinacher-Schick from the Ruhr-Universität’s St. Josef Hospital, and biophysicist and PRODI founding director Professor Klaus Gerwert. The group developed a neural network, i.e. an AI, that can classify whether or not a tissue sample contains a tumour. To this end, they fed the AI a large number of microscopic tissue images, some of which contained tumours, while others were tumour-free.
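
The press release contains no code, but the training setup it describes (a supervised classifier fed labelled tissue images) is easy to sketch. The following is a minimal illustration in PyTorch; the architecture, layer sizes, and names are assumptions chosen for clarity, not the PRODI group’s actual implementation.

```python
# Illustrative sketch only: a minimal binary tissue classifier in PyTorch.
# Architecture and names are assumptions for exposition, not the authors' model.
import torch
import torch.nn as nn

class TissueClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional feature extractor for microscopic tissue images
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pooling
            nn.Flatten(),
            nn.Linear(32, 2),         # two classes: tumour-free / contains tumour
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TissueClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One hypothetical training step; random tensors stand in for a real batch of
# labelled images (labels: 0 = tumour-free, 1 = contains tumour).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```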

“Neural networks are initially a black box: it’s unclear which identifying features a network learns from the training data,” explains Axel Mosig. Unlike human experts, they lack the ability to explain their decisions. “However, for medical applications in particular, it’s important that the AI is capable of explanation and thus trustworthy,” adds bioinformatics scientist David Schuhmacher, who collaborated on the study.

AI is based on falsifiable hypotheses

The Bochum team’s explainable AI is therefore based on the only kind of meaningful statements known to science: falsifiable hypotheses. If a hypothesis is false, this fact must be demonstrable by experiment. Artificial intelligence usually follows the principle of inductive reasoning: from concrete observations, i.e. the training data, the AI builds a general model on the basis of which it evaluates all further observations.

The underlying problem was described by the philosopher David Hume 250 years ago and is easily illustrated: no matter how many white swans we observe, we can never conclude from these observations that all swans are white and that no black swans exist. Science therefore also makes use of deductive logic, in which a general hypothesis is the starting point. For example, the hypothesis that all swans are white is falsified as soon as a black swan is spotted.

Activation map shows where the tumour is detected

“At first glance, inductive AI and the deductive scientific method seem almost incompatible,” says Stephanie Schörner, a physicist who also contributed to the study. But the researchers found a way. Their novel neural network not only classifies whether a tissue sample contains a tumour or is tumour-free, it also generates an activation map of the microscopic tissue image.

The activation map is based on a falsifiable hypothesis, namely that the activation derived from the neural network corresponds exactly to the tumour regions in the sample. Site-specific molecular methods can be used to test this hypothesis.
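
How the map is constructed is specified in the paper rather than in this press release. One standard way to obtain such a map from a classifier that ends in global average pooling is class activation mapping (CAM; Zhou et al., 2016). The sketch below continues the hypothetical classifier from the earlier example and illustrates that generic technique, not necessarily the authors’ exact method.

```python
# Illustrative sketch: a class activation map (CAM) for the hypothetical
# TissueClassifier above. CAM is a standard technique (Zhou et al., 2016);
# the press release does not specify the authors' exact construction.
import torch
import torch.nn.functional as F

def class_activation_map(model, image, target_class=1):
    """Return an [H, W] map of per-location evidence for `target_class`."""
    model.eval()
    with torch.no_grad():
        fmap = model.features(image.unsqueeze(0))      # [1, C, h, w]
        weights = model.head[2].weight[target_class]   # [C] linear-layer weights
        cam = torch.einsum("c,chw->hw", weights, fmap[0])
        cam = F.relu(cam)                              # keep positive evidence only
        cam = cam / (cam.max() + 1e-8)                 # normalise to [0, 1]
        # Upsample to input resolution for overlay on the tissue image
        cam = F.interpolate(cam[None, None], size=image.shape[1:],
                            mode="bilinear", align_corners=False)[0, 0]
    return cam

# The falsifiable hypothesis: high-activation pixels coincide with tumour
# regions, which site-specific molecular methods can confirm or refute.
cam = class_activation_map(model, torch.randn(3, 224, 224))
```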

“Thanks to the interdisciplinary structures at PRODI, we have the best prerequisites for incorporating the hypothesis-based approach into the development of trustworthy biomarker AI in the future, for example to be able to distinguish between certain therapy-relevant tumour subtypes,” concludes Axel Mosig.



Journal: Medical Image Analysis

DOI: 10.1016/j.media.2022.102594

Article Title: A framework for falsifiable explanations of machine learning models with an application in computational pathology
