Bioengineer.org

How artificial intelligence can explain its decisions

By Bioengineer | September 2, 2022 | in Biology

Artificial intelligence (AI) can be trained to recognise whether a tissue image contains a tumour. However, exactly how it makes its decision has remained a mystery until now. A team from the Research Center for Protein Diagnostics (PRODI) at Ruhr-Universität Bochum is developing a new approach that will render an AI’s decision transparent and thus trustworthy. The researchers led by Professor Axel Mosig describe the approach in the journal Medical Image Analysis, published online on 24 August 2022.

[Image: Bochum researchers. Credit: RUB, Marquard]

For the study, bioinformatics scientist Axel Mosig collaborated with Professor Andrea Tannapfel, head of the Institute of Pathology, oncologist Professor Anke Reinacher-Schick from the Ruhr-Universität’s St. Josef Hospital, and biophysicist and PRODI founding director Professor Klaus Gerwert. The group developed a neural network, i.e. an AI, that can classify whether a tissue sample contains a tumour or not. To this end, they fed the AI a large number of microscopic tissue images, some of which contained tumours, while others were tumour-free.
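The article gives no implementation details of the Bochum group's network. As a much-simplified, purely illustrative sketch of the training setup it describes — synthetic 8×8 patches standing in for microscopic tissue images, and a logistic-regression classifier standing in for a deep network (all names and parameters here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for tissue images: 8x8 grayscale patches.
# "Tumour" patches get a brighter centre region; label 1 = tumour, 0 = tumour-free.
def make_patch(tumour):
    img = rng.normal(0.0, 1.0, (8, 8))
    if tumour:
        img[2:6, 2:6] += 2.0
    return img

X = np.array([make_patch(i % 2 == 0).ravel() for i in range(200)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])

# Minimal logistic-regression classifier trained by gradient descent.
w = np.zeros(64)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted tumour probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is only the data flow: labelled images in, a learned decision function out — with no indication yet of *which* image features drive that decision.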

“Neural networks are initially a black box: it’s unclear which identifying features a network learns from the training data,” explains Axel Mosig. Unlike human experts, they lack the ability to explain their decisions. “However, for medical applications in particular, it’s important that the AI is capable of explanation and thus trustworthy,” adds bioinformatics scientist David Schuhmacher, who collaborated on the study.

AI is based on falsifiable hypotheses

The Bochum team’s explainable AI is therefore based on the only kind of meaningful statements known to science: on falsifiable hypotheses. If a hypothesis is false, this fact must be demonstrable through an experiment. Artificial intelligence usually follows the principle of inductive reasoning: using concrete observations, i.e. the training data, the AI creates a general model on the basis of which it evaluates all further observations.

The underlying problem was described by the philosopher David Hume 250 years ago and can be easily illustrated: no matter how many white swans we observe, we can never conclude from this data that all swans are white and that no black swans exist. Science therefore makes use of so-called deductive logic. In this approach, a general hypothesis is the starting point. For example, the hypothesis that all swans are white is falsified when a black swan is spotted.

Activation map shows where the tumour is detected

“At first glance, inductive AI and the deductive scientific method seem almost incompatible,” says Stephanie Schörner, a physicist who likewise contributed to the study. But the researchers found a way. Their novel neural network not only provides a classification of whether a tissue sample contains a tumour or is tumour-free, it also generates an activation map of the microscopic tissue image.

The activation map is based on a falsifiable hypothesis, namely that the activation derived from the neural network corresponds exactly to the tumour regions in the sample. Site-specific molecular methods can be used to test this hypothesis.
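The article does not reproduce how the activation map is constructed. A common related idea (not necessarily the paper's method) is a class-activation-style map: a classifier-weighted sum of convolutional feature maps, which can then be compared against an independently annotated tumour region — the comparison is what makes the hypothesis testable. A sketch with entirely synthetic arrays:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical final-layer feature maps for one tissue image:
# 4 channels, each 16x16, spatially aligned with the image.
feature_maps = rng.random((4, 16, 16)) * 0.1
# Pretend channel 0 responds to tumour tissue in the upper-left quadrant.
feature_maps[0, :8, :8] += 1.0

# Classifier weights for the "tumour" class, one per channel (CAM-style).
class_weights = np.array([0.9, 0.05, 0.03, 0.02])

# Activation map: weighted sum of feature maps, normalised to [0, 1].
cam = np.tensordot(class_weights, feature_maps, axes=1)
cam = (cam - cam.min()) / (cam.max() - cam.min())

# The falsifiable hypothesis: high activation coincides with the annotated
# tumour region. Here the "annotation" is the known upper-left quadrant.
annotation = np.zeros((16, 16), dtype=bool)
annotation[:8, :8] = True
predicted_region = cam > 0.5

iou = (predicted_region & annotation).sum() / (predicted_region | annotation).sum()
print(f"IoU between activation map and annotation: {iou:.2f}")
```

A low overlap score would falsify the hypothesis for that sample — which is exactly the experimental check the site-specific molecular methods provide in the study.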

“Thanks to the interdisciplinary structures at PRODI, we have the best prerequisites for incorporating the hypothesis-based approach into the development of trustworthy biomarker AI in the future, for example to be able to distinguish between certain therapy-relevant tumour subtypes,” concludes Axel Mosig.



Journal: Medical Image Analysis

DOI: 10.1016/j.media.2022.102594

Article Title: A framework for falsifiable explanations of machine learning models with an application in computational pathology


Bioengineer.org © Copyright 2023 All Rights Reserved.