Interpreting and Correcting Medical Image Classification with PIP-Net

07/19/2023
by Meike Nauta et al.

Part-prototype models are explainable-by-design image classifiers and a promising alternative to black-box AI. This paper explores the applicability and potential of interpretable machine learning, in particular PIP-Net, for automated diagnosis support on real-world medical imaging data. PIP-Net learns human-understandable prototypical image parts, and we evaluate its accuracy and interpretability for fracture detection and skin-cancer diagnosis. We find that PIP-Net's decision-making process is in line with medical classification standards, even though it is trained with only image-level class labels. Thanks to PIP-Net's unsupervised pretraining of prototypes, data-quality problems such as undesired text in an X-ray or labelling errors can be easily identified. Additionally, we are the first to show that humans can manually correct the reasoning of PIP-Net by directly disabling undesired prototypes. We conclude that part-prototype models are promising for medical applications due to their interpretability and potential for advanced model debugging.
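The prototype-disabling idea can be illustrated with a minimal sketch. PIP-Net scores each class as a sparse, non-negative weighted sum of prototype presence scores, so zeroing the classification weights of one prototype removes its influence from every decision. The array shapes and variable names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical sketch of prototype suppression, not the authors' code.
# A class score is modeled as a non-negative weighted sum of
# prototype presence scores, as in a part-prototype classifier.

rng = np.random.default_rng(0)
num_prototypes, num_classes = 8, 2

# Prototype presence scores for one image (values in [0, 1]).
presence = rng.random(num_prototypes)
# Non-negative weights connecting prototypes to classes.
weights = rng.random((num_prototypes, num_classes))

def class_scores(presence, weights):
    # Each class score sums the evidence from all prototypes.
    return presence @ weights

before = class_scores(presence, weights)

# A clinician flags prototype 3 as undesired (e.g. it fires on
# text printed in the X-ray rather than on anatomy):
weights[3, :] = 0.0  # disable it by zeroing its class weights

after = class_scores(presence, weights)
# With non-negative weights and presence, scores can only drop.
```

Because presence scores and weights are non-negative, disabling a prototype can only lower class scores, which makes the correction predictable for the clinician performing it.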


