Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images

01/19/2021
by Kathryn Schutte, et al.

As AI-based medical devices become more common in imaging fields like radiology and histology, the interpretability of the underlying predictive models is crucial to expanding their use in clinical practice. Existing heatmap-based interpretability methods such as GradCAM only highlight the location of predictive features; they do not explain how those features contribute to the prediction. In this paper, we propose a new interpretability method that can be used to understand the predictions of any black-box model on images by showing how the input image would have to be modified to produce a different prediction. A StyleGAN is trained on medical images to provide a mapping between latent vectors and images. Our method identifies the direction in this latent space that optimally changes the model's prediction. By shifting the latent representation of an input image along this direction, we produce a series of new synthetic images with progressively changing predictions. We validate our approach on histology and radiology images and demonstrate its ability to provide meaningful explanations that are more informative than GradCAM heatmaps. By revealing the patterns the model has learned, our method helps clinicians build trust in the model's predictions, discover new biomarkers, and uncover potential biases.
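To make the latent-space manipulation concrete, the sketch below shows the two core steps in PyTorch. This is a minimal illustration, not the authors' released implementation: `generator`, `classifier`, `find_direction`, and `prediction_series` are hypothetical names, the direction search is a plain gradient-ascent stand-in for the optimization described in the paper, and a real input image is assumed to have already been inverted into a latent code (e.g., with a GAN-inversion encoder).

```python
import torch

# Minimal sketch of the latent-direction idea, not the authors' code.
# Assumptions: `generator` is a pre-trained StyleGAN mapping latent
# codes w (shape [N, D]) to images, `classifier` is the black-box
# model returning one logit per image, and real input images have
# already been inverted into latent codes.

def find_direction(generator, classifier, w_samples, steps=200, lr=0.01):
    """Gradient-ascent stand-in for the paper's direction search:
    optimize a single unit direction d that increases the classifier's
    output when every latent code is shifted along it."""
    d = torch.zeros_like(w_samples[0], requires_grad=True)
    opt = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        preds = classifier(generator(w_samples + d))  # broadcast shift
        (-preds.mean()).backward()                    # push predictions up
        opt.step()
        with torch.no_grad():
            d /= d.norm() + 1e-8                      # keep d unit-length
    return d.detach()

def prediction_series(generator, classifier, w, d, alphas):
    """Walk one latent code w (shape [1, D]) along d and record the
    resulting synthetic images and their changing predictions."""
    images, preds = [], []
    with torch.no_grad():
        for a in alphas:
            img = generator(w + a * d)
            images.append(img)
            preds.append(classifier(img).item())
    return images, preds
```

For instance, passing `alphas = torch.linspace(-3, 3, 7)` sweeps the latent code from a shift that lowers the prediction to one that raises it, yielding the kind of image series with changed predictions that the abstract describes.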

Related research

04/11/2022
medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space
Despite the surge of deep learning in the past decade, some users are sk...

08/01/2022
What do Deep Neural Networks Learn in Medical Images?
Deep learning is increasingly gaining rapid adoption in healthcare to he...

07/19/2023
TbExplain: A Text-based Explanation Method for Scene Classification Models with the Statistical Prediction Correction
The field of Explainable Artificial Intelligence (XAI) aims to improve t...

06/05/2023
Interpretable Alzheimer's Disease Classification Via a Contrastive Diffusion Autoencoder
In visual object classification, humans often justify their choices by c...

11/22/2016
An unexpected unity among methods for interpreting model predictions
Understanding why a model made a certain prediction is crucial in many d...

05/15/2023
Topological Interpretability for Deep-Learning
With the increasing adoption of AI-based systems across everyday life, t...

08/12/2023
Learn Single-horizon Disease Evolution for Predictive Generation of Post-therapeutic Neovascular Age-related Macular Degeneration
Most of the existing disease prediction methods in the field of medical ...
