Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images

01/19/2021
by Kathryn Schutte et al.

As AI-based medical devices become more common in imaging fields such as radiology and histology, interpretability of the underlying predictive models is crucial to expanding their use in clinical practice. Existing heatmap-based interpretability methods such as GradCAM only highlight the location of predictive features but do not explain how those features contribute to the prediction. In this paper, we propose a new interpretability method that can be used to understand the predictions of any black-box model on images, by showing how the input image would have to be modified to produce a different prediction. A StyleGAN is trained on medical images to provide a mapping between latent vectors and images. Our method identifies the direction in the latent space that optimally changes the model's prediction. By shifting the latent representation of an input image along this direction, we produce a series of new synthetic images with progressively different predictions. We validate our approach on histology and radiology images and demonstrate that it provides meaningful explanations that are more informative than GradCAM heatmaps. Our method reveals the patterns the model has learned, which allows clinicians to build trust in its predictions, discover new biomarkers, and uncover potential biases.
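The optimization the abstract describes can be sketched in a few lines: freeze both the generator and the black-box classifier, then optimize only a latent shift until the classifier's output reaches a target. The paper's implementation is not reproduced here; the sketch below is a minimal PyTorch illustration under stated assumptions: `generator` stands in for a pretrained StyleGAN synthesis network, `classifier` for the black-box model, and the tiny stand-in modules, loss weight, and function names are hypothetical rather than the authors' code.

```python
import torch

latent_dim = 512

# Hypothetical stand-ins so the sketch runs end to end; in practice these
# would be a pretrained StyleGAN generator and the black-box classifier.
generator = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 3 * 64 * 64), torch.nn.Tanh(),
    torch.nn.Unflatten(1, (3, 64, 64)),
)
classifier = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1), torch.nn.Sigmoid(),
)

# Both networks stay frozen: only the latent shift is optimized.
for p in list(generator.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)

def find_direction(w, target, steps=200, lr=0.05, reg=0.1):
    """Find a latent shift d such that classifier(generator(w + d)) -> target."""
    d = torch.zeros_like(w, requires_grad=True)
    opt = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        pred = classifier(generator(w + d))
        # Push the prediction toward the target class while penalizing the
        # size of the shift, so the image stays close to the original.
        loss = torch.nn.functional.binary_cross_entropy(pred, target)
        loss = loss + reg * d.norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return d.detach()

# Shifting the latent code along the direction in increments yields the
# series of synthetic images with gradually changing predictions.
w = torch.randn(1, latent_dim)  # random here; in practice, the projection
                                # of the input image into the latent space
target = torch.ones(1, 1)       # e.g. flip the predicted class
d = find_direction(w, target)
series = [generator(w + alpha * d) for alpha in torch.linspace(0, 1, 5)]
```

Inspecting how the images in `series` change as the prediction moves toward the target is what makes this kind of explanation richer than a GradCAM heatmap: it shows not only where the predictive features are, but how they would have to look for the prediction to differ.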


Related Research

04/11/2022 | medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space
Despite the surge of deep learning in the past decade, some users are sk...

12/22/2019 | Interpreting Predictive Process Monitoring Benchmarks
Predictive process analytics has recently gained significant attention, ...

06/14/2019 | Global and Local Interpretability for Cardiac MRI Classification
Deep learning methods for classifying medical images have demonstrated i...

11/22/2016 | An unexpected unity among methods for interpreting model predictions
Understanding why a model made a certain prediction is crucial in many d...

12/06/2020 | Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models
Deep neural networks have shown significant promise in comprehending com...

06/25/2021 | Projection-wise Disentangling for Fair and Interpretable Representation Learning: Application to 3D Facial Shape Analysis
Confounding bias is a crucial problem when applying machine learning to ...

01/31/2020 | Unsupervised deep clustering for predictive texture pattern discovery in medical images
Predictive marker patterns in imaging data are a means to quantify disea...