medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space

04/11/2022
by Amil Dravid, et al.

Despite the surge of deep learning over the past decade, some users remain hesitant to deploy these models in practice because of their black-box nature. In the medical domain in particular, where mistakes can have severe repercussions, we need methods that build confidence in a model's decisions. To this end, we propose a novel medical imaging generative adversarial framework, medXGAN (medical eXplanation GAN), that visually explains what a medical classifier focuses on in its binary predictions. By encoding domain knowledge of medical images, we disentangle anatomical structure from pathology, enabling fine-grained visualization through latent interpolation. Furthermore, we optimize the latent space so that interpolation explains how individual features contribute to the classifier's output. Our method outperforms baselines such as Gradient-Weighted Class Activation Mapping (Grad-CAM) and Integrated Gradients in localization and explanatory ability. Additionally, combining medXGAN with Integrated Gradients yields explanations that are more robust to noise. The code is available at: https://github.com/avdravid/medXGAN_explanations.
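To make the core idea concrete, below is a minimal sketch (not the authors' released implementation) of the kind of latent-interpolation explanation the abstract describes: hold an "anatomy" latent code fixed, interpolate a "pathology" latent code from a negative-class value toward a positive-class value, and track how a frozen binary classifier's output changes along the path. The names ToyGenerator, ToyClassifier, interpolation_explanation, and the latent/image sizes are placeholder assumptions for illustration only; the medXGAN + Integrated Gradients combination mentioned in the abstract is not shown here.

```python
# Hedged sketch of latent-interpolation explanations, assuming a generator that
# takes separate anatomy and pathology codes and a frozen binary classifier.
import torch
import torch.nn as nn

Z_ANATOMY, Z_PATHOLOGY, IMG = 64, 16, 64  # assumed latent and image sizes

class ToyGenerator(nn.Module):
    """Stand-in generator: maps (z_anatomy, z_pathology) -> a 1-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_ANATOMY + Z_PATHOLOGY, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG), nn.Tanh(),
        )
    def forward(self, z_anat, z_path):
        z = torch.cat([z_anat, z_path], dim=1)
        return self.net(z).view(-1, 1, IMG, IMG)

class ToyClassifier(nn.Module):
    """Stand-in binary classifier producing P(pathology present)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(IMG * IMG, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def interpolation_explanation(G, f, z_anat, z_neg, z_pos, steps=8):
    """Generate images along the pathology-code path with a fixed anatomy code,
    record the classifier's score at each step, and use the pixel-wise
    |difference| between the endpoints as a coarse localization map."""
    images, scores = [], []
    for alpha in torch.linspace(0.0, 1.0, steps):
        z_path = (1 - alpha) * z_neg + alpha * z_pos   # latent interpolation
        x = G(z_anat, z_path)
        images.append(x)
        scores.append(f(x).item())
    saliency = (images[-1] - images[0]).abs().squeeze()  # endpoint difference map
    return images, scores, saliency

if __name__ == "__main__":
    G, f = ToyGenerator(), ToyClassifier()
    z_anat = torch.randn(1, Z_ANATOMY)      # fixed anatomical structure
    z_neg = torch.randn(1, Z_PATHOLOGY)     # pathology code for the negative class
    z_pos = torch.randn(1, Z_PATHOLOGY)     # pathology code for the positive class
    _, scores, saliency = interpolation_explanation(G, f, z_anat, z_neg, z_pos)
    print("classifier scores along the path:", [round(s, 3) for s in scores])
    print("explanation map shape:", tuple(saliency.shape))
```

Because the anatomy code stays fixed, any change in the generated images along the path is attributable to the pathology code, which is what lets the endpoint difference act as a fine-grained explanation rather than a coarse heatmap.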

Related research

- Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images (01/19/2021)
- GLANCE: Global to Local Architecture-Neutral Concept-based Explanations (07/05/2022)
- Visual Explanations from Deep Networks via Riemann-Stieltjes Integrated Gradient-based Localization (05/22/2022)
- LatentDR: Improving Model Generalization Through Sample-Aware Latent Degradation and Restoration (08/28/2023)
- A Lightweight Causal Model for Interpretable Subject-level Prediction (06/19/2023)
- Visual Debates (10/17/2022)
- Hierarchical Symbolic Reasoning in Hyperbolic Space for Deep Discriminative Models (07/05/2022)