Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Progressive Exaggeration on Chest X-rays

02/18/2021
by Joseph Paul Cohen, et al.

Motivation: Traditional image attribution methods struggle to satisfactorily explain predictions of neural networks. Prediction explanation is important, especially in medical imaging, for avoiding the unintended consequences of deploying AI systems when false positive predictions can impact patient care. Thus, there is a pressing need to develop improved methods for model explainability and introspection.

Specific Problem: A new approach is to transform input images to increase or decrease the features which cause the prediction. However, current approaches are difficult to implement, as they are monolithic or rely on GANs. These hurdles prevent wide adoption.

Our approach: Given an arbitrary classifier, we propose a simple autoencoder and gradient update (Latent Shift) that can transform the latent representation of an input image to exaggerate or curtail the features used for prediction. We use this method to study chest X-ray classifiers and evaluate their performance. We conduct a reader study with two radiologists assessing 240 chest X-ray predictions to identify which ones are false positives (half are), using either traditional attribution maps or our proposed method.

Results: We found low overlap with ground-truth pathology masks for models with reasonably high accuracy. However, the results from our reader study indicate that these models are generally looking at the correct features. We also found that the Latent Shift explanation allows a user to have more confidence in true positive predictions compared to traditional approaches (0.15±0.95 on a 5-point scale with p=0.01), with only a small increase in false positive predictions (0.04±1.06 with p=0.57).

Accompanying webpage: https://mlmed.org/gifsplanation
Source code: https://github.com/mlmed/gifsplanation
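As the abstract describes it, Latent Shift encodes the input into a latent code, shifts that code along the gradient of the classifier's prediction, and decodes the shifted codes into a sequence of counterfactual images. The PyTorch snippet below is a minimal sketch of that idea, not the authors' released implementation (see the linked repository for that); the names `latent_shift`, `encoder`, `decoder`, `classifier`, and `lambdas` are illustrative placeholders.

```python
import torch

def latent_shift(x, encoder, decoder, classifier, lambdas):
    """Sketch of the Latent Shift idea: move an input's latent code along
    the classifier gradient and decode the shifted codes into frames."""
    z = encoder(x).detach().requires_grad_(True)  # latent representation of the input
    y = classifier(decoder(z))                    # prediction on the reconstruction
    # Direction in latent space that most increases the predicted probability.
    (grad,) = torch.autograd.grad(y.sum(), z)
    frames = []
    for lam in lambdas:  # negative values curtail the feature, positive values exaggerate it
        with torch.no_grad():
            frames.append(decoder(z + lam * grad))
    return frames

# Hypothetical usage: sweep lambda around zero and render the decoded
# frames as an animation (the "gifsplanation").
# frames = latent_shift(x, ae.encode, ae.decode, model, torch.linspace(-100, 100, 9))
```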

Related research

04/02/2023: The Effect of Counterfactuals on Reading Chest X-rays
This study evaluates the effect of counterfactual explanations on the in...

04/16/2022: Few-Shot Transfer Learning to improve Chest X-Ray pathology detection using limited triplets
Deep learning approaches applied to medical imaging have reached near-hu...

04/01/2021: Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features
Neural networks have demonstrated remarkable performance in classificati...

02/14/2020: CheXclusion: Fairness gaps in deep chest X-ray classifiers
Machine learning systems have received much attention recently for their...

03/16/2021: EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Backdoor attack injects malicious behavior to models such that inputs em...

10/05/2022: HeartSpot: Privatized and Explainable Data Compression for Cardiomegaly Detection
Advances in data-driven deep learning for chest X-ray image analysis und...

09/07/2023: Insights Into the Inner Workings of Transformer Models for Protein Function Prediction
Motivation: We explored how explainable AI (XAI) can help to shed light ...
