Explaining the Black-box Smoothly - A Counterfactual Approach

01/11/2021
by Sumedha Singla, et al.

We propose a BlackBox Counterfactual Explainer, developed explicitly for medical imaging applications. Classical approaches that assess feature importance (e.g., saliency maps) do not explain how and why variations in a particular anatomical region are relevant to the outcome, which is crucial for transparent decision making in healthcare applications. Our framework explains the outcome by gradually exaggerating the semantic effect of the given outcome label. Given a query input to a classifier, a Generative Adversarial Network (GAN) produces a progressive set of perturbations to the query image that gradually shift the classifier's posterior probability from the original class to its negation. We design the loss function to ensure that essential and potentially relevant details, such as support devices, are preserved in the counterfactually generated images. We provide an extensive evaluation on different classification tasks on chest X-ray images. Our experiments show that the counterfactually generated visual explanations are consistent with clinically relevant measurements of the disease, both quantitatively and qualitatively.
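The abstract describes three ingredients: a classifier under inspection, a generator conditioned on a desired posterior probability, and a loss that combines classifier consistency with preservation of label-irrelevant details. Below is a minimal PyTorch sketch of that idea; all names (counterfactual_sweep, training_losses, lambda_id) are illustrative assumptions rather than the paper's implementation, and the paper's full objective also includes adversarial GAN terms omitted here.

    # Minimal sketch of the progressive-counterfactual idea, assuming a
    # conditional generator generator(x, c) and a binary classifier whose
    # logits are turned into posteriors with a sigmoid. Hypothetical names;
    # adversarial (GAN) losses from the full method are omitted.
    import torch
    import torch.nn.functional as F

    def counterfactual_sweep(generator, x, n_steps=8):
        """Produce a progressive series of perturbed images whose target
        posterior for the outcome label sweeps from ~0 to ~1."""
        images = []
        for target_p in torch.linspace(0.05, 0.95, n_steps):
            c = target_p.expand(x.size(0), 1)  # condition: desired posterior
            images.append(generator(x, c))
        return images

    def training_losses(generator, classifier, x, c, lambda_id=10.0):
        """Classifier-consistency loss plus an identity term that preserves
        details irrelevant to the label (e.g., support devices)."""
        x_cf = generator(x, c)
        p_cf = torch.sigmoid(classifier(x_cf))
        # 1) The generated image should attain the requested posterior c.
        loss_cls = F.binary_cross_entropy(p_cf, c)
        # 2) Cycle-style identity term: mapping the counterfactual back to
        #    the original posterior should recover the original image.
        p_x = torch.sigmoid(classifier(x)).detach()
        x_rec = generator(x_cf, p_x)
        loss_id = F.l1_loss(x_rec, x)
        return loss_cls + lambda_id * loss_id

Calling counterfactual_sweep on a query image yields the progressive set of perturbations described above, with the classifier's posterior exaggerated step by step from one class to its negation; the identity term in training_losses is one simple way to encode the paper's requirement that subject-specific details survive the transformation.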
