Domain aware medical image classifier interpretation by counterfactual impact analysis

07/13/2020
by Dimitrios Lenis, et al.

The success of machine learning methods for computer vision tasks has driven a surge in computer-assisted prediction for medicine and biology. Based on a data-driven relationship between input image and pathological classification, these predictors deliver unprecedented accuracy. Yet the numerous approaches that try to explain the causality of this learned relationship have fallen short: time constraints and coarse, diffuse, at times misleading results, caused by the employment of heuristic techniques such as Gaussian noise and blurring, have hindered their clinical adoption. In this work, we discuss and overcome these obstacles by introducing a neural-network-based attribution method applicable to any trained predictor. Our solution identifies salient regions of an input image in a single forward pass by measuring the effect of local image perturbations on the predictor's score. We replace heuristic perturbation techniques with a strong, neighborhood-conditioned inpainting approach, avoiding anatomically implausible and hence adversarial artifacts. We evaluate on public mammography data and compare against existing state-of-the-art methods. Furthermore, we demonstrate the approach's generalizability with results on chest X-rays. Our solution shows, both quantitatively and qualitatively, a significant reduction of localization ambiguity and clearer, more interpretable results, without sacrificing time efficiency.
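To make the impact measure concrete, the sketch below illustrates the underlying idea under simplifying assumptions: a candidate region is replaced with an inpainted, neighborhood-conditioned counterfactual, and the resulting drop in the predictor's score indicates how salient that region is. Note that the paper obtains this map with an attribution network in a single forward pass; the brute-force sliding-window loop here, and the `classifier` and `inpainter` callables, are hypothetical placeholders used only for illustration.

```python
# Illustrative sketch of counterfactual impact analysis (not the paper's exact
# implementation): replace each local region with an anatomically plausible,
# neighborhood-conditioned inpainting and record how much the predictor's
# score drops. `classifier` and `inpainter` are assumed, user-supplied callables.

import numpy as np

def impact_map(image, classifier, inpainter, patch=16, stride=16):
    """Return a per-region saliency map of classifier-score changes.

    image      : 2D float array of shape (H, W)
    classifier : callable, image -> scalar pathology score in [0, 1]
    inpainter  : callable, (image, boolean mask) -> image with the masked
                 region filled conditioned on its neighborhood
    """
    h, w = image.shape
    base_score = classifier(image)
    saliency = np.zeros((h, w), dtype=np.float32)

    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            mask = np.zeros((h, w), dtype=bool)
            mask[y:y + patch, x:x + patch] = True
            # Counterfactual: plausible tissue in place of the region,
            # instead of heuristic Gaussian noise or blurring.
            counterfactual = inpainter(image, mask)
            drop = base_score - classifier(counterfactual)
            # Only regions whose removal lowers the score count as salient.
            saliency[mask] = max(drop, 0.0)
    return saliency
```

The conditioned inpainting is the key design choice: because the replacement stays on the manifold of plausible anatomy, score changes reflect the removed evidence rather than out-of-distribution (adversarial) artifacts introduced by the perturbation itself.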


Related research

04/03/2020 - Interpreting Medical Image Classifiers by Optimization Based Counterfactual Impact Analysis
Clinical applicability of automated decision support systems depends on ...

09/18/2023 - Gradpaint: Gradient-Guided Inpainting with Diffusion Models
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved ...

10/23/2021 - "One-Shot" Reduction of Additive Artifacts in Medical Images
Medical images may contain various types of artifacts with different pat...

07/06/2019 - Generative Counterfactual Introspection for Explainable Deep Learning
In this work, we propose an introspection technique for deep neural netw...

12/14/2020 - Combining Similarity and Adversarial Learning to Generate Visual Explanation: Application to Medical Image Classification
Explaining decisions of black-box classifiers is paramount in sensitive ...

03/31/2022 - A Temporal Learning Approach to Inpainting Endoscopic Specularities and Its Effect on Image Correspondence
Video streams are utilised to guide minimally-invasive surgery and diagn...

08/03/2018 - Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions
Deep learning models are susceptible to input specific noise, called adv...
