Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features

04/01/2021
by Ashkan Khakzar et al.

Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays. To establish trust in the clinical routine, the networks' prediction mechanism needs to be interpretable. One principal approach to interpretation is feature attribution, which identifies the importance of input features for the model's prediction. Building on the Information Bottleneck Attribution (IBA) method, for each prediction we identify the chest X-ray regions that have high mutual information with the network's output. The original IBA identifies input regions that carry sufficient predictive information; we propose Inverse IBA to identify all informative regions, so that every predictive cue for a pathology is highlighted on the X-ray, a desirable property for chest X-ray diagnosis. Moreover, we propose Regression IBA for explaining regression models; using it, we observe that a model trained only on cumulative severity-score labels implicitly learns the severity of different X-ray regions. Finally, we propose Multi-layer IBA to generate higher-resolution, more detailed attribution (saliency) maps. We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics and human-independent feature-importance metrics on the NIH ChestX-ray8 and BrixIA datasets. The code is publicly available.
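For context, IBA attributes a prediction by inserting a bottleneck at an intermediate layer: the activations f(x) are partially replaced by noise through a learned mask lambda, which is optimized to keep the prediction intact while minimizing the information I(Z, X) that flows through, roughly min over lambda of L_CE + beta * I(Z, X). Below is a minimal per-sample sketch of this optimization in PyTorch, under stated assumptions: classifier_head (the sub-network after the bottleneck), mu and sigma (per-channel activation statistics, assumed precomputed on held-out data), and all hyperparameter values are hypothetical placeholders rather than the authors' exact implementation, and the paper's Inverse, Regression, and Multi-layer IBA variants are not shown.

import torch
import torch.nn.functional as F

def iba_saliency(features, classifier_head, target,
                 mu, sigma, beta=10.0, steps=300, lr=1.0):
    """Per-sample IBA sketch: optimize a mask `lam` on intermediate
    activations so the prediction survives on minimal information.

    features:        intermediate activations f(x), shape (1, C, H, W)
    classifier_head: callable mapping masked features to class logits
    target:          tensor of shape (1,) holding the class index
    mu, sigma:       per-channel activation mean/std (broadcastable),
                     assumed precomputed on a held-out set
    """
    features = features.detach()
    # alpha parametrizes the mask lam = sigmoid(alpha); init lam ~ 1
    alpha = torch.full_like(features, 5.0, requires_grad=True)
    opt = torch.optim.Adam([alpha], lr=lr)

    for _ in range(steps):
        lam = torch.sigmoid(alpha)
        noise = mu + sigma * torch.randn_like(features)
        z = lam * features + (1.0 - lam) * noise  # noisy bottleneck

        # Keep the prediction: cross-entropy on the masked forward pass.
        ce = F.cross_entropy(classifier_head(z), target)

        # Upper-bound I(Z, X) by KL(q(z|x) || N(mu, sigma^2)), which has
        # a closed form for Gaussians (written here in normalized units).
        mean_n = lam * (features - mu) / (sigma + 1e-8)
        var_n = (1.0 - lam) ** 2
        kl = 0.5 * (mean_n ** 2 + var_n - torch.log(var_n + 1e-8) - 1.0)

        loss = ce + beta * kl.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Per-location information (summed over channels) is the saliency
    # map; upsample to the input resolution for visualization.
    return kl.detach().sum(dim=1)

In this formulation, a large beta drives lambda toward zero everywhere except regions whose information is needed to keep the cross-entropy low, which is why vanilla IBA recovers sufficient rather than all predictive regions; the paper's Inverse IBA modifies the objective to recover the latter.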
