Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units

07/19/2021
by   Woo-Jeoung Nam, et al.

As interpretability has been pointed out as an obstacle to the adoption of Deep Neural Networks (DNNs), there is increasing interest in resolving the transparency issue without sacrificing their impressive performance. In this paper, we demonstrate the efficacy of recent attribution techniques for explaining diagnostic decisions by visualizing the significant factors in the input image. By exploiting the objectness that DNNs have learned, fully decomposing the network prediction yields clear localization of the target lesion. To verify our approach, we conduct experiments on chest X-ray diagnosis with publicly accessible datasets. As an intuitive assessment metric for explanations, we report the Intersection over Union (IoU) between the visual explanation and the bounding box of the lesion. The experimental results show that recently proposed attribution methods localize the evidence for a diagnostic decision more accurately than the traditionally used CAM. Furthermore, we analyze the inconsistency between the intentions of humans and DNNs, which is easily obscured by high performance. By visualizing the relevant factors, it is possible to confirm whether the criterion for a decision is in line with the intended learning strategy. Our analysis of unmasking machine intelligence demonstrates the necessity of explainability in medical diagnostic decisions.
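The IoU metric mentioned above compares a visual explanation against a ground-truth lesion bounding box. Below is a minimal sketch of how such a score might be computed, assuming the attribution map is binarized by a relative threshold; the threshold value, array shapes, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def explanation_iou(attribution, bbox, threshold=0.5):
    """Intersection over Union between a thresholded attribution map
    and a ground-truth lesion bounding box.

    attribution: 2D array of relevance scores, same size as the image.
    bbox: (x, y, w, h) lesion bounding box in pixel coordinates.
    threshold: fraction of the max attribution kept as "explained"
               (an illustrative choice, not taken from the paper).
    """
    # Binarize the attribution map relative to its maximum value.
    mask = attribution >= threshold * attribution.max()

    # Rasterize the bounding box into a binary mask of the same shape.
    box = np.zeros_like(mask, dtype=bool)
    x, y, w, h = bbox
    box[y:y + h, x:x + w] = True

    intersection = np.logical_and(mask, box).sum()
    union = np.logical_or(mask, box).sum()
    return intersection / union if union > 0 else 0.0

# Example: a toy 8x8 attribution map with a hot region partially
# overlapping the lesion box.
attr = np.zeros((8, 8))
attr[2:5, 2:5] = 1.0
print(explanation_iou(attr, bbox=(3, 3, 3, 3)))  # 4/14 ~= 0.29
```

Binarizing by a fraction of the maximum keeps the score independent of the absolute scale of the attribution values, which varies across attribution methods.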


Related research

12/12/2022
Utilizing Mutations to Evaluate Interpretability of Neural Networks on Genomic Data
Even though deep neural networks (DNNs) achieve state-of-the-art results...

03/22/2020
Estimating Uncertainty and Interpretability in Deep Learning for Coronavirus (COVID-19) Detection
Deep Learning has achieved state of the art performance in medical imagi...

07/13/2021
Scalable, Axiomatic Explanations of Deep Alzheimer's Diagnosis from Heterogeneous Data
Deep Neural Networks (DNNs) have an enormous potential to learn from com...

06/10/2019
Generation of Multimodal Justification Using Visual Word Constraint Model for Explainable Computer-Aided Diagnosis
The ambiguity of the decision-making process has been pointed out as the...

04/04/2021
Towards Semantic Interpretation of Thoracic Disease and COVID-19 Diagnosis Models
Convolutional neural networks are showing promise in the automatic diagn...

11/19/2019
Enhancing the Extraction of Interpretable Information for Ischemic Stroke Imaging from Deep Neural Networks
When artificial intelligence is used in the medical sector, interpretabi...

01/17/2023
Negative Flux Aggregation to Estimate Feature Attributions
There are increasing demands for understanding deep neural networks' (DN...
