Enhancing the Extraction of Interpretable Information for Ischemic Stroke Imaging from Deep Neural Networks

11/19/2019
by Erico Tjoa, et al.

When artificial intelligence is used in the medical sector, interpretability is a crucial consideration. A diagnosis based on the decision of a black-box neural network, sometimes lacking a clear rationale, is unlikely to be adopted clinically, for fear of the potentially dire consequences of an unexplained misdiagnosis. In this work, we implement Layer-wise Relevance Propagation (LRP), a visual interpretability method, on a 3D U-Net trained for lesion segmentation using the small multi-modal imaging dataset provided by the ISLES 2017 competition. We demonstrate that modifications to LRP can turn an otherwise highly noise-skewed output into more sensible visual explanations, and we quantify them using inclusivity coefficients. We also link the amplitude of the modified signals to their useful information content: high-amplitude signals appear to constitute the noise that undermines the interpretability capacity of LRP. Furthermore, a mathematical framework for the possible analysis of function approximation is developed by analogy.
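To make the core technique concrete, the sketch below shows the standard LRP-ε redistribution rule on a toy fully connected network. This is a minimal, generic illustration in NumPy, not the paper's 3D U-Net implementation: the network shapes, the choice of ε, and the decision to seed relevance with the raw output scores are all assumptions for the example.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """Propagate relevance one linear layer backward with the LRP-epsilon rule.

    weights:       (in_dim, out_dim) weight matrix of the layer
    activations:   (in_dim,) input activations of the layer
    relevance_out: (out_dim,) relevance assigned to the layer's outputs
    """
    z = activations @ weights          # forward pre-activations, shape (out_dim,)
    z = z + eps * np.sign(z)           # epsilon term stabilizes small denominators
    s = relevance_out / z              # per-output relevance "messages"
    return activations * (weights @ s) # redistribute relevance onto the inputs

# Toy two-layer ReLU network (shapes are illustrative only)
rng = np.random.default_rng(0)
x = rng.random(4)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 2))

h = np.maximum(0.0, x @ W1)            # hidden layer with ReLU
out = h @ W2

R_out = out                            # seed relevance with the output scores
R_h = lrp_epsilon(W2, h, R_out)        # propagate through the second layer
R_x = lrp_epsilon(W1, x, R_h)          # ...and back to the input features
```

Because ε is small, the total relevance is approximately conserved from the output back to the input (`R_x.sum()` stays close to `R_out.sum()`), which is the property that lets the input-level relevances be read as a heatmap over the input features.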


Related research

- Generalization on the Enhancement of Layerwise Relevance Interpretability of Deep Neural Network (09/05/2020): The practical application of deep neural networks are still limited by t...
- Convolutional Neural Network Interpretability with General Pattern Theory (02/05/2021): Ongoing efforts to understand deep neural networks (DNN) have provided m...
- Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods (11/01/2021): Artificial Intelligence has emerged as a useful aid in numerous clinical...
- Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals (07/09/2018): Interpretability of deep neural networks is a recently emerging area of ...
- Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis (04/10/2023): Early detection of melanoma is crucial for preventing severe complicatio...
- SHAMSUL: Simultaneous Heatmap-Analysis to investigate Medical Significance Utilizing Local interpretability methods (07/16/2023): The interpretability of deep neural networks has become a subject of gre...
- Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units (07/19/2021): As interpretability has been pointed out as the obstacle to the adoption...
