Enhancing the Extraction of Interpretable Information for Ischemic Stroke Imaging from Deep Neural Networks

11/19/2019 ∙ by Erico Tjoa, et al.

When artificial intelligence is used in the medical sector, interpretability is a crucial factor to consider. A diagnosis based on the decision of a black-box neural network, which sometimes lacks a clear rationale, is unlikely to be clinically adopted, for fear of the potentially dire consequences of an unexplained misdiagnosis. In this work, we implement Layer-wise Relevance Propagation (LRP), a visual interpretability method, applied to a 3D U-Net for lesion segmentation using the small multi-modal image dataset provided by the ISLES 2017 competition. We demonstrate that modifications to LRP can provide more sensible visual explanations for an otherwise highly noise-skewed output, and we quantify them using inclusivity coefficients. We also link the amplitude of the modified signals to their useful information content: high-amplitude signals appear to constitute the noise that undermines the interpretability capacity of LRP. Furthermore, a mathematical framework for a possible analysis of function approximation is developed by analogy.
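To illustrate the kind of backward relevance redistribution LRP performs, the following is a minimal sketch of the LRP-ε rule for a single fully connected layer, written in NumPy. This is an illustrative toy, not the authors' implementation; the function name `lrp_epsilon` and the toy weights are assumptions for the example, and a real application (as in the paper) would apply such a rule layer by layer through a 3D U-Net.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Propagate output relevance R_out back to the inputs of a linear
    layer using the LRP-epsilon rule:
        R_j = sum_k  a_j * W[j, k] / (z_k + eps * sign(z_k)) * R_k,
    where z_k = sum_j a_j * W[j, k] is the pre-activation."""
    z = a @ W                                    # pre-activations z_k
    z = z + eps * np.where(z >= 0, 1.0, -1.0)    # stabilizer avoids division by zero
    s = R_out / z                                # per-output relevance "messages"
    return a * (W @ s)                           # redistribute relevance to inputs

# Toy usage: two input activations, three output neurons.
a = np.array([1.0, 2.0])
W = np.array([[0.5, -0.3, 0.2],
              [0.1, 0.4, -0.6]])
R_out = np.array([1.0, 0.5, 0.25])
R_in = lrp_epsilon(a, W, R_out)
```

A key property visible here is (approximate) relevance conservation: up to the stabilizer term, `R_in.sum()` equals `R_out.sum()`, so relevance is redistributed rather than created or destroyed as it flows back toward the input voxels.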
