Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods

11/01/2021
by Zohaib Salahuddin et al.

Artificial intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Owing to the rapid increase in available data and computational power, deep neural networks have shown performance comparable to or better than that of clinicians on many tasks. To conform to the principles of trustworthy AI, an AI system must be transparent, robust, and fair, and must ensure accountability. Current deep neural solutions are referred to as black boxes because the specifics of their decision-making process are not well understood. There is therefore a need to ensure the interpretability of deep neural networks before they can be incorporated into the routine clinical workflow. In this narrative review, we used systematic keyword searches and domain expertise to identify nine types of interpretability methods that have been applied to understand deep learning models for medical image analysis, grouped by the type of explanation they generate and by technical similarity. Furthermore, we report the progress made towards evaluating the explanations produced by various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions concerning the interpretability of deep neural networks for medical image analysis.
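To make the discussion concrete, below is a minimal sketch of one widely used post-hoc interpretability method, a vanilla gradient saliency map, which highlights the input pixels that most influence a classifier's prediction. This is an illustration of the general technique, not code from the reviewed paper; the model, the input tensor, and the `gradient_saliency` helper are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def gradient_saliency(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Vanilla gradient saliency: |d(class score) / d(input pixel)|.

    image: a single input of shape (1, C, H, W).
    model: any classifier returning per-class logits (hypothetical placeholder).
    Returns a (1, H, W) heatmap normalized to [0, 1].
    """
    model.eval()
    # Work on a detached copy so gradients accumulate on the input itself.
    image = image.detach().clone().requires_grad_(True)

    logits = model(image)            # shape: (1, num_classes)
    score = logits[0, target_class]  # scalar score for the class of interest
    score.backward()                 # populates image.grad

    # Collapse channel-wise gradient magnitudes into one per-pixel heatmap.
    saliency = image.grad.abs().max(dim=1)[0]  # shape: (1, H, W)
    # Normalize to [0, 1] for overlay visualization.
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    return saliency

# Example usage with a hypothetical chest X-ray classifier:
# model = torchvision.models.resnet18(num_classes=2)
# heatmap = gradient_saliency(model, xray_tensor, target_class=1)
```

Taking the channel-wise maximum of the gradient magnitudes yields a single heatmap that can be overlaid on the original scan; more robust gradient-based variants such as SmoothGrad and Grad-CAM follow the same pattern of attributing a prediction back to the input.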


