SHAMSUL: Simultaneous Heatmap-Analysis to investigate Medical Significance Utilizing Local interpretability methods

07/16/2023
by Mahbub Ul Alam, et al.

The interpretability of deep neural networks has become a subject of great interest within the medical and healthcare domain. This attention stems from concerns regarding transparency, legal and ethical considerations, and the medical significance of predictions generated by these deep neural networks in clinical decision support systems. To address this matter, our study examines the application of four well-established interpretability methods: Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Gradient-weighted Class Activation Mapping (Grad-CAM), and Layer-wise Relevance Propagation (LRP). Leveraging transfer learning on a multi-label, multi-class chest radiography dataset, we aim to interpret predictions for specific pathology classes. Our analysis encompasses both single-label and multi-label predictions, providing a comprehensive and unbiased assessment through quantitative and qualitative investigations compared against human expert annotation. Notably, Grad-CAM demonstrates the most favorable performance in the quantitative evaluation, while the LIME heatmap segmentation visualization exhibits the highest level of medical significance. Our research highlights the strengths and limitations of these interpretability methods and suggests that a multimodal approach, incorporating diverse sources of information beyond chest radiography images, could offer additional insights for enhancing interpretability in the medical domain.
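As context for the interpretability methods named above, the sketch below illustrates how a Grad-CAM heatmap is typically computed for a single pathology class. It is a minimal illustration, not the authors' implementation: the DenseNet-121 backbone, the hooked layer, and the class index are assumptions chosen for the example, and the random tensor stands in for a preprocessed chest radiograph.

    # Minimal Grad-CAM sketch (illustrative, not the authors' code): explain one
    # pathology-class prediction from a transfer-learned chest X-ray classifier.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Assumption: a DenseNet-121 backbone, a common choice for chest radiographs.
    model = models.densenet121(weights="DEFAULT").eval()
    target_layer = model.features  # last convolutional block

    # Capture feature maps on the forward pass and their gradients on backward.
    activations, gradients = {}, {}
    target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
    target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

    x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed radiograph
    class_idx = 0                    # hypothetical pathology class index
    model.zero_grad()
    model(x)[0, class_idx].backward()

    # Grad-CAM: weight each feature map by its spatially averaged gradient,
    # sum across channels, keep positive evidence, and upsample to image size.
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

LIME, SHAP, and LRP would each produce their own attribution maps from the same classifier; Grad-CAM is shown here only because the abstract reports it performed best in the quantitative evaluation.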


