One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images

07/11/2021
by Weina Jin, et al.

Being able to explain predictions to clinical end-users is a necessity for leveraging the power of AI models in clinical decision support. For medical images, saliency maps are the most common form of explanation: they highlight the features that are important to the AI model's prediction. Although many saliency map methods have been proposed, it is unknown how well they explain decisions on multi-modal medical images, where each modality/channel carries a distinct clinical meaning of the same underlying biomedical phenomenon. Understanding such modality-dependent features is essential for clinical users to interpret AI decisions. To tackle this clinically important but technically overlooked problem, we propose the MSFI (Modality-Specific Feature Importance) metric, which examines whether a saliency map highlights modality-specific important features. MSFI encodes two clinical requirements: modality prioritization and modality-specific feature localization. Our evaluation of 16 commonly used saliency map methods, including a clinician user study, shows that although most methods capture modality importance in general, they fail to highlight modality-specific important features consistently and precisely. These results guide the choice of saliency map methods and provide insights for proposing new ones targeted at clinical applications.
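The abstract does not give the exact formula for MSFI, but it states that the metric combines modality prioritization with modality-specific feature localization. As a rough, illustrative sketch under that assumption, a metric of this kind could weight, for each modality, the fraction of saliency mass falling inside that modality's ground-truth important region by the modality's importance weight. The names below (msfi_like_score, gt_masks, modality_weights) are hypothetical and not the authors' implementation.

```python
# Illustrative sketch only -- NOT the paper's exact MSFI definition.
# Scores how well a multi-modal saliency map concentrates on each modality's
# clinically important region, weighted by that modality's importance.
import numpy as np

def msfi_like_score(saliency, gt_masks, modality_weights):
    """saliency: dict modality -> non-negative saliency array (2D or 3D)
    gt_masks: dict modality -> boolean array marking modality-specific important features
    modality_weights: dict modality -> importance weight in [0, 1]
                      (e.g., estimated by ablating that modality)
    Returns a score in [0, 1]; higher means the saliency map better matches
    modality-specific important regions in proportion to modality importance."""
    num, denom = 0.0, 0.0
    for m, s in saliency.items():
        s = np.clip(s, 0, None)                 # keep only positive attribution
        total = s.sum()
        # Fraction of saliency mass that falls inside the ground-truth region.
        localization = s[gt_masks[m]].sum() / total if total > 0 else 0.0
        w = modality_weights[m]
        num += w * localization
        denom += w
    return num / denom if denom > 0 else 0.0
```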

Related research:

03/12/2022  Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can Existing Algorithms Fulfill Clinical Requirements?
07/06/2022  Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users
10/09/2020  Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization
02/03/2020  Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
06/07/2022  Beyond Faithfulness: A Framework to Characterize and Compare Saliency Methods
02/16/2022  Guidelines and evaluation for clinical explainable AI on medical image analysis
11/06/2022  ViT-CX: Causal Explanation of Vision Transformers
