Guidelines and evaluation for clinical explainable AI on medical image analysis

02/16/2022
by Weina Jin, et al.

Explainable artificial intelligence (XAI) is essential for enabling clinical users to receive informed decision support from AI and to comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure that an explanation technique is both technically sound and clinically useful, yet specific support for achieving this goal is lacking. To bridge this research gap, we propose the Clinical XAI Guidelines, which consist of five criteria that a clinical XAI system should be optimized for. The guidelines recommend choosing an explanation form based on Guideline 1 (G1) Understandability and G2 Clinical relevance. For the chosen explanation form, the specific XAI technique should be optimized for G3 Truthfulness, G4 Informative plausibility, and G5 Computational efficiency. Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly. None of the 16 commonly used heatmap XAI techniques we evaluated was suitable for clinical use, owing to failures on G3 and G4. Our evaluation demonstrates how the Clinical XAI Guidelines can support the design and evaluation of clinically viable XAI.
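As background for the heatmap techniques evaluated above: a heatmap (saliency map) explanation assigns each pixel, or each voxel of each imaging modality, a score indicating its contribution to the model's prediction. The sketch below illustrates the idea with vanilla gradient saliency, one of the simplest members of this family; the model architecture, input shapes, and two-class task are hypothetical stand-ins for illustration, not the models or data used in the paper's evaluation.

import torch
import torch.nn as nn

# Hypothetical stand-in for a multi-modal medical image classifier;
# four imaging modalities (e.g. MRI sequences) are stacked as channels.
model = nn.Sequential(
    nn.Conv2d(4, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # a two-class clinical task
)
model.eval()

def gradient_saliency(model, image, target_class):
    # Vanilla gradient saliency: |d(class score)/d(input)| per pixel,
    # yielding one heatmap per input modality (channel).
    image = image.detach().clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().squeeze(0)

# Usage with a random stand-in "scan" (1 batch, 4 modalities, 128x128).
scan = torch.randn(1, 4, 128, 128)
heatmaps = gradient_saliency(model, scan, target_class=1)
print(heatmaps.shape)  # torch.Size([4, 128, 128])

Heatmaps of this kind are the explanation form whose truthfulness (G3) and informative plausibility (G4) the paper's evaluation tests.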


Related research

03/12/2022 · Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can Existing Algorithms Fulfill Clinical Requirements?
Being able to explain the prediction to clinical end-users is a necessit...

10/04/2018 · Developing Design Guidelines for Precision Oncology Reports
Precision oncology tests that profile tumors to identify clinically acti...

06/08/2020 · Evaluation Criteria for Instance-based Explanation
Explaining predictions made by complex machine learning models helps use...

12/10/2020 · AI Driven Knowledge Extraction from Clinical Practice Guidelines: Turning Research into Practice
Background and Objectives: Clinical Practice Guidelines (CPGs) represent...

07/11/2021 · One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images
Being able to explain the prediction to clinical end-users is a necessit...

01/01/2023 · EvidenceCap: Towards trustworthy medical image segmentation via evidential identity cap
Medical image segmentation (MIS) is essential for supporting disease dia...

12/03/2013 · Use of the C4.5 machine learning algorithm to test a clinical guideline-based decision support system
Well-designed medical decision support system (DSS) have been shown to i...
