Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain

05/05/2021
by Samanta Knapič, et al.

In this paper we demonstrate the potential of Explainable Artificial Intelligence (XAI) methods for decision support in medical image analysis scenarios. Applying three types of explainable methods to the same medical image data set, our aim was to improve the comprehensibility of the decisions provided by a Convolutional Neural Network (CNN). The visual explanations were provided on in-vivo gastric images obtained from video capsule endoscopy (VCE), with the goal of increasing health professionals' trust in the black-box predictions. We implemented two post-hoc interpretable machine learning methods, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and the alternative explanation approach Contextual Importance and Utility (CIU). The produced explanations were evaluated in three user studies, one each for LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and reported their experience and understanding of the given explanations. The three user groups (n = 20 each), each receiving a distinct form of explanation, were quantitatively analyzed. We found that, as hypothesized, CIU performed better than both LIME and SHAP in terms of supporting human decision-making, and that its explanations were more transparent and thus more understandable to users. CIU also generated explanations more rapidly than LIME and SHAP. Our findings suggest notable differences in human decision-making between the various explanation-support settings. Accordingly, we present three explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support for medical experts.
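To make the setup concrete, the sketch below shows how visual explanations of this kind can be produced for a CNN image classifier. It is not the authors' implementation: the tiny Keras model, the random stand-in frames, the chosen SHAP variant, and the ciu_for_region helper are all illustrative assumptions. The lime and shap calls use those libraries' standard APIs, and the CIU part follows Främling's published definitions of contextual importance and utility.

```python
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries
import shap

# Placeholder CNN and data standing in for the trained VCE classifier
# and real in-vivo gastric frames (both are assumptions for this sketch).
cnn_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # normal / abnormal
])
frames = np.random.rand(16, 64, 64, 3).astype(np.float32)

# --- LIME: fit a local surrogate model over perturbed superpixels ---
lime_explainer = lime_image.LimeImageExplainer()
lime_exp = lime_explainer.explain_instance(
    frames[0], cnn_model.predict, top_labels=1, num_samples=200,
)
img, mask = lime_exp.get_image_and_mask(
    lime_exp.top_labels[0], positive_only=True, num_features=5,
)
lime_overlay = mark_boundaries(img, mask)  # regions supporting the prediction

# --- SHAP: gradient-based attributions against a background set ---
# (The paper does not pin down the SHAP variant; GradientExplainer is
# one image-capable choice.)
shap_explainer = shap.GradientExplainer(cnn_model, frames[:8])
shap_values = shap_explainer.shap_values(frames[8:10])
shap.image_plot(shap_values, frames[8:10])

# --- CIU: contextual importance and utility of one image region ---
# Minimal from-scratch illustration of the CIU idea for a single
# superpixel-like region: vary the region over its value range and
# observe the span of the model output.
def ciu_for_region(model, image, region_mask, n_samples=32):
    """CI = output span as the region varies, relative to the full
    output range [0, 1]; CU = where the actual output sits in that span."""
    probe = np.repeat(image[None], n_samples, axis=0)
    fills = np.linspace(0.0, 1.0, n_samples)       # grey-level fills
    for k, v in enumerate(fills):
        probe[k][region_mask] = v
    outputs = model.predict(probe)[:, 1]           # P(abnormal)
    out = model.predict(image[None])[0, 1]
    cmin, cmax = outputs.min(), outputs.max()
    ci = cmax - cmin                               # MaxVal - MinVal = 1
    cu = (out - cmin) / (cmax - cmin + 1e-9)
    return ci, cu

region = np.zeros((64, 64), dtype=bool)
region[16:32, 16:32] = True                        # hypothetical region
ci, cu = ciu_for_region(cnn_model, frames[0], region)
```

Because a softmax output is already bounded in [0, 1], the CI formula reduces to the raw output span; for unbounded outputs it would be normalized by the model's overall min and max.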


Related research

09/07/2022 · Explainable Artificial Intelligence to Detect Image Spam Using Convolutional Neural Network
Image spam threat detection has continually been a popular area of resea...

08/11/2021 · Logic Explained Networks
The large and still increasing popularity of deep learning clashes with ...

08/22/2023 · Exploration of the Rashomon Set Assists Trustworthy Explanations for Medical Data
The machine learning modeling process conventionally culminates in selec...

05/30/2020 · Explanations of Black-Box Model Predictions by Contextual Importance and Utility
The significant advances in autonomous systems together with an immensel...

07/07/2019 · A Human-Grounded Evaluation of SHAP for Alert Processing
In the past years, many new explanation methods have been proposed to ac...

02/05/2019 · XOC: Explainable Observer-Classifier for Explainable Binary Decisions
When deep neural networks optimize highly complex functions, it is not a...

02/16/2019 · Outlining the Design Space of Explainable Intelligent Systems for Medical Diagnosis
The adoption of intelligent systems creates opportunities as well as cha...
