An Empirical Comparison of Explainable Artificial Intelligence Methods for Clinical Data: A Case Study on Traumatic Brain Injury

08/13/2022
by Amin Nayebi, et al.

A longstanding challenge surrounding deep learning algorithms is unpacking and understanding how they make their decisions. Explainable Artificial Intelligence (XAI) offers methods that explain the internal workings of algorithms and the reasons behind their decisions in ways that are interpretable and understandable to human users. Numerous XAI approaches have been developed thus far, and a comparative analysis of these strategies is needed to discern their relevance to clinical prediction models. To this end, we first implemented two prediction models for short- and long-term outcomes of traumatic brain injury (TBI), utilizing structured tabular data and time-series physiologic data, respectively. Six different interpretation techniques were used to describe both prediction models at the local and global levels. We then performed a critical analysis of the merits and drawbacks of each strategy, highlighting the implications for researchers interested in applying these methodologies. The implemented methods were compared in terms of several XAI characteristics, such as understandability, fidelity, and stability. Our findings show that SHAP is the most stable method with the highest fidelity, but it falls short on understandability. Anchors, on the other hand, is the most understandable approach, but it is applicable only to tabular data, not time-series data.
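To illustrate the core idea behind SHAP, the abstract's top-scoring method on fidelity and stability, the sketch below computes exact Shapley values for a single prediction from scratch. The linear "risk score" model, the feature vector, and the all-zeros baseline are hypothetical stand-ins, not the paper's actual TBI models; real applications would use the `shap` library, which approximates these values efficiently.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction.

    Each feature's value is its weighted marginal contribution across all
    coalitions of the other features; features absent from a coalition are
    replaced by the baseline (reference) value.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical linear risk score standing in for a clinical outcome model
predict = lambda f: 2.0 * f[0] + 1.0 * f[1] - 0.5 * f[2]
x = [1.0, 3.0, 2.0]          # patient features
baseline = [0.0, 0.0, 0.0]   # reference input
print(shapley_values(predict, x, baseline))  # → [2.0, 3.0, -1.0]
```

For a linear model the Shapley value of feature i reduces to its coefficient times its deviation from the baseline, and the attributions sum to the difference between the prediction for `x` and for the baseline (the efficiency property that gives SHAP its high fidelity).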


