Explainability of Traditional and Deep Learning Models on Longitudinal Healthcare Records

11/22/2022
by Lin Lee Cheong, et al.

Recent advances in deep learning have led to interest in training deep learning models on longitudinal healthcare records to predict a range of medical events, with models demonstrating high predictive performance. Predictive performance is necessary but insufficient, however: models must also provide explanations and reasoning to convince clinicians and sustain clinical use. Rigorous evaluation of explainability is often missing, as comparisons between models (traditional versus deep) and various explainability methods have not been well studied. Furthermore, the ground truths needed to evaluate explainability can be highly subjective depending on the clinician's perspective. Our work is one of the first to evaluate explainability performance between and within traditional (XGBoost) and deep learning (LSTM with Attention) models on both a global and an individual per-prediction level on longitudinal healthcare data. We compared explainability using three popular methods: 1) SHapley Additive exPlanations (SHAP), 2) Layer-Wise Relevance Propagation (LRP), and 3) Attention. These methods were applied to synthetically generated datasets with designed ground truths and to a real-world Medicare claims dataset. We showed that, overall, LSTMs with SHAP or LRP provide superior explainability compared to XGBoost on both the global and the local level, while LSTM with dot-product attention failed to produce reasonable explanations. With the explosion in the volume of healthcare data and progress in deep learning, the need to evaluate explainability will be pivotal to the successful adoption of deep learning models in healthcare settings.
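To make the comparison concrete, the sketch below illustrates the Shapley-value idea underlying SHAP: a feature's attribution is its weighted average marginal contribution over all coalitions of the other features, with absent features set to a baseline value. This is a toy, exact-enumeration implementation for intuition only (exponential in the number of features); it is not the paper's implementation, which would in practice use an optimized library such as `shap` with its tree- and deep-model explainers. The model, feature values, and baseline below are made-up examples.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for prediction f(x) relative to a baseline.

    Features outside a coalition S are fixed at their baseline value.
    Enumerates all 2^(n-1) coalitions per feature, so toy-sized only.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley coalition weight: |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

# Hypothetical linear "risk score": here Shapley values reduce to w_i * (x_i - b_i).
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x, b = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
print(shapley_values(f, x, b))  # ≈ [2.0, -2.0, 1.5]
```

A useful sanity check, and the property SHAP inherits, is that the attributions sum to `f(x) - f(baseline)`, so each prediction is fully decomposed across features; per-patient attributions like these are what the paper aggregates for global-level comparisons.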

