WindowSHAP: An Efficient Framework for Explaining Time-series Classifiers based on Shapley Values

by Amin Nayebi, et al.

Unpacking and comprehending how deep learning algorithms make decisions has been a persistent challenge for researchers and end-users. Explaining time-series predictive models is valuable in high-stakes clinical applications, where understanding the behavior of prediction models is essential. However, existing approaches for explaining such models are frequently specific to particular architectures or to data whose features lack a time-varying component. In this paper, we introduce WindowSHAP, a model-agnostic framework for explaining time-series classifiers using Shapley values. WindowSHAP is intended both to mitigate the computational complexity of calculating Shapley values for long time-series data and to improve the quality of explanations. The framework is based on partitioning a sequence into time windows. Under this framework, we present three distinct algorithms, Stationary, Sliding, and Dynamic WindowSHAP, each evaluated against the baseline approaches KernelSHAP and TimeSHAP using perturbation and sequence analysis metrics. We applied our framework to clinical time-series data from both a specialized clinical domain (traumatic brain injury, TBI) and a broad clinical domain (critical care medicine). The experimental results demonstrate that, on both quantitative metrics, our framework is superior at explaining clinical time-series classifiers while also reducing computational complexity. We show that for time-series data with 120 time steps (hours), merging 10 adjacent time points can reduce the CPU time of WindowSHAP by 80% compared to KernelSHAP. We also show that our Dynamic WindowSHAP algorithm focuses more on the most important time steps and provides more understandable explanations. As a result, WindowSHAP not only accelerates the calculation of Shapley values for time-series data but also delivers more understandable, higher-quality explanations.
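The core idea of partitioning a sequence into time windows can be illustrated with a minimal sketch: instead of treating each of the T time steps as a separate Shapley "player" (which requires averaging over coalitions from a 2^T space), adjacent time steps are grouped into windows and each window becomes a single player, shrinking the coalition space to 2^(T/w). The sketch below is not the paper's implementation; the toy linear "classifier", the window grouping, and all names are illustrative assumptions, and exact enumeration is used only because the number of windows is tiny.

```python
import itertools
import math
import numpy as np

def exact_shapley_over_groups(f, x, baseline, groups):
    """Exact Shapley values where each *group* of time steps (a window)
    acts as one player. A coalition's value is f applied to an input that
    takes x's values on in-coalition windows and the baseline elsewhere."""
    n = len(groups)
    phi = np.zeros(n)

    def value(coalition):
        z = baseline.copy()
        for p in coalition:
            z[groups[p]] = x[groups[p]]  # switch this window "on"
        return f(z)

    for i in range(n):
        others = [p for p in range(n) if p != i]
        for k in range(len(others) + 1):
            for S in itertools.combinations(others, k):
                # Standard Shapley weight |S|! (n-|S|-1)! / n!
                w = (math.factorial(k) * math.factorial(n - k - 1)
                     / math.factorial(n))
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

# Toy setup: 12 time steps grouped into 4 windows of length 3,
# so 2**4 = 16 coalitions are evaluated instead of 2**12.
T, win = 12, 3
groups = [np.arange(s, s + win) for s in range(0, T, win)]
weights = np.linspace(0.1, 1.0, T)            # hypothetical linear scorer
model = lambda z: float(weights @ z)
x, baseline = np.ones(T), np.zeros(T)

phi = exact_shapley_over_groups(model, x, baseline, groups)
```

For this linear model the attribution of each window equals the sum of its time steps' weights, and the window attributions sum to `model(x) - model(baseline)` (the Shapley efficiency property), which makes the sketch easy to sanity-check.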
