TSInsight: A local-global attribution framework for interpretability in time-series data

04/06/2020
by Shoaib Ahmed Siddiqui, et al.

With the rise of deep learning in safety-critical scenarios, interpretability is more essential than ever. Although many directions have been explored for visual modalities, time-series data has been largely neglected, with only a handful of methods tested, owing to its poor intelligibility. We approach the problem of interpretability in a novel way with TSInsight: we attach an auto-encoder to the classifier, place a sparsity-inducing norm on its output, and fine-tune it based on gradients from the classifier together with a reconstruction penalty. TSInsight learns to preserve the features that are important for the classifier's prediction and to suppress those that are irrelevant, i.e., it serves as a feature-attribution method that boosts interpretability. In contrast to most other attribution frameworks, TSInsight can generate both instance-based and model-based explanations. We evaluated TSInsight alongside nine other commonly used attribution methods on eight time-series datasets to validate its efficacy. The results show that TSInsight naturally achieves output-space contraction and is therefore an effective tool for interpreting deep time-series models.


