AMEE: A Robust Framework for Explanation Evaluation in Time Series Classification

06/08/2023
by Thu-Trang Nguyen, et al.

This paper provides a framework to quantitatively evaluate and rank explanation methods for time series classification, a task that deals with a prevalent data type in critical domains such as healthcare and finance. The recent surge of research interest in explanation methods for time series classification has produced a great variety of explanation techniques. Nevertheless, when these techniques disagree on a specific problem, it remains unclear which of them to use, and comparing their explanations to find the right answer is non-trivial. Two key challenges remain: how to quantitatively and robustly evaluate the informativeness (i.e., relevance for the classification task) of a given explanation method, and how to compare explanation methods side by side. We propose AMEE, a Model-Agnostic Explanation Evaluation framework for quantifying and comparing multiple saliency-based explanations for time series classification. Perturbation is added to the input time series, guided by the saliency maps (i.e., importance weights for each point in the time series). The impact of this perturbation on classification accuracy is measured and used to evaluate the explanations. The results show that perturbing discriminative parts of the time series leads to significant changes in classification accuracy. To be robust to different types of perturbations and different types of classifiers, we aggregate the accuracy loss across perturbations and classifiers, which allows us to objectively quantify and rank different explanation methods. We provide a quantitative and qualitative analysis on synthetic datasets, a variety of UCR benchmark datasets, and a real-world dataset with known expert ground truth.
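The evaluation loop described above (perturb the most salient points, measure the resulting accuracy loss, aggregate across perturbation types and classifiers) can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the authors' implementation of AMEE: it assumes hypothetical inputs X (shape n_samples x n_timesteps), labels y, a dictionary of per-method saliency maps with the same shape as X, and a list of trained classifiers exposing a predict method; the perturbation strategies (zeroing, Gaussian noise, mean replacement) and the uniform averaging are illustrative choices.

```python
# Minimal sketch of saliency-guided perturbation evaluation.
# Assumptions (hypothetical, not from the paper): X is (n_samples, n_timesteps),
# saliency maps share X's shape, and classifiers accept 2D arrays in predict().
import numpy as np


def perturb(X, saliency, frac, strategy="zero", rng=None):
    """Replace the `frac` most salient time steps of each series."""
    if rng is None:
        rng = np.random.default_rng(0)
    Xp = X.copy()
    k = max(1, int(frac * X.shape[1]))
    # indices of the k most salient time steps per sample
    top = np.argsort(-saliency, axis=1)[:, :k]
    rows = np.arange(X.shape[0])[:, None]
    if strategy == "zero":
        Xp[rows, top] = 0.0
    elif strategy == "noise":
        Xp[rows, top] = rng.normal(size=top.shape)
    elif strategy == "mean":
        Xp[rows, top] = X.mean(axis=1, keepdims=True)
    return Xp


def accuracy_loss(clf, X, Xp, y):
    """Drop in accuracy caused by the perturbation."""
    acc = (clf.predict(X) == y).mean()
    acc_perturbed = (clf.predict(Xp) == y).mean()
    return acc - acc_perturbed


def rank_explanations(X, y, saliency_by_method, classifiers,
                      fracs=(0.1, 0.2, 0.3),
                      strategies=("zero", "noise", "mean")):
    """Aggregate accuracy loss across perturbations and classifiers,
    then rank explanation methods by that aggregate."""
    scores = {}
    for name, sal in saliency_by_method.items():
        losses = []
        for clf in classifiers:
            for strategy in strategies:
                for frac in fracs:
                    Xp = perturb(X, sal, frac, strategy)
                    losses.append(accuracy_loss(clf, X, Xp, y))
        scores[name] = float(np.mean(losses))
    # Higher aggregate loss => the saliency map points at more informative regions.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

In this sketch, a larger aggregate accuracy loss indicates that the corresponding saliency maps concentrate on regions that are genuinely discriminative for the classifiers, which is the basis for ranking explanation methods.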


