Motif-guided Time Series Counterfactual Explanations

11/08/2022
by Peiyu Li et al.

With the rising need for interpretable machine learning methods, there is a growing demand for human effort to provide diverse explanations of the factors influencing model decisions. To improve the trust and transparency of AI-based systems, the EXplainable Artificial Intelligence (XAI) field has emerged. The XAI paradigm is divided into two main categories: feature attribution and counterfactual explanation methods. While feature attribution methods explain the reasons behind a model decision, counterfactual explanation methods discover the smallest input changes that result in a different decision. In this paper, we aim to build trust and transparency in time series models by using motifs to generate counterfactual explanations. We propose Motif-Guided Counterfactual Explanation (MG-CF), a novel model that generates intuitive post-hoc counterfactual explanations, making full use of important motifs to provide interpretive information in decision-making processes. To the best of our knowledge, this is the first effort to leverage motifs to guide counterfactual explanation generation. We validated our model on five real-world time series datasets from the UCR repository. Our experimental results show the superiority of MG-CF in balancing all desirable counterfactual explanation properties in comparison with competing state-of-the-art baselines.
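The counterfactual idea described above — finding the smallest input change that flips a classifier's decision, guided by class-discriminative subsequences (motifs) — can be sketched as follows. This is a minimal illustration, not the paper's MG-CF algorithm: the toy nearest-centroid classifier, the fixed segment length, and the exhaustive segment search are all assumptions made for the example.

```python
import numpy as np

def nearest_centroid_predict(x, centroids):
    """Toy classifier: assign x to the class of the nearest centroid."""
    return int(np.argmin([np.linalg.norm(x - c) for c in centroids]))

def motif_guided_counterfactual(x, exemplar, predict, target_class, seg_len):
    """Swap one contiguous segment of x (a stand-in for a class-discriminative
    motif) with the matching segment of a target-class exemplar, and keep the
    swap that flips the prediction while changing x the least (proximity)."""
    best_cf, best_cost = None, np.inf
    for start in range(len(x) - seg_len + 1):
        cf = x.copy()
        cf[start:start + seg_len] = exemplar[start:start + seg_len]
        if predict(cf) == target_class:
            cost = np.linalg.norm(cf - x)  # smaller = closer to the original
            if cost < best_cost:
                best_cf, best_cost = cf, cost
    return best_cf

# Two synthetic classes: flat series (class 0) vs. series with a bump (class 1).
flat = np.zeros(20)
bump = np.zeros(20)
bump[8:12] = 1.0
centroids = [flat, bump]
predict = lambda s: nearest_centroid_predict(s, centroids)

x = np.zeros(20)  # classified as class 0
cf = motif_guided_counterfactual(x, bump, predict, target_class=1, seg_len=4)
```

Under these assumptions, the returned counterfactual differs from `x` only inside one short segment, mirroring the sparsity and proximity properties that counterfactual explanations are expected to satisfy.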


