DA-DGCEx: Ensuring Validity of Deep Guided Counterfactual Explanations With Distribution-Aware Autoencoder Loss

04/19/2021
by Jokin Labaien, et al.

Deep Learning has become a very valuable tool in many fields, and the learning capacity of these models is not in doubt. Nevertheless, since Deep Learning models are often seen as black boxes due to their lack of interpretability, there is general mistrust in their decision-making process. To find a balance between effectiveness and interpretability, Explainable Artificial Intelligence (XAI) has gained popularity in recent years, and some of the methods in this area are used to generate counterfactual explanations. Generating these explanations usually requires solving an optimization problem for each input to be explained, which is infeasible when real-time feedback is needed. To speed up this process, some methods use autoencoders to generate counterfactual explanations instantly. Recently, a method called Deep Guided Counterfactual Explanations (DGCEx) has been proposed, which trains an autoencoder attached to a classification model in order to generate straightforward counterfactual explanations. However, this method does not ensure that the generated counterfactual instances are close to the data manifold, so unrealistic counterfactuals may be produced. To overcome this issue, this paper presents Distribution-Aware Deep Guided Counterfactual Explanations (DA-DGCEx), which adds a term to the DGCEx cost function that penalizes out-of-distribution counterfactual instances.
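To make the idea concrete, the sketch below shows one way such a distribution-aware counterfactual loss could look in PyTorch. It is a minimal illustration, not the paper's exact formulation: the specific loss terms, the function and argument names, and the weights lambda_cls, lambda_prox and lambda_ae are illustrative assumptions. The key point is the third term, which penalizes counterfactuals that a pre-trained autoencoder cannot reconstruct well, i.e. instances far from the data manifold.

```python
# Minimal sketch of a DA-DGCEx-style loss (assumed formulation, not the
# paper's exact cost function). Requires a frozen classifier and an
# autoencoder trained on the original data distribution.
import torch.nn.functional as F

def da_dgcex_loss(x, x_cf, target_class, classifier, autoencoder,
                  lambda_cls=1.0, lambda_prox=0.1, lambda_ae=0.5):
    """Counterfactual loss with a distribution-aware autoencoder term.

    x            -- original input batch
    x_cf         -- candidate counterfactual batch
    target_class -- desired class labels for the counterfactuals
    classifier   -- frozen, pre-trained classification model
    autoencoder  -- autoencoder trained on the data distribution
    """
    # 1) Validity: the counterfactual should be classified as the target class.
    cls_loss = F.cross_entropy(classifier(x_cf), target_class)

    # 2) Proximity: the counterfactual should stay close to the original input.
    prox_loss = F.l1_loss(x_cf, x)

    # 3) Distribution-awareness: penalize counterfactuals the autoencoder
    #    reconstructs poorly, i.e. out-of-distribution instances.
    ae_loss = F.mse_loss(autoencoder(x_cf), x_cf)

    return lambda_cls * cls_loss + lambda_prox * prox_loss + lambda_ae * ae_loss
```

Because DGCEx attaches the counterfactual generator (an autoencoder) to the classification model and trains it with such an objective, producing an explanation at inference time reduces to a single forward pass rather than a per-instance optimization.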

