The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

07/22/2019
by Thibault Laugel, et al.

Post-hoc interpretability approaches have proven to be powerful tools for generating explanations of the predictions made by a trained black-box model. However, they carry the risk of producing explanations that reflect artifacts learned by the model rather than actual knowledge from the data. This paper focuses on the case of counterfactual explanations and asks whether the generated instances can be justified, i.e., continuously connected to some ground-truth data. We evaluate the risk of generating unjustified counterfactual examples by investigating the local neighborhoods of the instances whose predictions are to be explained, and show that this risk is quite high for several datasets. Furthermore, we show that most state-of-the-art approaches do not differentiate justified from unjustified counterfactual examples, leading to less useful explanations.
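Since justification is the central notion here, a concrete rendering may help: a counterfactual is justified when an epsilon-chain of points, each within a distance eps of the next and all lying in the region the model assigns to the counterfactual's class, links it to a correctly classified training instance. The sketch below tests this with an epsilon-neighborhood graph and a connected-components check; it is a simplified illustration under assumed inputs (`X_train`, `y_train`, a `predict` function, a threshold `eps`), not the authors' actual evaluation procedure.

```python
# Minimal sketch of the justification test described in the abstract:
# a counterfactual is "justified" if an epsilon-chain of same-class points
# connects it to a correctly classified ("ground-truth") training instance.
# Names and the `eps` threshold are illustrative, not taken from the paper.
import numpy as np
from scipy.sparse.csgraph import connected_components
from sklearn.metrics import pairwise_distances

def is_justified(counterfactual, X_train, y_train, predict, eps):
    """Check whether `counterfactual` is epsilon-connected to a correctly
    classified training instance of the class the model assigns to it."""
    cf_class = predict(counterfactual.reshape(1, -1))[0]
    # Restrict to training points the model places in the counterfactual's
    # predicted class: an epsilon-chain must stay inside that region.
    mask = predict(X_train) == cf_class
    candidates = X_train[mask]
    if len(candidates) == 0:
        return False
    # "Ground-truth" anchors: candidates whose true label matches the class.
    correctly_labeled = y_train[mask] == cf_class
    # Node 0 is the counterfactual; nodes 1.. are the candidate points.
    points = np.vstack([counterfactual.reshape(1, -1), candidates])
    adjacency = pairwise_distances(points) < eps  # epsilon-chain edges
    _, labels = connected_components(adjacency, directed=False)
    # Justified iff some anchor shares the counterfactual's component.
    return bool(np.any(correctly_labeled & (labels[1:] == labels[0])))
```

For eps small relative to the local data density, connectivity in this graph approximates the continuous connection to ground-truth data referred to above; in practice eps has to be tuned per dataset, which is part of why assessing this risk is nontrivial.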


Related research:

- Issues with post-hoc counterfactual explanations: a discussion (06/11/2019)
- Integrating Prior Knowledge in Post-hoc Explanations (04/25/2022)
- Counterfactuals of Counterfactuals: a back-translation-inspired approach to analyse counterfactual editors (05/26/2023)
- Gradient-based Counterfactual Explanations using Tractable Probabilistic Models (05/16/2022)
- Post-hoc Interpretability for Neural NLP: A Survey (08/10/2021)
- Global explanations for discovering bias in data (05/05/2020)
- ReLACE: Reinforcement Learning Agent for Counterfactual Explanations of Arbitrary Predictive Models (10/22/2021)
