Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations

by Pau Rodríguez, et al.

Explainability for machine learning models has gained considerable attention within our research community given the importance of deploying more reliable machine-learning systems. In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction, providing details about the model's decision-making. Current counterfactual methods produce ambiguous interpretations, as they combine multiple biases of the model and the data in a single counterfactual interpretation of the model's decision. Moreover, these methods tend to generate trivial counterfactuals about the model's decision, as they often suggest exaggerating or removing the presence of the attribute being classified. For the machine learning practitioner, such counterfactuals offer little value, since they provide no new information about undesired model or data biases. In this work, we propose a counterfactual method that learns a perturbation in a disentangled latent space, constrained by a diversity-enforcing loss, to uncover multiple valuable explanations of the model's prediction. Further, we introduce a mechanism to prevent the model from producing trivial explanations. Experiments on CelebA and Synbols demonstrate that our model improves the success rate of producing high-quality valuable explanations compared to previous state-of-the-art methods. We will publish the code.
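The abstract does not specify the form of the diversity-enforcing loss, but one common way to push a set of latent perturbations apart is to penalize their pairwise cosine similarity. The sketch below is a minimal, hypothetical illustration of that idea (the function name and the toy vectors are ours, not from the paper):

```python
import numpy as np

def diversity_loss(deltas: np.ndarray) -> float:
    """Hypothetical diversity penalty for a set of latent perturbations.

    Computes the mean pairwise cosine similarity among the rows of
    `deltas` (one perturbation per row). Minimizing this value pushes
    the perturbations toward different latent directions, so each
    counterfactual can surface a different explanation.
    """
    normed = deltas / np.linalg.norm(deltas, axis=1, keepdims=True)
    sim = normed @ normed.T                      # pairwise cosine similarities
    n = deltas.shape[0]
    off_diag = sim[~np.eye(n, dtype=bool)]       # exclude self-similarity
    return float(off_diag.mean())

# Aligned perturbations are penalized; orthogonal ones are not.
aligned = np.array([[1.0, 0.0], [1.0, 0.0]])
orthogonal = np.array([[1.0, 0.0], [0.0, 1.0]])
print(diversity_loss(aligned))      # → 1.0
print(diversity_loss(orthogonal))   # → 0.0
```

In the paper's setting, a term like this would be added to the counterfactual objective alongside the classifier-flipping and proximity terms, trading off diversity against the validity of each explanation.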

