Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations

03/18/2021
by Pau Rodríguez, et al.

Explainability for machine learning models has gained considerable attention within the research community, given the importance of deploying more reliable machine learning systems. In computer vision applications, generative counterfactual methods indicate how to perturb a model's input in order to change its prediction, providing details about the model's decision-making. Current counterfactual methods produce ambiguous interpretations because they combine multiple biases of the model and the data into a single counterfactual interpretation of the model's decision. Moreover, these methods tend to generate trivial counterfactuals, since they often suggest exaggerating or removing the presence of the very attribute being classified. Such counterfactuals offer little value to the machine learning practitioner, as they reveal no new information about undesired model or data biases. In this work, we propose a counterfactual method that learns a perturbation in a disentangled latent space, constrained by a diversity-enforcing loss, to uncover multiple valuable explanations of the model's prediction. Further, we introduce a mechanism to prevent the model from producing trivial explanations. Experiments on CelebA and Synbols demonstrate that our model improves the success rate of producing high-quality valuable explanations compared to previous state-of-the-art methods. We will publish the code.
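The core idea, optimizing several latent perturbations under a counterfactual loss plus a diversity-enforcing penalty, lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration, not the paper's implementation: the `decoder`, `classifier`, and `target` interfaces, the cosine-similarity diversity penalty, and all hyperparameters are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def diverse_counterfactuals(z, decoder, classifier, target,
                            k=8, steps=200, lr=0.05, div_weight=0.1):
    """Sketch: optimize k latent perturbations so each decoded image flips
    the classifier toward `target` (a scalar long tensor), while a diversity
    penalty keeps the perturbations mutually dissimilar. Hyperparameters
    and interfaces are illustrative assumptions."""
    # k independent perturbations of the (disentangled) latent code z: (1, d)
    deltas = (0.01 * torch.randn(k, z.shape[1])).requires_grad_()
    opt = torch.optim.Adam([deltas], lr=lr)
    for _ in range(steps):
        z_cf = z + deltas                    # (k, d) perturbed latents
        logits = classifier(decoder(z_cf))   # the model under inspection
        # counterfactual term: push each prediction toward the target class
        cf_loss = F.cross_entropy(logits, target.expand(k))
        # proximity term: small perturbations discourage large, trivial edits
        prox_loss = deltas.pow(2).mean()
        # diversity term: penalize pairwise cosine similarity between deltas
        sim = F.cosine_similarity(deltas.unsqueeze(1), deltas.unsqueeze(0), dim=-1)
        div_loss = (sim - torch.eye(k)).abs().mean()
        loss = cf_loss + prox_loss + div_weight * div_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (z + deltas).detach()             # k diverse counterfactual latents
```

In this sketch, decoding z + deltas with a generative model and scoring the result with the classifier under inspection mirrors the generative counterfactual setup described above; the paper's actual losses and its mechanism for filtering trivial explanations differ in detail.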


Related research:

05/17/2021 - Convex optimization for actionable & plausible counterfactual explanations
Transparency is an essential requirement of machine learning based decis...

12/16/2020 - Latent-CF: A Simple Baseline for Reverse Counterfactual Explanations
In the environment of fair lending laws and the General Data Protection ...

05/04/2023 - Interpretable Regional Descriptors: Hyperbox-Based Local Explanations
This work introduces interpretable regional descriptors, or IRDs, for lo...

06/23/2021 - Feature Attributions and Counterfactual Explanations Can Be Manipulated
As machine learning models are increasingly used in critical decision-ma...

12/05/2021 - Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates
To interpret uncertainty estimates from differentiable probabilistic mod...

12/02/2021 - Counterfactual Explanations via Latent Space Projection and Interpolation
Counterfactual explanations represent the minimal change to a data sampl...

06/11/2020 - Getting a CLUE: A Method for Explaining Uncertainty Estimates
Both uncertainty estimation and interpretability are important factors f...
