Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates

12/05/2021
by Dan Ley, et al.

To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating a single Counterfactual Latent Uncertainty Explanation (CLUE) for a given data point where the model is uncertain: a single, on-manifold change to the input such that the model becomes more certain in its prediction. We broaden this exploration to δ-CLUE, the set of potential CLUEs within a δ ball of the original input in latent space. We study the diversity of such sets and find that many CLUEs are redundant; we therefore propose DIVerse CLUE (∇-CLUE), a set of CLUEs each of which offers a distinct explanation of how to decrease the uncertainty associated with an input. We then propose GLobal AMortised CLUE (GLAM-CLUE), a distinct, novel method that learns amortised mappings for specific groups of uncertain inputs, efficiently transforming them in a single function call into inputs on which the model is certain. Our experiments show that δ-CLUE, ∇-CLUE, and GLAM-CLUE all address shortcomings of CLUE and provide practitioners with beneficial explanations of uncertainty estimates.
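To make the latent-space search concrete, below is a minimal sketch of a δ-CLUE-style counterfactual search: gradient descent on a CLUE-like objective (predictive uncertainty plus a distance penalty to the original input), projected onto a δ ball around the original latent code. The `decoder`, `predictive_entropy`, `z0`, and all hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a delta-CLUE-style search (illustrative only).
# Assumed to exist:
#   decoder(z)            -> reconstructed input (e.g. a DGM/VAE decoder)
#   predictive_entropy(x) -> scalar uncertainty of the classifier on x
#   x0, z0                -> the uncertain input and its latent encoding
import torch

def delta_clue(decoder, predictive_entropy, x0, z0,
               delta=3.0, lam=1.0, steps=200, lr=0.1):
    """Gradient search for a counterfactual within a delta-ball of z0."""
    z = z0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_cf = decoder(z)
        # CLUE-style objective: low predictive uncertainty on the decoded
        # input plus a penalty for straying far from the original input.
        loss = predictive_entropy(x_cf) + lam * torch.norm(x_cf - x0, p=1)
        loss.backward()
        opt.step()
        # Project back into the delta-ball around z0 (the delta-CLUE constraint).
        with torch.no_grad():
            offset = z - z0
            norm = offset.norm()
            if norm > delta:
                z.copy_(z0 + offset * (delta / norm))
    return decoder(z).detach()
```

Running this search from many starting points yields the set of candidate CLUEs that ∇-CLUE then filters for diversity, whereas GLAM-CLUE replaces the per-input optimisation with a single learned mapping applied to a whole group of uncertain inputs.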

