Diffusion Visual Counterfactual Explanations

10/21/2022
by   Maximilian Augustin, et al.

Visual Counterfactual Explanations (VCEs) are an important tool for understanding the decisions of an image classifier. They are 'small' but 'realistic' semantic changes of the image that alter the classifier's decision. Current approaches for generating VCEs are restricted to adversarially robust models and often contain non-realistic artefacts, or are limited to image classification problems with few classes. In this paper, we overcome this by generating Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers via a diffusion process. Two modifications to the diffusion process are key for our DVCEs: first, an adaptive parameterization, whose hyperparameters generalize across images and models, together with distance regularization and a late start of the diffusion process, allows us to generate images with minimal semantic changes from the original ones but a different classification. Second, our cone regularization via an adversarially robust model ensures that the diffusion process does not converge to trivial non-semantic changes, but instead produces realistic images of the target class to which the classifier assigns high confidence.
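To make the cone regularization concrete, below is a minimal sketch (not the authors' released code) of how a classifier-guidance gradient could be projected onto a cone around the gradient of an adversarially robust model before it steers the diffusion sampler. The function names, the half-angle `alpha_deg`, the guidance `scale`, and the single-image (batch size 1) simplification are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def cone_project(grad, robust_grad, alpha_deg=30.0):
    """Project `grad` onto a cone of half-angle `alpha_deg` (degrees)
    around `robust_grad` (single image for simplicity).

    If `grad` already lies inside the cone it is returned unchanged;
    otherwise it is rotated onto the cone surface, keeping its norm.
    """
    g = grad.flatten()
    r = F.normalize(robust_grad.flatten(), dim=0)
    alpha = torch.deg2rad(torch.tensor(alpha_deg, device=g.device))

    if torch.dot(F.normalize(g, dim=0), r) >= torch.cos(alpha):
        return grad  # angle(g, r) <= alpha: no projection needed

    # Split g into parts parallel and orthogonal to r, then rebuild a
    # vector of the same norm lying exactly on the cone surface.
    g_par = torch.dot(g, r) * r
    g_orth = F.normalize(g - g_par, dim=0)
    on_cone = g.norm() * (torch.cos(alpha) * r + torch.sin(alpha) * g_orth)
    return on_cone.view_as(grad)


def cone_guidance(x_t, target, classifier, robust_classifier, scale=1.0):
    """Cone-projected classifier-guidance gradient for one denoising step."""
    def log_prob_grad(model):
        x = x_t.detach().requires_grad_(True)
        log_p = F.log_softmax(model(x), dim=-1)[:, target].sum()
        return torch.autograd.grad(log_p, x)[0]

    grad = log_prob_grad(classifier)                 # target classifier's gradient
    robust_grad = log_prob_grad(robust_classifier)   # robust model's gradient
    return scale * cone_project(grad, robust_grad)
```

In a DDPM-style sampler, this projected gradient would take the place of the plain classifier-guidance term at each denoising step of the late-started diffusion process, with the distance regularization to the original image keeping the counterfactual semantically close to it.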
