Explaining in Style: Training a GAN to explain a classifier in StyleSpace

04/27/2021
by Oran Lang, et al.

Image classification models can depend on multiple different semantic attributes of the image. An explanation of a classifier's decision therefore needs to both discover and visualize these attributes. Here we present StylEx, a method that does this by training a generative model to explain the multiple attributes that underlie classifier decisions. A natural source for such attributes is the StyleSpace of StyleGAN, which is known to contain semantically meaningful dimensions of the image. However, because standard GAN training does not depend on the classifier, it may fail to represent the attributes that matter for the classifier's decision, and the dimensions of StyleSpace may capture irrelevant attributes. To overcome this, we propose a training procedure for StyleGAN that incorporates the classifier model, in order to learn a classifier-specific StyleSpace. Explanatory attributes are then selected from this space. These can be used to visualize the effect of changing multiple attributes per image, thus providing image-specific explanations. We apply StylEx to multiple domains, including animals, leaves, faces, and retinal images. For each, we show how an image can be modified in different ways to change its classifier output. Our results show that the method finds attributes that align well with semantic concepts, produce meaningful image-specific explanations, and are human-interpretable, as measured in user studies.
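To make the key mechanism concrete, here is a minimal sketch of how a frozen classifier can be incorporated into the generator's training objective. The names (E, G, C, D) and the unweighted loss sum are assumptions for illustration, not the paper's implementation; the core idea is a KL term that forces reconstructions to preserve the classifier's output distribution, so the learned StyleSpace must encode the evidence the classifier uses.

```python
import torch
import torch.nn.functional as F

def classifier_aware_losses(x, E, G, C, D):
    """Sketch of a classifier-aware generator objective (names hypothetical).

    x: batch of real images.
    E: encoder mapping images to style latents.
    G: StyleGAN-like generator driven by those latents.
    C: frozen, pretrained classifier to be explained (returns logits).
    D: GAN discriminator (returns logits, higher = more real).
    """
    with torch.no_grad():
        p_real = F.softmax(C(x), dim=1)       # classifier belief on the input

    w = E(x)                                   # latent code for the input image
    x_rec = G(w)                               # reconstruction through the GAN
    log_p_rec = F.log_softmax(C(x_rec), dim=1)

    # The KL term ties the generator to the classifier: a reconstruction is
    # only "good" if the classifier reacts to it the way it reacted to x.
    loss_cls = F.kl_div(log_p_rec, p_real, reduction="batchmean")
    loss_rec = F.l1_loss(x_rec, x)             # pixel-level fidelity
    loss_adv = F.softplus(-D(x_rec)).mean()    # non-saturating generator loss

    return loss_adv + loss_rec + loss_cls
```

The paper's full objective also includes a perceptual reconstruction term and conditions the generator on the classifier output; the sketch keeps only the coupling that makes the resulting StyleSpace classifier-specific.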

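Once such a classifier-specific StyleSpace is trained, explanatory attributes can be selected by probing style coordinates one at a time and ranking them by their effect on the classifier. The following is a hedged sketch of that search; `encode`, `synth`, and `classify` are assumed interfaces standing in for the trained models, not the paper's API.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def rank_style_coords(images, encode, synth, classify, target, num_coords, delta=3.0):
    """Rank StyleSpace coordinates by their effect on one classifier output.

    encode(x)      -> latent code w for an image batch (assumed interface)
    synth(w, i, d) -> image regenerated with style coordinate i shifted by d;
                      synth(w, None, 0.0) reconstructs without any edit
    classify(x)    -> classifier logits
    """
    effects = torch.zeros(num_coords)
    for x in images:
        w = encode(x.unsqueeze(0))
        base = F.softmax(classify(synth(w, None, 0.0)), dim=1)[0, target]
        for i in range(num_coords):
            up = F.softmax(classify(synth(w, i, +delta)), dim=1)[0, target]
            dn = F.softmax(classify(synth(w, i, -delta)), dim=1)[0, target]
            # keep whichever direction moves the target probability the most
            effects[i] += torch.maximum(up - base, base - dn)
    effects /= len(images)
    return torch.argsort(effects, descending=True)  # most influential first
```

Visualizing the top-ranked coordinates for a given image, each shifted in the direction that changes the prediction, yields the multi-attribute, image-specific explanations described in the abstract.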

Related research

06/10/2023
Two-Stage Holistic and Contrastive Explanation of Image Classification
The need to explain the output of a deep neural network classifier is no...

05/26/2019
Why do These Match? Explaining the Behavior of Image Similarity Models
Explaining a deep learning model can help users understand its behavior ...

06/15/2022
ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features
Deep learning models have achieved remarkable success in different areas...

06/01/2023
Using generative AI to investigate medical imagery models and datasets
AI models have shown promise in many medical imaging tasks. However, our...

09/18/2019
Semantically Interpretable Activation Maps: what-where-how explanations within CNNs
A main issue preventing the use of Convolutional Neural Networks (CNN) i...

09/18/2020
Contextual Semantic Interpretability
Convolutional neural networks (CNN) are known to learn an image represen...

11/03/2020
MAIRE – A Model-Agnostic Interpretable Rule Extraction Procedure for Explaining Classifiers
The paper introduces a novel framework for extracting model-agnostic hum...
