Latent Space Explanation by Intervention

12/09/2021
by Itai Gat, et al.

The success of deep neural nets relies heavily on their ability to encode complex relations between their input and their output. While this property serves to fit the training data well, it also obscures the mechanism that drives prediction. This study aims to reveal hidden concepts by employing an intervention mechanism that shifts the predicted class, based on discrete variational autoencoders. An explanatory model then visualizes the encoded information from any hidden layer together with its corresponding intervened representation. By assessing the differences between the original representation and the intervened one, one can determine which concepts are able to alter the class, thereby providing interpretability. We demonstrate the effectiveness of our approach on CelebA, where we show various visualizations of bias in the data and suggest different interventions to reveal and change the bias.
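
To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch: encode a hidden-layer representation into discrete codes, intervene on individual codes, decode, and check whether the classifier's prediction flips. The layer sizes, the toy classifier, and the Gumbel-Softmax discretization standing in for the discrete VAE are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of class-shifting interventions on a discretized
# hidden representation; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier split into a feature extractor (the "hidden layer") and a head.
features = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
head = nn.Linear(16, 2)

# Discrete encoder/decoder over the hidden representation:
# each of 8 latent slots holds a one-hot choice among 4 codes.
n_slots, n_codes = 8, 4
encoder = nn.Linear(16, n_slots * n_codes)
decoder = nn.Linear(n_slots * n_codes, 16)

with torch.no_grad():
    x = torch.randn(1, 32)                      # a single (random) input example
    h = features(x)                             # hidden representation
    orig_class = head(h).argmax(dim=-1).item()  # original predicted class

    # Discretize the hidden representation (straight-through Gumbel-Softmax).
    logits = encoder(h).view(1, n_slots, n_codes)
    z = F.gumbel_softmax(logits, tau=1.0, hard=True)

    # Intervention: set each slot to every alternative code, decode, and check
    # whether the intervened representation changes the predicted class.
    for slot in range(n_slots):
        for code in range(n_codes):
            z_int = z.clone()
            z_int[0, slot] = F.one_hot(torch.tensor(code), n_codes).float()
            h_int = decoder(z_int.view(1, -1))
            new_class = head(h_int).argmax(dim=-1).item()
            if new_class != orig_class:
                print(f"slot {slot} -> code {code} flips class {orig_class} to {new_class}")
```

Comparing h with h_int for the code flips that change the prediction is the kind of difference between original and intervened representations that, per the abstract, the explanatory model visualizes.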
