Where and What? Examining Interpretable Disentangled Representations

04/07/2021
by Xinqi Zhu, et al.

Capturing interpretable variations has long been a goal of disentanglement learning. However, unlike the independence assumption, interpretability has rarely been exploited to encourage disentanglement in the unsupervised setting. In this paper, we examine the interpretability of disentangled representations through two questions: where to interpret and what to interpret. A latent code is easy to interpret if it consistently affects a certain subarea of the generated image; we therefore learn a spatial mask that localizes the effect of each individual latent dimension. Interpretability, in turn, usually comes from latent dimensions that capture simple and basic variations in the data; we therefore perturb a single dimension of the latent code and require that the perturbed dimension be identifiable from the generated images, which enforces the encoding of simple variations. Additionally, we develop an unsupervised model-selection method that accumulates perceptual distance scores along axes in the latent space. On various datasets, our models learn high-quality disentangled representations without supervision, showing that the proposed modeling of interpretability is an effective proxy for achieving unsupervised disentanglement.
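To make the two objectives concrete, here is a minimal PyTorch sketch of the "where" (a spatial mask per latent dimension) and "what" (identifying which dimension was perturbed) ideas as described in the abstract. Every module architecture (ToyGenerator, MaskPredictor, PerturbationRecognizer), the perturbation size, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, IMG, BATCH = 8, 32, 16

class ToyGenerator(nn.Module):
    """Stand-in generator: latent vector -> single-channel image in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 1, IMG, IMG)

class MaskPredictor(nn.Module):
    """The 'where': one soft spatial mask per latent dimension."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, LATENT_DIM * IMG * IMG)
    def forward(self, z):
        m = self.net(z).view(-1, LATENT_DIM, IMG, IMG)
        return torch.sigmoid(m)  # masks in (0, 1)

class PerturbationRecognizer(nn.Module):
    """The 'what': predict which dimension was perturbed from an image pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * IMG * IMG, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
    def forward(self, x_a, x_b):
        return self.net(torch.cat([x_a, x_b], dim=1))

G, M, R = ToyGenerator(), MaskPredictor(), PerturbationRecognizer()
params = list(G.parameters()) + list(M.parameters()) + list(R.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

z = torch.randn(BATCH, LATENT_DIM)
k = torch.randint(0, LATENT_DIM, (BATCH,))       # dimension to perturb
z_pert = z.clone()
z_pert[torch.arange(BATCH), k] += 1.0            # fixed-size perturbation

x_a, x_b = G(z), G(z_pert)
m_k = M(z)[torch.arange(BATCH), k].unsqueeze(1)  # mask of the perturbed dim
x_b_local = m_k * x_b + (1.0 - m_k) * x_a        # confine the change spatially

logits = R(x_a, x_b_local)
loss_what = F.cross_entropy(logits, k)           # identify the perturbed axis
loss_where = m_k.mean()                          # assumed penalty: compact masks
loss = loss_what + 0.1 * loss_where
opt.zero_grad(); loss.backward(); opt.step()
```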
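The model-selection idea can be sketched similarly: traverse each latent axis and accumulate perceptual distances between consecutive samples. The traversal range, step count, and the pixel-space distance (a stand-in for a learned perceptual metric such as LPIPS; the abstract does not name one) are assumptions, and how the accumulated score is turned into a ranking is a detail of the paper itself.

```python
import torch

@torch.no_grad()
def axis_traversal_score(G, latent_dim, n_codes=32, steps=9, lo=-2.0, hi=2.0):
    """Accumulate distances between consecutive images along each latent axis."""
    z = torch.randn(n_codes, latent_dim)
    values = torch.linspace(lo, hi, steps)
    total = 0.0
    for d in range(latent_dim):
        prev = None
        for v in values:
            z_t = z.clone()
            z_t[:, d] = v                    # traverse one axis at a time
            x = G(z_t)
            if prev is not None:
                # stand-in distance: mean pixel L2; a real run would plug in
                # a perceptual metric here instead
                total += (x - prev).pow(2).mean().item()
            prev = x
    return total

# hypothetical usage: score several trained generators and compare
# scores = {name: axis_traversal_score(g, LATENT_DIM) for name, g in models.items()}
```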

