Learning Disentangled Representations with Latent Variation Predictability

07/25/2020
by Xinqi Zhu, et al.

Latent traversal is a popular approach to visualizing disentangled latent representations. When a single unit of the latent representation is varied, the data are expected to change in a single factor of variation while the other factors stay fixed. However, this familiar experimental observation is rarely encoded explicitly in the objective function used to learn disentangled representations. This paper defines the variation predictability of latent disentangled representations: given image pairs generated from latent codes that differ in a single dimension, the varied dimension should be closely correlated with, and thus predictable from, the image pairs if the representation is well disentangled. Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and the corresponding image pairs. We further develop an evaluation metric that measures the disentanglement of latent representations without relying on ground-truth generative factors. The proposed variation predictability is a general constraint applicable to both VAE and GAN frameworks for boosting the disentanglement of latent representations. Experiments show that the proposed variation predictability correlates well with existing metrics that require ground truth, and that the proposed algorithm is effective for disentanglement learning.
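The mutual-information term described above can be approximated with an auxiliary predictor network, in the spirit of InfoGAN-style variational lower bounds: generate an image pair from two latent codes that differ in exactly one dimension, and train a classifier to recover which dimension was varied. Below is a minimal PyTorch sketch under that assumption; the toy Generator, the VariationPredictor head, and the vp_loss helper are illustrative names and architectures, not the authors' reference implementation.

```python
# Sketch of a variation-predictability (VP) term: a predictor recovers which
# latent dimension was perturbed from a generated image pair. All shapes and
# architectures here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 10
IMG_CHANNELS = 1
IMG_SIZE = 32

class Generator(nn.Module):
    """Toy generator mapping a latent code to a 32x32 image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_CHANNELS * IMG_SIZE * IMG_SIZE), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, IMG_CHANNELS, IMG_SIZE, IMG_SIZE)

class VariationPredictor(nn.Module):
    """Predicts which latent dimension was varied, given a channel-concatenated
    image pair. Its cross-entropy loss acts as a variational lower bound on the
    mutual information between the varied dimension and the image pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * IMG_CHANNELS * IMG_SIZE * IMG_SIZE, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),  # logits over which dimension changed
        )
    def forward(self, pair):
        return self.net(pair)

def vp_loss(generator, predictor, batch_size=16, eps=1.0):
    """Sample latent pairs differing in a single dimension, generate the image
    pair, and ask the predictor to recover the varied dimension."""
    z = torch.randn(batch_size, LATENT_DIM)
    d = torch.randint(0, LATENT_DIM, (batch_size,))      # index of varied dimension
    z_prime = z + F.one_hot(d, LATENT_DIM).float() * eps  # perturb only dimension d
    pair = torch.cat([generator(z), generator(z_prime)], dim=1)
    logits = predictor(pair)
    # Minimizing this cross-entropy maximizes a lower bound on the mutual information.
    return F.cross_entropy(logits, d)

if __name__ == "__main__":
    G, Q = Generator(), VariationPredictor()
    opt = torch.optim.Adam(list(G.parameters()) + list(Q.parameters()), lr=2e-4)
    loss = vp_loss(G, Q)  # in practice added to the usual adversarial generator loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"VP loss: {loss.item():.4f}")
```

In a full training loop this cross-entropy term would be added to the usual adversarial (or VAE) objective and optimized jointly with the generator and the predictor, which is how the abstract's mutual-information maximization is typically realized in practice.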


Related research

02/21/2021  Do Generative Models Know Disentanglement? Contrastive Learning is All You Need
            Disentangled generative models are typically trained with an extra regul...

11/15/2017  DNA-GAN: Learning Disentangled Representations from Multi-Attribute Images
            Disentangling factors of variation has always been a challenging problem...

04/07/2021  Where and What? Examining Interpretable Disentangled Representations
            Capturing interpretable variations has long been one of the goals in dis...

02/26/2020  Representation Learning Through Latent Canonicalizations
            We seek to learn a representation on a large annotated data source that ...

08/17/2021  Orthogonal Jacobian Regularization for Unsupervised Disentanglement in Image Generation
            Unsupervised disentanglement learning is a crucial issue for understandi...

08/31/2021  Disentanglement Analysis with Partial Information Decomposition
            Given data generated from multiple factors of variation that cooperative...

07/15/2022  Partial Disentanglement via Mechanism Sparsity
            Disentanglement via mechanism sparsity was introduced recently as a prin...
