Revisiting Factorizing Aggregated Posterior in Learning Disentangled Representations

09/12/2020
by   Ze Cheng, et al.

In the problem of learning disentangled representations, one promising approach is to factorize the aggregated posterior by penalizing the total correlation of sampled latent variables. However, this well-motivated strategy has a blind spot: there is a disparity between the sampled latent representation and its corresponding mean representation. In this paper, we provide a theoretical explanation of why a low total correlation of the sampled representation cannot guarantee a low total correlation of the mean representation. Indeed, we prove that for multivariate normal distributions, a mean representation with arbitrarily high total correlation can have a corresponding sampled representation with bounded total correlation. We also propose a method to eliminate this disparity. Experiments show that our model learns a mean representation with much lower total correlation, hence a factorized mean representation. Moreover, we offer a detailed explanation of the limitation of factorizing the aggregated posterior, namely factor disintegration. Our work indicates a potential direction for future research on disentangled representation learning.
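To make the sampled-vs-mean disparity concrete, here is a minimal numerical sketch. It is not taken from the paper: the toy construction, the noise scale, and the helper gaussian_total_correlation are illustrative assumptions. It uses the closed-form total correlation of a Gaussian, TC = 0.5 * (sum_j log Sigma_jj - log det Sigma), to show that a highly correlated mean representation can still yield a nearly factorized sampled representation once independent per-dimension posterior noise is added.

```python
import numpy as np

def gaussian_total_correlation(cov):
    """Total correlation of a zero-mean Gaussian with covariance `cov`:
    KL(N(0, cov) || prod_j N(0, cov_jj)) = 0.5 * (sum_j log cov_jj - log det cov)."""
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (np.sum(np.log(np.diag(cov))) - logdet)

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical "mean representation": two nearly duplicated latent dimensions,
# so their total correlation is high.
mu1 = rng.normal(size=n)
mu = np.stack([mu1, 0.99 * mu1 + 0.01 * rng.normal(size=n)], axis=1)

# "Sampled representation": add independent per-dimension posterior noise
# (a fixed sigma = 1 here, purely for illustration).
sigma = 1.0
z = mu + sigma * rng.normal(size=mu.shape)

print("TC of mean representation:    %.3f" % gaussian_total_correlation(np.cov(mu.T)))
print("TC of sampled representation: %.3f" % gaussian_total_correlation(np.cov(z.T)))
```

Under this toy setup the mean representation has a total correlation of roughly 4.6 nats while the sampled one drops to about 0.14 nats, which mirrors the paper's point that penalizing the total correlation of sampled latent variables need not factorize the mean representation.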

