Revisiting Factorizing Aggregated Posterior in Learning Disentangled Representations

09/12/2020 · by Ze Cheng, et al.

In the problem of learning disentangled representations, one promising method is to factorize the aggregated posterior by penalizing the total correlation of the sampled latent variables. However, this well-motivated strategy has a blind spot: there is a disparity between the sampled latent representation and its corresponding mean representation. In this paper, we provide a theoretical explanation of why a low total correlation of the sampled representation cannot guarantee a low total correlation of the mean representation. Indeed, we prove that for multivariate normal distributions, a mean representation with arbitrarily high total correlation can have a corresponding sampled representation with bounded total correlation. We also propose a method to eliminate this disparity. Experiments show that our model learns a mean representation with much lower total correlation, and hence a factorized mean representation. Moreover, we offer a detailed explanation of a limitation of factorizing the aggregated posterior: factor disintegration. Our work indicates a potential direction for future research on disentangled representation learning.
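To make the disparity concrete: a VAE's sampled code is z = μ + σ ⊙ ε with ε ~ N(0, I), so z adds independent per-dimension noise to the mean representation μ, which dilutes correlations across dimensions. For a multivariate normal with covariance Σ, total correlation has the closed form TC = ½(Σ_j log Σ_jj − log det Σ). The NumPy sketch below is illustrative only and is not the paper's experiment; the dimension, correlation level, and noise scale are invented numbers chosen to exhibit the effect. It constructs a highly correlated μ whose sampled z nonetheless has near-zero total correlation:

```python
import numpy as np

def gaussian_tc(cov):
    # Total correlation of a multivariate normal N(0, cov):
    # TC = KL(N(0, cov) || prod_j N(0, cov_jj))
    #    = 0.5 * (sum_j log cov_jj - log det cov)
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (np.sum(np.log(np.diag(cov))) - logdet)

rng = np.random.default_rng(0)
d, n = 4, 100_000
rho = 0.99  # hypothetical: strong correlation across latent dimensions
cov_mu = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)
mu = rng.multivariate_normal(np.zeros(d), cov_mu, size=n)  # mean codes

sigma = 3.0  # hypothetical: large posterior noise scale
z = mu + sigma * rng.standard_normal((n, d))  # sampled codes

print("TC of mean representation   :", gaussian_tc(np.cov(mu.T)))  # ~6.2 nats
print("TC of sampled representation:", gaussian_tc(np.cov(z.T)))   # ~0.03 nats
```

In this regime a penalty on the total correlation of z is already near its minimum and so leaves the total correlation of μ essentially unconstrained, which is the gap the paper identifies.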

