Disentangled Representation Learning with Wasserstein Total Correlation

12/30/2019
by Yijun Xiao, et al.

Unsupervised learning of disentangled representations involves uncovering the different factors of variation that contribute to the data generation process. Total correlation penalization has been a key component in recent methods aimed at disentanglement. However, Kullback-Leibler (KL) divergence-based total correlation is metric-agnostic and sensitive to individual data samples. In this paper, we introduce Wasserstein total correlation in both variational autoencoder and Wasserstein autoencoder settings to learn disentangled latent representations. A critic is adversarially trained alongside the main objective to estimate the Wasserstein total correlation term. We discuss the benefits of using the Wasserstein distance over the KL divergence to measure independence and conduct quantitative and qualitative experiments on several data sets. Moreover, we introduce a new metric to measure disentanglement. We show that the proposed approach achieves comparable disentanglement performance with a smaller sacrifice in reconstruction ability.
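For readers who want a concrete picture of the adversarial estimation described above, here is a minimal PyTorch sketch, not the authors' released code: a small critic network scores latent codes, samples from the product of marginals are formed by permuting each latent dimension across the batch, and a WGAN-GP style gradient penalty keeps the critic approximately 1-Lipschitz. The network sizes, the helper names (permute_dims, critic_loss_fn, wtc_penalty_fn), and the choice of gradient penalty are illustrative assumptions.

```python
# Sketch (assumed, not the paper's implementation): adversarially estimating a
# Wasserstein-style total correlation penalty on the aggregated posterior q(z).
import torch
import torch.nn as nn

class TCCritic(nn.Module):
    """Critic that scores latent codes; its trained score gap between q(z) and
    the product of marginals serves as the Wasserstein total correlation estimate."""
    def __init__(self, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

def permute_dims(z: torch.Tensor) -> torch.Tensor:
    """Approximate samples from the product of marginals by independently
    permuting each latent dimension across the batch."""
    b, d = z.size()
    out = torch.empty_like(z)
    for j in range(d):
        out[:, j] = z[torch.randperm(b, device=z.device), j]
    return out

def critic_loss_fn(critic: TCCritic, z: torch.Tensor, gp_weight: float = 10.0) -> torch.Tensor:
    """Critic maximizes the score gap; a WGAN-GP gradient penalty (one possible
    way to enforce the Lipschitz constraint) regularizes it."""
    z = z.detach()
    z_perm = permute_dims(z)
    gap = critic(z).mean() - critic(z_perm).mean()
    eps = torch.rand(z.size(0), 1, device=z.device)
    interp = (eps * z + (1 - eps) * z_perm).requires_grad_(True)
    grad, = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)
    gp = ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
    return -gap + gp_weight * gp

def wtc_penalty_fn(critic: TCCritic, z: torch.Tensor) -> torch.Tensor:
    """Penalty added (with a weight) to the autoencoder objective so the
    encoder lowers the estimated Wasserstein total correlation."""
    return critic(z).mean() - critic(permute_dims(z)).mean()
```

In a training loop, the critic and the autoencoder would be updated alternately, with wtc_penalty_fn scaled by a hyperparameter and added to the reconstruction term (and, in the VAE setting, the KL term).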
