Learning disentangled representations with the Wasserstein Autoencoder

10/07/2020
by Benoit Gaujac et al.

Disentangled representation learning has undoubtedly benefited from objective function surgery. However, a delicate balancing act of tuning is still required in order to trade off reconstruction fidelity against disentanglement. Building on previous successes of penalizing the total correlation in the latent variables, we propose TCWAE (Total Correlation Wasserstein Autoencoder). Working in the WAE paradigm naturally enables the separation of the total-correlation term, thus providing disentanglement control over the learned representation while offering more flexibility in the choice of reconstruction cost. We propose two variants using different KL estimators and perform extensive quantitative comparisons on datasets with known generative factors, showing competitive results relative to state-of-the-art techniques. We further study the trade-off between disentanglement and reconstruction on more challenging datasets with unknown generative factors, where the flexibility of the WAE paradigm in the reconstruction term improves reconstruction quality.
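The abstract describes penalizing the total correlation of the aggregate latent distribution within a WAE-style objective. The sketch below illustrates one way such a penalty can be estimated from a minibatch, using a minibatch-weighted sample estimate under a diagonal-Gaussian encoder; the function names, the squared-error reconstruction cost, and the single beta weight are illustrative assumptions and omit the other regularization terms of the full objective.

# Hedged sketch (PyTorch): minibatch-weighted estimate of the total correlation
# KL(q(z) || prod_d q(z_d)) under a diagonal-Gaussian encoder, added as a
# penalty to a WAE-style reconstruction cost. Not the authors' implementation.
import math
import torch
import torch.nn.functional as F

def gaussian_log_density(z, mu, logvar):
    # Element-wise log N(z; mu, diag(exp(logvar))).
    return -0.5 * (math.log(2.0 * math.pi) + logvar
                   + (z - mu) ** 2 * torch.exp(-logvar))

def total_correlation(z, mu, logvar):
    # z, mu, logvar: (B, D) latent samples and encoder parameters for a minibatch.
    B = z.size(0)
    # log q(z_i[d] | x_j) for every pair (i, j) and dimension d: shape (B, B, D).
    log_q_pairs = gaussian_log_density(z.unsqueeze(1), mu.unsqueeze(0), logvar.unsqueeze(0))
    # log q(z_i), approximated by averaging the conditionals over the batch.
    log_qz = torch.logsumexp(log_q_pairs.sum(dim=2), dim=1) - math.log(B)
    # log prod_d q(z_i[d]), approximated dimension-wise in the same way.
    log_qz_prod = (torch.logsumexp(log_q_pairs, dim=1) - math.log(B)).sum(dim=1)
    return (log_qz - log_qz_prod).mean()

def tcwae_style_loss(x, x_recon, z, mu, logvar, beta=4.0):
    # The WAE framework allows a free choice of reconstruction cost;
    # squared error is used here purely for illustration.
    recon = F.mse_loss(x_recon, x, reduction="mean")
    return recon + beta * total_correlation(z, mu, logvar)

The two variants mentioned in the abstract differ in how this KL term is estimated; a discriminator-based density-ratio estimate is a common alternative to the sample-based one sketched above.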


Related research

- Disentangled Representation Learning with Wasserstein Total Correlation (12/30/2019)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling (10/25/2020)
- Isolating Sources of Disentanglement in Variational Autoencoders (02/14/2018)
- Learning Interpretable Disentangled Representations using Adversarial VAEs (04/17/2019)
- Variance Constrained Autoencoding (05/08/2020)
- Revisiting Factorizing Aggregated Posterior in Learning Disentangled Representations (09/12/2020)
- Tuning-Free Disentanglement via Projection (06/27/2019)
