Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling

10/25/2020
by Akash Srivastava, et al.

Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors. This approach introduces a trade-off between disentangled representation learning and reconstruction quality, since the model does not have enough capacity to learn correlated latent variables that capture the detailed information present in most image data. To overcome this trade-off, we present a novel multi-stage modelling approach in which the disentangled factors are first learned with a pre-existing disentangled representation learning method (such as β-TCVAE); the low-quality reconstruction is then improved with another deep generative model that is trained to model the missing correlated latent variables, adding detail while remaining conditioned on the previously learned disentangled factors. Taken together, our multi-stage approach yields a single, coherent probabilistic model that is theoretically justified by the principle of d-separation and can be realized with a variety of model classes, including likelihood-based models such as variational autoencoders, implicit models such as generative adversarial networks, and tractable models such as normalizing flows or mixtures of Gaussians. We demonstrate that our multi-stage model achieves much higher reconstruction quality than current state-of-the-art methods with equivalent disentanglement performance across multiple standard benchmarks.
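To make the two-stage structure concrete, the sketch below shows one way it could be realized in PyTorch. It is written for this summary rather than taken from the paper: the class name Stage2VAE, the layer sizes, and the stand-in disentangled_encoder are illustrative assumptions. Stage 1 is a frozen, pretrained disentangling encoder (e.g. a β-TCVAE); stage 2 is a small conditional VAE that learns additional latents z2, which are free to be correlated, and decodes from the concatenation of z1 and z2 so that detail is added without altering the disentangled factors.

```python
# Minimal, illustrative sketch of the two-stage idea (not the authors' code).
# Assumes a hypothetical pretrained stage-1 encoder `disentangled_encoder`
# that maps an image batch x -> disentangled factors z1.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage2VAE(nn.Module):
    """Conditional VAE that refines reconstructions given frozen stage-1 factors z1."""

    def __init__(self, x_dim=784, z1_dim=10, z2_dim=32, hidden=256):
        super().__init__()
        # Encoder sees the image and the stage-1 factors.
        self.enc = nn.Sequential(
            nn.Linear(x_dim + z1_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z2_dim),            # mean and log-variance of z2
        )
        # Decoder conditions on both z1 (disentangled) and z2 (detail).
        self.dec = nn.Sequential(
            nn.Linear(z1_dim + z2_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim),
        )

    def forward(self, x, z1):
        mu, logvar = self.enc(torch.cat([x, z1], dim=-1)).chunk(2, dim=-1)
        z2 = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        x_logits = self.dec(torch.cat([z1, z2], dim=-1))
        return x_logits, mu, logvar

def stage2_loss(x, x_logits, mu, logvar):
    # Standard ELBO terms: reconstruction + KL(q(z2 | x, z1) || N(0, I)).
    rec = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (rec + kl) / x.size(0)

if __name__ == "__main__":
    # Stand-in for a pretrained beta-TCVAE encoder; it stays frozen, so only
    # the stage-2 model's parameters are trained.
    disentangled_encoder = nn.Linear(784, 10)
    for p in disentangled_encoder.parameters():
        p.requires_grad_(False)

    model = Stage2VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.rand(64, 784)                    # dummy image batch in [0, 1]
    z1 = disentangled_encoder(x)               # frozen disentangled factors
    x_logits, mu, logvar = model(x, z1)
    loss = stage2_loss(x, x_logits, mu, logvar)
    loss.backward()
    opt.step()
    print(f"stage-2 ELBO loss: {loss.item():.3f}")
```

The same conditioning structure could, per the abstract, be realized with an implicit model (e.g. a GAN generator taking [z1, noise]) or a tractable model such as a normalizing flow in place of the stage-2 VAE.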

01/13/2020

High-Fidelity Synthesis with Disentangled Representation

Learning disentangled representation of data without supervision is an i...
02/22/2019

FAVAE: Sequence Disentanglement using Information Bottleneck Principle

We propose the factorized action variational autoencoder (FAVAE), a stat...
10/07/2020

Learning disentangled representations with the Wasserstein Autoencoder

Disentangled representation learning has undoubtedly benefited from obje...
08/26/2020

Orientation-Disentangled Unsupervised Representation Learning for Computational Pathology

Unsupervised learning enables modeling complex images without the need f...
06/27/2019

Tuning-Free Disentanglement via Projection

In representation learning and non-linear dimension reduction, there is ...
04/17/2019

Learning Interpretable Disentangled Representations using Adversarial VAEs

Learning Interpretable representation in medical applications is becomin...
07/09/2021

InfoVAEGAN: Learning Joint Interpretable Representations by Information Maximization and Maximum Likelihood

Learning disentangled and interpretable representations is an important ...