Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling

10/25/2020
by Akash Srivastava, et al.

Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors. This introduces a trade-off between disentanglement and reconstruction quality, since the model lacks the capacity to learn the correlated latent variables that capture the fine detail present in most image data. To overcome this trade-off, we present a novel multi-stage modelling approach: the disentangled factors are first learned with a preexisting disentangled representation learning method (such as β-TCVAE); then, the low-quality reconstruction is improved with a second deep generative model, trained to model the missing correlated latent variables and add detail while remaining conditioned on the previously learned disentangled factors. Taken together, our multi-stage approach yields a single, coherent probabilistic model that is theoretically justified by the principle of d-separation and can be realized with a variety of model classes, including likelihood-based models such as variational autoencoders, implicit models such as generative adversarial networks, and tractable models such as normalizing flows or mixtures of Gaussians. We demonstrate that our multi-stage model achieves much higher reconstruction quality than current state-of-the-art methods with equivalent disentanglement performance across multiple standard benchmarks.
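The two-stage idea can be illustrated with a minimal NumPy sketch. This is not the paper's method: here PCA is a toy stand-in for a stage-1 disentangler (its whitened scores are uncorrelated, mimicking independent factors), and a second low-rank model of the residual stands in for the stage-2 generative model that adds back the missing correlated detail. All variable names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a few independent factors plus correlated "detail" structure.
n, d_x, d_z, d_u = 500, 8, 2, 3
z_true = rng.normal(size=(n, d_z))
detail = 0.5 * rng.normal(size=(n, 4)) @ rng.normal(size=(4, d_x))
x = z_true @ rng.normal(size=(d_z, d_x)) + detail

# Stage 1 (toy stand-in for a beta-TCVAE): keep d_z whitened factors via
# PCA, so the reconstruction discards the correlated detail.
mu = x.mean(axis=0)
U, S, Vt = np.linalg.svd(x - mu, full_matrices=False)
z = U[:, :d_z] * S[:d_z]            # learned "disentangled" factors
x_stage1 = z @ Vt[:d_z] + mu        # low-quality stage-1 reconstruction

# Stage 2 (toy stand-in for the second generative model): introduce extra
# latent variables u that model the residual detail x - x_stage1, while
# the stage-1 factors z stay frozen.
resid = x - x_stage1
Ur, Sr, Vtr = np.linalg.svd(resid - resid.mean(axis=0), full_matrices=False)
u = Ur[:, :d_u] * Sr[:d_u]          # detail latents
x_stage2 = x_stage1 + resid.mean(axis=0) + u @ Vtr[:d_u]

err1 = float(np.mean((x - x_stage1) ** 2))
err2 = float(np.mean((x - x_stage2) ** 2))
```

The stage-2 reconstruction error `err2` is strictly lower than `err1`, since the added latents recover part of the detail that the capacity-limited stage-1 model had to drop, which is the trade-off the multi-stage approach targets.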


