Tuning-Free Disentanglement via Projection

06/27/2019
by   Yue Bai, et al.

In representation learning and nonlinear dimension reduction, there is great interest in learning 'disentangled' latent variables, where each sub-coordinate almost uniquely controls one facet of the observed data. Many regularization approaches have been proposed for variational autoencoders, but they require heuristic tuning to balance disentanglement against loss of reconstruction accuracy: due to the unsupervised nature of the problem, there is no principled way to choose the regularization weight. To bypass regularization entirely, we consider a projection strategy: modifying the canonical Gaussian encoder, we add a layer of scaling and rotation to the Gaussian mean so that the marginal correlations among the latent sub-coordinates become exactly zero. This achieves theoretically maximal disentanglement, guaranteed by zero cross-correlation between each latent sub-coordinate and the observation as the remaining sub-coordinates vary. Unlike regularization, the extra projection layer does not constrain the flexibility of the preceding encoder layers, so there is almost no loss of expressiveness. The approach is simple to implement, and our numerical experiments demonstrate very good performance with no tuning required.
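The core projection idea above can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation): given a batch of Gaussian means produced by the encoder, an eigendecomposition of their empirical covariance supplies the rotation, and dividing by the square roots of the eigenvalues supplies the scaling, so the projected means have exactly zero marginal correlations. The function name `decorrelate_means` and the NumPy batch formulation are assumptions for this sketch.

```python
import numpy as np

def decorrelate_means(mu):
    """Apply a scaling-and-rotation projection to a batch of encoder means
    so that the empirical correlations among latent sub-coordinates are
    exactly zero (PCA whitening).

    mu: (n_samples, latent_dim) array of Gaussian means from the encoder.
    Returns the projected means, whose empirical covariance is the identity.
    """
    centered = mu - mu.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (len(mu) - 1)
    # Eigenvectors give the rotation; scaling each rotated coordinate by
    # 1/sqrt(eigenvalue) equalizes the variances.
    eigvals, eigvecs = np.linalg.eigh(cov)
    return (centered @ eigvecs) / np.sqrt(eigvals)

# Example: strongly correlated 2-D latent means become uncorrelated.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])
z_proj = decorrelate_means(z)
```

Because the projection is a fixed, invertible linear map on top of the encoder output, it removes correlations without restricting what the earlier encoder layers can represent, which is the sense in which the paper claims no loss of expressiveness.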


