Disentangled Representation Learning and Generation with Manifold Optimization

06/12/2020
by Arun Pandey, et al.

Disentanglement is a desirable property in representation learning which increases the interpretability of generative models such as Variational Auto-Encoders (VAEs), Generative Adversarial Networks (GANs), and their many variants. In the context of latent space models, this work presents a representation learning framework that explicitly promotes disentanglement by combining an auto-encoder with Principal Component Analysis (PCA) in latent space. The proposed objective is the sum of an auto-encoder error term and a PCA reconstruction error in the feature space. This admits an interpretation as a Restricted Kernel Machine with an interconnection matrix on the Stiefel manifold. The construction encourages a matching between the principal directions in latent space and the directions of orthogonal variation in data space. The training algorithm involves a stochastic optimization method on the Stiefel manifold, which only marginally increases computing time compared to an analogous VAE. Our theoretical discussion and various experiments show that the proposed model improves over several VAE variants, with particular emphasis on disentanglement learning.
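The abstract's two key ingredients, a PCA reconstruction error on latent codes and stochastic optimization of an orthonormal interconnection matrix on the Stiefel manifold, can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the data scales, learning rate, and the QR retraction are illustrative assumptions, and only the PCA term is optimized here (the full objective would add the auto-encoder error and train encoder/decoder weights jointly).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent codes with a few dominant directions of variation.  In the
# paper's framework these would come from the encoder; here we fabricate
# them so the sketch is self-contained (scales are illustrative).
n, d, k = 200, 8, 3
scales = np.array([4.0, 3.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0])
Z = rng.standard_normal((n, d)) * scales

def pca_recon_loss(U, Z):
    """PCA reconstruction error of latent codes projected onto span(U)."""
    R = Z - Z @ U @ U.T
    return np.sum(R ** 2) / len(Z)

def stiefel_step(U, G, lr):
    """One Riemannian gradient step on the Stiefel manifold St(d, k).

    G is the Euclidean gradient of the loss at U.  Project it onto the
    tangent space at U, take a step, and retract back to the manifold
    with a QR decomposition (one standard retraction choice; the paper
    uses a stochastic optimizer on the same manifold).
    """
    sym = (U.T @ G + G.T @ U) / 2.0
    xi = G - U @ sym                    # tangent-space projection of G
    Q, R = np.linalg.qr(U - lr * xi)    # QR retraction back to St(d, k)
    return Q * np.sign(np.diag(R))      # fix column signs

# Random orthonormal initialisation, then plain Riemannian gradient
# descent on the PCA term alone.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
S = Z.T @ Z / n
for _ in range(500):
    G = -2.0 * S @ U                    # Euclidean gradient of the loss
    U = stiefel_step(U, G, lr=0.02)

# U now spans (approximately) the top-k principal subspace of the codes,
# while staying exactly orthonormal throughout training.
```

The QR retraction keeps `U` on the manifold after every update, which is what lets the principal directions in latent space remain mutually orthogonal during stochastic training.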


Related research

- 11/08/2018 · Disentangling Latent Factors with Whitening
  After the success of deep generative models in image generation tasks, l...
- 01/21/2021 · Knowledge Generation – Variational Bayes on Knowledge Graphs
  This thesis is a proof of concept for the potential of Variational Auto-...
- 05/28/2021 · Latent Space Exploration Using Generative Kernel PCA
  Kernel PCA is a powerful feature extractor which recently has seen a ref...
- 09/30/2022 · GM-VAE: Representation Learning with VAE on Gaussian Manifold
  We propose a Gaussian manifold variational auto-encoder (GM-VAE) whose l...
- 10/13/2019 · Bayesian Neural Decoding Using A Diversity-Encouraging Latent Representation Learning Method
  It is well established that temporal organization is critical to memory,...
- 06/05/2021 · Local Disentanglement in Variational Auto-Encoders Using Jacobian L_1 Regularization
  There have been many recent advances in representation learning; however...
- 10/01/2021 · Unsupervised Belief Representation Learning in Polarized Networks with Information-Theoretic Variational Graph Auto-Encoders
  This paper develops a novel unsupervised algorithm for belief representa...
