Disentangling Latent Space for VAE by Label Relevant/Irrelevant Dimensions

12/22/2018
by   Zhilin Zheng, et al.

VAE imposes the standard Gaussian distribution as a prior in the latent space. Since all codes tend to follow the same prior, it often suffers from the so-called "posterior collapse". To avoid this, this paper introduces class-specific distributions for the latent code. Unlike CVAE, however, we present a method for disentangling the latent space of a single input into label-relevant and label-irrelevant dimensions, z_s and z_u. We apply two separate encoders to map the input into z_s and z_u respectively, and then feed the concatenated code to the decoder to reconstruct the input. The label-irrelevant code z_u represents the characteristics common to all inputs; it is therefore constrained by a standard Gaussian prior, and its encoder is trained by amortized variational inference, as in VAE. The label-relevant code z_s, in contrast, is assumed to follow a Gaussian mixture distribution in which each component corresponds to a particular class. The parameters of the Gaussian components in the z_s encoder are optimized under label supervision in a global stochastic way. In theory, we show that our method is equivalent to adding a KL divergence term on the joint distribution of z_s and the class label c, and that it directly increases the mutual information between z_s and the label c. Our model can also be extended to a GAN by adding a discriminator in the pixel domain, so that it produces high-quality and diverse images.
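The two regularizers described above can be sketched numerically: z_u is pulled toward the standard Gaussian prior, while z_s is pulled toward the Gaussian mixture component selected by the label. The sketch below is illustrative only and is not the paper's implementation; the encoder outputs and the class means are hypothetical placeholder values, and both terms use the closed-form KL divergence between diagonal Gaussians.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) )."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

# z_u term: posterior q(z_u|x) regularized toward the standard Gaussian, as in vanilla VAE.
mu_u, logvar_u = np.array([0.3, -0.1]), np.array([-0.5, 0.2])
kl_u = kl_diag_gaussians(mu_u, logvar_u, np.zeros(2), np.zeros(2))

# z_s term: posterior q(z_s|x) regularized toward the mixture component of the input's
# class label c (means here are hypothetical; in the paper they are learned under supervision).
class_means = {0: np.array([2.0, 0.0]), 1: np.array([-2.0, 0.0])}
label = 0
mu_s, logvar_s = np.array([1.8, 0.1]), np.array([-0.3, -0.3])
kl_s = kl_diag_gaussians(mu_s, logvar_s, class_means[label], np.zeros(2))

print(kl_u, kl_s)  # both are non-negative; each is 0 only when posterior equals its prior
```

Because each class pulls z_s toward a different component mean, matching samples to their labeled component is what ties z_s to the label c and raises their mutual information.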


