Variance Constrained Autoencoding

05/08/2020
by D. T. Braithwaite, et al.

Recent state-of-the-art autoencoder-based generative models have an encoder-decoder structure and learn a latent representation with a pre-defined distribution that can be sampled from. Implementing the encoder networks of these models stochastically is a natural and common way to avoid overfitting and to enforce a smooth decoder function. However, we show that for stochastic encoders, simultaneously enforcing a distribution constraint and minimising an output distortion reduces both generative and reconstruction quality. In addition, enforcing a latent distribution constraint is not reasonable when performing disentanglement. Hence, we propose the variance-constrained autoencoder (VCAE), which enforces only a variance constraint on the latent distribution. Our experiments show that VCAE improves upon the Wasserstein Autoencoder and the Variational Autoencoder in both reconstruction and generative quality on MNIST and CelebA. Moreover, we show that VCAE equipped with a total correlation penalty term performs equivalently to FactorVAE at learning disentangled representations on 3D-Shapes while being a more principled approach.
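The core idea of the abstract can be sketched as a training objective: a reconstruction term plus a penalty that pulls the empirical variance of the latent codes toward a target value, rather than matching a full latent distribution. The sketch below is a minimal numpy illustration under assumed choices (squared-error reconstruction, a per-dimension squared-deviation variance penalty, and the names `vcae_loss`, `target_var`, `lam` are all hypothetical); the paper's exact formulation may differ.

```python
import numpy as np

def vcae_loss(x, x_hat, z, target_var=1.0, lam=10.0):
    """Hypothetical sketch of a variance-constrained objective:
    mean squared reconstruction error plus a penalty on the deviation
    of each latent dimension's empirical variance from target_var.
    (Assumed form for illustration; not the paper's exact objective.)"""
    recon = np.mean((x - x_hat) ** 2)          # reconstruction distortion
    latent_var = z.var(axis=0)                 # empirical variance per latent dim
    var_penalty = np.sum((latent_var - target_var) ** 2)
    return recon + lam * var_penalty

# Toy usage: perfect reconstruction and unit-variance latents give zero loss.
x = np.zeros((4, 3))
z = np.array([[1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, 1.0]])  # var = 1 per dim
print(vcae_loss(x, x, z))
```

Note that, unlike a full distribution constraint (as in VAE/WAE), only the second moment of the latent codes is regularised here, which is the distinction the abstract draws.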


Related research

02/23/2023  Causally Disentangled Generative Variational AutoEncoder
We propose a new supervised learning method for Variational AutoEncoder ...

04/05/2022  LatentGAN Autoencoder: Learning Disentangled Latent Distribution
In autoencoder, the encoder generally approximates the latent distributi...

02/13/2022  A Group-Equivariant Autoencoder for Identifying Spontaneously Broken Symmetries in the Ising Model
We introduce the group-equivariant autoencoder (GE-autoencoder) – a nove...

12/30/2019  Disentangled Representation Learning with Wasserstein Total Correlation
Unsupervised learning of disentangled representations involves uncoverin...

10/07/2020  Learning disentangled representations with the Wasserstein Autoencoder
Disentangled representation learning has undoubtedly benefited from obje...

08/21/2004  Using Stochastic Encoders to Discover Structure in Data
In this paper a stochastic generalisation of the standard Linde-Buzo-Gra...

01/21/2019  Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs
We present a simple neural rendering architecture that helps variational...
