To Regularize or Not To Regularize? The Bias Variance Trade-off in Regularized AEs

06/10/2020
by Arnab Kumar Mondal, et al.

Regularized Auto-Encoders (AEs) form a rich class of methods within the landscape of neural generative models. They effectively model the joint distribution between the data and a latent space using an Encoder-Decoder combination, with regularization imposed in terms of a prior over the latent space. Despite advantages such as stability in training, the performance of AE-based models has not reached that of other models such as GANs. While several reasons, including the presence of conflicting terms in the objective, the distributional choices imposed on the Encoder and the Decoder, and the dimensionality of the latent space, have been identified as possible causes of the suboptimal performance, the role of the imposed regularization (prior distribution) has not been studied systematically. Motivated by this, we examine in this paper the effect of the latent prior on the generation quality of AE models. We show that, given a Gaussian Decoder, there is no single fixed prior that is optimal for all data distributions. Further, we show that with finite data, imposing a prior comes with a bias-variance trade-off. As a remedy, we optimize a generalized ELBO objective with an additional state space over the latent prior. We learn this flexible prior implicitly, jointly with the AE training, using an adversarial learning technique, which facilitates operation at different points of the bias-variance curve. Our experiments on multiple datasets show that the proposed method sets a new state of the art for AE-based generative models.
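For a concrete picture of the adversarially learned prior described in the abstract, the following minimal PyTorch-style sketch may help. It is an illustration under our own assumptions, not the authors' implementation: the network sizes, optimizers, loss weights, and the simple GAN-style loss are all placeholders. The idea it captures is that a small generator maps noise to latent codes (the flexible prior), and a latent-space discriminator pulls this learned prior toward the encoder's aggregate posterior, while the auto-encoder itself is trained for reconstruction with a Gaussian (squared-error) decoder.

import torch
import torch.nn as nn

latent_dim, data_dim, noise_dim = 16, 784, 16

encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
# Flexible prior: maps simple noise to latent samples instead of fixing p(z) = N(0, I).
prior_gen = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
# Latent-space discriminator: separates encoder codes from learned-prior samples.
disc = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_p = torch.optim.Adam(prior_gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x):
    n = x.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Auto-encoder update: plain reconstruction (Gaussian decoder -> squared error).
    z = encoder(x)
    rec_loss = ((decoder(z) - x) ** 2).mean()
    opt_ae.zero_grad(); rec_loss.backward(); opt_ae.step()

    # 2) Discriminator update: encoder codes as "real", prior-generator samples as "fake".
    z_real = z.detach()
    z_fake = prior_gen(torch.randn(n, noise_dim)).detach()
    d_loss = bce(disc(z_real), ones) + bce(disc(z_fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 3) Prior-generator update: fool the discriminator, i.e. pull the learned prior
    #    toward the encoder's aggregate posterior.
    p_loss = bce(disc(prior_gen(torch.randn(n, noise_dim))), ones)
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()

    return rec_loss.item(), d_loss.item(), p_loss.item()

# Sampling after training: draw noise, map it through the learned prior, then decode:
# x_new = decoder(prior_gen(torch.randn(64, noise_dim)))

Because the prior is itself a trainable network rather than a fixed Gaussian, its capacity can be tuned, which is one simple way to move along the bias-variance curve the abstract refers to.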


