Generalizing Variational Autoencoders with Hierarchical Empirical Bayes

07/20/2020
by Wei Cheng et al.

Variational Autoencoders (VAEs) have experienced recent success as data-generating models by using simple architectures that do not require significant fine-tuning of hyperparameters. However, VAEs are known to suffer from over-regularization, which can lead to a failure to escape poor local maxima of the training objective. This phenomenon, known as posterior collapse, prevents the model from learning a meaningful latent encoding of the data. Recent methods have mitigated this issue by deterministically moment-matching an aggregated posterior distribution to an aggregate prior. However, abandoning a probabilistic framework (and thus relying on point estimates) can lead both to a discontinuous latent space and to unrealistic generated samples. Here we present the Hierarchical Empirical Bayes Autoencoder (HEBAE), a computationally stable framework for probabilistic generative models. Our key contributions are twofold. First, by placing a hierarchical prior over the encoding distribution, we can adaptively balance the trade-off between minimizing the reconstruction loss and avoiding over-regularization. Second, we show that assuming a general dependency structure between variables in the latent space produces better convergence onto the mean-field assumption for improved posterior inference. Overall, HEBAE is more robust to a wide range of hyperparameter initializations than an analogous VAE. Using data from MNIST and CelebA, we illustrate the ability of HEBAE to generate higher-quality samples, as measured by FID score, than existing autoencoder-based approaches.
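The two contributions lend themselves to a short illustration. Below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of the general idea: the per-sample KL penalty of a vanilla VAE is replaced by a KL between the batch-level aggregated distribution of encoder means, with a full (non-mean-field) covariance, and a standard normal hyperprior. All class and function names, the entropy term, and the architecture are assumptions made for illustration only.

```python
# Hypothetical sketch of a hierarchical-empirical-Bayes style VAE objective.
# Assumes a Gaussian encoder and batch size > 1; not the authors' method verbatim.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HEBAESketch(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def hebae_style_loss(x_hat, x, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    # Aggregated (batch-level) statistics of the encoder means play the
    # role of the hierarchical / empirical-Bayes level.
    m = mu.mean(dim=0)
    centered = mu - m
    # Full d x d covariance: no mean-field assumption on the aggregate.
    S = centered.t() @ centered / (mu.size(0) - 1)
    S = S + 1e-5 * torch.eye(S.size(0), device=S.device)  # numerical jitter
    d = m.size(0)
    # Closed-form KL( N(m, S) || N(0, I) ) for the aggregated posterior.
    kl_agg = 0.5 * (torch.trace(S) + m @ m - d - torch.logdet(S))
    # Negative entropy of the Gaussian encoder (up to a constant);
    # minimizing the total loss keeps the encoder stochastic.
    neg_ent = -0.5 * logvar.sum(dim=1).mean()
    return recon + kl_agg + neg_ent
```

Because the KL term is computed from batch-level statistics of the encoder means, its gradient adapts to how far the aggregated posterior drifts from N(0, I), which is one way to read the "adaptive balance" between reconstruction and regularization described in the abstract.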


Related research

11/01/2022 · Improving Variational Autoencoders with Density Gap-based Regularization
Variational autoencoders (VAEs) are one of the powerful unsupervised lea...

10/06/2020 · NCP-VAE: Variational Autoencoders with Noise Contrastive Priors
Variational autoencoders (VAEs) are one of the powerful likelihood-based...

10/12/2022 · Auto-Encoding Goodness of Fit
For generative autoencoders to learn a meaningful latent representation ...

09/29/2022 · Training β-VAE by Aggregating a Learned Gaussian Posterior with a Decoupled Decoder
The reconstruction loss and the Kullback-Leibler divergence (KLD) loss i...

03/22/2023 · Encoding Binary Concepts in the Latent Space of Generative Models for Enhancing Data Representation
Binary concepts are empirically used by humans to generalize efficiently...

03/29/2019 · From Variational to Deterministic Autoencoders
Variational Autoencoders (VAEs) provide a theoretically-backed framework...

06/07/2020 · Where Bayes tweaks Gauss: Conditionally Gaussian priors for stable multi-dipole estimation
We present a very simple yet powerful generalization of a previously des...
