Resampled Priors for Variational Autoencoders

10/26/2018
by Matthias Bauer, et al.

We propose Learned Accept/Reject Sampling (LARS), a method for constructing richer priors using rejection sampling with a learned acceptance function. This work is motivated by recent analyses of the VAE objective, which pointed out that commonly used simple priors can lead to underfitting. As the distribution induced by LARS involves an intractable normalizing constant, we show how to estimate it and its gradients efficiently. We demonstrate that LARS priors improve VAE performance on several standard datasets both when they are learned jointly with the rest of the model and when they are fitted to a pretrained model. Finally, we show that LARS can be combined with existing methods for defining flexible priors for an additional boost in performance.
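The core mechanism is simple enough to sketch. Below is a minimal NumPy illustration of the untruncated case, assuming a toy closed-form acceptance function in place of the learned network a_phi; the names sample_lars_prior, log_density_lars, and max_tries are hypothetical and not taken from the paper's code.

```python
import numpy as np

def sample_lars_prior(accept_fn, dim, rng, max_tries=100):
    """Rejection-sample from p(z) proportional to pi(z) * a(z),
    with proposal pi = N(0, I). After max_tries - 1 rejections the
    final proposal is accepted unconditionally, truncating the loop."""
    for _ in range(max_tries - 1):
        z = rng.standard_normal(dim)
        if rng.uniform() < accept_fn(z):
            return z
    return rng.standard_normal(dim)

def log_density_lars(z, accept_fn, rng, n_mc=10_000):
    """Monte Carlo estimate of log p(z) = log pi(z) + log a(z) - log Z,
    where Z = E_pi[a(z)] is the intractable normalizing constant."""
    dim = z.shape[-1]
    log_pi = -0.5 * (dim * np.log(2.0 * np.pi) + np.sum(z ** 2))
    z_mc = rng.standard_normal((n_mc, dim))          # proposal samples
    z_hat = np.mean([accept_fn(zi) for zi in z_mc])  # Z ~ mean acceptance
    return log_pi + np.log(accept_fn(z)) - np.log(z_hat)

# Toy stand-in for the learned acceptance network a_phi(z) in [0, 1]:
# it favours the positive orthant, bending the Gaussian proposal
# into a visibly non-Gaussian prior.
accept_fn = lambda z: 1.0 / (1.0 + np.exp(-4.0 * np.sum(z)))

rng = np.random.default_rng(0)
samples = np.stack([sample_lars_prior(accept_fn, dim=2, rng=rng)
                    for _ in range(1000)])
print(samples.mean(axis=0))                      # shifted away from 0
print(log_density_lars(samples[0], accept_fn, rng))
```

In the full model the gradient of log Z with respect to the acceptance-network parameters is also estimated by Monte Carlo, so the prior can be trained jointly with the VAE; this sketch omits that step and any truncation correction to the density.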


