Amortized Inference Regularization

05/23/2018
by Rui Shu, et al.

The variational autoencoder (VAE) is a popular model for density estimation and representation learning. Canonically, the variational principle suggests preferring an expressive inference model so that the variational approximation is accurate. However, it is often overlooked that an overly expressive inference model can be detrimental to the test-set performance of both the amortized posterior approximator and, more importantly, the generative density estimator. In this paper, we leverage the fact that VAEs rely on amortized inference and propose techniques for amortized inference regularization (AIR) that control the smoothness of the inference model. We demonstrate that, by applying AIR, it is possible to improve VAE generalization on both inference and generative performance. Our paper challenges the belief that amortized inference is simply a mechanism for approximating maximum likelihood training and illustrates that regularization of the amortization family provides a new direction for understanding and improving generalization in VAEs.
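To make the idea concrete, below is a minimal NumPy sketch of one way to regularize an amortized inference model: evaluating the encoder on a noise-corrupted copy of the input, which encourages the learned mapping from x to q(z|x) to be smooth. The toy linear-Gaussian encoder, the function names, and the noise level are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, logvar):
    # KL( q(z|x) || N(0, I) ) for a diagonal-Gaussian posterior.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def encode(x, W_mu, W_lv):
    # Amortized inference: a single (here, linear) map produces the
    # parameters of q(z|x) for every input x, rather than optimizing
    # a separate posterior per example.
    return x @ W_mu, x @ W_lv

def air_elbo(x, W_mu, W_lv, decode_logp, noise_std=0.1, n_samples=8):
    """Monte Carlo ELBO with a denoising-style amortized inference
    regularizer: the encoder sees a corrupted input x_tilde, so nearby
    inputs are pushed toward similar posteriors (a smoother encoder)."""
    x_tilde = x + noise_std * rng.normal(size=x.shape)
    mu, logvar = encode(x_tilde, W_mu, W_lv)
    kl = gaussian_kl(mu, logvar)
    # Reparameterized samples z ~ q(z|x_tilde); reconstruction still
    # targets the clean x.
    rec = 0.0
    for _ in range(n_samples):
        z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
        rec = rec + decode_logp(x, z)
    return rec / n_samples - kl
```

Setting `noise_std=0` recovers the standard amortized ELBO estimate; increasing it trades a looser training-set bound for a smoother inference model, which is the kind of control the paper studies.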


