Reducing the Amortization Gap in Variational Autoencoders: A Bayesian Random Function Approach

02/05/2021
by Minyoung Kim, et al.

The variational autoencoder (VAE) is a highly successful generative model whose key element is the so-called amortized inference network, which performs test-time inference in a single feed-forward pass. Unfortunately, this comes at the cost of degraded accuracy in posterior approximation, often underperforming instance-wise variational optimization. Although the latest semi-amortized approaches mitigate the issue by performing a few variational optimization updates starting from the VAE's amortized inference output, they inherently suffer from computational overhead at test time. In this paper, we address the problem in a completely different way by considering a random inference model, where we model the mean and variance functions of the variational posterior as random Gaussian processes (GPs). The motivation is that the deviation of the VAE's amortized posterior distribution from the true posterior can be regarded as random noise, which allows us to account for the uncertainty in posterior approximation in a principled manner. In particular, our model can quantify the difficulty of approximating the posterior with a Gaussian variational density. Inference in our GP model is done in a single feed-forward pass through the network, significantly faster than semi-amortized methods. We show that our approach attains higher test data likelihood than state-of-the-art methods on several benchmark datasets.
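The amortization gap the abstract refers to can be made concrete in a toy model. Below is a minimal NumPy sketch (not the paper's method; all model parameters and numbers are illustrative) using a linear-Gaussian model where the exact posterior is tractable: a shared affine encoder with a slightly mis-calibrated mean and variance incurs a positive KL gap to the true posterior, while instance-wise variational optimization over a Gaussian family recovers the exact posterior and closes the gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-Gaussian toy model: z ~ N(0, 1), x | z ~ N(w*z, sigma^2).
w, sigma = 2.0, 0.5
n = 200
z = rng.normal(size=n)
x = w * z + sigma * rng.normal(size=n)

# Exact posterior p(z|x) is Gaussian by conjugacy.
post_var = 1.0 / (1.0 + w**2 / sigma**2)
post_mean = post_var * (w / sigma**2) * x

# "Amortized" encoder: one shared affine map q(z|x) = N(a*x + b, v),
# with a deliberate small error in the mean offset and variance scale.
a = post_var * (w / sigma**2)   # optimal slope
b = 0.1                         # amortization error in the mean
v = post_var * 1.5              # mis-calibrated shared variance

def kl_gauss(m0, v0, m1, v1):
    """KL( N(m0, v0) || N(m1, v1) ) for scalar Gaussians."""
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

# Amortization gap = average KL from the encoder's q to the true posterior.
amortized_gap = kl_gauss(a * x + b, v, post_mean, post_var).mean()

# Instance-wise variational optimization over Gaussians recovers the exact
# posterior here (the family contains it), so its gap is exactly zero.
instancewise_gap = kl_gauss(post_mean, post_var, post_mean, post_var).mean()

print(f"amortized gap:     {amortized_gap:.4f}")   # strictly positive
print(f"instance-wise gap: {instancewise_gap:.4f}")  # 0.0000
```

Since the gap equals the KL divergence between the approximate and true posteriors, a positive amortized gap directly lowers the ELBO relative to per-instance optimization, which is the accuracy/speed trade-off the semi-amortized and GP-based approaches above are designed to address.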

Related research:

- Recursive Inference for Variational Autoencoders (11/17/2020)
- Variational Prediction (07/14/2023)
- Icebreaker: Element-wise Active Information Acquisition with Bayesian Deep Latent Gaussian Model (08/13/2019)
- Importance Weighted Autoencoders (09/01/2015)
- Amortized Inference Regularization (05/23/2018)
- Rapid Risk Minimization with Bayesian Models Through Deep Learning Approximation (03/29/2021)
- Convolutional Normalizing Flows for Deep Gaussian Processes (04/17/2021)
