Training Variational Autoencoders with Buffered Stochastic Variational Inference

02/27/2019
by Rui Shu, et al.

The recognition network in deep latent variable models such as variational autoencoders (VAEs) relies on amortized inference for efficient posterior approximation that can scale up to large datasets. However, this technique has also been demonstrated to select suboptimal variational parameters, often resulting in considerable additional error called the amortization gap. To close the amortization gap and improve the training of the generative model, recent works have introduced an additional refinement step that applies stochastic variational inference (SVI) to improve upon the variational parameters returned by the amortized inference model. In this paper, we propose Buffered Stochastic Variational Inference (BSVI), a new refinement procedure that makes use of SVI's sequence of intermediate variational proposal distributions and their corresponding importance weights to construct a new generalized importance-weighted lower bound. We demonstrate empirically that training variational autoencoders with BSVI consistently outperforms SVI, yielding an improved training procedure for VAEs.
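The core idea — reuse the intermediate proposals that SVI visits, rather than only its final one — can be illustrated on a toy conjugate Gaussian model. The sketch below is an assumption-laden illustration, not the paper's implementation: the model (`p(z)=N(0,1)`, `p(x|z)=N(z,1)`), the analytic ELBO gradients, the learning rate, and the plain gradient-ascent SVI loop are all simplifications chosen so the example is self-contained. At each SVI step it samples from the current proposal, buffers the importance weight `log p(x,z) - log q(z)`, and at the end combines the buffer into a generalized importance-weighted bound via log-mean-exp. Each buffered term has expectation `p(x)` under its own proposal, so the averaged estimator still yields a valid lower bound by Jensen's inequality.

```python
import numpy as np

def log_joint(x, z):
    # log p(z) + log p(x|z) with p(z) = N(0, 1) and p(x|z) = N(z, 1)
    return -0.5 * z**2 - 0.5 * (x - z)**2 - np.log(2 * np.pi)

def log_q(z, mu, s):
    # log density of the Gaussian proposal q(z) = N(mu, s^2)
    return -0.5 * ((z - mu) / s)**2 - np.log(s) - 0.5 * np.log(2 * np.pi)

def bsvi_bound(x, steps=50, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu, s = 0.0, 1.0        # initial proposal (in a VAE this would come from the encoder)
    log_w = []              # buffer of per-step log importance weights
    for _ in range(steps):
        # sample from the current intermediate proposal and buffer its weight
        z = mu + s * rng.standard_normal()
        log_w.append(log_joint(x, z) - log_q(z, mu, s))
        # SVI refinement step: analytic ELBO gradients for this conjugate model
        mu += lr * (x - 2.0 * mu)       # optimum is the posterior mean x/2
        s += lr * (1.0 / s - 2.0 * s)   # optimum is the posterior std 1/sqrt(2)
    # generalized importance-weighted bound: log (1/T) * sum_t w_t,
    # computed stably as a log-mean-exp over the buffered weights
    log_w = np.array(log_w)
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).mean()), mu, s

bound, mu, s = bsvi_bound(x=1.0)
```

For this model the exact posterior is `N(x/2, 1/2)`, so the SVI iterates converge toward `mu = 0.5`, `s ≈ 0.707`, while the buffered weights from every intermediate proposal contribute to the final bound instead of being discarded.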

Related research:

- Semi-Amortized Variational Autoencoders (02/07/2018) — Amortized variational inference (AVI) replaces instance-specific local i...
- Inference Suboptimality in Variational Autoencoders (01/10/2018) — Amortized inference has led to efficient approximate inference for large...
- Learning in Variational Autoencoders with Kullback-Leibler and Renyi Integral Bounds (07/05/2018) — In this paper we propose two novel bounds for the log-likelihood based o...
- Variational Deep Q Network (11/30/2017) — We propose a framework that directly tackles the probability distributio...
- Pseudo-Encoded Stochastic Variational Inference (12/19/2019) — Posterior inference in directed graphical models is commonly done using ...
- Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (06/21/2023) — We view large language models (LLMs) as stochastic language layers in a ...
- Natural Evolution Strategies as a Black Box Estimator for Stochastic Variational Inference (08/15/2023) — Stochastic variational inference and its derivatives in the form of vari...
