Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives

10/09/2018
by George Tucker, et al.

Deep latent variable models have become a popular model choice due to the scalable learning algorithms introduced by Kingma & Welling (2013) and Rezende et al. (2014). These approaches maximize a variational lower bound on the intractable log-likelihood of the observed data. Burda et al. (2015) introduced a multi-sample variational bound, IWAE, that is at least as tight as the standard variational lower bound and becomes increasingly tight as the number of samples grows. Counterintuitively, the typical inference network gradient estimator for the IWAE bound performs poorly as the number of samples increases (Rainforth et al., 2018; Le et al., 2018). Roeder et al. (2017) propose an improved gradient estimator; however, they are unable to show that it is unbiased. We show that it is in fact biased and that the bias can be estimated efficiently with a second application of the reparameterization trick. The resulting doubly reparameterized gradient (DReG) estimator does not degrade as the number of samples increases, resolving the previously raised issues. The same idea can be used to improve many recently introduced training techniques for latent variable models. In particular, we show that this estimator reduces the variance of the IWAE gradient, the reweighted wake-sleep (RWS) update (Bornschein & Bengio, 2014), and the jackknife variational inference (JVI) gradient (Nowozin, 2018). Finally, we show that this computationally efficient, unbiased, drop-in gradient estimator translates to improved performance for all three objectives on several modeling tasks.
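To make the idea concrete, below is a minimal JAX sketch of a doubly reparameterized surrogate loss for the IWAE inference-network gradient. The toy model (a fixed linear-Gaussian decoder and a diagonal-Gaussian q parameterized directly by phi) and all function and variable names are illustrative assumptions, not code from the paper. The two key moves are stopping gradients through the distribution parameters of q, so that only the pathwise dependence through z carries gradient, and weighting each per-sample log weight by the squared, gradient-stopped normalized importance weight.

    import jax
    import jax.numpy as jnp
    from jax import lax

    def dreg_surrogate(phi, x, eps, decoder_w):
        """Surrogate whose gradient w.r.t. phi is the DReG estimator.

        Toy setup (an assumption for illustration, not the paper's code):
          prior      p(z)   = N(0, I)
          likelihood p(x|z) = N(decoder_w @ z, I), fixed decoder
          posterior  q(z|x) = N(mu, diag(sigma^2)), phi = (mu, log_sigma)
        eps: (K, dz) standard-normal noise for the reparameterization trick.
        """
        mu, log_sigma = phi
        # Double reparameterization: stop gradients through q's distribution
        # parameters so only the pathwise dependence through z remains.
        mu_sg, log_sigma_sg = lax.stop_gradient(mu), lax.stop_gradient(log_sigma)

        z = mu + jnp.exp(log_sigma) * eps  # (K, dz), reparameterized samples

        def log_normal(y, m, log_s):
            return jnp.sum(-0.5 * ((y - m) / jnp.exp(log_s)) ** 2
                           - log_s - 0.5 * jnp.log(2 * jnp.pi), axis=-1)

        log_p_z = log_normal(z, 0.0, 0.0)                      # log N(z; 0, I)
        log_p_x_given_z = log_normal(x, z @ decoder_w.T, 0.0)  # log N(x; Wz, I)
        log_q = log_normal(z, mu_sg, log_sigma_sg)             # params stopped

        log_w = log_p_z + log_p_x_given_z - log_q              # (K,) log weights
        w_tilde = lax.stop_gradient(jax.nn.softmax(log_w))     # normalized, stopped
        # DReG: squared normalized weights times the path derivative of log w.
        return jnp.sum(w_tilde ** 2 * log_w)

    # Usage sketch with arbitrary shapes.
    dz, dx, K = 2, 3, 8
    decoder_w = jax.random.normal(jax.random.PRNGKey(0), (dx, dz))
    x = jnp.ones(dx)
    phi = (jnp.zeros(dz), jnp.zeros(dz))
    eps = jax.random.normal(jax.random.PRNGKey(1), (K, dz))
    dreg_grad = jax.grad(dreg_surrogate)(phi, x, eps, decoder_w)

Note that only the gradient of this surrogate is meaningful; its value is not the IWAE bound, so the bound should be computed separately for monitoring. The model (decoder) parameters would still be trained with the ordinary IWAE gradient, while the inference-network parameters use the DReG estimator.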


Related research

02/22/2016 · Variational inference for Monte Carlo objectives
Recent progress in deep latent variable models has largely been driven b...

05/13/2019 · Hierarchical Importance Weighted Autoencoders
Importance weighted variational inference (Burda et al., 2015) uses mult...

03/21/2017 · REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models
Learning in models with discrete latent variables is challenging due to ...

07/24/2019 · On the relationship between variational inference and adaptive importance sampling
The importance weighted autoencoder (IWAE) (Burda et al., 2016) and rewe...

03/23/2015 · On some provably correct cases of variational inference for topic models
Variational inference is a very efficient and popular heuristic used in ...

11/30/2019 · Disentanglement Challenge: From Regularization to Reconstruction
The challenge of learning disentangled representation has recently attra...

07/24/2019 · On importance-weighted autoencoders
The importance weighted autoencoder (IWAE) (Burda et al., 2016) is a pop...
