Using Large Ensembles of Control Variates for Variational Inference

10/30/2018
by Tomas Geffner, et al.

Variational inference is increasingly being addressed with stochastic optimization. In this setting, the gradient's variance plays a crucial role in the optimization procedure, since high-variance gradients lead to poor convergence. A popular approach to reducing the gradient's variance is the use of control variates. Despite the good results obtained, control variates developed for variational inference are typically studied in isolation. In this paper we clarify the large number of available control variates by giving a systematic view of how they are derived. We also present a Bayesian risk minimization framework in which the quality of a procedure for combining control variates is quantified by its effect on optimization convergence rates, which leads to a very simple combination rule. Results show that combining a large number of control variates in this way significantly improves the convergence of inference over using typical gradient estimators or a reduced number of control variates.
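The core mechanic behind the abstract is variance reduction by subtracting a weighted combination of zero-mean control variates from a Monte Carlo estimator. The sketch below illustrates this on a toy expectation rather than an actual variational-inference gradient, and uses the classical least-squares weights (solve the control variates' covariance system for the weights most correlated with the estimator) as an illustrative stand-in; the paper's own Bayesian-risk-derived combination rule is not reproduced here. All variable names and the toy target are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: estimate E[exp(x)] for x ~ N(0, 1); the true value is e^{1/2}.
n = 10_000
x = rng.standard_normal(n)
g = np.exp(x)                       # base Monte Carlo samples (stand-in for gradient samples)

# Two control variates with known mean zero under N(0, 1):
#   c1 = x  and  c2 = x^2 - 1
C = np.stack([x, x**2 - 1], axis=1)

# Least-squares combination weights: w = Cov(C)^{-1} Cov(C, g).
cov_C = np.cov(C, rowvar=False)
cov_Cg = np.array([np.cov(C[:, i], g)[0, 1] for i in range(C.shape[1])])
w = np.linalg.solve(cov_C, cov_Cg)

# Variance-reduced estimator: subtract the weighted control variates.
g_cv = g - C @ w

print("plain MC variance:", np.var(g))
print("with control variates:", np.var(g_cv))
print("estimate:", np.mean(g_cv), "true:", np.exp(0.5))
```

Because the control variates have exactly zero mean, the corrected estimator remains unbiased for any choice of weights; the weights only affect variance, which is why a simple combination rule can safely mix a large ensemble of them.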


Related research

07/29/2020 · Approximation Based Variance Reduction for Reparameterization Gradients
Flexible variational distributions improve variational inference but are...

10/13/2022 · A Dual Control Variate for doubly stochastic optimization and black-box variational inference
In this paper, we aim at reducing the variance of doubly stochastic opti...

05/22/2017 · Reducing Reparameterization Gradient Variance
Optimization with noisy gradients has become ubiquitous in statistics an...

10/05/2018 · Differentiable Antithetic Sampling for Variance Reduction in Stochastic Variational Inference
Stochastic optimization techniques are standard in variational inference...

06/13/2014 · Smoothed Gradients for Stochastic Variational Inference
Stochastic variational inference (SVI) lets us scale up Bayesian computa...

11/05/2019 · A Rule for Gradient Estimator Selection, with an Application to Variational Inference
Stochastic gradient descent (SGD) is the workhorse of modern machine lea...

09/09/2015 · Fast Second-Order Stochastic Backpropagation for Variational Inference
We propose a second-order (Hessian or Hessian-free) based optimization m...
