Reducing Reparameterization Gradient Variance

05/22/2017
by Andrew C. Miller, et al.

Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the "reparameterization trick," represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to use more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on non-conjugate multi-level hierarchical models and a Bayesian neural net where we observed gradient variance reductions of multiple orders of magnitude (20-2,000x).
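The core idea can be illustrated with a small sketch (not the authors' code): the reparameterization gradient is itself a random variable generated from a base noise sample, so a cheap approximation of that same generating procedure, whose expectation is available in closed form, can be subtracted as a control variate without introducing bias. In the toy example below, the approximation is a first-order Taylor expansion of the gradient around the variational mean; the objective f, the one-dimensional Gaussian variational family, and the use of exact second derivatives are illustrative assumptions, not details of the paper.

```python
import numpy as np

def f(z):        # toy objective (stand-in for an ELBO integrand)
    return np.sin(z) + 0.1 * z**2

def df(z):       # its first derivative
    return np.cos(z) + 0.2 * z

def d2f(z):      # its second derivative
    return -np.sin(z) + 0.2

def grad_estimates(m, s, n_samples, rng):
    """Reparameterization gradient of E[f(m + s*eps)] w.r.t. m,
    with and without a control-variate correction."""
    eps = rng.standard_normal(n_samples)
    z = m + s * eps                        # reparameterized samples
    g = df(z)                              # naive per-sample gradient estimate
    g_tilde = df(m) + d2f(m) * s * eps     # cheap linear approximation of g
    g_tilde_mean = df(m)                   # its expectation is known in closed form
    g_cv = g - (g_tilde - g_tilde_mean)    # unbiased, variance-reduced estimator
    return g, g_cv

rng = np.random.default_rng(0)
g, g_cv = grad_estimates(m=0.5, s=0.3, n_samples=100_000, rng=rng)
print("naive estimator variance:   ", g.var())
print("control-variate variance:   ", g_cv.var())
print("variance reduction factor:  ", g.var() / g_cv.var())
```

Because g_tilde is built from the same noise sample eps as the naive estimate, the two are highly correlated, and the corrected estimator keeps the same expectation while its variance shrinks roughly in proportion to that correlation.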


