Quasi-Newton Quasi-Monte Carlo for variational Bayes

04/07/2021
by Sifan Liu et al.

Many machine learning problems optimize an objective that must be estimated from noisy measurements. The standard approach is first-order stochastic gradient descent, using one or more Monte Carlo (MC) samples at each step. In some settings, ill-conditioning makes second-order methods such as L-BFGS more effective. We study the use of randomized quasi-Monte Carlo (RQMC) sampling for such problems. Where MC sampling has a root mean squared error (RMSE) of O(n^(-1/2)), RQMC has an RMSE of o(n^(-1/2)) that can be close to O(n^(-3/2)) in favorable settings. We prove that this improved sampling accuracy translates directly into improved optimization. In our empirical investigations for variational Bayes, using RQMC with stochastic L-BFGS greatly speeds up the optimization and sometimes finds a better parameter value than MC does.
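
To make the sampling comparison concrete, here is a minimal sketch (not code from the paper) of the MC-versus-RQMC contrast on a toy reparameterized gradient. The helper elbo_grad, the Gaussian toy target, and all parameter values below are illustrative assumptions; scrambled Sobol' points are drawn with SciPy's scipy.stats.qmc module.

    # Minimal sketch, not the authors' code: compare plain Monte Carlo with
    # randomized quasi-Monte Carlo (scrambled Sobol') gradient estimates on a
    # toy Gaussian variational problem. elbo_grad and the target are illustrative.
    import numpy as np
    from scipy.stats import norm, qmc

    def elbo_grad(mu, z):
        # Reparameterize x = mu + z with z ~ N(0, 1) and take log p(x) = -x^2/2,
        # so the exact gradient of E[log p(mu + z)] with respect to mu is -mu.
        return np.mean(-(mu + z))

    mu, n, reps = 1.0, 64, 200
    rng = np.random.default_rng(0)
    truth = -mu  # exact gradient for this toy problem

    # Plain MC: i.i.d. Gaussian draws.
    mc = np.array([elbo_grad(mu, rng.standard_normal(n)) for _ in range(reps)])

    # RQMC: scrambled Sobol' points in (0,1), mapped to N(0,1) by the inverse CDF.
    rqmc = np.array([
        elbo_grad(mu, norm.ppf(qmc.Sobol(d=1, scramble=True, seed=r).random(n)).ravel())
        for r in range(reps)
    ])

    print("MC   RMSE:", np.sqrt(np.mean((mc - truth) ** 2)))    # roughly n^(-1/2)
    print("RQMC RMSE:", np.sqrt(np.mean((rqmc - truth) ** 2)))  # o(n^(-1/2)); near n^(-3/2) for smooth integrands

Gradient estimates of this kind could then be fed to a quasi-Newton routine, for example scipy.optimize.minimize with method="L-BFGS-B", to mimic the RQMC-plus-stochastic-L-BFGS pairing the paper studies; the pairing is the paper's, while this particular toy problem is not.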

Related research:

09/27/2021
Unbiased MLMC-based variational Bayes for likelihood-free inference
Variational Bayes (VB) is a popular tool for Bayesian inference in stati...

06/19/2021
Rayleigh-Gauss-Newton optimization with enhanced sampling for variational Monte Carlo
Variational Monte Carlo (VMC) is an approach for computing ground-state ...

08/02/2019
Why Simple Quadrature is just as good as Monte Carlo
We motivate and calculate Newton-Cotes quadrature integration variance and...

07/16/2018
Density estimation by Randomized Quasi-Monte Carlo
We consider the problem of estimating the density of a random variable X...

06/09/2017
A randomized Halton algorithm in R
Randomized quasi-Monte Carlo (RQMC) sampling can bring orders of magnitu...

05/12/2022
Low-variance estimation in the Plackett-Luce model via quasi-Monte Carlo sampling
The Plackett-Luce (PL) model is ubiquitous in learning-to-rank (LTR) bec...
