Online Variance Reduction with Mixtures

03/29/2019
by Zalán Borsos, et al.

Adaptive importance sampling for stochastic optimization is a promising approach that offers improved convergence through variance reduction. In this work, we propose a new framework for variance reduction that enables the use of mixtures over predefined sampling distributions, which can naturally encode prior knowledge about the data. While these sampling distributions are fixed, the mixture weights are adapted during the optimization process. We propose VRM, a novel and efficient adaptive scheme that asymptotically recovers the best mixture weights in hindsight and can also accommodate sampling distributions over sets of points. We empirically demonstrate the versatility of VRM in a range of applications.
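The abstract does not spell out the VRM update itself, but its core idea, importance sampling from a mixture of fixed distributions whose weights are adapted online to reduce estimator variance, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the exponentiated-gradient weight update, the toy per-point gradient magnitudes `g`, and the two hand-picked component distributions are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n data points with per-point gradient magnitudes g.
# Goal: an unbiased, low-variance estimate of mean(g), sampling one
# index per step from an adaptive mixture distribution over points.
n = 100
g = rng.normal(size=n) ** 2

# Two fixed component distributions: uniform, and one biased toward
# points with large gradient magnitude (assumed known here, for clarity).
P = np.stack([np.full(n, 1.0 / n), g / g.sum()])  # shape (K, n)
K = P.shape[0]

w = np.full(K, 1.0 / K)  # mixture weights, adapted online
eta = 0.1                # step size for the exponentiated-gradient update

for _ in range(2000):
    q = w @ P                # current mixture distribution over the n points
    i = rng.choice(n, p=q)   # sample one index from the mixture
    est = g[i] / (n * q[i])  # importance-weighted, unbiased estimate of mean(g)

    # Stochastic estimate of -d/dw_k of the estimator's second moment
    # S(w) = sum_i g_i^2 / (n^2 q_i): a component that places mass where
    # the variance contribution is large gets up-weighted, since shifting
    # mixture mass there reduces S(w).
    fb = g[i] ** 2 * P[:, i] / (n ** 2 * q[i] ** 3)
    w = w * np.exp(np.clip(eta * fb, 0.0, 0.5))  # clip to keep updates stable
    w = w / w.sum()
```

Unbiasedness holds for any fixed mixture with full support, since summing q_i * g_i / (n * q_i) over all points recovers mean(g) exactly; adapting the mixture weights only changes the estimator's variance, which is the sense in which such a scheme performs variance reduction.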


Related research

- Online Variance Reduction for Stochastic Optimization (02/13/2018): Modern stochastic optimization methods often rely on uniform sampling wh...
- Adaptive Importance Sampling for Finite-Sum Optimization and Sampling with Decreasing Step-Sizes (03/23/2021): Reducing the variance of the gradient estimator is known to improve the ...
- Stochastic Optimization with Bandit Sampling (08/08/2017): Many stochastic optimization algorithms work by estimating the gradient ...
- Adaptive Importance Sampling meets Mirror Descent: a Bias-variance tradeoff (10/29/2021): Adaptive importance sampling is a widely spread Monte Carlo technique th...
- Stochastic Reweighted Gradient Descent (03/23/2021): Despite the strong theoretical guarantees that variance-reduced finite-s...
- Variance Reduction for Sequential Sampling in Stochastic Programming (05/05/2020): This paper investigates the variance reduction techniques Antithetic Var...
- Bandit Samplers for Training Graph Neural Networks (06/10/2020): Several sampling algorithms with variance reduction have been proposed f...
