Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization

06/04/2022
by Takashi Goda, et al.

We study stochastic gradient descent for solving conditional stochastic optimization problems, in which the objective to be minimized is a parametric nested expectation: an outer expectation taken with respect to one random variable and an inner conditional expectation taken with respect to another. The gradient of such a parametric nested expectation is again a nested expectation, so the standard nested Monte Carlo estimator of the gradient is biased. In this paper, we show under some conditions that a multilevel Monte Carlo (MLMC) gradient estimator is unbiased and has both finite variance and finite expected computational cost, so that the standard theory of stochastic optimization for a parametric (non-nested) expectation applies directly. We also discuss a special case in which yet another unbiased gradient estimator with finite variance and cost can be constructed.
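The construction described in the abstract can be illustrated with a small sketch. The problem, the function g, and all names below are illustrative assumptions, not the paper's actual construction: we minimize F(theta) = E_xi[(E_{eta|xi}[g(theta, xi, eta)])^2] for g(theta, xi, eta) = theta*xi + eta, whose gradient is a nested expectation. A single-term randomized MLMC estimator (draw a random level L, return the antithetic level-L correction divided by P(L = l)) gives an unbiased gradient that plugs directly into plain SGD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conditional stochastic optimization problem (illustrative only):
#   minimize F(theta) = E_xi[ ( E_{eta|xi}[ g(theta, xi, eta) ] )^2 ]
# with g(theta, xi, eta) = theta*xi + eta, xi ~ N(0,1), eta|xi ~ N(0,1).
# Here E_{eta|xi}[g] = theta*xi, so F(theta) = theta^2 and grad F = 2*theta.

def g(theta, xi, eta):
    return theta * xi + eta

def dg(theta, xi, eta):
    # Partial derivative of g with respect to theta (constant in eta here).
    return np.full_like(eta, xi)

def level_correction(theta, xi, level, n0=2):
    """Antithetic MLMC correction Delta_level for the plug-in gradient
    f'(mean g) * mean dg with f(y) = y^2, using n0 * 2^level inner samples."""
    n = n0 * 2 ** level
    eta = rng.standard_normal(n)
    gv, dv = g(theta, xi, eta), dg(theta, xi, eta)
    fine = 2.0 * gv.mean() * dv.mean()
    if level == 0:
        return fine
    # Coarse term: average of two plug-in estimators on the two halves.
    ga, gb = gv[: n // 2], gv[n // 2:]
    da, db = dv[: n // 2], dv[n // 2:]
    coarse = 0.5 * (2.0 * ga.mean() * da.mean() + 2.0 * gb.mean() * db.mean())
    return fine - coarse

def unbiased_gradient(theta, r=0.6):
    """Single-term randomized MLMC estimator: draw L with P(L = l) = r*(1-r)^l
    and return Delta_L / P(L = L), so the expectation telescopes to grad F."""
    xi = rng.standard_normal()
    level = rng.geometric(r) - 1
    p = r * (1.0 - r) ** level
    return level_correction(theta, xi, level) / p

# Plain SGD driven by the unbiased estimator; theta should shrink toward 0.
theta = 1.0
for t in range(2000):
    theta -= 0.01 / np.sqrt(t + 1) * unbiased_gradient(theta)
```

Because the antithetic corrections Delta_l shrink geometrically while the level probabilities decay slowly enough, the estimator has finite variance and finite expected cost, which is exactly the regime in which standard non-nested SGD theory applies.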


