Unbiased Gradient Estimation for Distributionally Robust Learning

12/22/2020
by Soumyadip Ghosh, et al.

Seeking to improve model generalization, we consider a new approach based on distributionally robust learning (DRL) that applies stochastic gradient descent to the outer minimization problem. Our algorithm efficiently estimates the gradient of the inner maximization problem through multi-level Monte Carlo randomization. Leveraging theoretical results that shed light on why standard gradient estimators fail, we establish the optimal parameterization of our gradient estimators, which balances a fundamental tradeoff between computation time and statistical variance. Numerical experiments demonstrate that our DRL approach yields significant benefits over previous work.
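The sketch below illustrates the kind of randomized multi-level Monte Carlo (MLMC) gradient estimator the abstract describes, assuming a KL-ball ambiguity set whose inner maximization reduces to a softmax reweighting of per-sample gradients. The function names, the geometric level distribution, the antithetic half-batch coupling, and the `sampler` / `loss_and_grad` callables are illustrative assumptions in the spirit of Rhee-Glynn / Blanchet-Glynn debiasing, not the paper's exact construction.

```python
import numpy as np

def plug_in_robust_grad(losses, grads, lam):
    """Plug-in gradient of a KL-ball robust inner maximization on one batch:
    per-sample gradients reweighted by softmax(loss / lam).  This 'standard'
    estimator is biased for any finite batch size."""
    w = np.exp((losses - losses.max()) / lam)
    w /= w.sum()
    return grads.T @ w                                   # shape (d,)

def mlmc_robust_grad(theta, sampler, loss_and_grad, lam,
                     n0=4, p_geo=0.5, rng=None):
    """Randomized MLMC gradient estimate (a sketch of the debiasing idea,
    not the paper's exact estimator).  A random level L is drawn
    geometrically; the telescoping difference between fine and coarse
    plug-in gradients is reweighted by 1 / P(level = L), which removes the
    finite-sample bias in expectation, provided the level differences
    decay quickly enough for the sum and variance to stay finite."""
    rng = rng or np.random.default_rng()
    L = rng.geometric(p_geo) - 1                         # level in {0, 1, 2, ...}
    p_L = p_geo * (1.0 - p_geo) ** L                     # P(level = L)
    n = 2 * n0 * 2 ** L                                  # fine-level batch size
    xi = sampler(n, rng)                                 # draw the data batch
    losses, grads = loss_and_grad(theta, xi)             # shapes (n,), (n, d)

    fine = plug_in_robust_grad(losses, grads, lam)
    # Antithetic coarse estimate: average the two half-batch plug-in gradients.
    h = n // 2
    coarse = 0.5 * (plug_in_robust_grad(losses[:h], grads[:h], lam)
                    + plug_in_robust_grad(losses[h:], grads[h:], lam))
    # Coarsest-level base term plus the reweighted level difference; the
    # expectations telescope to the infinite-sample robust gradient.
    base = plug_in_robust_grad(losses[:n0], grads[:n0], lam)
    return base + (fine - coarse) / p_L
```

In a sketch like this, the level distribution (here governed by `p_geo`) is exactly where the computation-versus-variance tradeoff mentioned above appears: sampling deep levels more often lowers the variance contributed by the 1/p_L reweighting but raises the expected per-iteration batch size, and the abstract's "optimal parameterization" refers to choosing that distribution to balance the two.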
