Large-Scale Methods for Distributionally Robust Optimization

10/12/2020
by Daniel Lévy et al.

We propose and analyze algorithms for distributionally robust optimization of convex losses with conditional value at risk (CVaR) and χ^2 divergence uncertainty sets. We prove that our algorithms require a number of gradient evaluations independent of the training set size and the number of parameters, making them suitable for large-scale applications. For χ^2 uncertainty sets, these are the first such guarantees in the literature, and for CVaR our guarantees scale linearly in the uncertainty level rather than quadratically, as in previous work. We also provide lower bounds proving the worst-case optimality of our algorithms for CVaR and a penalized version of the χ^2 problem. Our primary technical contributions are novel bounds on the bias of batch robust risk estimation and on the variance of a multilevel Monte Carlo gradient estimator due to Blanchet and Glynn (2015). Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9–36 times more efficient than full-batch methods.
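To make the two technical ingredients concrete: with CVaR level α, the worst-case distribution in the CVaR uncertainty set over a batch of n per-example losses puts uniform weight on the largest ⌈αn⌉ of them, so the batch robust risk is simply the mean of the worst α-fraction. Below is a minimal PyTorch-style sketch of this plug-in estimator; the function name and interface are ours, not the paper's, and we ignore the fractional boundary weight when αn is not an integer.

```python
import math
import torch

def cvar_batch_loss(per_example_losses: torch.Tensor, alpha: float) -> torch.Tensor:
    # Worst-case distribution in the CVaR_alpha set: uniform mass on the
    # largest ceil(alpha * n) losses. Backpropagating through this value
    # yields the batch (sub)gradient estimator whose bias the paper bounds.
    n = per_example_losses.numel()
    k = max(1, math.ceil(alpha * n))
    tail, _ = torch.topk(per_example_losses, k)
    return tail.mean()
```

The multilevel Monte Carlo construction reduces the bias of such plug-in estimates by combining them at geometrically growing batch sizes. The sketch below follows the Blanchet and Glynn (2015) recipe in outline only; the level distribution, the defaults, and the helper names (`draw_batch`, `plug_in_grad`) are our assumptions, not the paper's code.

```python
import numpy as np

def mlmc_gradient(draw_batch, plug_in_grad, n0=2, jmax=10, rng=None):
    # draw_batch(m)   -> m fresh training examples
    # plug_in_grad(S) -> biased plug-in gradient of the robust loss on batch S
    rng = rng or np.random.default_rng()
    # Draw a level J with geometric tail P(J = j) ~ 2^{-j}; the paper tunes
    # this exponent to trade variance against expected computational cost.
    probs = 2.0 ** -np.arange(jmax + 1)
    probs /= probs.sum()
    j = rng.choice(jmax + 1, p=probs)
    # Coupled correction at level j: the estimate on a batch of size
    # n0 * 2^{j+1} minus the average of the estimates on its two halves.
    # In expectation these corrections telescope, cancelling the plug-in
    # bias level by level (truncated at jmax in this sketch).
    batch = draw_batch(n0 * 2 ** (j + 1))
    half = len(batch) // 2
    delta = plug_in_grad(batch) - 0.5 * (
        plug_in_grad(batch[:half]) + plug_in_grad(batch[half:])
    )
    # Base-level estimate plus the importance-weighted correction.
    return plug_in_grad(draw_batch(n0)) + delta / probs[j]
```

Either estimator's output can be fed to a standard stochastic (sub)gradient step; the paper's guarantees concern how many such gradient evaluations suffice.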


