Stochastic Approximation Approaches to Group Distributionally Robust Optimization

02/18/2023
by Lijun Zhang, et al.

This paper investigates group distributionally robust optimization (GDRO), whose goal is to learn a model that performs well over m different distributions. First, we formulate GDRO as a stochastic convex-concave saddle-point problem and demonstrate that stochastic mirror descent (SMD), using m samples in each iteration, achieves an O(m (log m)/ϵ^2) sample complexity for finding an ϵ-optimal solution, which matches the Ω(m/ϵ^2) lower bound up to a logarithmic factor. Then, we make use of techniques from online learning to reduce the number of samples required in each round from m to 1 while keeping the same sample complexity. Specifically, we cast GDRO as a two-player game in which one player simply performs SMD and the other runs an online algorithm for non-oblivious multi-armed bandits. Next, we consider a more practical scenario where the number of samples that can be drawn from each distribution differs, and propose a novel formulation of weighted DRO that allows us to derive distribution-dependent convergence rates. Denote by n_i the sample budget for the i-th distribution, and assume n_1 ≥ n_2 ≥ ⋯ ≥ n_m. In the first approach, we incorporate non-uniform sampling into SMD so that the sample budget is satisfied in expectation, and prove that the excess risk of the i-th distribution decreases at an O(√(n_1 log m)/n_i) rate. In the second approach, we use mini-batches to meet the budget exactly and also to reduce the variance of the stochastic gradients, and then leverage the stochastic mirror-prox algorithm, which can exploit small variances, to optimize a carefully designed weighted DRO problem. Under appropriate conditions, this approach attains an O((log m)/√(n_i)) convergence rate, which almost matches the optimal O(√(1/n_i)) rate of learning from the i-th distribution alone with n_i samples.
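
To make the saddle-point formulation concrete, below is a minimal sketch (not the authors' implementation) of stochastic mirror descent applied to the GDRO problem min_w max_{q ∈ Δ_m} Σ_i q_i E[ℓ(w; z_i)]: the model parameters w take a Euclidean gradient step, the distribution weights q take an entropic (exponentiated-gradient) ascent step, and one fresh sample is drawn from each of the m distributions per iteration. The helpers sample, loss, and grad, as well as the step sizes eta_w and eta_q, are assumed placeholders.

```python
import numpy as np

def smd_gdro(sample, loss, grad, w0, m, T, eta_w, eta_q):
    """Sketch of stochastic mirror descent for GDRO:
        min_w max_{q in simplex} sum_i q_i * E[loss(w; z_i)]

    sample(i)        -> one data point z from the i-th distribution (assumed helper)
    loss(w, z)       -> scalar loss                                  (assumed helper)
    grad(w, z)       -> gradient of the loss in w                    (assumed helper)
    """
    w = np.array(w0, dtype=float)
    q = np.full(m, 1.0 / m)               # start with uniform weights over the m distributions
    w_avg, q_avg = np.zeros_like(w), np.zeros(m)

    for _ in range(T):
        z = [sample(i) for i in range(m)]                      # m samples per iteration
        losses = np.array([loss(w, z[i]) for i in range(m)])
        g_w = sum(q[i] * grad(w, z[i]) for i in range(m))      # stochastic gradient in w

        w = w - eta_w * g_w                                    # Euclidean mirror step in w
        q = q * np.exp(eta_q * losses)                         # exponentiated-gradient ascent in q
        q /= q.sum()                                           # project back onto the simplex

        w_avg += w / T                                         # return averaged iterates,
        q_avg += q / T                                         # as is standard for saddle-point SMD
    return w_avg, q_avg
```

The averaged iterates reflect the standard analysis of SMD for convex-concave problems; the bandit-based variant described in the abstract replaces the loop over all m distributions with a single sampled index per round, which is what brings the per-round sample cost down from m to 1.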


