
Stochastic Constrained DRO with a Complexity Independent of Sample Size

by Qi Qi, et al.

Distributionally Robust Optimization (DRO), a popular method for training models that are robust to distribution shift between the training and test sets, has received tremendous attention in recent years. In this paper, we propose and analyze stochastic algorithms that apply to both non-convex and convex losses for solving the Kullback-Leibler (KL) divergence constrained DRO problem. Compared with existing methods for this problem, our stochastic algorithms not only enjoy competitive, if not better, complexity that is independent of the sample size, but also require only a constant batch size at every iteration, which is more practical for broad applications. We establish a nearly optimal complexity bound for finding an ϵ-stationary solution for non-convex losses and an optimal complexity bound for finding an ϵ-optimal solution for convex losses. Empirical studies demonstrate the effectiveness of the proposed algorithms for solving non-convex and convex constrained DRO problems.
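To make the KL-constrained DRO objective concrete, the following is a minimal NumPy sketch, not the paper's algorithm: it evaluates the standard Lagrangian dual form of the KL-constrained worst-case loss, sup over Q with KL(Q||P) ≤ ρ of E_Q[loss] = min over λ > 0 of λρ + λ log E_P[exp(loss/λ)], on a mini-batch for a fixed dual variable λ. The values of `lam` and `rho` are illustrative assumptions, not tuned parameters from the paper.

```python
import numpy as np

def kl_dro_dual_objective(losses, lam, rho):
    # Dual expression of KL-constrained DRO with radius rho, for fixed lam > 0:
    #   lam * rho + lam * log E_P[exp(loss / lam)]
    # Uses the log-sum-exp shift for numerical stability.
    m = losses.max()
    log_mean_exp = m / lam + np.log(np.mean(np.exp((losses - m) / lam)))
    return lam * rho + lam * log_mean_exp

def tilted_weights(losses, lam):
    # The worst-case distribution reweights each sample proportionally to
    # exp(loss / lam), so high-loss samples receive more weight.
    w = np.exp((losses - losses.max()) / lam)
    return w / w.sum()

# Toy usage on a single mini-batch of per-sample losses.
rng = np.random.default_rng(0)
losses = rng.exponential(1.0, size=32)
obj = kl_dro_dual_objective(losses, lam=1.0, rho=0.1)
w = tilted_weights(losses, lam=1.0)
```

By Jensen's inequality the dual objective is never below the plain average loss, and the tilted weights concentrate on the hardest examples in the batch; a stochastic algorithm would update the model (and λ) using such constant-size batch estimates.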



