Optimal Regularized Online Convex Allocation by Adaptive Re-Solving

09/01/2022
by   Wanteng Ma, et al.

This paper introduces a dual-based algorithmic framework for regularized online resource allocation problems with cumulative convex rewards, hard resource constraints, and a non-separable regularizer. By adaptively updating the resource constraints, the proposed framework requires only an approximate solution of the empirical dual problem up to a certain accuracy, yet delivers an optimal logarithmic regret under a local strong convexity assumption. Surprisingly, a delicate analysis of the dual objective function enables us to eliminate the notorious log-log factor in the regret bound. The flexibility of the framework makes well-known, computationally fast algorithms immediately applicable, e.g., dual gradient descent and stochastic gradient descent. If the resource constraints are not adaptively updated during dual optimization, a worst-case square-root regret lower bound is established, which underscores the critical role of adaptive dual variable updates. Comprehensive numerical experiments and a real-data application demonstrate the merits of the proposed framework.
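The core mechanism described above, pricing resources with dual variables and adaptively re-scaling the remaining budget, can be sketched in a toy setting. The following is a minimal illustration, not the paper's algorithm: it assumes linear rewards, binary accept/reject actions, and no regularizer, and runs one stochastic dual (sub)gradient step per request; all names and constants are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d = 1000, 3                   # horizon and number of resources (toy values)
B = np.full(d, 0.25 * T)         # total resource budgets
remaining = B.copy()

def dual_step(lmbda, r, a, rho, eta):
    """One stochastic subgradient step on the sample dual.

    For a linear request with reward r and consumption vector a, the
    price-based primal action is x = 1 iff r exceeds the dual cost
    lambda . a; the dual subgradient is rho - a * x, where rho is the
    adaptively updated per-period budget.
    """
    x = 1.0 if r > lmbda @ a else 0.0
    # Gradient step followed by projection onto the nonnegative orthant.
    lmbda = np.maximum(lmbda - eta * (rho - a * x), 0.0)
    return lmbda, x

lmbda = np.zeros(d)
total_reward = 0.0
for t in range(T):
    r = rng.uniform(0.5, 1.5)          # request reward
    a = rng.uniform(0.0, 1.0, size=d)  # resource consumption
    rho = remaining / (T - t)          # adaptive constraint update: re-scale leftover budget
    eta = 1.0 / np.sqrt(t + 1)         # diminishing step size
    lmbda, x = dual_step(lmbda, r, a, rho, eta)
    if x and np.all(remaining >= a):   # accept only while resources last
        remaining -= a
        total_reward += r
```

The line computing `rho` is where the adaptive update enters: without it, the dual prices target the static average budget `B / T`, which is the non-adaptive setting for which the abstract's square-root lower bound applies.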


