Penalised FTRL With Time-Varying Constraints

04/05/2022, by Douglas J. Leith, et al.

In this paper we extend the classical Follow-The-Regularized-Leader (FTRL) algorithm to encompass time-varying constraints through adaptive penalization. We establish sufficient conditions for the proposed Penalized FTRL algorithm to achieve O(√t) regret and violation with respect to the strong benchmark X̂^max_t. In the absence of prior knowledge of the constraints, this is probably the largest benchmark set for which we can reasonably hope. Our sufficient conditions are necessary in the sense that, when they are violated, there exist examples where O(√t) regret and violation is not achieved. Compared to the best existing primal-dual algorithms, Penalized FTRL substantially extends the class of problems for which O(√t) regret and violation performance is achievable.
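To make the idea concrete, here is a minimal sketch of FTRL with adaptive constraint penalization on the decision set [0, 1]. This is an illustrative toy, not the paper's algorithm: the hinge-penalty form, the quadratic regularizer, the √t penalty-weight schedule, and the grid-search solver are all assumptions chosen for clarity.

```python
import numpy as np

def penalized_ftrl(losses, constraints, horizon, lam=None, eta=1.0, grid=1001):
    """Toy Penalized-FTRL sketch (assumed form, not the paper's algorithm).

    At each round t, play
        x_t = argmin_{x in [0,1]}  sum_{s<t} f_s(x)
              + lam_t * sum_{s<t} max(0, g_s(x))   # penalty for past violations
              + (eta/2) * x**2                     # quadratic regularizer
    The inner argmin is solved by grid search over [0, 1] for simplicity.
    `losses[t]` and `constraints[t]` are vectorized functions of x; the
    constraint at round t is g_t(x) <= 0.
    """
    xs = np.linspace(0.0, 1.0, grid)
    plays = []
    cum_loss = np.zeros(grid)   # accumulated losses evaluated on the grid
    cum_pen = np.zeros(grid)    # accumulated hinge violations on the grid
    for t in range(horizon):
        # assumed schedule: penalty weight grows like sqrt(t) over time
        lam_t = lam if lam is not None else np.sqrt(t + 1)
        obj = cum_loss + lam_t * cum_pen + 0.5 * eta * xs ** 2
        x_t = xs[np.argmin(obj)]
        plays.append(float(x_t))
        # observe the round-t loss and constraint, then accumulate them
        cum_loss += losses[t](xs)
        cum_pen += np.maximum(0.0, constraints[t](xs))
    return plays
```

For example, with losses f_t(x) = -x (which push the player toward x = 1) and the time-varying constraint x - 0.5 ≤ 0, the growing penalty weight drives the plays to the constrained optimum x = 0.5.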


Related research

- 01/08/2022 · Lazy Lagrangians with Predictions for Online Learning: "We consider the general problem of online convex optimization with time-..."
- 11/15/2021 · Simultaneously Achieving Sublinear Regret and Constraint Violations for Online Convex Optimization with Time-varying Constraints: "In this paper, we develop a novel virtual-queue-based online algorithm f..."
- 05/27/2021 · An Online Learning Approach to Optimizing Time-Varying Costs of AoI: "We consider systems that require timely monitoring of sources over a com..."
- 08/16/2011 · Stability Conditions for Online Learnability: "Stability is a general notion that quantifies the sensitivity of a learn..."
- 06/28/2023 · Online Game with Time-Varying Coupled Inequality Constraints: "In this paper, online game is studied, where at each time, a group of pl..."
- 12/05/2019 · Turnpike in optimal shape design: "We introduce and study the turnpike property for time-varying shapes, wi..."
