Adaptive Algorithms for Online Convex Optimization with Long-term Constraints

12/23/2015
by Rodolphe Jenatton, et al.

We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, which are constraints that need to be satisfied when accumulated over a finite number of rounds T, but can be violated in intermediate rounds. For some user-defined trade-off parameter β ∈ (0, 1), the proposed algorithm achieves cumulative regret bounds of O(T^max(β, 1-β)) and O(T^(1-β/2)) for the loss and the constraint violations, respectively. Our results hold for convex losses, can handle arbitrary convex constraints, and do not require knowledge of the number of rounds in advance. Our contributions improve over the best known cumulative regret bounds of Mahdavi et al. (2012), which are respectively O(T^1/2) and O(T^3/4) for general convex domains, and respectively O(T^2/3) and O(T^2/3) when further restricting to polyhedral domains. We supplement the analysis with experiments validating the performance of our algorithm in practice.
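To make the setting concrete, below is a minimal sketch of the generic primal-dual online gradient descent template used in this line of work: a gradient step on the loss plus the Lagrangian term for the primal iterate, and a step on the constraint violation for a nonnegative dual variable. The toy problem (linear losses on a Euclidean ball, a single linear constraint), the step sizes η_t = t^(-β), and the dual shrinkage factor are illustrative assumptions, not the paper's exact algorithm or tuning.

```python
import numpy as np


def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)


def run_primal_dual(T=10_000, beta=0.5, seed=0):
    """Primal-dual online gradient descent with a long-term constraint g(x) <= 0.

    Sketch only: step sizes eta_t = t**(-beta) and the dual shrinkage term are
    assumptions chosen for illustration, not the paper's adaptive schedule.
    """
    rng = np.random.default_rng(seed)
    d = 5
    x = np.zeros(d)                 # primal iterate
    lam = 0.0                       # dual variable for the long-term constraint
    a, b = np.ones(d) / d, 0.1      # constraint g(x) = <a, x> - b <= 0

    cum_loss, cum_violation = 0.0, 0.0
    for t in range(1, T + 1):
        c_t = rng.normal(size=d)    # round-t linear loss f_t(x) = <c_t, x>
        eta = t ** (-beta)          # decaying step size controlled by beta

        g_x = a @ x - b             # constraint value at the current iterate
        cum_loss += c_t @ x
        cum_violation += max(g_x, 0.0)

        # Primal step: gradient of f_t plus lam times the gradient of g,
        # followed by projection back onto the feasible ball.
        x = project_ball(x - eta * (c_t + lam * a))
        # Dual step: increase lam with the constraint violation, keep it
        # nonnegative; the (1 - eta**2) shrinkage keeps lam bounded.
        lam = max((1.0 - eta ** 2) * lam + eta * g_x, 0.0)

    return cum_loss, cum_violation


if __name__ == "__main__":
    loss, violation = run_primal_dual()
    print(f"cumulative loss: {loss:.2f}, cumulative violation: {violation:.2f}")
```

In this template the trade-off parameter β plays the role described in the abstract: larger β shrinks the steps faster, which tightens the constraint-violation term at the expense of the loss regret, and vice versa.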


