Optimistic and Adaptive Lagrangian Hedging

01/23/2021
by Ryan D'Orazio, et al.

In online learning, an algorithm plays against an environment whose losses may be chosen adversarially at each round. This framework is general enough to include problems that are not adversarial, such as offline optimization or saddle-point problems (i.e. min-max optimization). However, online algorithms are typically not designed to leverage the additional structure present in non-adversarial problems. Recently, slight modifications to well-known online algorithms, such as optimism and adaptive step sizes, have been used in several domains to accelerate online learning: they recover optimal rates in offline smooth optimization and accelerate convergence to saddle points or social welfare in smooth games. In this work we introduce optimism and adaptive step sizes to Lagrangian hedging, a class of online algorithms that includes regret-matching and hedge (i.e. multiplicative weights). Our results include: a general regret bound; a path-length regret bound for a fixed smooth loss, applicable to an optimistic variant of regret-matching and regret-matching+; optimistic regret bounds for Φ-regret, a framework that includes external, internal, and swap regret; and optimistic bounds for a family of algorithms that includes regret-matching+ as a special case.
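To make the idea concrete, below is a minimal sketch of regret-matching with an optimistic prediction term, in the spirit of the optimistic variants discussed above. It is illustrative only and not the paper's algorithm: the function names (optimistic_regret_matching, loss_fn) are hypothetical, and the prediction is assumed to be the previous round's instantaneous regret vector, one common choice for optimism.

```python
import numpy as np

def positive_part(v):
    """Elementwise max(v, 0)."""
    return np.maximum(v, 0.0)

def normalize(v, n):
    """Return v / sum(v), or the uniform strategy if the positive mass is zero."""
    s = v.sum()
    return v / s if s > 0 else np.full(n, 1.0 / n)

def optimistic_regret_matching(loss_fn, n_actions, n_rounds):
    """Sketch (assumption, not the paper's method): play a strategy proportional
    to the positive part of (cumulative regret + predicted next regret), where
    the prediction is the previous round's instantaneous regret vector."""
    R = np.zeros(n_actions)   # cumulative regret
    m = np.zeros(n_actions)   # prediction of the next instantaneous regret
    for t in range(n_rounds):
        x = normalize(positive_part(R + m), n_actions)  # optimistic strategy
        loss = loss_fn(t, x)                            # loss vector from the environment
        r = x @ loss - loss                             # instantaneous regret of each action
        R += r
        m = r                                           # last regret used as next prediction
    return R
```

Replacing the update R += r with R = positive_part(R + r) gives the regret-matching+ style update mentioned in the abstract; the optimistic prediction term is used in the same way.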


