Best-Case Lower Bounds in Online Learning

06/23/2021
by Cristóbal Guzmán, et al.

Much of the work in online learning focuses on sublinear upper bounds on the regret. In this work, we initiate the study of best-case lower bounds in online convex optimization, wherein we bound the largest improvement an algorithm can obtain relative to the single best action in hindsight. This problem is motivated by the goal of better understanding the adaptivity of a learning algorithm. A further motivation comes from fairness: best-case lower bounds are known to be instrumental in obtaining algorithms for decision-theoretic online learning (DTOL) that satisfy a notion of group fairness. Our main contribution is a general method for proving best-case lower bounds for Follow The Regularized Leader (FTRL) algorithms with time-varying regularizers, which we use to show that best-case lower bounds are of the same order as existing regret upper bounds: this covers fixed learning rates, decreasing learning rates, timeless methods, and adaptive gradient methods. In stark contrast, we show that the linearized version of FTRL can attain negative linear regret. Finally, in DTOL with two experts and binary predictions, we fully characterize the best-case sequences, which provides a finer understanding of the best-case lower bounds.
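To make the objects in the abstract concrete, here is a minimal sketch of the DTOL setting with FTRL under an entropic regularizer (i.e., Hedge with a fixed learning rate), computing the regret against the single best expert in hindsight. The function name `hedge_regret` and the choice of learning rate are illustrative assumptions, not the paper's notation; the point is only that regret is the learner's cumulative loss minus the best expert's, and best-case lower bounds ask how negative this quantity can be.

```python
import numpy as np

def hedge_regret(losses, eta=0.5):
    """Run Hedge (FTRL with the entropic regularizer and fixed learning
    rate eta) on a loss matrix of shape (T, n_experts), with entries in
    [0, 1]. Returns the learner's regret relative to the single best
    expert in hindsight; best-case lower bounds ask how negative this
    value can be over all loss sequences."""
    T, n = losses.shape
    cum = np.zeros(n)          # cumulative losses of each expert
    learner_loss = 0.0
    for t in range(T):
        w = np.exp(-eta * cum)
        p = w / w.sum()        # FTRL play: softmax of negative cumulative loss
        learner_loss += p @ losses[t]
        cum += losses[t]
    return float(learner_loss - cum.min())

# Binary losses for two experts, as in the DTOL setting of the paper.
rng = np.random.default_rng(0)
L = rng.integers(0, 2, size=(100, 2)).astype(float)
print(hedge_regret(L))
```

For losses in [0, 1], the classical upper bound for this scheme is ln(n)/eta + eta*T/8; the paper's result is that the matching best-case lower bound for FTRL-type methods is of the same order, while the linearized variant can do linearly better than the best expert.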


