Lipschitz and Comparator-Norm Adaptivity in Online Learning

02/27/2020
by Zakaria Mhammedi, et al.

We study Online Convex Optimization in the unbounded setting, where neither the predictions nor the gradients are constrained. The goal is to adapt simultaneously to both the sequence of gradients and the comparator. We first develop parameter-free and scale-free algorithms for a simplified setting with hints. We present two versions: the first adapts to the squared norms of the comparator and of the gradients separately, using O(d) time per round; the second adapts to their squared inner products (which measure variance only in the comparator direction), using O(d^3) time per round. We then generalize two prior reductions to the unbounded setting: one to remove the need for hints, and a second to handle the range-ratio problem (which already arises in prior work). We discuss their optimality in light of prior and new lower bounds. We apply our methods to obtain sharper regret bounds for scale-invariant online prediction with linear models.
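To make the "scale-free" notion concrete, the sketch below shows a minimal AdaGrad-norm-style online gradient descent whose step sizes adapt to the observed gradient norms. This is an illustrative assumption on my part, not the paper's parameter-free algorithm: it only demonstrates the key invariance that rescaling all gradients by a constant leaves the iterates unchanged.

```python
import numpy as np

def scale_free_ogd(gradients, diameter=1.0):
    """Illustrative scale-free online gradient descent (AdaGrad-norm style).

    NOT the algorithm from the paper above; a minimal sketch showing
    gradient adaptivity: step sizes shrink with the cumulative squared
    gradient norm, so multiplying every gradient by a constant c scales
    the step size by 1/c and leaves the iterates unchanged.
    """
    d = len(gradients[0])
    w = np.zeros(d)
    sq_norm_sum = 0.0  # running sum of squared gradient norms
    iterates = []
    for g in gradients:
        iterates.append(w.copy())
        g = np.asarray(g, dtype=float)
        sq_norm_sum += float(g @ g)
        # Step size adapts to the gradient scale seen so far.
        eta = diameter / np.sqrt(sq_norm_sum) if sq_norm_sum > 0 else 0.0
        w = w - eta * g
    return iterates
```

For example, feeding the same gradient sequence scaled by 10 produces exactly the same iterates, which is the scale-invariance property the paper's algorithms achieve (there, additionally without a bounded-domain parameter).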


Related research:
- Isotuning With Applications To Scale-Free Online Learning (12/29/2021)
- Black-Box Reductions for Parameter-free Online Learning in Banach Spaces (02/17/2018)
- Online Learning Without Prior Information (03/07/2017)
- Online Linear Optimization with Many Hints (10/06/2020)
- Parameter-free Online Convex Optimization with Sub-Exponential Noise (02/05/2019)
- Lipschitz Adaptivity with Multiple Learning Rates in Online Learning (02/27/2019)
- Adaptive scale-invariant online algorithms for learning linear models (02/20/2019)
