Lipschitz Adaptivity with Multiple Learning Rates in Online Learning

02/27/2019
by Zakaria Mhammedi et al.

We aim to design adaptive online learning algorithms that exploit any special structure present in the learning task at hand, with as little manual tuning by the user as possible. A fundamental obstacle in the design of such adaptive algorithms is calibrating a so-called step-size or learning-rate hyperparameter, whose ideal value depends on quantities such as the variance or the norms of the gradients. A recent technique promises to overcome this difficulty by maintaining multiple learning rates in parallel; it has been applied in the MetaGrad algorithm for online convex optimization and the Squint algorithm for prediction with expert advice. In both cases, however, the user still has to provide in advance a Lipschitz hyperparameter that bounds the norms of the gradients. Although this hyperparameter is typically not available in advance, tuning it correctly is crucial: if it is set too small, the methods may fail completely, and if it is set too large, performance deteriorates significantly. In the present work we remove this Lipschitz hyperparameter by designing new versions of MetaGrad and Squint that adapt to its optimal value automatically. We achieve this by dynamically updating the set of active learning rates. For MetaGrad, we further improve the computational efficiency of handling constraints on the domain of prediction, and we remove the need to specify the number of rounds in advance.
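
To make the central idea concrete, here is a minimal Python sketch of the multiple-learning-rate technique. It runs one projected-gradient-descent copy per learning rate on a geometric grid, aggregates them with MetaGrad/Squint-style tilted exponential weights, and imitates the Lipschitz adaptivity with a simple doubling trick: whenever an observed gradient exceeds the running Lipschitz estimate, the estimate is doubled and the grid of active learning rates is rebuilt. This is an illustrative simplification, not the algorithm from the paper (which, in particular, avoids full restarts); the class name MultiEtaLearner, the grid size, and the expert update rule are all assumptions made for the sketch.

    import numpy as np

    # Illustrative sketch only (hypothetical class, simplified updates), NOT
    # the paper's algorithm. It combines: (1) several learning rates run in
    # parallel and aggregated with tilted exponential weights, as in
    # MetaGrad/Squint, and (2) a doubling trick that grows the Lipschitz
    # estimate and rebuilds the grid of active learning rates whenever a
    # larger gradient is observed.
    class MultiEtaLearner:
        def __init__(self, domain_radius=1.0, n_rates=10):
            self.D = domain_radius   # predictions live in [-D, D]
            self.n_rates = n_rates   # size of the learning-rate grid
            self.G_hat = None        # running Lipschitz (gradient-norm) estimate
            self.etas = None         # active learning rates
            self.points = None       # one gradient-descent iterate per rate
            self.log_w = None        # log-weights of the aggregation

        def _rebuild_grid(self):
            # Geometric grid eta_i = 1 / (G_hat * D * 2^i): the largest rate
            # is calibrated to the current Lipschitz estimate, the rest cover
            # smaller scales at factor-2 resolution.
            self.etas = 1.0 / (self.G_hat * self.D * 2.0 ** np.arange(self.n_rates))
            self.points = np.zeros(self.n_rates)
            self.log_w = np.zeros(self.n_rates)   # uniform prior over rates

        def predict(self):
            if self.etas is None:
                return 0.0            # nothing observed yet: play the center
            w = np.exp(self.log_w - self.log_w.max())
            p = self.etas * w         # MetaGrad-style eta-tilted combination
            p /= p.sum()
            return float(np.clip(p @ self.points, -self.D, self.D))

        def update(self, grad):
            g = abs(grad)
            if g == 0.0:
                return                # zero gradient carries no information
            if self.G_hat is None or g > self.G_hat:
                # Gradient exceeds the current Lipschitz estimate: double the
                # estimate and rebuild the set of active learning rates.
                self.G_hat = g if self.G_hat is None else max(g, 2.0 * self.G_hat)
                self._rebuild_grid()
            x = self.predict()
            # Per-rate instantaneous regret r_i = grad * (x - point_i), and a
            # Squint/MetaGrad-style weight update exp(eta*r - (eta*r)^2).
            r = grad * (x - self.points)
            self.log_w += self.etas * r - (self.etas * r) ** 2
            # Simplified expert step: projected gradient descent per rate.
            self.points = np.clip(self.points - self.etas * grad, -self.D, self.D)

    # Tiny demo: track the minimizer of f_t(x) = (x - 0.3)^2 over [-1, 1].
    learner = MultiEtaLearner()
    for _ in range(1000):
        x = learner.predict()
        learner.update(grad=2.0 * (x - 0.3))
    print(round(learner.predict(), 2))   # converges near 0.3

A geometric grid is the natural choice here: logarithmically many rates suffice to approximate any unknown optimal learning rate within a factor of two, which is what makes running all of them in parallel affordable.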

Related research

03/03/2019 · Anytime Online-to-Batch Conversions, Optimism, and Acceleration
02/10/2020 · Adaptive Online Learning with Varying Norms
03/07/2017 · Online Learning Without Prior Information
02/24/2019 · Artificial Constraints and Lipschitz Hints for Unconstrained Online Learning
02/27/2020 · Lipschitz and Comparator-Norm Adaptivity in Online Learning
07/03/2023 · Trading-Off Payments and Accuracy in Online Classification with Paid Stochastic Experts
06/13/2023 · Stepsize Learning for Policy Gradient Methods in Contextual Markov Decision Processes
