Anytime Online-to-Batch Conversions, Optimism, and Acceleration

03/03/2019
by Ashok Cutkosky, et al.

A standard way to obtain convergence guarantees in stochastic convex optimization is to run an online learning algorithm and then output the average of its iterates: the actual iterates of the online learning algorithm do not come with individual guarantees. We close this gap by introducing a black-box modification to any online learning algorithm whose iterates converge to the optimum in stochastic scenarios. We then consider the case of smooth losses, and show that combining our approach with optimistic online learning algorithms immediately yields a fast convergence rate of O(L/T^{3/2} + σ/√T) on L-smooth problems with σ^2 variance in the gradients. Finally, we provide a reduction that converts any adaptive online algorithm into one that obtains the optimal accelerated rate of Õ(L/T^2 + σ/√T), while still maintaining Õ(1/√T) convergence in the non-smooth setting. Importantly, our algorithms adapt to L and σ automatically: they do not need to know either to obtain these rates.
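The basic conversion the abstract describes is simple enough to sketch. Below is a minimal, hypothetical Python illustration of the idea, not the paper's algorithm: the get_iterate/update interface, the uniform averaging weights, and the stand-in online gradient descent learner are all assumptions made for the example. The point is that, rather than averaging the online learner's iterates only at the end, each stochastic gradient is queried at the current running average of the iterates and fed back to the learner, so every intermediate average is itself a usable output point.

```python
import numpy as np

class OnlineGradientDescent:
    """Stand-in online learner (illustrative, not from the paper):
    online gradient descent with a fixed step size."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def get_iterate(self):
        return self.w

    def update(self, g):
        # Standard OGD step on the linear loss <g, .>
        self.w = self.w - self.lr * g

def anytime_online_to_batch(learner, stochastic_grad, T):
    """Run `learner` for T rounds, querying each stochastic gradient at the
    running average of the learner's iterates instead of at the iterates
    themselves, and feeding that gradient back to the learner."""
    x_avg = None
    for t in range(1, T + 1):
        w_t = learner.get_iterate()
        # Running average of w_1, ..., w_t (uniform weights, by assumption).
        x_avg = w_t if x_avg is None else x_avg + (w_t - x_avg) / t
        g_t = stochastic_grad(x_avg)  # gradient queried at the average
        learner.update(g_t)
    return x_avg

# Usage: minimize f(x) = ||x - 1||^2 / 2 from noisy gradient estimates.
rng = np.random.default_rng(0)
target = np.ones(5)
noisy_grad = lambda x: (x - target) + 0.1 * rng.standard_normal(5)
x_out = anytime_online_to_batch(OnlineGradientDescent(dim=5), noisy_grad, T=1000)
print(x_out)  # close to the all-ones optimum
```

Presumably the faster smooth-case and accelerated rates in the abstract come from swapping in optimistic learners and non-uniform averaging weights within this same loop, but the sketch above covers only the basic anytime conversion.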


Related research

09/08/2018
Online Adaptive Methods, Universality and Acceleration
We present a novel method for convex unconstrained optimization that, wi...

12/13/2012
Learning Sparse Low-Threshold Linear Classifiers
We consider the problem of learning a non-negative linear classifier wit...

02/27/2019
Lipschitz Adaptivity with Multiple Learning Rates in Online Learning
We aim to design adaptive online learning algorithms that take advantage...

02/24/2019
Combining Online Learning Guarantees
We show how to take any two parameter-free online learning algorithms wi...

05/30/2017
Online to Offline Conversions, Universality and Adaptive Minibatch Sizes
We present an approach towards convex optimization that relies on a nove...

03/30/2018
Online Regression with Model Selection
Online learning algorithms have a wide variety of applications in large ...

07/20/2021
Open Problem: Is There an Online Learning Algorithm That Learns Whenever Online Learning Is Possible?
This open problem asks whether there exists an online learning algorithm...
