Matrix-Free Preconditioning in Online Learning

05/29/2019
by Ashok Cutkosky et al.

We provide an online convex optimization algorithm with regret that interpolates between the regret of an algorithm using an optimal preconditioning matrix and one using a diagonal preconditioning matrix. Our regret bound is never worse than that obtained by diagonal preconditioning, and in certain settings it even surpasses that of algorithms with full-matrix preconditioning. Importantly, our algorithm runs in the same time and space complexity as online gradient descent. Along the way we incorporate new techniques that mildly streamline and improve logarithmic factors in prior regret analyses. We conclude by benchmarking our algorithm on synthetic data and deep learning tasks.
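To make the comparison concrete, below is a minimal sketch of the two baselines the abstract interpolates between: diagonal (AdaGrad-style) and full-matrix preconditioned online gradient descent. This is not the paper's matrix-free algorithm; the function names, the fixed learning rate eta, and the epsilon regularization are illustrative assumptions.

```python
import numpy as np

def diagonal_precond_ogd(grads, d, eta=1.0, eps=1e-8):
    """AdaGrad-style diagonal preconditioning: O(d) time and space per step."""
    w = np.zeros(d)
    h = np.zeros(d)                      # running sum of squared gradients
    for g in grads:
        h += g * g
        w -= eta * g / (np.sqrt(h) + eps)  # per-coordinate step sizes
    return w

def full_matrix_precond_ogd(grads, d, eta=1.0, eps=1e-8):
    """Full-matrix preconditioning: O(d^2) space, O(d^3) time per step."""
    w = np.zeros(d)
    G = eps * np.eye(d)                  # running sum of gradient outer products
    for g in grads:
        G += np.outer(g, g)
        vals, vecs = np.linalg.eigh(G)   # G is symmetric positive definite
        inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, eps))) @ vecs.T
        w -= eta * inv_sqrt @ g          # step preconditioned by G^{-1/2}
    return w
```

The contrast in per-step cost is the point of the paper: the proposed method keeps the O(d) time and space of the diagonal update (the same as online gradient descent) while its regret is never worse than the diagonal bound and can approach the full-matrix bound.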

Related research:

- Less Regret via Online Conditioning (02/25/2010)
- Adaptive Online Learning with Varying Norms (02/10/2020)
- Online Linear Optimization with Many Hints (10/06/2020)
- Convergence Analyses of Online ADAM Algorithm in Convex Setting and Two-Layer ReLU Neural Network (05/22/2019)
- Online Learning: Sufficient Statistics and the Burkholder Method (03/20/2018)
- Sketchy: Memory-efficient Adaptive Regularization with Frequent Directions (02/07/2023)
- Combining Online Learning Guarantees (02/24/2019)
