Less Regret via Online Conditioning

02/25/2010
by Matthew Streeter, et al.

We analyze and evaluate an online gradient descent algorithm with adaptive per-coordinate adjustment of learning rates. Our algorithm can be thought of as an online version of batch gradient descent with a diagonal preconditioner. This approach leads to regret bounds that are stronger than those of standard online gradient descent for general online convex optimization problems. Experimentally, we show that our algorithm is competitive with state-of-the-art algorithms for large-scale machine learning problems.
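
The abstract summarizes the idea without stating the update rule. The sketch below is one plausible reading, assuming an AdaGrad-style per-coordinate step size of the form eta / sqrt(sum of past squared gradients), which behaves like a diagonal preconditioner built up online. The class name PerCoordinateOGD, the learning rate eta, and the toy least-squares loop are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

class PerCoordinateOGD:
    """Online gradient descent with per-coordinate adaptive learning rates.

    Illustrative sketch only: each coordinate uses the step size
    eta / sqrt(sum of its past squared gradients), so frequently updated
    coordinates get smaller steps, acting as an online diagonal preconditioner.
    """

    def __init__(self, dim, eta=1.0, eps=1e-8):
        self.w = np.zeros(dim)      # current iterate
        self.g_sq = np.zeros(dim)   # per-coordinate sum of squared gradients
        self.eta = eta
        self.eps = eps

    def predict(self, x):
        return float(self.w @ x)

    def update(self, grad):
        """Take one online step given the gradient of the current loss at w."""
        self.g_sq += grad ** 2
        step = self.eta / (np.sqrt(self.g_sq) + self.eps)
        self.w -= step * grad       # larger steps on rarely updated coordinates


# Toy usage: online least squares on random examples (hypothetical data).
rng = np.random.default_rng(0)
learner = PerCoordinateOGD(dim=5, eta=0.5)
true_w = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
for _ in range(100):
    x = rng.normal(size=5)
    y = x @ true_w
    err = learner.predict(x) - y
    learner.update(err * x)         # gradient of 0.5 * (w.x - y)^2
```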

Related research

Fast Rates for Online Gradient Descent Without Strong Convexity via Hoffman's Bound (02/13/2018)
Hoffman's classical result gives a bound on the distance of a point from...

Matrix-Free Preconditioning in Online Learning (05/29/2019)
We provide an online convex optimization algorithm with regret that inte...

Regret Analysis of Online Gradient Descent-based Iterative Learning Control with Model Mismatch (04/10/2022)
In Iterative Learning Control (ILC), a sequence of feedforward control a...

Dynamic Online Gradient Descent with Improved Query Complexity: A Theoretical Revisit (12/26/2018)
We provide a new theoretical analysis framework to investigate online gr...

Optimizing Optimizers: Regret-optimal gradient descent algorithms (12/31/2020)
The need for fast and robust optimization algorithms are of critical imp...

Convergence Analyses of Online ADAM Algorithm in Convex Setting and Two-Layer ReLU Neural Network (05/22/2019)
Nowadays, online learning is an appealing learning paradigm, which is of...

Parameter-free projected gradient descent (05/31/2023)
We consider the problem of minimizing a convex function over a closed co...
