Adaptive scale-invariant online algorithms for learning linear models

02/20/2019
by Michał Kempka, et al.

We consider online learning with linear models, where the algorithm predicts on sequentially revealed instances (feature vectors) and is compared against the best linear function (comparator) in hindsight. Popular algorithms in this framework, such as Online Gradient Descent (OGD), have parameters (learning rates) that ideally should be tuned to the scales of the features and of the optimal comparator, but these quantities only become available at the end of the learning process. In this paper, we resolve this tuning problem by proposing online algorithms whose predictions are invariant under arbitrary rescaling of the features. The algorithms have no parameters to tune, require no prior knowledge of the scale of the instances or of the comparator, and achieve regret bounds matching (up to a logarithmic factor) those of OGD with optimally tuned separate learning rates per dimension, while retaining comparable runtime performance.
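To make the scale-invariance property concrete, below is a minimal Python sketch of the idea, not the algorithm from the paper: an AdaGrad-style per-coordinate update whose step sizes are normalized by the running maximum feature magnitude. Under this update, rescaling feature i by a constant c rescales the weight w_i by 1/c, so the sequence of predictions is unchanged (up to the small numerical constants used to avoid division by zero). The class name ScaleAdaptiveSGD and the parameter eps are our own illustrative choices.

    import numpy as np

    class ScaleAdaptiveSGD:
        """Illustrative per-coordinate online learner (not the paper's algorithm).

        Step sizes combine an AdaGrad-style gradient accumulator with a
        running maximum of each feature's magnitude, so that rescaling a
        feature leaves the predictions (essentially) unchanged.
        """

        def __init__(self, dim, eps=1.0):
            self.w = np.zeros(dim)    # weight vector
            self.M = np.zeros(dim)    # running max of |x_i| seen so far
            self.S = np.zeros(dim)    # accumulated squared gradients
            self.eps = eps            # overall step-size scale (illustrative)

        def predict(self, x):
            return float(self.w @ x)

        def update(self, x, grad_pred):
            # grad_pred is the derivative of the loss w.r.t. the prediction,
            # e.g. (y_hat - y) for squared loss; it is itself scale-invariant.
            self.M = np.maximum(self.M, np.abs(x))
            g = grad_pred * x         # per-coordinate gradient
            self.S += g ** 2
            # If x_i -> c * x_i, then M_i -> c * M_i and sqrt(S_i) -> c * sqrt(S_i),
            # so step_i shrinks by c^2 and w_i by 1/c: predictions are invariant.
            step = self.eps / (self.M * np.sqrt(self.S) + 1e-12)
            self.w -= step * g

    # Demo: two features on wildly different scales, no tuning needed.
    rng = np.random.default_rng(0)
    scales = np.array([1.0, 1e6])
    w_true = np.array([2.0, -3.0]) / scales   # comparator adapted to the scales
    learner = ScaleAdaptiveSGD(dim=2)
    for t in range(1000):
        x = rng.normal(size=2) * scales
        y = float(w_true @ x)
        y_hat = learner.predict(x)
        learner.update(x, y_hat - y)          # squared-loss gradient

The paper's actual algorithms (and their regret analysis) are more refined; this sketch only mirrors the invariance property described in the abstract.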


Related research

08/23/2017 · Scale-invariant unconstrained online learning
We consider a variant of online convex optimization in which both the in...

01/25/2023 · Overcoming Prior Misspecification in Online Learning to Rank
The recent literature on online learning to rank (LTR) has established t...

03/14/2016 · Online Isotonic Regression
We consider the online version of the isotonic regression problem. Given...

08/09/2014 · Normalized Online Learning
We introduce online learning algorithms which are independent of feature...

10/06/2020 · Online Linear Optimization with Many Hints
We study an online linear optimization (OLO) problem in which the learne...

02/24/2019 · Artificial Constraints and Lipschitz Hints for Unconstrained Online Learning
We provide algorithms that guarantee regret R_T(u) ≤ Õ(Gu^3 + G(u+1)√T) ...

02/27/2020 · Lipschitz and Comparator-Norm Adaptivity in Online Learning
We study Online Convex Optimization in the unbounded setting where neith...
