Scale-invariant unconstrained online learning

08/23/2017
by Wojciech Kotłowski, et al.

We consider a variant of online convex optimization in which both the instances (input vectors) and the comparator (weight vector) are unconstrained. We exploit a natural scale-invariance symmetry in this unconstrained setting: the predictions of the optimal comparator are invariant under any linear transformation of the instances. Our goal is to design online algorithms that also enjoy this property, i.e., that are scale-invariant. We start with the case of coordinate-wise invariance, in which the individual coordinates (features) can be arbitrarily rescaled. We give an algorithm that achieves an essentially optimal regret bound in this setup, expressed in terms of a coordinate-wise scale-invariant norm of the comparator. We then study general invariance with respect to arbitrary linear transformations. We first give a negative result, showing that no algorithm can achieve a meaningful bound in terms of a scale-invariant norm of the comparator in the worst case. We then complement this result with a positive one, providing an algorithm that "almost" achieves the desired bound, incurring only a logarithmic overhead in terms of the norm of the instances.
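To make coordinate-wise scale invariance concrete, below is a minimal Python sketch of an online linear learner whose predictions are unchanged when any individual feature is rescaled by a positive constant: each coordinate is normalized by the largest magnitude observed for it so far. This is an illustrative toy, not the paper's algorithm; the class name, the fixed learning rate, and the running-max normalization are assumptions made for the example.

```python
import numpy as np

class CoordinatewiseInvariantSketch:
    """Toy online linear learner illustrating coordinate-wise scale
    invariance: multiplying any feature by a positive constant c also
    scales the running maximum for that coordinate by c, so the
    normalized instance, and hence every prediction and update, is
    unchanged. Illustrative sketch only, not the paper's algorithm."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)   # weight vector
        self.m = np.zeros(dim)   # running max of |x_i| per coordinate
        self.lr = lr             # illustrative fixed learning rate

    def predict(self, x):
        self.m = np.maximum(self.m, np.abs(x))
        scale = np.where(self.m > 0.0, self.m, 1.0)  # avoid division by zero
        return float(self.w @ (x / scale))

    def update(self, x, dloss):
        # dloss: derivative of the loss with respect to the prediction
        scale = np.where(self.m > 0.0, self.m, 1.0)
        self.w -= self.lr * dloss * (x / scale)
```

Running this sketch on a stream of instances, and again on the same stream with one feature multiplied by 1000, yields identical predictions on every round. That is exactly the symmetry the paper's algorithms are designed to preserve, with data-dependent step sizes and regret guarantees in place of the fixed learning rate used above.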
