Normalized Online Learning

08/09/2014
by Stéphane Ross, et al.

We introduce online learning algorithms that are independent of feature scales, proving regret bounds that depend on the ratio of scales present in the data rather than on their absolute scale. This has several useful consequences: there is no need to pre-normalize data, test-time and test-space complexity are reduced, and the algorithms are more robust.
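To make the scale-invariance concrete, below is a minimal sketch of a scale-invariant online update in the spirit of the paper's approach: track the largest magnitude seen for each feature, shrink a feature's accumulated weight whenever its scale estimate grows, and divide each per-feature gradient step by the squared scale. The squared loss, the function name ng_sgd, the step size eta, and the exact update constants are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def ng_sgd(examples, dim, eta=0.1):
    """Sketch of a scale-invariant online gradient update.
    Assumes squared loss; names and constants are illustrative."""
    w = np.zeros(dim)   # weight vector
    s = np.zeros(dim)   # per-feature scale: largest |x_i| seen so far
    N = 0.0             # running sum of normalized squared feature norms
    for t, (x, y) in enumerate(examples, start=1):
        x = np.asarray(x, dtype=float)
        # When a feature exceeds its previous scale, shrink its weight
        # so earlier updates remain comparable under the new scale.
        grew = np.abs(x) > s
        w[grew] *= (s[grew] / np.abs(x[grew])) ** 2
        s[grew] = np.abs(x[grew])
        y_hat = w @ x
        nz = s > 0
        N += np.sum((x[nz] / s[nz]) ** 2)
        if N > 0:
            # Dividing the per-feature step by s_i^2 makes the update
            # invariant to rescaling any individual feature.
            g = 2.0 * (y_hat - y) * x   # gradient of squared loss w.r.t. w
            w[nz] -= eta * (t / N) * g[nz] / s[nz] ** 2
    return w

# Hypothetical usage: features with wildly different scales, no pre-normalization.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3)) * np.array([1.0, 100.0, 0.01])
w_true = np.array([2.0, 0.03, 50.0])
w = ng_sgd([(x, x @ w_true) for x in X], dim=3)
```

Note that multiplying feature i by a constant c scales s_i by c and the learned w_i by 1/c, leaving predictions unchanged, which is exactly the property the abstract claims.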

Related research

03/09/2023 · On the Value of Stochastic Side Information in Online Learning
We study the effectiveness of stochastic side information in determinist...

10/24/2021 · Online estimation and control with optimal pathlength regret
A natural goal when designing online learning algorithms for non-station...

05/28/2022 · History-Restricted Online Learning
We introduce the concept of history-restricted no-regret online learning...

06/09/2021 · ChaCha for Online AutoML
We propose the ChaCha (Champion-Challengers) algorithm for making an onl...

02/20/2019 · Adaptive scale-invariant online algorithms for learning linear models
We consider online learning with linear models, where the algorithm pred...

07/07/2023 · Online Network Source Optimization with Graph-Kernel MAB
We propose Grab-UCB, a graph-kernel multi-arms bandit algorithm to learn...

01/27/2021 · Adversaries in Online Learning Revisited: with applications in Robust Optimization and Adversarial training
We revisit the concept of "adversary" in online learning, motivated by s...
