ChaCha for Online AutoML

06/09/2021
by Qingyun Wu, et al.

We propose ChaCha (Champion-Challengers), an algorithm for choosing hyperparameters online in online learning settings. ChaCha determines a champion and schedules a set of "live" challengers over time based on sample complexity bounds. It is guaranteed to achieve sublinear regret after the optimal configuration is added to the candidate pool by an application-dependent oracle based on the champions. Empirically, we show that ChaCha provides strong performance across a wide array of datasets when optimizing over featurization and hyperparameter decisions.
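To illustrate the champion-challenger idea described above, here is a minimal, hypothetical sketch (not the authors' implementation): a champion configuration and a pool of "live" challengers are all evaluated on the incoming data stream, and a challenger is promoted to champion when its running average loss beats the champion's by more than a Hoeffding-style confidence margin, a stand-in for the paper's sample complexity bounds. All names and the toy loss are illustrative assumptions.

```python
import math
import random

class Arm:
    """A candidate hyperparameter configuration tracked online (illustrative)."""
    def __init__(self, config):
        self.config = config
        self.loss_sum = 0.0
        self.n = 0

    def update(self, loss):
        self.loss_sum += loss
        self.n += 1

    def avg_loss(self):
        return self.loss_sum / self.n if self.n else float("inf")

    def bound(self, delta=0.1):
        # Hoeffding-style deviation bound on the running average loss
        # (assumes losses bounded in [0, 1]); a simplified stand-in for
        # the sample complexity bounds used by ChaCha.
        if self.n == 0:
            return float("inf")
        return math.sqrt(math.log(1.0 / delta) / (2 * self.n))

def step(champion, challengers, sample_loss):
    # Feed one example's loss to the champion and every live challenger.
    for arm in [champion] + challengers:
        arm.update(sample_loss(arm.config))
    # Promote a challenger whose pessimistic estimate still beats the
    # champion's optimistic estimate.
    for arm in challengers:
        if arm.avg_loss() + arm.bound() < champion.avg_loss() - champion.bound():
            return arm  # new champion
    return champion

random.seed(0)
champion = Arm({"lr": 0.5})
challengers = [Arm({"lr": 0.1}), Arm({"lr": 0.9})]
# Toy loss: configurations with lr near 0.1 do better, plus noise.
loss_fn = lambda cfg: abs(cfg["lr"] - 0.1) + random.uniform(0, 0.2)
for _ in range(500):
    champion = step(champion, challengers, loss_fn)
print(champion.config)
```

With the toy loss above, the confidence intervals of the lr=0.1 challenger and the initial champion separate after a few dozen samples, so the better configuration is promoted.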


