Online Agnostic Boosting via Regret Minimization

03/02/2020
by Nataly Brukhim, et al.

Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules. While in statistical learning numerous boosting methods exist in both the realizable and agnostic settings, in online learning they exist only in the realizable case. In this work we provide the first agnostic online boosting algorithm; that is, given a weak learner with only marginally-better-than-trivial regret guarantees, our algorithm boosts it to a strong learner with sublinear regret. Our algorithm is based on an abstract (and simple) reduction to online convex optimization, which efficiently converts an arbitrary online convex optimizer into an online booster. Moreover, this reduction extends to the statistical as well as the online realizable settings, thus unifying all four combinations of statistical/online and agnostic/realizable boosting.
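To make the flavor of such a reduction concrete, here is a minimal sketch of an online booster that aggregates weak learners' predictions using weights maintained by online gradient descent, a standard online convex optimizer. This is an illustrative toy, not the paper's algorithm: the class name, the hinge surrogate, and the simplex projection are all choices made here for the example.

```python
import numpy as np

class OnlineBooster:
    """Illustrative sketch: combine weak learners' +-1 predictions
    with a weight vector updated by online gradient descent (an OCO
    subroutine). Not the algorithm from the paper."""

    def __init__(self, n_weak, lr=0.1):
        # Start from the uniform distribution over weak learners.
        self.w = np.ones(n_weak) / n_weak
        self.lr = lr

    def predict(self, weak_preds):
        # weak_preds: array of +-1 predictions, one per weak learner.
        return 1.0 if self.w @ weak_preds >= 0 else -1.0

    def update(self, weak_preds, label):
        # Convex surrogate: hinge loss on the weighted margin.
        margin = label * (self.w @ weak_preds)
        if margin < 1:
            grad = -label * weak_preds  # subgradient of the hinge loss
            self.w = self.w - self.lr * grad
            # Project back so the weights remain a distribution.
            self.w = np.clip(self.w, 0.0, None)
            s = self.w.sum()
            if s > 0:
                self.w = self.w / s
            else:
                self.w = np.ones_like(self.w) / len(self.w)
```

On each round the booster predicts with the current weighted vote, observes the label, and hands a convex loss to the OCO update; the regret guarantee of gradient descent then controls how far the aggregate lags behind the best fixed weighting in hindsight.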


Related research

- Online Agnostic Multiclass Boosting (05/30/2022): Boosting is a fundamental approach in machine learning that enjoys both ...
- Online Boosting with Bandit Feedback (07/23/2020): We consider the problem of online boosting for regression tasks, when on...
- Boosting for Online Convex Optimization (02/18/2021): We consider the decision-making framework of online convex optimization ...
- Online GentleAdaBoost – Technical Report (08/27/2023): We study the online variant of GentleAdaboost, where we combine a weak l...
- Online Convex Optimization with Unbounded Memory (10/18/2022): Online convex optimization (OCO) is a widely used framework in online le...
- Generalized Boosting Algorithms for Convex Optimization (05/10/2011): Boosting is a popular way to derive powerful learners from simpler hypot...
- Online Coordinate Boosting (10/24/2008): We present a new online boosting algorithm for adapting the weights of a...
