AdaBoost is not an Optimal Weak to Strong Learner

AdaBoost is a classic boosting algorithm that combines multiple inaccurate classifiers produced by a weak learner into a strong learner with arbitrarily high accuracy when given enough training data. Determining the optimal number of samples necessary to obtain a given accuracy of the strong learner is a basic learning-theoretic question. Larsen and Ritzert (NeurIPS'22) recently presented the first provably optimal weak-to-strong learner. However, their algorithm is somewhat complicated, and it remains an intriguing question whether the prototypical boosting algorithm AdaBoost also makes optimal use of training samples. In this work, we answer this question in the negative. Concretely, we show that the sample complexity of AdaBoost, and of other classic variations thereof, is sub-optimal by at least one logarithmic factor in the desired accuracy of the strong learner.
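For readers unfamiliar with the algorithm the abstract refers to, the following is a minimal illustrative sketch of classic (discrete) AdaBoost with decision stumps as the weak learner. The dataset, the number of rounds T, and the choice of decision stumps are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of discrete AdaBoost with decision stumps (illustrative only).
# Dataset, T, and the weak learner are assumptions made for this example.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def adaboost_fit(X, y, T=50):
    """Train T decision stumps on reweighted data; labels y must be in {-1, +1}."""
    n = len(y)
    D = np.full(n, 1.0 / n)                      # uniform initial distribution over samples
    stumps, alphas = [], []
    for _ in range(T):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
        pred = h.predict(X)
        eps = np.clip(np.sum(D[pred != y]), 1e-12, 1 - 1e-12)  # weighted error of weak hypothesis
        alpha = 0.5 * np.log((1 - eps) / eps)    # vote weight of this hypothesis
        D *= np.exp(-alpha * y * pred)           # up-weight misclassified samples
        D /= D.sum()
        stumps.append(h)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Sign of the alpha-weighted vote of the weak hypotheses."""
    votes = sum(a * h.predict(X) for h, a in zip(stumps, alphas))
    return np.sign(votes)

if __name__ == "__main__":
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    y = 2 * y - 1                                # map labels {0, 1} -> {-1, +1}
    stumps, alphas = adaboost_fit(X, y)
    acc = np.mean(adaboost_predict(stumps, alphas, X) == y)
    print(f"training accuracy: {acc:.3f}")
```

The sample-complexity question studied in the paper concerns how many training examples such a procedure needs to reach a given accuracy on unseen data, not its training accuracy as printed above.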
