Optimal Weak to Strong Learning

06/03/2022
by Kasper Green Larsen, et al.

The classic AdaBoost algorithm allows one to convert a weak learner, that is, an algorithm producing a hypothesis only slightly better than random guessing, into a strong learner that achieves arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner, but uses less training data than AdaBoost and all other known weak to strong learners to achieve the same generalization bounds. A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal. Hence, this work settles the sample complexity of the classic problem of constructing a strong learner from a weak learner.
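
For context, here is a minimal sketch of the classic AdaBoost procedure that serves as the baseline in the abstract. The decision-stump weak learner, the round count, and all helper names are illustrative assumptions, not details from the paper; the paper's new algorithm differs precisely in how it uses the training samples, and is not shown here.

```python
# A minimal, illustrative AdaBoost sketch (binary labels in {-1, +1}).
# Decision stumps stand in for an arbitrary weak learner.
import numpy as np

def train_stump(X, y, w):
    """Pick the threshold stump with the lowest weighted error."""
    n, d = X.shape
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, rounds=50):
    """Combine weighted stumps into a strong (low-error) classifier."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        err, j, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-12)            # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of the boosted stumps."""
    score = np.zeros(len(X))
    for alpha, j, thr, pol in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)
```

Each round reweights the training set so the next weak hypothesis focuses on previously misclassified points; the final prediction is a confidence-weighted vote over all rounds.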

Related Research

01/27/2023 · AdaBoost is not an Optimal Weak to Strong Learner
AdaBoost is a classic boosting algorithm for combining multiple inaccura...

12/05/2022 · Bagging is an Optimal PAC Learner
Determining the optimal sample complexity of PAC learning in the realiza...

05/26/2015 · Some Open Problems in Optimal AdaBoost and Decision Stumps
The significance of the study of the theoretical and practical propertie...

09/04/2022 · ProBoost: a Boosting Method for Probabilistic Classifiers
ProBoost, a new boosting algorithm for probabilistic classifiers, is pro...

11/01/2020 · Measure Theoretic Approach to Nonuniform Learnability
An earlier introduced characterization of nonuniform learnability that a...

09/15/2022 · Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization
We present a minimax optimal learner for the problem of learning predict...

01/20/2022 · Reproducibility in Learning
We introduce the notion of a reproducible algorithm in the context of le...