Bagging is an Optimal PAC Learner

12/05/2022
by Kasper Green Larsen, et al.

Determining the optimal sample complexity of PAC learning in the realizable setting was a central open problem in learning theory for decades. Finally, the seminal work by Hanneke (2016) gave an algorithm with a provably optimal sample complexity. His algorithm is based on a careful and structured sub-sampling of the training data and then returning a majority vote among hypotheses trained on each of the sub-samples. While a very exciting theoretical result, it has had little impact in practice, in part due to its inefficiency: it constructs a polynomial number of sub-samples of the training data, each of linear size. In this work, we prove the surprising result that the practical and classic heuristic bagging (a.k.a. bootstrap aggregation), due to Breiman (1996), is in fact also an optimal PAC learner. Bagging pre-dates Hanneke's algorithm by twenty years and is taught in most undergraduate machine learning courses. Moreover, we show that it only requires a logarithmic number of sub-samples to reach optimality.
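For readers unfamiliar with the heuristic, the sketch below illustrates plain bagging with a majority vote: train one base learner per bootstrap sub-sample and predict by vote. It is a minimal illustration, not the paper's construction or analysis; the function name bagging_predict, the scikit-learn-style fit/predict interface, the binary ±1 labels, the failure parameter delta, and the constant 18 in the number of sub-samples are all illustrative assumptions.

    import numpy as np

    def bagging_predict(base_learner_factory, X_train, y_train, X_test, delta=0.05, seed=None):
        # Minimal bagging sketch: train one base learner per bootstrap
        # sub-sample and combine their outputs by majority vote (labels in {-1, +1}).
        rng = np.random.default_rng(seed)
        n = len(X_train)
        # A logarithmic number of sub-samples; the constant 18 is illustrative only.
        t = max(1, int(np.ceil(18 * np.log(1.0 / delta))))
        votes = np.zeros(len(X_test))
        for _ in range(t):
            idx = rng.integers(0, n, size=n)          # bootstrap: n draws with replacement
            h = base_learner_factory().fit(X_train[idx], y_train[idx])
            votes += np.sign(h.predict(X_test))
        return np.where(votes >= 0, 1, -1)            # majority vote, ties broken to +1

Calling this with any off-the-shelf classifier as the base learner (e.g. a decision tree) reproduces the classic bagging heuristic; the paper's result is that this simple majority vote already achieves the optimal PAC sample complexity in the realizable setting.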


Related Research

The Optimal Sample Complexity of PAC Learning (07/02/2015)
This work establishes a new upper bound on the number of samples suffici...

AdaBoost is not an Optimal Weak to Strong Learner (01/27/2023)
AdaBoost is a classic boosting algorithm for combining multiple inaccura...

Proper Learning, Helly Number, and an Optimal SVM Bound (05/24/2020)
The classical PAC sample complexity bounds are stated for any Empirical ...

Optimal Weak to Strong Learning (06/03/2022)
The classic algorithm AdaBoost allows to convert a weak learner, that is...

Tree Learning: Optimal Algorithms and Sample Complexity (02/09/2023)
We study the problem of learning a hierarchical tree representation of d...

Learning convex polytopes with margin (05/24/2018)
We present a near-optimal algorithm for properly learning convex polytop...

An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning (04/29/2021)
We address an inherent difficulty in welfare-theoretic fair machine lear...
