The Impossibility of Parallelizing Boosting

01/23/2023
by Amin Karbasi, et al.

The aim of boosting is to convert a sequence of weak learners into a strong learner. At their heart, these methods are fully sequential. In this paper, we investigate the possibility of parallelizing boosting. Our main contribution is a strong negative result, implying that significant parallelization of boosting requires an exponential blow-up in the total computing resources needed for training.
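To make the sequential dependence concrete, below is a minimal sketch of classical AdaBoost with decision stumps. It is illustrative only and not the construction analyzed in the paper; all function names are made up for this sketch. The key point it shows is that the sample weights used in round t are computed from the hypothesis chosen in round t-1, so the rounds cannot be run independently.

```python
# Minimal AdaBoost sketch (illustrative, not the paper's construction).
# Round t's sample weights depend on round t-1's hypothesis, so training is sequential.
import numpy as np

def weak_learner(X, y, w):
    # Decision stump: pick the feature/threshold/sign with lowest weighted error.
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    err, j, thr, sign = best
    return err, lambda Z: sign * np.where(Z[:, j] <= thr, 1, -1)

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)              # start from uniform sample weights
    ensemble = []
    for _ in range(rounds):              # each round depends on the previous one
        err, h = weak_learner(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * h(X))   # re-weight: mistakes of h get more mass
        w /= w.sum()
        ensemble.append((alpha, h))
    return lambda Z: np.sign(sum(a * h(Z) for a, h in ensemble))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    clf = adaboost(X, y, rounds=5)
    print("train accuracy:", np.mean(clf(X) == y))
```

The re-weighting step is what the paper's negative result targets: collapsing many such dependent rounds into few parallel rounds is shown to require exponentially more total training resources.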
