The Rate of Convergence of AdaBoost

06/29/2011
by Indraneel Mukherjee, et al.

The AdaBoost algorithm was designed to combine many "weak" hypotheses that perform slightly better than random guessing into a "strong" hypothesis with very low error. We study the rate at which AdaBoost iteratively converges to the minimum of the "exponential loss." Unlike previous work, our proofs require neither a weak-learning assumption nor that minimizers of the exponential loss be finite. Our first result shows that the exponential loss of AdaBoost's computed parameter vector will be at most ϵ more than that of any parameter vector of ℓ_1-norm bounded by B within a number of rounds that is at most polynomial in B and 1/ϵ. We also provide lower bounds showing that a polynomial dependence on these parameters is necessary. Our second result is that within C/ϵ iterations, AdaBoost achieves a value of the exponential loss that is at most ϵ more than the best possible value, where C depends on the dataset. We show that this dependence of the rate on ϵ is optimal up to constant factors: at least Ω(1/ϵ) rounds are necessary to come within ϵ of the optimal exponential loss.
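
To make the object of study concrete: AdaBoost can be viewed as coordinate descent on the exponential loss L(λ) = (1/m) Σ_i exp(−y_i Σ_j λ_j h_j(x_i)), where each coordinate corresponds to one weak hypothesis. Below is a minimal sketch of that view, assuming the weak hypotheses are supplied up front as the ±1-valued columns of a matrix; the names here (adaboost_exp_loss, H, lam, n_rounds) are illustrative and not the paper's notation.

    import numpy as np

    def adaboost_exp_loss(H, y, n_rounds):
        """Coordinate descent on the exponential loss.
        H: (m, N) matrix with H[i, j] = h_j(x_i) in {-1, +1}.
        y: (m,) labels in {-1, +1}.
        Returns the weight vector lam over the N weak hypotheses."""
        m, N = H.shape
        lam = np.zeros(N)        # combined hypothesis is f = H @ lam
        margins = np.zeros(m)    # margins[i] = y_i * f(x_i)
        for _ in range(n_rounds):
            w = np.exp(-margins)
            w = w / w.sum()                  # distribution D_t over examples
            # edge of each weak hypothesis: gamma_j = sum_i D_t(i) y_i h_j(x_i)
            edges = w @ (y[:, None] * H)
            j = int(np.argmax(np.abs(edges)))  # pick the largest-edge coordinate
            gamma = edges[j]
            eps = (1.0 - gamma) / 2.0          # weighted error of h_j under D_t
            if eps <= 0.0 or eps >= 1.0:
                # perfect (or perfectly wrong) hypothesis: the exact minimizer
                # is infinite, so cap the step for numerical safety
                alpha = np.sign(gamma) * 10.0
            else:
                alpha = 0.5 * np.log((1.0 - eps) / eps)  # AdaBoost's step size
            lam[j] += alpha
            margins += alpha * y * H[:, j]
        return lam

    # Illustrative usage on synthetic data with a crude pool of threshold "stumps":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])
    thresholds = np.linspace(-1.5, 1.5, 7)
    H = np.concatenate([np.sign(X[:, [j]] - t)
                        for j in range(5) for t in thresholds], axis=1)
    lam = adaboost_exp_loss(H, y, n_rounds=100)
    print(np.mean(np.exp(-y * (H @ lam))))   # exponential loss after 100 rounds

The step size 0.5·ln((1−ε)/ε) is the exact minimizer of the exponential loss along the chosen coordinate, so each round can only decrease the loss; the paper's results bound how quickly this coordinate-wise descent approaches the infimum of L.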
