Boosting as Frank-Wolfe

09/22/2022
by Ryotaro Mitsuboshi, et al.

Some boosting algorithms, such as LPBoost, ERLPBoost, and C-ERLPBoost, aim to solve the soft margin optimization problem with ℓ_1-norm regularization. LPBoost rapidly converges to an ϵ-approximate solution in practice, but it is known to take Ω(m) iterations in the worst case, where m is the sample size. On the other hand, ERLPBoost and C-ERLPBoost are guaranteed to converge to an ϵ-approximate solution in O((1/ϵ^2) ln(m/ν)) iterations, where ν is the soft-margin parameter. However, their computational cost per iteration is very high compared to LPBoost. To address this issue, we propose a generic boosting scheme that combines the Frank-Wolfe algorithm with an arbitrary secondary algorithm and switches between the two iteratively. We show that the scheme retains the same convergence guarantee as ERLPBoost and C-ERLPBoost, and that any secondary algorithm can be incorporated to improve performance in practice. This scheme stems from a unified view of boosting algorithms for soft margin optimization; more specifically, we show that LPBoost, ERLPBoost, and C-ERLPBoost are all instances of the Frank-Wolfe algorithm. In experiments on real datasets, one instance of our scheme exploits the better updates of the secondary algorithm and performs comparably with LPBoost.
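The core idea of such a combined scheme can be sketched generically. Below is a minimal Python sketch, not the paper's own pseudocode: the names grad, lmo, secondary_step, and objective are illustrative assumptions standing for a gradient oracle, a linear-minimization oracle over the feasible set, an arbitrary secondary algorithm, and the objective being minimized. Each round computes both the standard Frank-Wolfe update and the secondary algorithm's update and keeps whichever has the better objective value; since the Frank-Wolfe step alone already drives the convergence analysis, taking the better of the two candidates preserves the guarantee while allowing the secondary algorithm to make faster progress.

    import numpy as np

    def frank_wolfe_with_secondary(grad, lmo, secondary_step, objective,
                                   x0, max_iter=1000, tol=1e-6):
        """Sketch of a generic Frank-Wolfe scheme with a secondary algorithm.

        All arguments are callables operating on NumPy arrays; x0 is a
        feasible starting point. This is a hedged illustration of the
        'switch between the two updates' idea, not the authors' exact method.
        """
        x = x0
        for t in range(max_iter):
            g = grad(x)
            s = lmo(g)                 # vertex minimizing the linearized objective
            gap = g @ (x - s)          # Frank-Wolfe (duality) gap; stop when small
            if gap <= tol:
                break
            gamma = 2.0 / (t + 2)      # standard Frank-Wolfe step size
            x_fw = x + gamma * (s - x)         # Frank-Wolfe candidate
            x_sec = secondary_step(x)          # secondary candidate (assumed feasible)
            # keep the better of the two updates; the FW step alone
            # suffices for the convergence guarantee
            x = x_fw if objective(x_fw) <= objective(x_sec) else x_sec
        return x

In the boosting setting, the feasible set would be the capped probability simplex over the m examples, the linear-minimization oracle corresponds to calling the weak learner, and the secondary step could be, for instance, an LPBoost-style update.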


