Margins, Shrinkage, and Boosting

03/18/2013
by Matus Telgarsky, et al.

This manuscript shows that AdaBoost and its immediate variants can produce approximate maximum margin classifiers simply by scaling step size choices with a fixed small constant. In this way, when the unscaled step size is an optimal choice, these results provide guarantees for Friedman's empirically successful "shrinkage" procedure for gradient boosting (Friedman, 2000). Guarantees are also provided for a variety of other step sizes, affirming the intuition that increasingly regularized line searches provide improved margin guarantees. The results hold for the exponential loss and similar losses, most notably the logistic loss.
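
A minimal sketch of the scaled-step-size idea described above, assuming decision stumps as the weak learner and an illustrative shrinkage constant nu = 0.1 (neither is prescribed by the paper): each round computes the usual AdaBoost line-search step and multiplies it by nu before the exponential-loss weight update.

```python
import numpy as np

def adaboost_with_shrinkage(X, y, n_rounds=200, nu=0.1):
    """AdaBoost over threshold stumps with every step size scaled by a
    fixed shrinkage constant nu in (0, 1]; nu = 1 recovers plain AdaBoost.
    Labels y must take values in {-1, +1}.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # example weights
    stumps, alphas = [], []

    for _ in range(n_rounds):
        # Exhaustively pick the threshold stump with lowest weighted error.
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (+1.0, -1.0):
                    pred = sign * np.where(X[:, j] <= thr, 1.0, -1.0)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = np.clip(err, 1e-12, 1 - 1e-12)

        # Unscaled (line-search) step size, then shrink it by the factor nu.
        alpha = nu * 0.5 * np.log((1.0 - err) / err)
        pred = sign * np.where(X[:, j] <= thr, 1.0, -1.0)

        # Standard exponential-loss weight update.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()

        stumps.append((j, thr, sign))
        alphas.append(alpha)

    def predict(X_new):
        score = np.zeros(len(X_new))
        for (j, thr, sign), a in zip(stumps, alphas):
            score += a * sign * np.where(X_new[:, j] <= thr, 1.0, -1.0)
        return np.sign(score), score

    return predict
```

The quantity the paper studies is the normalized margin, here y_i * score_i / sum(alphas) for example i; with a small nu and enough rounds, this sketch is meant only to illustrate the kind of procedure whose margins the paper's guarantees cover.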

Related research

06/25/2020  Implicitly Maximizing Margins with the Hinge Loss
A new loss function is proposed for neural networks on classification ta...

10/02/2020  A straightforward line search approach on the expected empirical loss for stochastic deep learning problems
A fundamental challenge in deep learning is that the optimal step sizes ...

07/04/2013  AdaBoost and Forward Stagewise Regression are First-Order Convex Optimization Methods
Boosting methods are highly popular and effective supervised learning me...

03/28/2008  Analysis of boosting algorithms using the smooth margin function
We introduce a useful tool for analyzing boosting algorithms called the ...

06/05/2021  Bandwidth-based Step-Sizes for Non-Convex Stochastic Optimization
Many popular learning-rate schedules for deep neural networks combine a ...

06/05/2023  Searching for Optimal Per-Coordinate Step-sizes with Multidimensional Backtracking
The backtracking line-search is an effective technique to automatically ...

08/19/2023  Dynamic Bilevel Learning with Inexact Line Search
In various domains within imaging and data science, particularly when ad...
