Learning Nonlinear Functions Using Regularized Greedy Forest

09/05/2011
by Rie Johnson, et al.

We consider the problem of learning a forest of nonlinear decision rules with general loss functions. Standard approaches employ boosted decision trees: AdaBoost for exponential loss and Friedman's gradient boosting for general loss. In contrast to these traditional boosting algorithms, which treat the tree learner as a black box, the method we propose learns decision forests directly via fully-corrective regularized greedy search over the underlying forest structure. Our method achieves higher accuracy and smaller models than gradient boosting (and AdaBoost with exponential loss) on many datasets.
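As a rough illustration of the contrast, the Python sketch below alternates two steps for squared loss: a greedy step that adds a small tree fit to the current residual (as in gradient boosting), and a fully-corrective step that then jointly re-optimizes every leaf weight in the whole forest under L2 regularization, rather than freezing previously added trees. This is not the authors' implementation; scikit-learn is assumed, the helper names (leaf_indicators, fully_corrective_forest) are hypothetical, and the actual method searches the forest structure at a finer granularity than whole trees.

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge

def leaf_indicators(trees, X):
    # One-hot encode the leaf each sample falls into, for every tree in
    # the forest; the resulting matrix has one column per leaf.
    cols = []
    for t in trees:
        leaves = t.apply(X)
        for leaf_id in np.unique(leaves):
            cols.append((leaves == leaf_id).astype(float))
    return np.column_stack(cols)

def fully_corrective_forest(X, y, n_trees=10, max_leaf_nodes=8, l2=1.0):
    trees, pred = [], np.zeros(len(y))
    for _ in range(n_trees):
        # Greedy step: fit a small tree to the current residual,
        # exactly as gradient boosting would.
        t = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes)
        t.fit(X, y - pred)
        trees.append(t)
        # Fully-corrective step: re-optimize ALL leaf weights in the
        # forest at once under ridge (L2) regularization, instead of
        # keeping earlier trees' weights fixed.
        Z = leaf_indicators(trees, X)
        ridge = Ridge(alpha=l2, fit_intercept=False).fit(Z, y)
        pred = ridge.predict(Z)
    return trees, ridge

# Example usage on synthetic data (fit and predict on the same sample,
# since the toy leaf encoding above is only valid for the training set):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)
trees, model = fully_corrective_forest(X, y)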

Related research

04/03/2022
FedGBF: An efficient vertical federated learning framework via gradient boosting and bagging
Federated learning, conducive to solving data privacy and security probl...

10/29/2022
Robust Boosting Forests with Richer Deep Feature Hierarchy
We propose a robust variant of boosting forest to the various adversaria...

02/19/2020
Gradient Boosting Neural Networks: GrowNet
A novel gradient boosting framework is proposed where shallow neural net...

10/02/2020
Attention augmented differentiable forest for tabular data
Differentiable forest is an ensemble of decision trees with full differe...

06/08/2023
Boosting with Tempered Exponential Measures
One of the most popular ML algorithms, AdaBoost, can be derived from the...

07/04/2012
Obtaining Calibrated Probabilities from Boosting
Boosted decision trees typically yield good accuracy, precision, and ROC...

09/30/2020
Uncovering Feature Interdependencies in Complex Systems with Non-Greedy Random Forests
A "non-greedy" variation of the random forest algorithm is presented to ...
