Wide Boosting

07/20/2020
by Michael Horrell, et al.

Gradient boosting (GB) is a popular methodology used to solve prediction problems by minimizing a differentiable loss function, L. GB is especially performant on low- and medium-dimensional problems. This paper presents a simple adjustment to GB, motivated in part by artificial neural networks: we insert a square or rectangular matrix multiplication between the output of a GB model and the loss, L. This allows the output of the GB model to have a higher dimension before being fed into the loss, making the model "wider" than standard GB implementations. We provide performance comparisons on several publicly available datasets; Wide Boosting outperforms standard GB on every dataset we try.

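The adjustment described in the abstract can be sketched in a few lines of NumPy: the boosted model's raw output F is multiplied by a matrix B before the loss is evaluated, and the gradient handed back to the tree learner follows from the chain rule. This is only an illustrative sketch under an assumed squared-error loss; the names F, B, k, and d and the shapes used here are hypothetical and not taken from the paper.

```python
import numpy as np

# Illustrative sketch of widening GB output before the loss (assumed setup).
rng = np.random.default_rng(0)
n, k, d = 8, 5, 2              # samples, GB output width, loss/target dimension (k > d)

F = rng.normal(size=(n, k))    # stand-in for the boosted model's raw outputs
B = rng.normal(size=(k, d))    # square or rectangular matrix inserted before the loss
y = rng.normal(size=(n, d))    # targets

pred = F @ B                             # the prediction actually scored by the loss
loss = 0.5 * np.sum((pred - y) ** 2)     # example loss L; any differentiable L works

# Gradient the boosting step would fit, via the chain rule:
# dL/dF = (dL/dpred) @ B^T, so ordinary GB tree fitting still applies;
# only the gradients (and Hessians) passed to the learner change.
grad_pred = pred - y
grad_F = grad_pred @ B.T
print(round(loss, 3), grad_F.shape)      # grad_F matches F's shape: (8, 5)
```

Because the widening only changes the gradients (and Hessians) supplied to the base learner, a standard GB library with a custom objective could in principle be driven this way; the snippet above is a conceptual sketch rather than the paper's implementation.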
Related research

10/24/2018 · CatBoost: gradient boosting with categorical features support
In this paper we present CatBoost, a new open-sourced gradient boosting ...

12/18/2019 · A Bivariate Dead Band Process Adjustment Policy
A bivariate extension to Box and Jenkins (1963) feedback adjustment prob...

08/19/2019 · Gradient Boosting Machine: A Survey
In this survey, we discuss several different types of gradient boosting ...

11/26/2022 · Condensed Gradient Boosting
This paper presents a computationally efficient variant of gradient boos...

10/12/2020 · A Generalized Stacking for Implementing Ensembles of Gradient Boosting Machines
The gradient boosting machine is one of the powerful tools for solving r...

09/24/2019 · The column measure and Gradient-Free Gradient Boosting
Sparse model selection by structural risk minimization leads to a set of...

08/03/2023 · Bringing Chemistry to Scale: Loss Weight Adjustment for Multivariate Regression in Deep Learning of Thermochemical Processes
Flamelet models are widely used in computational fluid dynamics to simul...