TreeGrad: Transferring Tree Ensembles to Neural Networks

04/25/2019
by Chapman Siu, et al.

Gradient Boosting Decision Trees (GBDT) are popular machine learning algorithms, with dedicated implementations such as LightGBM and support in widely used toolkits like Scikit-Learn. Most implementations can only build trees offline and greedily. We explore ways to convert existing GBDT implementations into known neural network architectures with minimal performance loss, so that decision splits can be updated in an online manner, and we provide extensions that allow split points to be altered as a neural architecture search problem. We also provide learning bounds for the resulting neural network.
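
The claim that decision splits can be updated online rests on a standard relaxation: a hard axis-aligned split (x[j] > threshold) is replaced by a sigmoid gate, making the routing decision differentiable so the threshold can be refined by gradient descent. Below is a minimal sketch of this idea in plain Python/NumPy, not the paper's actual code; the names SoftSplit, route_prob, and the temperature parameter are illustrative assumptions, not from TreeGrad.

import numpy as np

class SoftSplit:
    # Illustrative relaxation of one hard tree split; not TreeGrad's API.
    def __init__(self, feature_idx, threshold, temperature=0.1):
        self.feature_idx = feature_idx  # feature the fitted tree splits on
        self.threshold = threshold      # split point copied from the tree
        self.temperature = temperature  # lower -> closer to the hard split

    def route_prob(self, X):
        # Probability of routing each sample to the right child.
        # The hard split is recovered in the limit temperature -> 0.
        z = (X[:, self.feature_idx] - self.threshold) / self.temperature
        return 1.0 / (1.0 + np.exp(-z))

def sgd_step(split, left_val, right_val, X, y, lr=0.01):
    # One gradient step on the threshold of a depth-1 tree (a stump)
    # with fixed leaf values, under squared-error loss.
    p = split.route_prob(X)                    # soft routing weights
    pred = (1 - p) * left_val + p * right_val  # soft blend of the two leaves
    resid = pred - y
    # d pred / d threshold = (right_val - left_val) * dp/dthreshold,
    # where dp/dthreshold = -p * (1 - p) / temperature for the sigmoid.
    dp_dt = -p * (1 - p) / split.temperature
    grad = np.mean(2 * resid * (right_val - left_val) * dp_dt)
    split.threshold -= lr * grad

# Example: refine a stump's split point on new data (true split at 0.5).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
y = np.where(X[:, 0] > 0.5, 1.0, -1.0)
split = SoftSplit(feature_idx=0, threshold=0.0)
for _ in range(200):
    sgd_step(split, -1.0, 1.0, X, y)

Applied to every internal node of an ensemble, this kind of relaxation turns the fitted trees into layers of a neural network whose split parameters remain trainable after deployment, which is what distinguishes the approach from the offline, greedy construction of standard GBDT implementations.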
