A better method to enforce monotonic constraints in regression and classification trees

11/02/2020
by Charles Auguste, et al.

In this report we present two new ways of enforcing monotone constraints in regression and classification trees. One yields better results than the current LightGBM implementation at a similar computation time; the other yields even better results, but is much slower than the current LightGBM. We also propose a heuristic that accounts for the fact that greedily choosing a monotone split by its immediate gain is far from optimal. We then compare our results against the current implementation of the constraints in the LightGBM library, using the well-known Adult public dataset. Throughout the report we focus on the implementation of our methods for the LightGBM library, even though they are general and could be implemented in any regression or classification tree. The best method we propose (a smarter way to split the tree coupled with a penalization of monotone splits) consistently beats the current implementation of LightGBM. With small or average trees, the loss reduction can be as high as 1% at the beginning of training and decreases to around 0.1% later on; the results would be even better with larger trees. In our experiments we did little tuning of the regularization parameters, and we would not be surprised if tuning them further increased the performance of our methods on test sets.
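To make the two ideas in the abstract concrete, here is a minimal sketch of (a) the standard way a tree can enforce an increasing monotone constraint at a split, by clamping the two children's outputs to their midpoint when they violate the ordering, and (b) a simple depth-dependent down-weighting of a monotone split's gain, in the spirit of the penalization mentioned above. The function names and the exact penalty formula are illustrative assumptions, not the paper's or LightGBM's exact implementation.

```python
def constrained_split_outputs(left_mean, right_mean, increasing=True):
    """Return (left, right) leaf outputs respecting a monotone constraint.

    If the raw child means violate the required ordering, both children
    are clamped to their midpoint so the split remains monotone.
    """
    if increasing and left_mean > right_mean:
        mid = (left_mean + right_mean) / 2.0
        return mid, mid
    if not increasing and left_mean < right_mean:
        mid = (left_mean + right_mean) / 2.0
        return mid, mid
    return left_mean, right_mean


def penalized_gain(gain, depth, penalty=1.0):
    """Down-weight a monotone split's gain near the root (illustrative).

    The immediate gain of a monotone split overstates its value, because
    the constraint will later clamp descendant outputs; penalizing such
    splits more heavily at shallow depths reflects that. Splits at depths
    not exceeding the penalty are rejected outright (gain 0).
    """
    if depth + 1 <= penalty:
        return 0.0
    return gain * (1.0 - 2.0 ** (penalty - depth - 1.0))


# A split whose children violate an increasing constraint gets clamped:
left, right = constrained_split_outputs(3.0, 1.0)   # -> (2.0, 2.0)
```

In recent LightGBM versions, per-feature constraints are passed via the `monotone_constraints` parameter (e.g. `[1, 0, -1]` for increasing / unconstrained / decreasing features); the sketch above only mimics the effect of such a constraint on a single split.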
