Slow-Growing Trees

03/02/2021
by Philippe Goulet Coulombe, et al.

Random Forest's performance can be matched by a single slow-growing tree (SGT), which uses a learning rate to tame CART's greedy algorithm. SGT exploits the view that CART is an extreme case of an iterative weighted least-squares procedure. Moreover, a unifying view of Boosted Trees (BT) and Random Forests (RF) is presented: the outcomes of greedy ML algorithms can be improved through either "slow learning" or diversification. SGT applies the former to estimate a single deep tree, while Booging (bagging stochastic BT with a high learning rate) applies the latter with additive shallow trees. The performance of this tree-ensemble quaternity (Booging, BT, SGT, RF) is assessed on simulated and real regression tasks.
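To make the "slow learning" idea concrete, here is a minimal illustrative sketch, not the paper's exact algorithm: one deep tree is grown greedily on residuals, but each node's fitted value moves only a fraction eta (the learning rate) toward the local mean residual, so the tree must grow deep to fit the signal. All names and parameters below are hypothetical.

```python
import numpy as np

def fit_sgt(X, y, eta=0.3, max_depth=6, min_leaf=5):
    """Sketch of a slow-growing tree: a single deep CART-style tree
    whose node updates are shrunk by a learning rate eta.
    Returns the in-sample predictions."""
    pred = np.full(len(y), y.mean())

    def grow(idx, depth):
        if depth == max_depth or len(idx) < 2 * min_leaf:
            return
        # Greedy CART-style search: best axis-aligned split of the
        # current residuals y - pred, by sum of squared errors (SSE).
        best = None
        for j in range(X.shape[1]):
            order = idx[np.argsort(X[idx, j])]
            for k in range(min_leaf, len(order) - min_leaf):
                left, right = order[:k], order[k:]
                rl = y[left] - pred[left]
                rr = y[right] - pred[right]
                sse = rl.var() * len(rl) + rr.var() * len(rr)
                if best is None or sse < best[0]:
                    best = (sse, left, right)
        if best is None:
            return
        _, left, right = best
        # Slow learning: take only an eta-sized step toward each
        # child's mean residual instead of fitting it fully.
        pred[left] += eta * (y[left] - pred[left]).mean()
        pred[right] += eta * (y[right] - pred[right]).mean()
        grow(left, depth + 1)
        grow(right, depth + 1)

    grow(np.arange(len(y)), 0)
    return pred
```

Setting eta = 1 recovers ordinary greedy CART fitting at each node; a small eta tames the greed, which is the mechanism the abstract credits for matching Random Forest with a single tree.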
