Slow-Growing Trees

03/02/2021
by Philippe Goulet Coulombe, et al.

Random Forest's performance can be matched by a single slow-growing tree (SGT), which uses a learning rate to tame CART's greedy algorithm. SGT exploits the view that CART is an extreme case of an iterative weighted least squares procedure. Moreover, a unifying view of Boosted Trees (BT) and Random Forests (RF) is presented. The outcomes of greedy ML algorithms can be improved using either "slow learning" or diversification. SGT applies the former to estimate a single deep tree, and Booging (bagging stochastic BT with a high learning rate) applies the latter with additive shallow trees. The performance of this tree ensemble quaternity (Booging, BT, SGT, RF) is assessed on simulated and real regression tasks.
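The core idea of "slow learning" in a single tree can be illustrated with a minimal sketch. This is not the authors' exact algorithm; it is an assumed simplification in which one deep tree is grown greedily, but each node moves its region's prediction only a fraction `eta` (the learning rate) toward the local mean, taming the greedy split-and-fit of plain CART. The function names `best_split` and `slow_tree` are hypothetical.

```python
import numpy as np

def best_split(x, r):
    """Return (sse, feature, threshold) minimizing squared error on residuals r."""
    best = None
    for j in range(x.shape[1]):
        for t in np.unique(x[:, j])[:-1]:  # exclude max so the right child is non-empty
            left = x[:, j] <= t
            sse = (((r[left] - r[left].mean()) ** 2).sum()
                   + ((r[~left] - r[~left].mean()) ** 2).sum())
            if best is None or sse < best[0]:
                best = (sse, j, t)
    return best

def slow_tree(x, y, eta=0.1, depth=6):
    """Grow one deep tree, shrinking each node's update by eta (slow learning)."""
    pred = np.full(len(y), y.mean())  # start from the global mean, as CART does
    def grow(idx, d):
        if d == 0 or len(idx) < 4:
            return
        s = best_split(x[idx], y[idx] - pred[idx])  # greedy split on current residuals
        if s is None:
            return
        _, j, t = s
        left, right = idx[x[idx, j] <= t], idx[x[idx, j] > t]
        # Shrunken update: each child moves only a fraction eta toward its local mean,
        # instead of jumping all the way there (eta = 1 recovers CART's greedy fit).
        pred[left] += eta * (y[left].mean() - pred[left])
        pred[right] += eta * (y[right].mean() - pred[right])
        grow(left, d - 1)
        grow(right, d - 1)
    grow(np.arange(len(y)), depth)
    return pred
```

With `eta` well below 1, each split corrects only part of the remaining residual, so deeper levels of the same tree can revisit and refine earlier, overly greedy decisions — the single-tree analogue of the learning rate in boosting.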

