
Slow-Growing Trees

by   Philippe Goulet Coulombe, et al.

Random Forest's performance can be matched by a single slow-growing tree (SGT), which uses a learning rate to tame CART's greedy algorithm. SGT exploits the view that CART is an extreme case of an iterative weighted least squares procedure. Moreover, a unifying view of Boosted Trees (BT) and Random Forests (RF) is presented. The outcomes of greedy ML algorithms can be improved through either "slow learning" or diversification: SGT applies the former to estimate a single deep tree, while Booging (bagging stochastic BT with a high learning rate) applies the latter to additive shallow trees. The performance of this tree ensemble quaternity (Booging, BT, SGT, RF) is assessed on simulated and real regression tasks.
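The Booging recipe described above — bagging over stochastic boosted trees run with a high learning rate — can be sketched with standard tools. This is a minimal illustration, not the paper's implementation: the function name `booging_predict` and all hyperparameter values (number of bags, subsample rate, tree depth) are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

def booging_predict(X_train, y_train, X_test, n_bags=10, seed=0):
    """Hypothetical sketch of Booging: average stochastic boosted
    trees (high learning rate) over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    preds = []
    for b in range(n_bags):
        # Bootstrap resample of the training data (the "bagging" part).
        idx = rng.integers(0, len(X_train), size=len(X_train))
        gbt = GradientBoostingRegressor(
            n_estimators=100,
            learning_rate=1.0,   # high learning rate: each tree learns greedily
            subsample=0.5,       # row subsampling makes the boosting stochastic
            max_depth=3,         # additive shallow trees
            random_state=b,
        )
        gbt.fit(X_train[idx], y_train[idx])
        preds.append(gbt.predict(X_test))
    # Diversification: average the greedy learners' predictions.
    return np.mean(preds, axis=0)

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=42)
yhat = booging_predict(X[:200], y[:200], X[200:])
```

The contrast with SGT is the point of the abstract: here each boosted ensemble is deliberately greedy (learning rate 1.0) and the instability is averaged away by bagging, whereas SGT instead shrinks the learning rate to slow down a single deep tree.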

