To Bag is to Prune

08/17/2020
by Philippe Goulet Coulombe, et al.

It is notoriously hard to build a bad Random Forest (RF). Concurrently, RF is perhaps the only standard ML algorithm that blatantly overfits in-sample without any consequence out-of-sample. Standard arguments cannot rationalize this paradox. I propose a new explanation: bootstrap aggregation and model perturbation, as implemented by RF, automatically prune a (latent) true underlying tree. More generally, there is no need to tune the stopping point of a properly randomized ensemble of greedily optimized base learners, so Boosting and MARS are also eligible. I demonstrate this property empirically with simulations and real data, showing that these untuned ensembles perform on par with their tuned counterparts.
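The paradox is easy to reproduce. The sketch below (not the paper's code) fits a fully grown single tree and a fully grown RF on the same simulated data: both achieve near-perfect in-sample fit, but only the randomized ensemble generalizes, consistent with bagging and model perturbation acting as implicit pruning. The data-generating process, sample sizes, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Illustrative noisy nonlinear data-generating process (an assumption,
# not taken from the paper).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.5, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Fully grown single tree: memorizes the training data, degrades out-of-sample.
tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)

# Fully grown (unpruned) Random Forest: also near-perfect in-sample, yet
# bootstrap aggregation plus feature subsampling keep it honest out-of-sample.
rf = RandomForestRegressor(
    n_estimators=500, max_features="sqrt", random_state=0
).fit(X_tr, y_tr)

for name, model in [("single tree", tree), ("random forest", rf)]:
    print(
        f"{name}: in-sample R2 = {r2_score(y_tr, model.predict(X_tr)):.3f}, "
        f"out-of-sample R2 = {r2_score(y_te, model.predict(X_te)):.3f}"
    )
```

In a typical run the single tree scores a perfect in-sample R2 but a much lower out-of-sample one, while the untuned forest keeps most of its fit on the test set, with no pruning or depth tuning required.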
