Node harvest

10/12/2009
by Nicolai Meinshausen, et al.

When choosing a suitable technique for regression and classification with multivariate predictor variables, one is often faced with a tradeoff between interpretability and high predictive accuracy. To give a classical example, classification and regression trees are easy to understand and interpret. Tree ensembles like Random Forests usually provide more accurate predictions, yet they are also more difficult to analyze than single trees and are often criticized, perhaps unfairly, as 'black box' predictors. Node harvest tries to reconcile the two aims of interpretability and predictive accuracy by combining positive aspects of trees and tree ensembles. Results are very sparse and interpretable, and predictive accuracy is extremely competitive, especially for low signal-to-noise data. The procedure is simple: an initial set of a few thousand nodes is generated randomly. If a new observation falls into just a single node, its prediction is the mean response of all training observations within this node, identical to a tree-like prediction. A new observation typically falls into several nodes, however, and its prediction is then the weighted average of the mean responses across all these nodes. The only role of node harvest is to 'pick' the right nodes from the initial large ensemble by choosing node weights, which in the proposed algorithm amounts to a quadratic programming problem with linear inequality constraints. The solution is sparse in the sense that only very few nodes are selected with a nonzero weight; this sparsity is not explicitly enforced. Perhaps surprisingly, it is not necessary to select a tuning parameter for optimal predictive accuracy. Node harvest can handle mixed data and missing values and is shown to be simple to interpret and competitive in predictive accuracy on a variety of data sets.
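To make the procedure described above concrete, the following is a minimal Python sketch of this kind of computation, not the author's implementation: candidate nodes are collected from shallow scikit-learn trees grown on random subsamples, node weights are chosen by a quadratic program with non-negativity and sum-to-one constraints (solved here with a generic SLSQP routine rather than a dedicated QP solver), and predictions are weighted averages of node means. All function names, the subsampling scheme, and the solver choice are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize
from sklearn.tree import DecisionTreeRegressor

def generate_random_nodes(X, y, n_trees=50, max_depth=3, seed=0):
    """Grow shallow trees on random subsamples and collect every node as a
    candidate: a 0/1 membership column plus the mean response of the
    training observations falling into that node."""
    rng = np.random.default_rng(seed)
    n = len(y)
    indicators, means = [], []
    for _ in range(n_trees):
        idx = rng.choice(n, size=n // 2, replace=False)
        tree = DecisionTreeRegressor(max_depth=max_depth,
                                     random_state=int(rng.integers(1 << 31)))
        tree.fit(X[idx], y[idx])
        # decision_path gives, for each observation, the nodes it passes through
        membership = tree.decision_path(X).toarray()
        for k in range(membership.shape[1]):
            in_node = membership[:, k].astype(bool)
            if in_node.any():
                indicators.append(in_node.astype(float))
                means.append(y[in_node].mean())
    I = np.column_stack(indicators)   # n x q node membership matrix
    mu = np.asarray(means)            # q node means
    return I, mu

def node_harvest_weights(I, mu, y):
    """Pick node weights w by a quadratic program: minimise
    ||y - I @ (w * mu)||^2 subject to w >= 0 and I @ w = 1, i.e. the weights
    of the nodes containing each training observation sum to one."""
    q = I.shape[1]
    objective = lambda w: np.sum((y - I @ (w * mu)) ** 2)
    constraint = {"type": "eq", "fun": lambda w: I @ w - 1.0}
    res = minimize(objective, x0=np.full(q, 1.0 / q),
                   bounds=[(0.0, None)] * q, constraints=[constraint],
                   method="SLSQP")
    return res.x

def node_harvest_predict(membership_new, w, mu):
    """Weighted average of node means over the nodes a new observation
    falls into, with the weights renormalised per observation."""
    return (membership_new @ (w * mu)) / np.maximum(membership_new @ w, 1e-12)

On training data the sum-to-one constraint makes the per-observation weight total equal to one, which is what turns the weight selection into a quadratic program with linear constraints; at prediction time the weights of the nodes a new point falls into are renormalised, as in the last function above.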
