Quantifying Uncertainty in Random Forests via Confidence Intervals and Hypothesis Tests

04/25/2014
by Lucas Mentch, et al.

This work develops formal statistical inference procedures for machine learning ensemble methods. Ensemble methods based on bootstrapping, such as bagging and random forests, have improved the predictive accuracy of individual trees, but fail to provide a framework in which distributional results can be easily determined. Instead of aggregating full bootstrap samples, we consider predicting by averaging over trees built on subsamples of the training set and demonstrate that the resulting estimator takes the form of a U-statistic. As such, predictions for individual feature vectors are asymptotically normal, allowing for confidence intervals to accompany predictions. In practice, a subset of subsamples is used for computational speed; here our estimators take the form of incomplete U-statistics and equivalent results are derived. We further demonstrate that this setup provides a framework for testing the significance of features. Moreover, the internal estimation method we develop allows us to estimate the variance parameters and perform these inference procedures at no additional computational cost. Simulations and illustrations on a real dataset are provided.
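To make the procedure concrete, the sketch below (not the authors' code) grows trees on subsamples drawn without replacement, averages their predictions at a query point, and wraps the result in a normal-approximation confidence interval. The grouping-by-shared-observation trick is a rough stand-in for the paper's internal variance estimation; the function name subsampled_forest_ci, its parameters, the use of scikit-learn's DecisionTreeRegressor as the base learner, and the plug-in variance formula are illustrative assumptions rather than the reference implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def subsampled_forest_ci(X, y, x_new, subsample_size, n_groups=50,
                         trees_per_group=20, z=1.96, seed=0):
    """Prediction at x_new plus a normal-approximation confidence interval
    from an ensemble of trees built on subsamples (a sketch, not the paper's
    reference implementation)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    group_means, all_preds = [], []

    for _ in range(n_groups):
        # Fix one "anchor" observation shared by every subsample in the group;
        # the spread of the group means estimates the zeta_1 variance component.
        anchor = rng.integers(n)
        others = np.delete(np.arange(n), anchor)
        preds = []
        for _ in range(trees_per_group):
            idx = np.append(rng.choice(others, size=subsample_size - 1,
                                       replace=False), anchor)
            tree = DecisionTreeRegressor().fit(X[idx], y[idx])
            preds.append(tree.predict(x_new.reshape(1, -1))[0])
        group_means.append(np.mean(preds))
        all_preds.extend(preds)

    all_preds = np.asarray(all_preds)
    m = all_preds.size                     # total number of trees grown
    theta_hat = all_preds.mean()           # ensemble prediction at x_new

    zeta_1 = np.var(group_means, ddof=1)   # between-group variance component
    zeta_k = np.var(all_preds, ddof=1)     # variance of a single tree

    # Plug-in variance in the style of an incomplete U-statistic:
    # (k^2 / n) * zeta_1 + zeta_k / m  (a simplified reading of the asymptotics).
    var_hat = (subsample_size ** 2 / n) * zeta_1 + zeta_k / m
    half = z * np.sqrt(var_hat)
    return theta_hat, (theta_hat - half, theta_hat + half)

# Hypothetical usage on synthetic regression data.
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 5))
y = np.sin(2 * X[:, 0]) + X[:, 1] + rng.normal(scale=0.1, size=500)
pred, ci = subsampled_forest_ci(X, y, X[0], subsample_size=50)
```

Because every tree in the ensemble is already grown for prediction, the variance components are read off the same trees, which is the sense in which the inference comes at no additional computational cost.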


Related research

01/26/2022 · Confidence Intervals for the Generalisation Error of Random Forests
Out-of-bag error is commonly used as an estimate of generalisation error...

02/18/2022 · On Variance Estimation of Random Forests
Ensemble methods, such as random forests, are popular in applications du...

12/02/2019 · Asymptotic Normality and Variance Estimation For Supervised Ensembles
Ensemble methods based on bootstrapping have improved the predictive acc...

11/18/2013 · Confidence Intervals for Random Forests: The Jackknife and the Infinitesimal Jackknife
We study the variability of predictions made by bagged learners and rand...

04/16/2019 · Scalable and Efficient Hypothesis Testing with Random Forests
Throughout the last decade, random forests have established themselves a...

06/07/2021 · How to Evaluate Uncertainty Estimates in Machine Learning for Regression?
As neural networks become more popular, the need for accompanying uncert...

12/13/2021 · Scalable subsampling: computation, aggregation and inference
Subsampling is a general statistical method developed in the 1990s aimed...
