The Infinitesimal Jackknife and Combinations of Models

08/31/2022
by Indrayudh Ghosal, et al.

The Infinitesimal Jackknife is a general method for estimating variances of parametric models, and more recently also of some ensemble methods. In this paper we extend the Infinitesimal Jackknife to estimate the covariance between any two models. This can be used to quantify uncertainty for combinations of models, or to construct test statistics for comparing different models or ensembles of models fitted on the same training dataset. Specific examples in this paper use boosted combinations of models such as random forests and M-estimators. We also investigate its application to neural networks and ensembles of XGBoost models. We illustrate the efficacy of these variance estimates through extensive simulations and an application to the Beijing Housing data, and demonstrate the theoretical consistency of the Infinitesimal Jackknife covariance estimate.
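As background on the estimator the abstract refers to: for bagged ensembles, the Infinitesimal Jackknife expresses variance through the per-observation covariance between bootstrap inclusion counts and replicate predictions, and the covariance between two ensembles follows the same pattern. The sketch below is an illustrative implementation of this idea for two bagged ensembles that share the same bootstrap draws; it is not code from the paper, and the function name and array layout are assumptions for the example.

```python
import numpy as np

def ij_covariance(N, preds_a, preds_b):
    """Infinitesimal Jackknife covariance between two bagged ensembles.

    N        : (B, n) array of bootstrap inclusion counts, one row per
               replicate, shared by both ensembles (an assumption here)
    preds_a  : (B,) per-replicate predictions of model A at a query point
    preds_b  : (B,) per-replicate predictions of model B at the same point

    Implements cov_IJ = sum_i Cov_b(N_bi, A_b) * Cov_b(N_bi, B_b),
    the IJ form for bagged estimators (cf. Efron's 2014 derivation).
    """
    B, n = N.shape
    Nc = N - N.mean(axis=0)                        # center counts per point i
    cov_a = Nc.T @ (preds_a - preds_a.mean()) / B  # Cov(N_i, A_b) for each i
    cov_b = Nc.T @ (preds_b - preds_b.mean()) / B  # Cov(N_i, B_b) for each i
    return float(cov_a @ cov_b)                    # sum over training points
```

Setting `preds_b = preds_a` recovers the usual IJ variance estimate of a single ensemble, so the covariance of a model with itself is always nonnegative.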

