Leave-One-Out Cross-Validation for Bayesian Model Comparison in Large Data

01/03/2020
by Måns Magnusson, et al.

Recently, new methods for model assessment based on subsampling and posterior approximations have been proposed to scale leave-one-out cross-validation (LOO) to large datasets. Although these methods work well for estimating the predictive performance of individual models, they are less effective for model comparison. We propose an efficient method for estimating differences in predictive performance by combining fast approximate LOO surrogates with exact LOO subsampling using the difference estimator, and we supply proofs regarding its scaling characteristics. The resulting approach can be orders of magnitude more efficient than previous approaches, as well as being better suited to model comparison.
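For intuition, the following is a minimal sketch of the difference estimator described above, not the authors' implementation: cheap approximate LOO values serve as surrogates for every observation, exact LOO is computed only for a small random subsample, and the subsample supplies an unbiased correction to the surrogate total. All array names and the simulated values are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: exact per-observation LOO log predictive densities
# for two models, plus cheap surrogates that approximate them.
n, m = 100_000, 200
exact_A = rng.normal(-1.00, 0.5, n)
exact_B = rng.normal(-1.05, 0.5, n)
approx_A = exact_A + rng.normal(0.0, 0.05, n)  # surrogate = exact + small error
approx_B = exact_B + rng.normal(0.0, 0.05, n)

def elpd_diff_estimator(approx, exact_sub, idx):
    """Difference estimator: surrogate total plus an unbiased correction
    from a simple random subsample, with the subsampling variance."""
    n, m = approx.size, idx.size
    d = exact_sub - approx[idx]                   # surrogate errors on the subsample
    est = approx.sum() + n * d.mean()             # estimate of the exact total
    var = n**2 * (1 - m / n) * d.var(ddof=1) / m  # finite-population correction
    return est, var

idx = rng.choice(n, size=m, replace=False)  # exact LOO is run only on these points

# Single-model assessment: total elpd_loo for model A.
est_A, var_A = elpd_diff_estimator(approx_A, exact_A[idx], idx)

# Model comparison: apply the estimator to per-observation *differences*,
# so correlated surrogate errors largely cancel.
est_d, var_d = elpd_diff_estimator(approx_A - approx_B,
                                   exact_A[idx] - exact_B[idx], idx)
print(f"elpd_A estimate: {est_A:.1f} (SE {var_A**0.5:.1f})")
print(f"elpd_A - elpd_B estimate: {est_d:.1f} (SE {var_d**0.5:.1f})")
```

Estimating the difference directly, rather than differencing two independently subsampled estimates, is what makes this efficient for comparison: the two models' surrogate errors are typically correlated across observations, so they largely cancel in the per-observation differences and the subsampling variance shrinks accordingly.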
