Bayesian leave-one-out cross-validation for large data

04/24/2019
by Måns Magnusson, et al.

Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO) is a general approach for assessing the generalizability of a model, but it does not scale well to large datasets. We propose combining approximate inference techniques with probability-proportional-to-size subsampling (PPS) for fast LOO model evaluation on large datasets. We provide both theoretical and empirical results showing good properties for large data.
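The idea behind PPS subsampling for LOO can be sketched roughly as follows: a cheap posterior approximation (e.g. Laplace or variational) gives a proxy for each observation's pointwise log predictive density; a small subsample is then drawn with probability proportional to the absolute proxy values, the costly exact values are computed only for that subsample, and the total elpd_loo is estimated with a Hansen-Hurwitz-style weighted mean. A minimal NumPy sketch, with all numbers synthetic and both the proxy and "exact" values purely illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: n observations, each with a pointwise log predictive
# density. The "approx" values stand in for a cheap proxy (e.g. from a
# Laplace or variational fit); the "exact" values stand in for the
# expensive quantity we only want to compute for a small subsample.
n = 100_000
elpd_approx = -0.5 - rng.gamma(1.0, 0.5, size=n)
elpd_exact = elpd_approx + rng.normal(0.0, 0.05, size=n)

# PPS step: draw m indices with probability proportional to the
# absolute value of the cheap proxy.
m = 1_000
p = np.abs(elpd_approx) / np.abs(elpd_approx).sum()
idx = rng.choice(n, size=m, replace=True, p=p)

# Hansen-Hurwitz estimator of the total elpd_loo: average the exact
# values on the subsample, each reweighted by its inclusion probability.
elpd_loo_hat = np.mean(elpd_exact[idx] / p[idx]) / m * m  # == mean(y_i / p_i)

print(f"estimate: {elpd_loo_hat:.1f}, full-data total: {elpd_exact.sum():.1f}")
```

Because the proxy tracks the exact values closely here, the ratios elpd_exact[i] / p[i] are nearly constant, so a subsample of 1,000 out of 100,000 observations already gives a low-variance estimate of the full-data total.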


Related research

01/03/2020: Leave-One-Out Cross-Validation for Bayesian Model Comparison in Large Data
Recently, new methods for model assessment, based on subsampling and pos...

02/17/2019: Approximate leave-future-out cross-validation for Bayesian time series models
One of the common goals of time series analysis is to use the observed s...

07/07/2008: Catching Up Faster by Switching Sooner: A Prequential Solution to the AIC-BIC Dilemma
Bayesian model averaging, model selection and its approximations such as...

03/02/2020: Approximate Cross-validation: Guarantees for Model Assessment and Selection
Cross-validation (CV) is a popular approach for assessing and selecting ...

11/30/2018: Large Datasets, Bias and Model Oriented Optimal Design of Experiments
We review recent literature that proposes to adapt ideas from classical ...

12/24/2020: Leave Zero Out: Towards a No-Cross-Validation Approach for Model Selection
As the main workhorse for model selection, Cross Validation (CV) has ach...
