Cross-validation: what does it estimate and how well does it do it?

by Stephen Bates, et al.

Cross-validation is a widely used technique to estimate prediction error, but its behavior is complex and not fully understood. Ideally, one would like to think that cross-validation estimates the prediction error for the model at hand, fit to the training data. We prove that this is not the case for the linear model fit by ordinary least squares; rather, it estimates the average prediction error of models fit on other unseen training sets drawn from the same population. We further show that this phenomenon occurs for most popular estimates of prediction error, including data splitting, bootstrapping, and Mallows' Cp. Next, we show that the standard confidence intervals for prediction error derived from cross-validation may have coverage far below the desired level. Because each data point is used for both training and testing, there are correlations among the measured accuracies for each fold, and so the usual estimate of variance is too small. We introduce a nested cross-validation scheme to estimate this variance more accurately, and show empirically that this modification leads to intervals with approximately correct coverage in many examples where traditional cross-validation intervals fail. Lastly, our analysis also shows that when producing confidence intervals for prediction accuracy with simple data splitting, one should not re-fit the model on the combined data, since this invalidates the confidence intervals.
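As a rough illustration of the "standard" interval the abstract warns about (this is a toy sketch with simulated data, not the paper's nested procedure), the following numpy snippet runs k-fold cross-validation for OLS and forms the naive 95% interval that treats the n per-point errors as independent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: linear model, squared-error loss.
n, p = 100, 5
X = rng.standard_normal((n, p))
y = X @ np.ones(p) + rng.standard_normal(n)

def cv_errors(X, y, k=10):
    """Per-point squared errors from k-fold CV with OLS."""
    n = len(y)
    folds = np.array_split(rng.permutation(n), k)
    errs = np.empty(n)
    for fold in folds:
        mask = np.ones(n, dtype=bool)
        mask[fold] = False  # train on the other k-1 folds
        beta_hat, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs[fold] = (y[fold] - X[fold] @ beta_hat) ** 2
    return errs

errs = cv_errors(X, y)
est = errs.mean()
# Naive standard error: assumes the n fold errors are independent.
# The paper's point is that they are correlated, so this SE is
# too small and the resulting interval can undercover badly.
se_naive = errs.std(ddof=1) / np.sqrt(len(errs))
ci = (est - 1.96 * se_naive, est + 1.96 * se_naive)
print(f"CV estimate {est:.3f}, naive 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

The nested cross-validation scheme proposed in the paper replaces `se_naive` with a variance estimate built from inner CV runs on each outer training set; see the linked repository for the authors' implementation.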


Confidence intervals for the Cox model test error from cross-validation

Cross-validation (CV) is one of the most widely used techniques in stati...

Cross-validation Confidence Intervals for Test Error

This work develops central limit theorems for cross-validation and consi...

Test Error Estimation after Model Selection Using Validation Error

When performing supervised learning with the model selected using valida...

Predictive inference with the jackknife+

This paper introduces the jackknife+, which is a novel method for constr...

A Bayesian approach to type-specific conic fitting

A perturbative approach is used to quantify the effect of noise in data ...

Estimating the Prediction Performance of Spatial Models via Spatial k-Fold Cross Validation

In machine learning one often assumes the data are independent when eval...

Theoretical Analyses of Cross-Validation Error and Voting in Instance-Based Learning

This paper begins with a general theory of error in cross-validation tes...

Code Repositories


Nested cross-validation for accurate confidence intervals for prediction error.
