A Leisurely Look at Versions and Variants of the Cross Validation Estimator

07/31/2019
by Waleed A. Yousef, et al.

Many versions of cross-validation (CV) exist in the literature, and each version has several variants. Practitioners use them interchangeably, often without explaining the connections or differences among them. This article makes three contributions. First, it formalizes mathematically the different versions and variants that estimate the error rate and the Area Under the ROC Curve (AUC) of a classification rule, to show how they are connected and how they differ. Second, we prove some of their properties and show that many variants are either redundant or not "smooth". Hence, we suggest abandoning all redundant versions and variants and keeping only the leave-one-out, the K-fold, and the repeated K-fold estimators. We show that the latter is the only one of the three that is "smooth" and therefore behaves mathematically like an estimator of the mean performance of the classification rules. However, empirically, and because of the known phenomenon of "weak correlation", which we explain mathematically and experimentally, it estimates both the conditional and the mean performance with almost the same accuracy. Third, we conclude the article by suggesting two research directions that may answer the remaining question of whether a finalist can be chosen among the three estimators: (1) a comparative study, much more comprehensive than those available in the literature (which conclude no overall winner), covering a wide range of distributions, datasets, and classifiers, including complex ones obtained via recent deep learning approaches; and (2) a rigorous method, whose derivation we sketch, for estimating the variance of the only "smooth" version, the repeated K-fold CV, in place of the ad-hoc methods available in the literature that ignore the covariance structure among the folds of CV.
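The abstract names the three CV versions the authors retain: leave-one-out, K-fold, and repeated K-fold. As a rough illustration only (not code from the paper), the sketch below estimates the error rate with each version using scikit-learn's off-the-shelf LeaveOneOut, KFold, and RepeatedKFold splitters; the classifier and dataset are arbitrary placeholders.

    # Illustrative sketch (not from the paper): error-rate estimates from the
    # three CV versions discussed in the abstract. Classifier and dataset are
    # arbitrary stand-ins chosen only to make the example self-contained.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import (LeaveOneOut, KFold, RepeatedKFold,
                                         cross_val_score)

    X, y = load_breast_cancer(return_X_y=True)
    clf = LogisticRegression(max_iter=5000)

    splitters = {
        "leave-one-out": LeaveOneOut(),
        "10-fold": KFold(n_splits=10, shuffle=True, random_state=0),
        "repeated 10-fold (10 repeats)": RepeatedKFold(n_splits=10,
                                                       n_repeats=10,
                                                       random_state=0),
    }

    for name, cv in splitters.items():
        # cross_val_score returns per-fold accuracies; the CV error-rate
        # estimate is one minus their average over folds (and repeats).
        acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
        print(f"{name}: estimated error rate = {1 - acc.mean():.4f}")

Averaging over many folds and repeats is what gives repeated K-fold the smoothing behavior the abstract refers to; note that leave-one-out refits the model once per sample and is therefore the slowest of the three on moderately sized data.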


