Concentration inequalities of the cross-validation estimator for Empirical Risk Minimiser

10/30/2010
by Matthieu CORNEC, et al.

In this article, we derive concentration inequalities for the cross-validation estimate of the generalization error for empirical risk minimizers. In the general setting, we prove sanity-check bounds in the spirit of KR99 ("bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate"). General loss functions and classes of predictors with finite VC-dimension are considered. We closely follow the formalism introduced by DUD03 to cover a large variety of cross-validation procedures, including leave-one-out cross-validation, k-fold cross-validation, hold-out cross-validation (or split sample), and leave-υ-out cross-validation. In particular, we focus on proving the consistency of the various cross-validation procedures. We point out the merits of each cross-validation procedure in terms of its rate of convergence. An estimation curve with transition phases, which depends on the cross-validation procedure and not only on the fraction of observations in the test sample, gives a simple rule for choosing the cross-validation procedure. An interesting consequence is that the size of the test sample need not grow to infinity for the cross-validation procedure to be consistent.
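For intuition, the cross-validation estimate discussed here is the average test loss of the empirical risk minimizer refit on each training subsample of the chosen split scheme. The sketch below is a minimal illustration of that estimate, not the paper's construction: the threshold-classifier class, the 0-1 loss, and all function names are assumptions made for concreteness. It computes the estimate under leave-one-out, k-fold, hold-out, and leave-p-out splits for an ERM over a simple finite-VC class.

import itertools
import numpy as np

def erm_threshold(X, y):
    # Empirical risk minimizer over the finite-VC class {x -> 1[x >= t]} with 0-1 loss.
    candidates = np.concatenate(([-np.inf], np.sort(X), [np.inf]))
    risks = [np.mean((X >= t).astype(int) != y) for t in candidates]
    return candidates[int(np.argmin(risks))]

def cv_estimate(X, y, splits):
    # Cross-validation estimate: average test error of the ERM refit on each training subsample.
    errors = []
    for train_idx, test_idx in splits:
        t = erm_threshold(X[train_idx], y[train_idx])
        pred = (X[test_idx] >= t).astype(int)
        errors.append(np.mean(pred != y[test_idx]))
    return float(np.mean(errors))

def kfold_splits(n, k, rng):
    # k-fold cross-validation; taking k = n recovers leave-one-out.
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        yield np.concatenate([folds[j] for j in range(k) if j != i]), folds[i]

def holdout_split(n, test_frac, rng):
    # Hold-out (split sample): a single train/test split.
    idx = rng.permutation(n)
    n_test = int(test_frac * n)
    yield idx[n_test:], idx[:n_test]

def leave_p_out_splits(n, p):
    # Leave-p-out: every subset of size p serves as the test sample once.
    idx = np.arange(n)
    for test in itertools.combinations(idx, p):
        test = np.array(test)
        yield np.setdiff1d(idx, test), test

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 40
    X = rng.uniform(0.0, 1.0, n)
    noise = (rng.uniform(size=n) < 0.1).astype(int)
    y = ((X >= 0.5).astype(int) + noise) % 2  # true threshold at 0.5, 10% label noise
    print("leave-one-out:", cv_estimate(X, y, kfold_splits(n, n, rng)))
    print("5-fold       :", cv_estimate(X, y, kfold_splits(n, 5, rng)))
    print("hold-out     :", cv_estimate(X, y, holdout_split(n, 0.3, rng)))
    print("leave-2-out  :", cv_estimate(X, y, leave_p_out_splits(n, 2)))

In this toy setting all four procedures estimate the same generalization error; the bounds in the paper quantify how tightly, and at what rate, each split scheme concentrates around it.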
