Subsampling Bias and The Best-Discrepancy Systematic Cross Validation

07/04/2019
by   Liang Guo, et al.

Statistical machine learning models should be evaluated and validated before being put to work. The conventional k-fold Monte Carlo Cross-Validation (MCCV) procedure uses a pseudo-random sequence to partition instances into k subsets, which usually causes subsampling bias, inflates generalization errors and jeopardizes the reliability and effectiveness of cross-validation. Based on ordered systematic sampling theory in statistics and low-discrepancy sequence theory in number theory, we propose a new k-fold cross-validation procedure that replaces the pseudo-random sequence with a best-discrepancy sequence, which ensures low subsampling bias and leads to more precise Expected-Prediction-Error (EPE) estimates. Experiments with 156 benchmark datasets and three classifiers (logistic regression, decision tree and naive Bayes) show that, in general, our cross-validation procedure can extrude subsampling bias in the MCCV by lowering the EPE around 7.18%. In comparison, the stratified MCCV can reduce the EPE and variances of the MCCV around 1.58% and 2.50%, respectively. The computational time of our cross-validation procedure is just 8.64% of that of the MCCV and 8.67% of that of the stratified MCCV. The results also show that our approach is more beneficial for datasets characterized by relatively small size and large aspect ratio, which makes it particularly pertinent to bioscience classification problems. Our proposed systematic subsampling technique could be generalized to other machine learning algorithms that involve a random subsampling mechanism.
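The core idea of the abstract, replacing the pseudo-random shuffle in k-fold partitioning with a low-discrepancy sequence, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name `bds_kfold_indices` and the choice of the additive golden-ratio sequence x_i = frac(i * (sqrt(5) - 1) / 2) as the best-discrepancy sequence are assumptions made for the example.

```python
import math

def bds_kfold_indices(n, k, alpha=(math.sqrt(5) - 1) / 2):
    """Assign n instances to k folds using a best-discrepancy sequence
    instead of a pseudo-random shuffle (illustrative sketch only).

    x_i = frac(i * alpha), with alpha the golden-ratio conjugate, is a
    classical low-discrepancy sequence on [0, 1). Ordering instances by
    x_i spreads them evenly, and systematic sampling (dealing every k-th
    position into the same fold) then yields balanced, deterministic folds.
    """
    # Low-discrepancy values, one per instance.
    x = [(i * alpha) % 1.0 for i in range(1, n + 1)]
    # Order instances by their sequence value (a deterministic permutation).
    order = sorted(range(n), key=lambda i: x[i])
    # Systematic sampling: position p in the ordering goes to fold p mod k.
    folds = [[] for _ in range(k)]
    for pos, idx in enumerate(order):
        folds[pos % k].append(idx)
    return folds
```

Because the assignment is deterministic, repeated runs produce identical folds, which removes the run-to-run subsampling variability that a pseudo-random shuffle introduces.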



Related research

05/18/2018
Optimizing for Generalization in Machine Learning with Cross-Validation Gradients
Cross-validation is the workhorse of modern applied statistics and machi...

11/15/2017
Accelerating Cross-Validation in Multinomial Logistic Regression with ℓ_1-Regularization
We develop an approximate formula for evaluating a cross-validation esti...

03/23/2018
A Concept Learning Tool Based On Calculating Version Space Cardinality
In this paper, we proposed VeSC-CoL (Version Space Cardinality based Con...

01/04/2017
Learning causal effects from many randomized experiments using regularized instrumental variables
Scientific and business practices are increasingly resulting in large co...

10/21/2019
Generalised learning of time-series: Ornstein-Uhlenbeck processes
In machine learning, statistics, econometrics and statistical physics, k...

05/28/2019
The Theory Behind Overfitting, Cross Validation, Regularization, Bagging, and Boosting: Tutorial
In this tutorial paper, we first define mean squared error, variance, co...

11/26/2019
The Early Roots of Statistical Learning in the Psychometric Literature: A review and two new results
Machine and Statistical learning techniques become more and more importa...
