Some Theory For Practical Classifier Validation

10/09/2015
by Eric Bax et al.

We compare and contrast two approaches to validating a trained classifier while using all in-sample data for training. One is simultaneous validation over an organized set of hypotheses (SVOOSH), the well-known method that began with VC theory. The other is withhold and gap (WAG). WAG withholds a validation set, trains a holdout classifier on the remaining data, and uses the withheld data to validate that classifier. It then adds the rate of disagreement between the holdout classifier and a classifier trained on all in-sample data, since that disagreement rate is an upper bound on the difference between the two classifiers' error rates. We show that complex hypothesis classes and limited training data can make WAG a favorable alternative.
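To make the procedure concrete, here is a minimal sketch of a WAG-style bound in Python. It is illustrative rather than the paper's exact construction: it assumes a one-sided Hoeffding bound for the validation term, estimates the disagreement rate on an independent unlabeled sample, and uses scikit-learn decision trees as stand-in classifiers; the function names and the split of delta are hypothetical choices.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def hoeffding_term(n, delta):
    # One-sided Hoeffding deviation for n i.i.d. Bernoulli observations:
    # the true rate exceeds the empirical rate by more than this amount
    # with probability at most delta.
    return np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def wag_bound(X, y, X_unlabeled, delta=0.05, val_fraction=0.2, seed=0):
    # Steps from the abstract:
    # 1. Withhold a validation set.
    # 2. Train a holdout classifier on the remaining data.
    # 3. Use the validation data to bound the holdout classifier's error.
    # 4. Add the rate of disagreement between the holdout classifier and
    #    the classifier trained on all in-sample data.
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=val_fraction, random_state=seed)

    holdout_clf = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
    full_clf = DecisionTreeClassifier(random_state=seed).fit(X, y)

    # Validation bound on the holdout classifier's error rate.
    val_err = np.mean(holdout_clf.predict(X_val) != y_val)
    val_bound = val_err + hoeffding_term(len(y_val), delta / 2)

    # Disagreement rate between the two classifiers, estimated on an
    # independent unlabeled sample (no labels are needed for this step).
    disagree = np.mean(holdout_clf.predict(X_unlabeled)
                       != full_clf.predict(X_unlabeled))
    disagree_bound = disagree + hoeffding_term(len(X_unlabeled), delta / 2)

    # error(full) <= error(holdout) + Pr[holdout and full disagree],
    # so the sum of the two bounds holds with probability at least 1 - delta.
    return val_bound + disagree_bound

Note how the resulting bound for the full-data classifier costs only the disagreement rate on top of the holdout validation bound, with no term that grows with hypothesis-class complexity; that is what can make WAG favorable when the hypothesis class is complex and training data are limited.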


