Streaming algorithms for evaluating noisy judges on unlabeled data – binary classification

06/02/2023
by   Andrés Corrada-Emmanuel, et al.

The evaluation of noisy binary classifiers on unlabeled data is treated as a streaming task: given a data sketch of the decisions by an ensemble, estimate the true prevalence of the labels as well as each classifier's accuracy on them. Two fully algebraic evaluators are constructed to do this. Both are based on the assumption that the classifiers make independent errors. The first is based on majority voting. The second, the main contribution of the paper, is guaranteed to be correct. But how do we know the classifiers are independent on any given test? This principal/agent monitoring paradox is ameliorated by exploiting the failures of the independent evaluator to return sensible estimates. A search for nearly error-independent trios is carried out empirically on three datasets by using the algebraic failure modes to reject evaluation ensembles as too correlated. The searches are refined by constructing a surface in evaluation space that contains the true value point. The algebra of arbitrarily correlated classifiers permits the selection of a polynomial subset free of any correlation variables. Candidate evaluation ensembles are rejected if their data sketches produce independent estimates too far from the constructed surface. The estimates produced by the surviving ensembles can sometimes be within 1% of the true values. But handling even small amounts of correlation remains a challenge. A Taylor expansion of the estimates produced when independence is assumed but the classifiers are, in fact, slightly correlated helps clarify how the independent evaluator has algebraic `blind spots'.
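To make the setting concrete, here is a minimal sketch of the first, majority-voting style of evaluator. It is an illustration of the general idea only, not the paper's exact algebra: the "data sketch" is taken to be the counts of the eight voting patterns produced by a trio of binary classifiers, and the majority vote is treated as if it were ground truth to estimate label prevalence and per-classifier accuracy. All function and variable names here are hypothetical.

```python
from itertools import product

def majority_vote_evaluate(sketch):
    """Majority-voting evaluator for a trio of binary classifiers.

    `sketch` maps each of the 8 voting patterns, e.g. ('a', 'b', 'a'),
    to the number of test items that produced that pattern. Returns the
    estimated prevalence of label 'a' and each classifier's accuracy,
    using the majority vote as a stand-in for the true label.
    """
    total = sum(sketch.values())
    prevalence_a = 0
    agree = [0, 0, 0]  # items on which each classifier matches the majority
    for pattern, count in sketch.items():
        majority = 'a' if pattern.count('a') >= 2 else 'b'
        if majority == 'a':
            prevalence_a += count
        for i, vote in enumerate(pattern):
            if vote == majority:
                agree[i] += count
    return prevalence_a / total, [c / total for c in agree]

# Example: a hypothetical sketch of 100 test items.
sketch = {p: 0 for p in product('ab', repeat=3)}
sketch[('a', 'a', 'a')] = 40
sketch[('a', 'a', 'b')] = 10
sketch[('b', 'b', 'b')] = 45
sketch[('a', 'b', 'b')] = 5
prev, accs = majority_vote_evaluate(sketch)
```

The weakness the paper addresses is visible here: whenever the majority itself is wrong, this evaluator silently miscounts, which is why the fully algebraic (independent-error) evaluator and its failure modes are the main contribution.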


Related research

Independence Tests Without Ground Truth for Noisy Learners (10/28/2020)
Exact ground truth invariant polynomial systems can be written for arbit...

Optimally Combining Classifiers Using Unlabeled Data (03/05/2015)
We develop a worst-case analysis of aggregation of classifier ensembles ...

Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach (05/19/2017)
We propose an efficient method to estimate the accuracy of classifiers u...

Confusion Matrices and Accuracy Statistics for Binary Classifiers Using Unlabeled Data: The Diagnostic Test Approach (08/26/2022)
Medical researchers have solved the problem of estimating the sensitivit...

Algebraic Ground Truth Inference: Non-Parametric Estimation of Sample Errors by AI Algorithms (06/15/2020)
Binary classification is widely used in ML production systems. Monitorin...

Leveraging Linear Independence of Component Classifiers: Optimizing Size and Prediction Accuracy for Online Ensembles (08/27/2023)
Ensembles, which employ a set of classifiers to enhance classification a...

When is the majority-vote classifier beneficial? (07/24/2013)
In his seminal work, Schapire (1990) proved that weak classifiers could ...
