Learning to Abstain from Binary Prediction

02/25/2016
by Akshay Balsubramani et al.

A binary classifier capable of abstaining from making a label prediction has two goals in tension: minimizing errors, and avoiding abstaining unnecessarily often. In this work, we exactly characterize the best achievable tradeoff between these two goals in a general semi-supervised setting, given an ensemble of predictors of varying competence as well as unlabeled data on which we wish to predict or abstain. We give an algorithm for learning a classifier in this setting which trades off its errors with abstentions in a minimax optimal manner, is as efficient as linear learning and prediction, and is demonstrably practical. Our analysis extends to a large class of loss functions and other scenarios, including ensembles comprised of specialists that can themselves abstain.
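The paper's minimax-optimal learning procedure is not reproduced here, but the prediction side it describes — a linear aggregation of ensemble outputs with an abstention option — can be illustrated with a simplified sketch. The function below is a hypothetical weighted-vote rule, not the authors' algorithm: it aggregates the ensemble's ±1 predictions with learned weights and abstains whenever the aggregate score falls inside a margin around zero.

```python
import numpy as np

def abstaining_predict(ensemble_scores, weights, threshold):
    """Weighted-vote binary classifier with abstention (illustrative sketch).

    ensemble_scores: (n_samples, n_predictors) array of +/-1 predictions
    weights:         (n_predictors,) aggregation weights (assumed learned)
    threshold:       abstain when the aggregate score's magnitude is below this

    Returns an array of +1, -1, or 0 (0 = abstain).
    """
    # Linear aggregation: one matrix-vector product, so prediction is
    # as cheap as linear prediction, matching the efficiency claimed above.
    agg = ensemble_scores @ weights
    preds = np.sign(agg)
    # Abstain on low-margin examples, trading coverage for fewer errors.
    preds[np.abs(agg) < threshold] = 0
    return preds
```

Raising `threshold` moves along the error/abstention tradeoff: more abstentions, fewer errors on the examples that remain. The paper's contribution is characterizing and attaining the optimal such tradeoff given the ensemble and unlabeled data.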

