Towards optimally abstaining from prediction

05/28/2021
by Adam Tauman Kalai et al.

A common challenge across all areas of machine learning is that training data is not distributed like test data, due to natural shifts, "blind spots," or adversarial examples. We consider a model in which one may abstain from predicting, at a fixed cost. In particular, our transductive abstention algorithm takes labeled training examples and unlabeled test examples as input, and outputs predictions with optimal loss guarantees. The loss bounds match standard generalization bounds when test examples are i.i.d. from the training distribution, but add a term equal to the cost of abstaining times the statistical distance between the train and test distributions (or the fraction of adversarial examples). For linear regression, we give a polynomial-time algorithm based on Celis-Dennis-Tapia optimization algorithms. For binary classification, we show how to implement our algorithm efficiently using a proper agnostic learner (i.e., an Empirical Risk Minimizer) for the class of interest. Our work builds on a recent abstention algorithm of Goldwasser, Kalai, and Montasser (2020) for transductive binary classification.
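
The fixed-cost loss model in the abstract is concrete enough to pin down in code. Below is a minimal Python sketch (our own illustration, not the paper's algorithm): each abstention incurs a flat cost c, each prediction incurs its 0/1 error, and the guarantee described above then reads, informally, loss <= (standard generalization bound) + c * d_TV(train, test). The function name and toy data here are assumptions made purely for illustration.

import numpy as np

def abstention_loss(y_true, y_pred, abstained, cost):
    """Average fixed-cost abstention loss over the test set:
    0/1 error on points where we predict, a flat `cost` on
    points where we abstain.  (Illustrative, not from the paper.)"""
    y_true, y_pred, abstained = map(np.asarray, (y_true, y_pred, abstained))
    # Errors only count on points where we actually predicted.
    mistakes = (~abstained) & (y_pred != y_true)
    return (mistakes.sum() + cost * abstained.sum()) / len(y_true)

# Toy usage: abstain on the two test points we are least sure about.
y_true    = np.array([0, 1, 1, 0, 1])
y_pred    = np.array([0, 1, 0, 0, 0])
abstained = np.array([False, False, True, False, True])
print(abstention_loss(y_true, y_pred, abstained, cost=0.3))
# (0 mistakes among predicted points + 0.3 * 2 abstentions) / 5 = 0.12

The cost parameter captures the trade-off: abstaining is worth it exactly on those points where the chance of a prediction error would exceed the abstention cost.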

Related research

- Improved generalization bounds for robust learning (10/04/2018): We consider a model of robust learning in an adversarial environment. Th...
- Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples (07/10/2020): We present a transductive learning algorithm that takes as input trainin...
- Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples (01/24/2019): State-of-the-art neural networks are vulnerable to adversarial examples;...
- Adversarial Resilience in Sequential Prediction via Abstention (06/22/2023): We study the problem of sequential prediction in the stochastic setting ...
- Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples (11/09/2019): Adversarial examples are a pervasive phenomenon of machine learning mode...
- Adversarial examples from computational constraints (05/25/2018): Why are classifiers in high dimension vulnerable to "adversarial" pertur...
- Adversarial Examples from Cryptographic Pseudo-Random Generators (11/15/2018): In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we arg...
