Adversarial Resilience in Sequential Prediction via Abstention

06/22/2023
by Surbhi Goel, et al.

We study sequential prediction in the stochastic setting with an adversary that is allowed to inject clean-label adversarial (or out-of-distribution) examples. Algorithms designed for purely stochastic data tend to fail in the presence of such adversarial examples, often making erroneous predictions. This is undesirable in many high-stakes applications, such as medical recommendations, where abstaining from predicting on adversarial examples is preferable to misclassification. On the other hand, assuming fully adversarial data leads to very pessimistic bounds that are often vacuous in practice. To capture this, we propose a new model of sequential prediction that sits between the purely stochastic and fully adversarial settings: the learner may abstain, at no cost, from predicting on adversarial examples. Assuming access to the marginal distribution of the non-adversarial examples, we design a learner whose error scales with the VC dimension of the hypothesis class (mirroring the stochastic setting), rather than with the Littlestone dimension, which characterizes the fully adversarial setting. Furthermore, for classes of VC dimension 1, we design a learner that works even without access to the marginal distribution. Our key technical contribution is a novel measure of uncertainty for learning VC classes, which may be of independent interest.
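To make the abstention model concrete, the sketch below implements a simple disagreement-based abstention rule for a finite hypothesis class: predict only when every hypothesis consistent with the labeled history agrees on the new point, and abstain otherwise. This is a minimal illustration of prediction with abstention, not the paper's algorithm (whose uncertainty measure for VC classes is more refined); the function name predict_or_abstain and the threshold-class example are ours.

```python
def predict_or_abstain(hypotheses, history, x):
    """Disagreement-based abstention: predict on x only if every
    hypothesis consistent with the labeled history agrees on x.

    hypotheses : iterable of callables h(x) -> {0, 1}
    history    : list of (x_i, y_i) labeled examples seen so far
    x          : new example to classify

    Returns 0 or 1 on unanimous agreement, None to abstain.
    """
    # Version space: hypotheses that fit every labeled example so far.
    consistent = [h for h in hypotheses
                  if all(h(xi) == yi for xi, yi in history)]
    labels = {h(x) for h in consistent}
    if len(labels) == 1:
        return labels.pop()  # unanimous prediction
    return None              # hypotheses disagree: abstain


# Toy usage: thresholds on [0, 1], a class of VC dimension 1.
thresholds = [lambda x, t=t: int(x >= t) for t in (0.2, 0.5, 0.8)]
history = [(0.1, 0), (0.9, 1)]
print(predict_or_abstain(thresholds, history, 0.95))  # 1 (all agree)
print(predict_or_abstain(thresholds, history, 0.50))  # None (abstain)
```

In the paper's model an abstention costs nothing when the point is adversarial, which is what makes rules of this flavor attractive: errors can only be incurred on points where the surviving hypotheses unanimously, and wrongly, agree.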


Related research

On Optimal Learning Under Targeted Data Poisoning (10/06/2022)
Consider the task of learning a hypothesis class ℋ in the presence of an...

Not All Adversarial Examples Require a Complex Defense: Identifying Over-optimized Adversarial Examples with IQR-based Logit Thresholding (07/30/2019)
Detecting adversarial examples currently stands as one of the biggest ch...

Playing to Learn Better: Repeated Games for Adversarial Learning with Multiple Classifiers (02/10/2020)
We consider the problem of prediction by a machine learning algorithm, c...

Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning (01/28/2022)
Adversarial examples are inputs for machine learning models that have be...

Towards optimally abstaining from prediction (05/28/2021)
A common challenge across all areas of machine learning is that training...

Robustness to Adversarial Examples through an Ensemble of Specialists (02/22/2017)
We are proposing to use an ensemble of diverse specialists, where specia...

Efficient Testable Learning of Halfspaces with Adversarial Label Noise (03/09/2023)
We give the first polynomial-time algorithm for the testable learning of...
