Transductive Robust Learning Guarantees

10/20/2021
by Omar Montasser, et al.

We study the problem of adversarially robust learning in the transductive setting. For classes ℋ of bounded VC dimension, we propose a simple transductive learner that, when presented with a set of labeled training examples and a set of unlabeled test examples (both sets possibly adversarially perturbed), correctly labels the test examples with a robust error rate that is linear in the VC dimension and adaptive to the complexity of the perturbation set. This result provides an exponential improvement in the dependence on VC dimension over the best known upper bound on the robust error in the inductive setting, at the expense of competing with a more restrictive notion of optimal robust error.
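To make the setting concrete, here is a minimal, self-contained Python sketch of one natural transductive strategy: output the labels of any hypothesis in ℋ that is robustly consistent with the training set, i.e., that keeps every training label under every allowed perturbation. The names (perturbations, robustly_consistent, transductive_learn), the finite threshold class, the unit-neighborhood perturbation set, and the first-consistent selection rule are all illustrative assumptions for this sketch; the paper's actual learner and its error analysis are more involved.

    # Minimal sketch of a transductive robustly-consistent learner.
    # The hypothesis class, perturbation set, and selection rule below are
    # illustrative assumptions, not the construction from the paper.

    def perturbations(x):
        """Hypothetical perturbation set U(x): x and its unit neighbors."""
        return {x - 1, x, x + 1}

    def robustly_consistent(h, train):
        """True if h keeps each training label under every perturbation."""
        return all(h(z) == y for x, y in train for z in perturbations(x))

    def transductive_learn(hypotheses, train, test):
        """Label the unlabeled test points using any hypothesis that is
        robustly consistent with the labeled training points."""
        for h in hypotheses:
            if robustly_consistent(h, train):
                return [h(x) for x in test]
        return None  # no robustly consistent hypothesis in the class

    # Toy run: threshold classifiers h_t(x) = 1[x >= t] over the integers.
    hypotheses = [lambda x, t=t: int(x >= t) for t in range(-5, 6)]
    train = [(-4, 0), (4, 1)]  # labeled (possibly perturbed) examples
    test = [-3, 0, 3]          # unlabeled (possibly perturbed) test points
    print(transductive_learn(hypotheses, train, test))  # prints [0, 1, 1]

Intuitively, the transductive learner sees the unlabeled test points up front, so it only needs to commit to labels for those specific points rather than to a hypothesis that is robust everywhere; this weaker requirement is what makes the improved dependence on VC dimension plausible.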

Related research

02/11/2022 · A Characterization of Semi-Supervised Adversarially-Robust PAC Learnability
We study the problem of semi-supervised learning of an adversarially-rob...

07/10/2020 · Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples
We present a transductive learning algorithm that takes as input trainin...

03/02/2022 · Adversarially Robust Learning with Tolerance
We study the problem of tolerant adversarial PAC learning with respect t...

10/06/2022 · On Optimal Learning Under Targeted Data Poisoning
Consider the task of learning a hypothesis class ℋ in the presence of an...

03/01/2021 · Robust learning under clean-label attack
We study the problem of robust learning under clean-label data-poisoning...

04/21/2022 · Model-free Learning of Regions of Attraction via Recurrent Sets
We consider the problem of learning an inner approximation of the region...

04/07/2018 · ε-Coresets for Clustering (with Outliers) in Doubling Metrics
We study the problem of constructing ε-coresets for the (k, z)-clusterin...
