Adversarially Robust Learning with Unknown Perturbation Sets

02/03/2021
by   Omar Montasser, et al.

We study the problem of learning predictors that are robust to adversarial examples with respect to an unknown perturbation set. Rather than assuming the perturbation set is known, the learner relies on interaction with an adversarial attacker or access to attack oracles, and we examine different models for such interaction. We obtain upper bounds on the sample complexity, as well as upper and lower bounds on the number of required interactions (or the number of successful attacks) in the different interaction models, in terms of the VC and Littlestone dimensions of the hypothesis class of predictors, and without any assumptions on the perturbation set.
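To make the interaction model concrete, the following is a minimal, self-contained Python sketch of the kind of learner-attacker loop the abstract refers to: the learner never observes the perturbation set directly and only learns about it through successful attacks returned by an oracle. The setup (a 1-D threshold class, interval perturbations) and all names (erm_threshold, attack_oracle) are hypothetical illustrations for this note, not the algorithm or the bounds from the paper.

```python
# Illustrative sketch only: a learner that interacts with an attack oracle
# instead of knowing the perturbation set U. Here U(x) = [x - r, x + r] is
# known to the oracle but hidden from the learner.
import numpy as np

def erm_threshold(points, labels):
    """Return a threshold t minimizing empirical error of sign(x - t)."""
    candidates = np.concatenate(([-np.inf], np.sort(points)))
    best_t, best_err = None, np.inf
    for t in candidates:
        preds = np.where(points > t, 1, -1)
        err = np.mean(preds != labels)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def attack_oracle(t, x, y, radius=0.3):
    """Hypothetical attack oracle: it knows the (unknown-to-the-learner)
    perturbation set U(x) = [x - radius, x + radius] and returns a
    misclassified z in U(x) if one exists, else None (certifying robustness at x)."""
    for z in (x - radius, x + radius, x):
        if (1 if z > t else -1) != y:
            return z
    return None

# Toy data: label is the sign of x, with a margin around 0.
rng = np.random.default_rng(0)
xs = np.concatenate((rng.uniform(-2, -0.5, 20), rng.uniform(0.5, 2, 20)))
ys = np.where(xs > 0, 1, -1)

# Learner loop: fit, query the oracle on each training point, augment the
# training set with any successful attacks, and repeat until no attack succeeds.
points, labels = xs.copy(), ys.copy()
for round_ in range(10):
    t = erm_threshold(points, labels)
    attacks = [(attack_oracle(t, x, y), y) for x, y in zip(xs, ys)]
    attacks = [(z, y) for z, y in attacks if z is not None]
    if not attacks:
        break  # no successful attacks on the training set
    points = np.concatenate((points, [z for z, _ in attacks]))
    labels = np.concatenate((labels, [y for _, y in attacks]))

print(f"final threshold: {t:.3f}, rounds of interaction: {round_ + 1}")
```

In this toy loop, each point the oracle returns is one successful attack, which plays the role of the "number of successful attacks" that the bounds above are stated in terms of.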

Related research

Efficiently Learning Adversarially Robust Halfspaces with Noise (05/15/2020)
We study the problem of learning adversarially robust halfspaces in the ...

On Evaluating the Adversarial Robustness of Semantic Segmentation Models (06/25/2023)
Achieving robustness against adversarial input perturbation is an import...

Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization (09/15/2022)
We present a minimax optimal learner for the problem of learning predict...

Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size (04/21/2018)
A key problem in research on adversarial examples is that vulnerability ...

On Achieving Optimal Adversarial Test Error (06/13/2023)
We first elucidate various fundamental properties of optimal adversarial...

Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective (06/10/2019)
We study the problem of computing the minimum adversarial perturbation o...

Tree Learning: Optimal Algorithms and Sample Complexity (02/09/2023)
We study the problem of learning a hierarchical tree representation of d...
