Adversarially Robust Learning of Real-Valued Functions

06/26/2022
by Idan Attias, et al.

We study robustness to test-time adversarial attacks in the regression setting with ℓ_p losses and arbitrary perturbation sets. We address the question of which function classes are PAC learnable in this setting. We show that classes of finite fat-shattering dimension are learnable, and that convex function classes are moreover properly learnable. In contrast, some non-convex function classes provably require improper learning algorithms. We also discuss extensions to agnostic learning. Our main technique is based on the construction of an adversarially robust sample compression scheme whose size is determined by the fat-shattering dimension.
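
As an informal illustration of the learning objective (not code from the paper), the sketch below evaluates an adversarially robust ℓ_p loss of the kind the learner seeks to minimize. The names `robust_lp_loss`, `robust_empirical_risk`, and `perturb_fn`, and the use of a finite candidate list to stand in for the supremum over a perturbation set U(x), are illustrative assumptions; the paper handles arbitrary perturbation sets.

```python
import numpy as np

def robust_lp_loss(f, y, perturbations, p=2):
    """Worst-case l_p loss of predictor f against the label y.

    The supremum over the perturbation set U(x) is approximated here by the
    finite candidate list `perturbations` (an assumption for illustration).
    """
    return max(abs(f(z) - y) ** p for z in perturbations)

def robust_empirical_risk(f, X, Y, perturb_fn, p=2):
    """Average robust loss over a sample; perturb_fn(x) returns candidate perturbations of x."""
    return float(np.mean([robust_lp_loss(f, y, perturb_fn(x), p) for x, y in zip(X, Y)]))

# Example: a linear predictor under small additive input perturbations.
f = lambda z: 0.7 * z
perturb_fn = lambda x: [x - 0.1, x, x + 0.1]  # finite stand-in for U(x)
X, Y = np.array([0.0, 1.0, 2.0]), np.array([0.1, 0.8, 1.3])
print(robust_empirical_risk(f, X, Y, perturb_fn, p=2))
```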

Related research

- VC Classes are Adversarially Robustly Learnable, but Only Improperly (02/12/2019)
  We study the question of learning an adversarially robust predictor. We ...
- Multiclass Learnability Does Not Imply Sample Compression (08/12/2023)
  A hypothesis class admits a sample compression scheme, if for every samp...
- Find a witness or shatter: the landscape of computable PAC learning (02/06/2023)
  This paper contributes to the study of CPAC learnability – a computable ...
- Optimal Learners for Realizable Regression: PAC Learning and Online Learning (07/07/2023)
  In this work, we aim to characterize the statistical complexity of reali...
- A Characterization of Multilabel Learnability (01/06/2023)
  We consider the problem of multilabel classification and investigate lea...
- Adversarially Robust Learning with Tolerance (03/02/2022)
  We study the problem of tolerant adversarial PAC learning with respect t...
- Impossibility of Characterizing Distribution Learning – a simple solution to a long-standing problem (04/18/2023)
  We consider the long-standing question of finding a parameter of a class...
