Reducing Adversarially Robust Learning to Non-Robust PAC Learning

10/22/2020
by Omar Montasser, et al.

We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using only black-box access to a non-robust learner. We give a reduction that can robustly learn any hypothesis class 𝒞 using any non-robust learner 𝒜 for 𝒞. The number of calls to 𝒜 depends logarithmically on the number of allowed adversarial perturbations per example, and we give a lower bound showing this is unavoidable.
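The flavor of such a reduction can be illustrated with a toy sketch: call the non-robust black-box learner on a dataset "inflated" with the allowed perturbations of each example, then aggregate the returned predictors by majority vote. This is only a minimal illustration of the black-box idea, not the paper's actual algorithm (which selects perturbations adaptively via a boosting-style analysis to obtain the logarithmic call bound); the names `robust_reduction`, `table_learner`, and the `rounds` parameter are illustrative assumptions.

```python
def robust_reduction(learner, data, perturbations, rounds=3):
    """Toy sketch of a black-box reduction: repeatedly call a
    non-robust learner on perturbation-inflated data, then
    majority-vote over the returned predictors.

    learner(dataset) -> predictor h: x -> label  (black-box, non-robust)
    perturbations(x) -> iterable of allowed perturbations of x
    """
    predictors = []
    for _ in range(rounds):
        # Inflate: label every allowed perturbation of x with x's label.
        # (The real algorithm picks perturbations adaptively instead.)
        inflated = [(z, y) for x, y in data for z in perturbations(x)]
        predictors.append(learner(inflated))

    def majority(x):
        votes = [h(x) for h in predictors]
        return max(set(votes), key=votes.count)

    return majority


def table_learner(dataset):
    """A trivial non-robust 'learner' that memorizes its training set."""
    from collections import defaultdict
    table = defaultdict(list)
    for x, y in dataset:
        table[x].append(y)

    def h(x):
        ys = table.get(x, [0])  # default label for unseen points
        return max(set(ys), key=ys.count)

    return h


# Toy instantiation: integer points, perturbations shift by at most 1.
data = [(0, -1), (10, 1)]
perturb = lambda x: [x - 1, x, x + 1]
g = robust_reduction(table_learner, data, perturb)
```

Because the inflated dataset covers every allowed perturbation, the memorizing learner's output is already correct on perturbed inputs; the point of the sketch is only the interface (calls to an arbitrary black-box learner, aggregation of its outputs), which is what the reduction in the paper formalizes.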


Related research

- Black-box Certification and Learning under Adversarial Perturbations (06/30/2020): We formally study the problem of classification under adversarial pertur...
- Boosting in the Presence of Massart Noise (06/14/2021): We study the problem of boosting the accuracy of a weak learner in the (...
- On computable learning of continuous features (11/24/2021): We introduce definitions of computable PAC learning for binary classific...
- On Fundamental Limits of Robust Learning (03/30/2017): We consider the problems of robust PAC learning from distributed and str...
- A Spectral View of Adversarially Robust Features (11/15/2018): Given the apparent difficulty of learning models that are robust to adve...
- Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization (09/15/2022): We present a minimax optimal learner for the problem of learning predict...
- Monotone Learning (02/10/2022): The amount of training-data is one of the key factors which determines t...
