Reducing Adversarially Robust Learning to Non-Robust PAC Learning

October 22, 2020, by Omar Montasser et al.

We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using only black-box access to a non-robust learner. We give a reduction that can robustly learn any hypothesis class 𝒞 using any non-robust learner 𝒜 for 𝒞. The number of calls to 𝒜 depends logarithmically on the number of allowed adversarial perturbations per example, and we give a lower bound showing this is unavoidable.
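The abstract describes the reduction only at a high level. The sketch below is a toy illustration of the general black-box pattern it alludes to — augment the sample with perturbed points the current predictor gets wrong, re-invoke the non-robust learner, and aggregate by majority vote — not the paper's actual algorithm or its guarantees. The names `robust_reduction` and `erm_learner`, and the 1-D threshold hypothesis class, are illustrative assumptions, not from the paper.

```python
def majority(hyps, x):
    """Majority vote over learned hypotheses (ties broken toward label 1)."""
    return 1 if 2 * sum(h(x) for h in hyps) >= len(hyps) else 0

def robust_reduction(data, perturb, learner, rounds=20):
    """Toy black-box reduction: repeatedly call the non-robust learner on
    the sample augmented with perturbed points the current ensemble
    misclassifies, then predict with the ensemble's majority vote."""
    hyps = [learner(data)]
    for _ in range(rounds):
        aug, mistakes = list(data), False
        for x, y in data:
            for z in perturb(x):
                if majority(hyps, z) != y:
                    aug.append((z, y))   # push the learner to fix this point
                    mistakes = True
        if not mistakes:                 # ensemble is robust on the sample
            break
        hyps.append(learner(aug))
    return lambda x: majority(hyps, x)

# A concrete non-robust learner: ERM over 1-D threshold classifiers.
def erm_learner(sample):
    candidates = sorted({x for x, _ in sample})
    errors = lambda t: sum((1 if x >= t else 0) != y for x, y in sample)
    t_best = min(candidates, key=errors)
    return lambda x, t=t_best: 1 if x >= t else 0

data = [(0, 0), (2, 0), (8, 1), (10, 1)]   # labels: 1 iff x >= 5
perturb = lambda x: [x - 1, x, x + 1]      # allowed perturbations per example
f = robust_reduction(data, perturb, erm_learner)
```

Here a single ERM call returns a threshold that is correct on the sample but fails on a perturbed point (e.g. 8 shifted to 7); the augmented re-call plus majority vote repairs this, which is the black-box flavor of the reduction.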


