Black-box Certification and Learning under Adversarial Perturbations

06/30/2020
by Hassan Ashtiani, et al.

We formally study the problem of classification under adversarial perturbations, both from the learner's perspective and from the viewpoint of a third party that aims to certify the robustness of a given black-box classifier. We analyze a PAC-type framework of semi-supervised learning and identify possibility and impossibility results for the proper learning of VC classes in this setting. We further introduce and study a new setting of black-box certification under a limited query budget, and analyze it for various classes of predictors and types of perturbation. We also consider the viewpoint of a black-box adversary that aims to find adversarial examples, showing that the existence of an adversary with polynomial query complexity implies the existence of a robust learner with small sample complexity.
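To make the query-budget idea concrete, here is a minimal toy sketch of empirical black-box certification: spend a fixed number of queries probing random points in an l-infinity ball around an input and report whether the predicted label ever changes. This is our own illustration, not the paper's certification procedure; the function name, the uniform sampling scheme, and the toy classifier are all assumptions.

```python
import random

def certify_constant_label(classifier, x, radius, query_budget, seed=0):
    """Toy sketch (not the paper's method): probe random points in the
    l-infinity ball of the given radius around x, using at most
    `query_budget` black-box queries, and report whether every queried
    point received the same label as x. A True answer is only empirical
    evidence of robustness, since the budget limits how much of the
    ball is explored."""
    rng = random.Random(seed)
    base_label = classifier(x)          # one query for the clean point
    for _ in range(query_budget - 1):   # remaining budget on perturbations
        x_pert = [xi + rng.uniform(-radius, radius) for xi in x]
        if classifier(x_pert) != base_label:
            return False                # found an adversarial example
    return True                         # no label change within the budget

# Toy example: a linear threshold classifier on 2-D inputs.
clf = lambda v: int(v[0] + v[1] > 0)
print(certify_constant_label(clf, [2.0, 2.0], radius=0.5, query_budget=100))   # far from the boundary -> True
print(certify_constant_label(clf, [0.05, 0.0], radius=0.5, query_budget=100))  # near the boundary -> False
```

The same loop, read from the adversary's side, is the paper's other viewpoint: an attacker that finds a label flip within a small query budget is exactly a witness against certification.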


Related research

- 10/22/2020 - Reducing Adversarially Robust Learning to Non-Robust PAC Learning
  We study the problem of reducing adversarially robust learning to standa...

- 11/21/2019 - Heuristic Black-box Adversarial Attacks on Video Recognition Models
  We study the problem of attacking video recognition models in the black-...

- 04/14/2022 - Planting Undetectable Backdoors in Machine Learning Models
  Given the computational cost and technical expertise required to train m...

- 11/09/2018 - Universal Decision-Based Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses
  We study the problem of finding a universal (image-agnostic) perturbatio...

- 10/02/2018 - Can Adversarially Robust Learning Leverage Computational Hardness?
  Making learners robust to adversarial perturbation at test time (i.e., e...

- 02/10/2021 - Adversarial Robustness: What fools you makes you stronger
  We prove an exponential separation for the sample complexity between the...
