Black-box Smoothing: A Provable Defense for Pretrained Classifiers

03/04/2020
by   Hadi Salman, et al.

We present a method for provably defending any pretrained image classifier against ℓ_p adversarial attacks. By prepending a custom-trained denoiser to any off-the-shelf image classifier and applying randomized smoothing, we effectively create a new classifier that is guaranteed to be ℓ_p-robust to adversarial examples, without modifying the pretrained classifier. The approach applies both when we have full access to the pretrained classifier and when we only have query access. We refer to this defense as black-box smoothing, and we demonstrate its effectiveness through extensive experiments on ImageNet and CIFAR-10. Finally, we use our method to provably defend the Azure, Google, AWS, and Clarifai image classification APIs. Our code replicating all the experiments in the paper can be found at https://github.com/microsoft/blackbox-smoothing .
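In essence, the defense wraps the pretrained classifier with a denoiser and then applies randomized smoothing: it draws many Gaussian-noised copies of the input, denoises each copy, queries the unmodified pretrained classifier, and returns the majority-vote class. The following is a minimal sketch of that prediction step only (not the authors' released code), assuming PyTorch modules; the names `denoiser`, `classifier`, `sigma`, and `num_samples` are illustrative placeholders, and the robustness-certification step of randomized smoothing is omitted.

```python
import torch

def smoothed_predict(denoiser, classifier, x, sigma=0.25,
                     num_samples=100, num_classes=1000):
    """Classify x by majority vote over Gaussian-noised, denoised copies.

    denoiser, classifier: callables (e.g. torch.nn.Modules) mapping image
    batches to image batches / class logits. x: one image tensor (C, H, W).
    """
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(num_samples):
            # Perturb the input with isotropic Gaussian noise of std sigma.
            noisy = x.unsqueeze(0) + sigma * torch.randn_like(x).unsqueeze(0)
            # Remove the injected noise before querying the pretrained model.
            denoised = denoiser(noisy)
            # Query the off-the-shelf classifier (full or query-only access).
            logits = classifier(denoised)
            counts[logits.argmax(dim=1)] += 1
    # Majority vote over the noisy samples is the smoothed prediction.
    return counts.argmax().item()
```

In practice the noisy copies would be evaluated in batches rather than one at a time, and the vote counts would also feed the statistical test that yields the certified ℓ_p radius.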

