Black-box Smoothing: A Provable Defense for Pretrained Classifiers

03/04/2020 ∙ by Hadi Salman, et al.

We present a method for provably defending any pretrained image classifier against ℓ_p adversarial attacks. By prepending a custom-trained denoiser to any off-the-shelf image classifier and applying randomized smoothing, we effectively create a new classifier that is guaranteed to be ℓ_p-robust to adversarial examples, without modifying the pretrained classifier. The approach applies both when we have full access to the pretrained classifier and when we only have query access. We refer to this defense as black-box smoothing, and we demonstrate its effectiveness through extensive experiments on ImageNet and CIFAR-10. Finally, we use our method to provably defend the Azure, Google, AWS, and Clarifai image classification APIs. Our code replicating all the experiments in the paper can be found at https://github.com/microsoft/blackbox-smoothing .
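The prediction step of this construction is straightforward to sketch: add Gaussian noise to the input, denoise each noisy copy, feed the result to the unmodified pretrained classifier, and return the majority-vote class. Below is a minimal, hypothetical PyTorch sketch of that step, not the authors' implementation; `denoiser`, `classifier`, and `smoothed_predict` are placeholder names, and the statistical certification procedure of randomized smoothing (which bounds the certified ℓ_2 radius) is omitted for brevity.

```python
# Minimal sketch (assumed interfaces, not the paper's released code) of
# prediction with a denoiser prepended to a pretrained classifier.
# `denoiser` and `classifier` are any torch.nn.Module-like callables; in the
# query-access setting, `classifier` could instead wrap an API call that
# returns class scores.

import torch

def smoothed_predict(denoiser, classifier, x, sigma=0.25, n_samples=100, batch=25):
    """Majority-vote prediction of the denoiser-prepended smoothed classifier.

    x: image tensor of shape (C, H, W) with values in [0, 1].
    sigma: standard deviation of the isotropic Gaussian noise.
    """
    counts = None
    remaining = n_samples
    with torch.no_grad():
        while remaining > 0:
            k = min(batch, remaining)
            remaining -= k
            # Sample k noisy copies of the input.
            noisy = x.unsqueeze(0).repeat(k, 1, 1, 1) + sigma * torch.randn(k, *x.shape)
            # Denoise, then query the unmodified pretrained classifier.
            logits = classifier(denoiser(noisy))
            preds = logits.argmax(dim=1)
            binc = torch.bincount(preds, minlength=logits.shape[1])
            counts = binc if counts is None else counts + binc
    # Return the most frequently predicted class.
    return int(counts.argmax())
```

In the full method, these class counts also feed a hypothesis test that either abstains or certifies a robustness radius, following the standard randomized smoothing certification procedure.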



Code Repositories

denoised-smoothing — Provably defending pretrained classifiers, including the Azure, Google, AWS, and Clarifai APIs.

blackbox-smoothing — Provably defending pretrained classifiers, including the Azure, Google, AWS, and Clarifai APIs.

Denoised-Smoothing-TF — Minimal implementation of Denoised Smoothing (https://arxiv.org/abs/2003.01908) in TensorFlow.

breaking-poisoned-classifier — Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken".