Black-box Smoothing: A Provable Defense for Pretrained Classifiers

03/04/2020
by Hadi Salman, et al.

We present a method for provably defending any pretrained image classifier against ℓ_p adversarial attacks. By prepending a custom-trained denoiser to any off-the-shelf image classifier and using randomized smoothing, we effectively create a new classifier that is guaranteed to be ℓ_p-robust to adversarial examples, without modifying the pretrained classifier. The approach applies both to the case where we have full access to the pretrained classifier as well as the case where we only have query access. We refer to this defense as black-box smoothing, and we demonstrate its effectiveness through extensive experimentation on ImageNet and CIFAR-10. Finally, we use our method to provably defend the Azure, Google, AWS, and Clarifai image classification APIs. Our code replicating all the experiments in the paper can be found at https://github.com/microsoft/blackbox-smoothing .
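To make the prediction rule concrete, the following is a minimal sketch of the denoise-then-classify majority vote in PyTorch. It assumes classifier and denoiser are arbitrary torch.nn.Module instances; the function name smoothed_predict and its parameters are illustrative rather than the released code's API, and the statistical certification step of randomized smoothing (turning vote counts into a certified radius) is omitted.

import torch

def smoothed_predict(classifier, denoiser, x, sigma=0.25, n_samples=100, batch_size=50):
    # Monte Carlo estimate of the smoothed classifier's prediction at input x
    # (shape [1, C, H, W]): add Gaussian noise, denoise each noisy copy, feed it
    # to the unmodified pretrained classifier, and return the majority-vote class.
    counts = None
    remaining = n_samples
    with torch.no_grad():
        while remaining > 0:
            b = min(batch_size, remaining)
            remaining -= b
            noise = sigma * torch.randn(b, *x.shape[1:], device=x.device)
            noisy = x.repeat(b, 1, 1, 1) + noise
            logits = classifier(denoiser(noisy))  # pretrained classifier stays untouched
            preds = logits.argmax(dim=1)
            votes = torch.bincount(preds, minlength=logits.shape[1])
            counts = votes if counts is None else counts + votes
    return counts.argmax().item()

Because the pretrained model is only ever queried on denoised inputs, the same routine applies whether the classifier's weights are fully accessible or it can only be reached through a prediction API.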

Related Research

04/28/2022 · Randomized Smoothing under Attack: How Good is it in Practice?
Randomized smoothing is a recent and celebrated solution to certify the ...

10/17/2022 · DE-CROP: Data-efficient Certified Robustness for Pretrained Classifiers
Certified defense using randomized smoothing is a popular technique to p...

10/18/2020 · Poisoned classifiers are not only backdoored, they are fundamentally broken
Under a commonly-studied "backdoor" poisoning attack against classificat...

12/16/2019 · Constructing a provably adversarially-robust classifier from a high accuracy one
Modern machine learning models with very high accuracy have been shown t...

11/25/2022 · Invariance-Aware Randomized Smoothing Certificates
Building models that comply with the invariances inherent to different d...

03/27/2022 · How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
The lack of adversarial robustness has been recognized as an important i...

03/29/2021 · Selective Output Smoothing Regularization: Regularize Neural Networks by Softening Output Distributions
In this paper, we propose Selective Output Smoothing Regularization, a n...

Code Repositories

denoised-smoothing
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs

blackbox-smoothing
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs

Denoised-Smoothing-TF
Minimal implementation of Denoised Smoothing (https://arxiv.org/abs/2003.01908) in TensorFlow.

breaking-poisoned-classifier
Code for paper "Poisoned classifiers are not only backdoored, they are fundamentally broken"