Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

12/20/2019
by   Jinyuan Jia, et al.

It is well known that classifiers are vulnerable to adversarial perturbations, and various certified robustness results have been derived to defend against them. However, existing certified robustness results are limited to top-1 predictions, whereas in many real-world applications top-k predictions are more relevant. In this work, we derive certified robustness for top-k predictions. Our certified robustness is based on randomized smoothing, which turns any classifier into a new classifier by adding noise to an input example. We adopt randomized smoothing because it scales to large neural networks and applies to any classifier. We derive a tight robustness bound in ℓ_2-norm for top-k predictions when using randomized smoothing with Gaussian noise, and we find that generalizing certified robustness from top-1 to top-k predictions poses significant technical challenges. We also empirically evaluate our method on CIFAR10 and ImageNet. For example, our method obtains an ImageNet classifier with a certified top-5 accuracy of 62.8% when the ℓ_2-norms of the adversarial perturbations are less than 0.5 (=127/255). Our code is publicly available at: <https://github.com/jjy1994/Certify_Topk>.
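To make the idea concrete, here is a minimal sketch of how a smoothed classifier produces top-k predictions via Monte Carlo sampling with Gaussian noise. The toy base classifier `base_predict`, the class count, and all parameter values are illustrative assumptions, not the authors' implementation; certification of the resulting labels (the paper's actual contribution) is omitted.

```python
# Minimal sketch of randomized smoothing for top-k prediction.
# All names and parameters here are illustrative assumptions.
import numpy as np

NUM_CLASSES = 10

def base_predict(x):
    # Toy base classifier: returns the index of the largest coordinate
    # among the first NUM_CLASSES entries (stands in for a neural net).
    return int(np.argmax(x[:NUM_CLASSES]))

def smoothed_topk(x, k=5, sigma=0.5, n_samples=500, rng=None):
    """Estimate the top-k labels of the smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), where eps ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(rng)
    counts = np.zeros(NUM_CLASSES, dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        counts[base_predict(noisy)] += 1
    # Labels sorted by how often the base classifier predicted them
    # under noise; the k most frequent are the smoothed top-k set.
    return list(np.argsort(counts)[::-1][:k])

x = np.zeros(32)
x[3] = 5.0  # the base classifier strongly prefers class 3 on this input
print(smoothed_topk(x, k=5, sigma=0.5, n_samples=500, rng=0))
```

In the paper's setting, the sampled label frequencies would additionally be turned into confidence bounds (e.g., via Clopper-Pearson intervals) to certify that a given label stays in the top-k for all perturbations within an ℓ_2 radius.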


