
Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis

06/03/2022
by   Raphael Ettedgui, et al.

Randomized smoothing is the dominant standard for provable defenses against adversarial examples. Nevertheless, this method has recently been shown to suffer from significant information-theoretic limitations. In this paper, we argue that these limitations are not intrinsic, but merely a byproduct of current certification methods. We first show that these certificates use too little information about the classifier and are, in particular, blind to the local curvature of the decision boundary. This leads to severely sub-optimal robustness guarantees as the dimension of the problem increases. We then show that it is theoretically possible to bypass this issue by collecting more information about the classifier. More precisely, we show that the optimal certificate can be approximated with arbitrary precision by probing the decision boundary with several noise distributions. Since this process is executed at certification time rather than at test time, it entails no loss in natural accuracy while enhancing the quality of the certificates. This result fosters further research on classifier-specific certification and demonstrates that randomized smoothing is still worth investigating. Although classifier-specific certification may induce more computational cost, we also provide some theoretical insight on how to mitigate it.
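The certificates the abstract critiques consume very little information about the classifier: essentially a single lower bound on the top-class probability under Gaussian noise. A minimal sketch of that standard certificate (the Cohen et al. style l2 radius, sigma times the inverse Gaussian CDF of the probability bound) illustrates why such certificates are blind to the decision boundary's local curvature; the function name and example values here are illustrative, not from the paper:

```python
from statistics import NormalDist

def certified_radius(p_lower: float, sigma: float) -> float:
    """Classifier-agnostic randomized-smoothing certificate.

    If the smoothed classifier assigns the top class probability at
    least p_lower > 1/2 under Gaussian noise N(0, sigma^2 I), the
    prediction is certifiably robust within an l2 ball of radius
    sigma * Phi^{-1}(p_lower).  Note the certificate depends on the
    classifier only through the scalar p_lower -- it uses no geometric
    information about the decision boundary.
    """
    if not 0.5 < p_lower < 1.0:
        return 0.0  # no certificate unless the top class is a strict majority
    return sigma * NormalDist().inv_cdf(p_lower)

# Example: with sigma = 0.5 and a 0.99 lower bound on the top-class
# probability, the certified l2 radius is about 1.16.
radius = certified_radius(0.99, 0.5)
```

Probing with several noise distributions, as the paper proposes, would supply strictly more scalars than the single `p_lower` this formula accepts, which is what allows the certificate to be tightened toward the optimum.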


Related research

06/07/2020: Extensions and limitations of randomized smoothing for robustness guarantees
Randomized smoothing, a method to certify a classifier's decision on an ...

03/03/2020: Analyzing Accuracy Loss in Randomized Smoothing Defenses
Recent advances in machine learning (ML) algorithms, especially deep neu...

07/12/2022: Certified Adversarial Robustness via Anisotropic Randomized Smoothing
Randomized smoothing has achieved great success for certified robustness...

11/28/2020: Deterministic Certification to Adversarial Attacks via Bernstein Polynomial Approximation
Randomized smoothing has established state-of-the-art provable robustnes...

11/26/2019: An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense
The safety and robustness of learning-based decision-making systems are ...

02/07/2020: Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Machine learning algorithms are known to be susceptible to data poisonin...

05/27/2022: (De-)Randomized Smoothing for Decision Stump Ensembles
Tree-based models are used in many high-stakes application domains such ...