Analyzing Accuracy Loss in Randomized Smoothing Defenses

by Yue Gao, et al.

Recent advances in machine learning (ML) algorithms, especially deep neural networks (DNNs), have demonstrated remarkable success (sometimes exceeding human-level performance) on several tasks, including face and speech recognition. However, ML algorithms are vulnerable to adversarial attacks, such as test-time, training-time, and backdoor attacks. In test-time attacks, an adversary crafts adversarial examples: perturbations, imperceptible to humans, that force a machine learning model to misclassify an input example when added to it. Adversarial examples are a concern when deploying ML algorithms in critical contexts, such as information security and autonomous driving. Researchers have responded with a plethora of defenses. One promising defense is randomized smoothing, in which a classifier's prediction is smoothed by adding random noise to the input example we wish to classify. In this paper, we theoretically and empirically explore randomized smoothing. We investigate the effect of randomized smoothing on the feasible hypothesis space, and show that for some noise levels the set of feasible hypotheses shrinks due to smoothing, giving one reason why natural accuracy drops after smoothing. To perform our analysis, we introduce a model for randomized smoothing that abstracts away specifics, such as the exact distribution of the noise. We complement our theoretical results with extensive experiments.
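The smoothing operation described above can be sketched as a majority vote of the base classifier over noisy copies of the input. The snippet below is a minimal illustration of this idea, not the paper's implementation; the function and parameter names (`smoothed_predict`, `sigma`, `n_samples`) are assumptions, and the base classifier is a toy.

```python
import numpy as np

def smoothed_predict(base_classify, x, sigma=0.5, n_samples=1000, rng=None):
    """Predict with the smoothed classifier: majority vote of the base
    classifier over inputs perturbed with isotropic Gaussian noise.

    base_classify: callable mapping an input array to a class label.
    sigma: standard deviation of the Gaussian noise added to the input.
    """
    rng = np.random.default_rng(rng)
    counts = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(scale=sigma, size=x.shape)  # x + N(0, sigma^2 I)
        label = base_classify(noisy)
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)  # most frequent label wins

# Toy base classifier: threshold on the first coordinate.
f = lambda v: int(v[0] > 0.0)
print(smoothed_predict(f, np.array([2.0, 0.0]), sigma=0.5, n_samples=200, rng=0))
```

Since the input's first coordinate (2.0) sits four standard deviations from the decision boundary, the vote returns 1 with overwhelming probability; larger `sigma` values make flips more likely, which is the accuracy-robustness tension the abstract analyzes.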

