Analyzing Accuracy Loss in Randomized Smoothing Defenses

03/03/2020
by Yue Gao, et al.

Recent advances in machine learning (ML) algorithms, especially deep neural networks (DNNs), have demonstrated remarkable success (sometimes exceeding human-level performance) on several tasks, including face and speech recognition. However, ML algorithms are vulnerable to adversarial attacks, such as test-time, training-time, and backdoor attacks. In test-time attacks, an adversary crafts adversarial examples: perturbations imperceptible to humans that, when added to an input example, force a machine learning model to misclassify it. Adversarial examples are a concern when deploying ML algorithms in critical contexts, such as information security and autonomous driving. Researchers have responded with a plethora of defenses. One promising defense is randomized smoothing, in which a classifier's prediction is smoothed by adding random noise to the input example we wish to classify. In this paper, we theoretically and empirically explore randomized smoothing. We investigate its effect on the feasible hypothesis space and show that, for some noise levels, the set of feasible hypotheses shrinks due to smoothing, giving one reason why natural accuracy drops after smoothing. To perform our analysis, we introduce a model for randomized smoothing that abstracts away specifics, such as the exact distribution of the noise. We complement our theoretical results with extensive experiments.
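To make the smoothing mechanism concrete, below is a minimal sketch of Gaussian randomized smoothing at prediction time, assuming a PyTorch-style base classifier that maps inputs to logits. The function name smoothed_predict, the noise level sigma, and the sample count num_samples are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of prediction under Gaussian randomized smoothing.
# The smoothed classifier returns the class most often predicted by the
# base classifier on noisy copies x + eps, with eps ~ N(0, sigma^2 I).
import torch

def smoothed_predict(base_classifier, x, sigma=0.25, num_samples=100, num_classes=10):
    """Predict the class of a single example `x` (no batch dimension)."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(num_samples):
            noisy = x + sigma * torch.randn_like(x)                       # add isotropic Gaussian noise
            pred = base_classifier(noisy.unsqueeze(0)).argmax(dim=1)      # base prediction on the noisy copy
            counts[pred.item()] += 1
    return counts.argmax().item()                                         # majority vote over noisy predictions
```

Larger sigma yields stronger certified robustness but perturbs inputs more, which is one intuition behind the natural-accuracy loss the paper analyzes.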


02/26/2020

On Certifying Robustness against Backdoor Attacks via Randomized Smoothing

Backdoor attack is a severe security threat to deep neural networks (DNN...
11/26/2019

An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense

The safety and robustness of learning-based decision-making systems are ...
02/07/2020

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

Machine learning algorithms are known to be susceptible to data poisonin...
06/03/2022

Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis

Randomized smoothing is the dominant standard for provable defenses agai...
05/26/2019

Enhancing ML Robustness Using Physical-World Constraints

Recent advances in Machine Learning (ML) have demonstrated that neural n...
03/14/2019

A Research Agenda: Dynamic Models to Defend Against Correlated Attacks

In this article I describe a research agenda for securing machine learni...
09/07/2019

On Need for Topology-Aware Generative Models for Manifold-Based Defenses

ML algorithms or models, especially deep neural networks (DNNs), have sh...