Analyzing Accuracy Loss in Randomized Smoothing Defenses

03/03/2020
by Yue Gao, et al.

Recent advances in machine learning (ML) algorithms, especially deep neural networks (DNNs), have demonstrated remarkable success (sometimes exceeding human-level performance) on several tasks, including face and speech recognition. However, ML algorithms are vulnerable to adversarial attacks, such as test-time, training-time, and backdoor attacks. In test-time attacks, an adversary crafts adversarial examples: perturbations, imperceptible to humans, which, when added to an input example, force a machine learning model to misclassify it. Adversarial examples are a concern when deploying ML algorithms in critical contexts, such as information security and autonomous driving. Researchers have responded with a plethora of defenses. One promising defense is randomized smoothing, in which a classifier's prediction is smoothed by adding random noise to the input example we wish to classify. In this paper, we theoretically and empirically explore randomized smoothing. We investigate the effect of randomized smoothing on the feasible hypothesis space and show that, for some noise levels, the set of feasible hypotheses shrinks due to smoothing, giving one reason why natural accuracy drops after smoothing. To perform our analysis, we introduce a model for randomized smoothing that abstracts away specifics, such as the exact distribution of the noise. We complement our theoretical results with extensive experiments.
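To make the smoothing step concrete, the sketch below shows a minimal Monte Carlo version of a smoothed prediction. It assumes a hypothetical base_classifier callable that maps an input array to a class label, and it uses isotropic Gaussian noise purely for illustration; the paper's model deliberately abstracts away the exact noise distribution.

    import numpy as np

    def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, rng=None):
        # Monte Carlo estimate of a randomized-smoothing prediction.
        # base_classifier: hypothetical callable mapping an input array to a class label.
        # x:               the input example to classify.
        # sigma:           standard deviation of the (illustrative) Gaussian noise.
        # n_samples:       number of noisy copies to vote over.
        rng = rng or np.random.default_rng()
        votes = {}
        for _ in range(n_samples):
            noisy_x = x + rng.normal(0.0, sigma, size=x.shape)  # perturb the input
            label = base_classifier(noisy_x)                    # classify the noisy copy
            votes[label] = votes.get(label, 0) + 1
        # The smoothed classifier returns the most frequently predicted class.
        return max(votes, key=votes.get)

The noise level sigma governs the trade-off the paper studies: larger sigma means stronger smoothing, but, as the analysis shows, it can shrink the feasible hypothesis space and lower natural accuracy.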
