On the Limitations of Stochastic Pre-processing Defenses

06/19/2022
by Yue Gao, et al.

Defending against adversarial examples remains an open problem. A common belief is that randomness at inference time increases the cost of finding adversarial inputs. An example of such a defense is to apply a random transformation to inputs before feeding them to the model. In this paper, we empirically and theoretically investigate such stochastic pre-processing defenses and demonstrate that they are flawed. First, we show that most stochastic defenses are weaker than previously thought: they lack sufficient randomness to withstand even standard attacks like projected gradient descent (PGD). This casts doubt on a long-held assumption that stochastic defenses invalidate attacks designed to evade deterministic defenses and force attackers to integrate the Expectation over Transformation (EOT) concept. Second, we show that stochastic defenses confront a trade-off between adversarial robustness and model invariance: they become less effective as the defended model acquires more invariance to their randomization. Future work will need to decouple these two effects. Our code is available in the supplementary material.
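To make the two attack concepts above concrete, here is a minimal PyTorch sketch of a stochastic pre-processing defense and a PGD attack that applies EOT by averaging gradients over several draws of the defense's randomness. The wrapper class, the additive-Gaussian-noise transform, and all hyperparameters are illustrative assumptions for exposition, not the paper's exact setup.

```python
import torch
import torch.nn as nn


class RandomizedDefense(nn.Module):
    """Wraps a classifier with a random input transformation at inference.

    The transform here (additive Gaussian noise) is a hypothetical stand-in
    for the random pre-processing defenses discussed in the paper.
    """

    def __init__(self, model):
        super().__init__()
        self.model = model

    def transform(self, x):
        # Fresh randomness is drawn on every forward pass.
        return x + 0.1 * torch.randn_like(x)

    def forward(self, x):
        return self.model(self.transform(x))


def eot_pgd(defense, x, y, eps=8 / 255, alpha=2 / 255, steps=40, eot_samples=10):
    """L-infinity PGD with Expectation over Transformation (EOT).

    Each step averages the loss gradient over `eot_samples` independent
    draws of the defense's randomness before taking a signed step.
    """
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x_adv)
        for _ in range(eot_samples):
            loss = loss_fn(defense(x_adv), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
        # sign() makes dividing by eot_samples unnecessary.
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid image
        x_adv = x_adv.detach()
    return x_adv


# Usage (hypothetical classifier and data):
#   defense = RandomizedDefense(my_classifier)
#   x_adv = eot_pgd(defense, images, labels)
```

Setting eot_samples=1 recovers standard PGD; the paper's first finding is that, against many stochastic defenses, even this un-averaged attack already succeeds.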


Related research

06/28/2021 · Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent
Evading adversarial example detection defenses requires finding adversar...

02/27/2023 · Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators
It is becoming increasingly imperative to design robust ML defenses. How...

05/15/2017 · Extending Defensive Distillation
Machine learning is vulnerable to adversarial examples: inputs carefully...

02/05/2018 · Robust Pre-Processing: A Robust Defense Method Against Adversary Attack
Deep learning algorithms and networks are vulnerable to perturbed inputs...

08/20/2019 · Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
Despite achieving remarkable success in various domains, recent studies ...

12/11/2018 · Mix'n'Squeeze: Thwarting Adaptive Adversarial Samples Using Randomized Squeezing
Deep Learning (DL) has been shown to be particularly vulnerable to adver...

09/30/2019 · Defense in Depth: The Basics of Blockade and Delay
Given that individual defenses are rarely sufficient, defense-in-depth i...
