Breaking Transferability of Adversarial Samples with Randomness

05/11/2018
by Yan Zhou, et al.

We investigate the role of transferability of adversarial attacks in the observed vulnerabilities of Deep Neural Networks (DNNs). We demonstrate that introducing randomness into DNN models is sufficient to defeat adversarial attacks, given that the adversary does not have an unlimited attack budget. Instead of making one specific DNN model robust to perfect-knowledge attacks (a.k.a. white-box attacks), creating randomness within an army of DNNs completely eliminates the possibility of perfect knowledge acquisition, resulting in an ensemble that is significantly more robust against the strongest form of attacks. We also show that when the adversary has an unlimited budget of data perturbation, all defensive techniques eventually break down as the budget increases. It is therefore important to understand the game saddle point beyond which the adversary would not pursue this endeavor further. Furthermore, we explore the relationship between attack severity and decision boundary robustness in the version space. We empirically demonstrate that by simply adding small Gaussian random noise to the learned weights, a DNN model can increase its resilience to adversarial attacks by as much as 74.2%. More importantly, we show that by randomly activating/revealing a model from a pool of pre-trained DNNs at each query request, we can put a tremendous strain on the adversary's attack strategies. We compare our randomization techniques to the Ensemble Adversarial Training technique and show that our randomization techniques are superior under different attack budget constraints.
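The abstract describes two randomization defenses: perturbing a trained model's weights with small Gaussian noise, and answering each query with a model drawn at random from a pool of independently pre-trained DNNs. Below is a minimal PyTorch sketch of both ideas; the class names, the noise scale `sigma`, and the deep-copy-per-query strategy are illustrative assumptions, not the authors' reference implementation.

```python
import copy
import random

import torch
import torch.nn as nn


class GaussianWeightNoise(nn.Module):
    """Serve each query from a copy of the model whose learned weights
    are perturbed with small Gaussian noise (first randomization idea)."""

    def __init__(self, model: nn.Module, sigma: float = 0.01):
        super().__init__()
        self.model = model
        self.sigma = sigma  # noise scale; an assumed, tunable hyperparameter

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Copy the model and add N(0, sigma^2) noise to every parameter,
        # so each query sees a slightly different decision boundary.
        noisy = copy.deepcopy(self.model)
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * self.sigma)
        return noisy(x)


class RandomModelPool(nn.Module):
    """Answer each query with a model drawn uniformly at random from a
    pool of pre-trained DNNs (second randomization idea)."""

    def __init__(self, models):
        super().__init__()
        self.pool = nn.ModuleList(models)

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The adversary never knows which member of the pool answered,
        # breaking the perfect-knowledge assumption of white-box attacks.
        return random.choice(list(self.pool))(x)
```

Either wrapper changes only inference behavior; whether the per-query deep copy in the first sketch is affordable depends on model size, and a practical deployment might instead cache several pre-perturbed copies.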
