Universalization of any adversarial attack using very few test examples

05/18/2020
by Sandesh Kamath, et al.

Deep learning models are known to be vulnerable not only to input-dependent adversarial attacks but also to input-agnostic, or universal, adversarial attacks. Moosavi-Dezfooli et al. <cit.> construct a universal adversarial attack on a given model by examining a large number of training data points and the geometry of the decision boundary near them. Subsequent work <cit.> constructs universal attacks using only test examples and intermediate layers of the given model. In this paper, we propose a simple universalization technique that takes any input-dependent adversarial attack and builds a universal attack from only a handful of adversarial test examples. We do not require details of the given model, and the computational overhead of universalization is negligible. We justify our technique theoretically through a spectral property common to many input-dependent adversarial perturbations, e.g., gradients, the Fast Gradient Sign Method (FGSM), and DeepFool. Using matrix concentration inequalities and spectral perturbation bounds, we show that the top singular vector of input-dependent adversarial directions on a small test sample yields an effective and simple universal adversarial attack. For VGG16 and VGG19 models trained on ImageNet, our simple universalization of Gradient, FGSM, and DeepFool perturbations using a test sample of 64 images gives fooling rates comparable to state-of-the-art universal attacks <cit.> at reasonable perturbation norms.
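The construction described in the abstract can be sketched directly. The snippet below is a minimal illustration, not the authors' implementation: it computes FGSM directions for a small test sample, stacks them into a matrix, and returns the top right singular vector rescaled to a chosen norm as the universal perturbation. The model, data loader, and norm budget eps are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_direction(model, x, y):
    """Sign-of-gradient (FGSM) direction for a batch of labelled inputs."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return x.grad.detach().sign()

def universal_perturbation(model, test_loader, n_samples=64, eps=10.0, device="cpu"):
    """Universalize an input-dependent attack: stack per-example attack
    directions and return the top singular vector, rescaled to norm eps."""
    model.eval()
    rows, collected = [], 0
    for x, y in test_loader:
        x, y = x.to(device), y.to(device)
        d = fgsm_direction(model, x, y)        # any per-example attack could be used here
        rows.append(d.flatten(start_dim=1))    # one flattened direction per example
        collected += x.shape[0]
        if collected >= n_samples:
            break
    M = torch.cat(rows, dim=0)[:n_samples]     # shape: (n_samples, input_dim)
    # Top right singular vector of the matrix of adversarial directions.
    _, _, Vh = torch.linalg.svd(M, full_matrices=False)
    v = Vh[0]
    # Rescale to the chosen L2 budget; both +v and -v are worth evaluating,
    # since the sign of a singular vector is arbitrary.
    return eps * v / v.norm()
```

Under these assumptions, the returned vector can be reshaped to the input shape and added to every test image; the fooling rate is then the fraction of predictions that change under this single perturbation.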

Related research

10/15/2020 · Generalizing Universal Adversarial Attacks Beyond Additive Perturbations
The previous study has shown that universal adversarial attacks can fool...

01/13/2021 · Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series
Deep learning based models are vulnerable to adversarial attacks. These ...

06/08/2020 · On Universalized Adversarial and Invariant Perturbations
Convolutional neural networks or standard CNNs (StdCNNs) are translation...

10/09/2018 · The Adversarial Attack and Detection under the Fisher Information Metric
Many deep learning models are vulnerable to the adversarial attack, i.e....

04/07/2021 · Universal Spectral Adversarial Attacks for Deformable Shapes
Machine learning models are known to be vulnerable to adversarial attack...

06/12/2020 · D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack
We propose a novel technique that can generate natural-looking adversari...

06/14/2021 · Now You See It, Now You Don't: Adversarial Vulnerabilities in Computational Pathology
Deep learning models are routinely employed in computational pathology (...
