One Sparse Perturbation to Fool them All, almost Always!

04/24/2020
by Arka Ghosh, et al.

Constructing adversarial perturbations for deep neural networks is an important direction of research. Crafting image-dependent adversarial perturbations using white-box feedback has hitherto been the norm for such attacks. However, black-box attacks are far more practical for real-world applications. Universal perturbations, which are applicable across multiple images, are gaining popularity due to their innate generalizability. There have also been efforts to restrict the perturbations to only a few pixels in the image; this helps retain visual similarity with the original images, making such attacks difficult to detect. This paper takes an important step towards combining all these directions of research. We propose the DEceit algorithm, which constructs effective universal pixel-restricted perturbations using only black-box feedback from the target network. We conduct empirical investigations on state-of-the-art deep neural classifiers using the ImageNet validation set, varying the number of perturbed pixels from a meagre 10 to as high as all pixels in the image. We find that perturbing only about 10 pixels in an image using DEceit achieves a commendable and highly transferable fooling rate while retaining visual quality. We further demonstrate that DEceit can also be successfully applied to image-dependent attacks. In both sets of experiments, DEceit outperforms several state-of-the-art methods.
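To make the setting concrete, below is a minimal sketch of a black-box, pixel-restricted universal attack in the spirit described above. The capitalised "DE" in DEceit suggests a differential-evolution-style search, so the sketch uses a plain DE/rand/1 loop; the `predict` callable, the (row, col, R, G, B) pixel encoding, and all hyperparameters are illustrative assumptions rather than the authors' exact method. Images are assumed to be float arrays of shape (N, H, W, 3) in [0, 1], with `labels` the model's current predictions on the clean images.

```python
# Illustrative sketch only: a differential-evolution search for a universal
# perturbation confined to K pixels, driven purely by black-box label queries.
# Encoding, hyperparameters and helper names are assumptions, not the authors'
# exact DEceit implementation.
import numpy as np

H, W, K = 224, 224, 10                 # image size, number of perturbed pixels
POP, GENS, F, CR = 40, 100, 0.5, 0.9   # DE population, generations, scale, crossover rate


def apply_perturbation(images, candidate):
    """Overwrite K pixels of every image; candidate encodes (row, col, r, g, b) per pixel."""
    pert = images.copy()
    params = candidate.reshape(K, 5)
    rows = np.clip(params[:, 0], 0, H - 1).astype(int)
    cols = np.clip(params[:, 1], 0, W - 1).astype(int)
    rgb = np.clip(params[:, 2:], 0.0, 1.0)
    pert[:, rows, cols, :] = rgb        # the same sparse change applied to every image
    return pert


def fooling_rate(predict, images, labels, candidate):
    """Fraction of images whose predicted label flips (black-box feedback only)."""
    return np.mean(predict(apply_perturbation(images, candidate)) != labels)


def deceit_like_attack(predict, images, labels, seed=0):
    """predict(batch) -> predicted labels is the only access to the target model."""
    rng = np.random.default_rng(seed)
    lo = np.tile([0.0, 0.0, 0.0, 0.0, 0.0], K)
    hi = np.tile([H - 1.0, W - 1.0, 1.0, 1.0, 1.0], K)
    pop = rng.uniform(lo, hi, size=(POP, K * 5))
    fit = np.array([fooling_rate(predict, images, labels, c) for c in pop])
    for _ in range(GENS):
        for i in range(POP):
            a, b, c = pop[rng.choice(POP, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)                 # DE/rand/1 mutation
            trial = np.where(rng.random(K * 5) < CR, mutant, pop[i])  # binomial crossover
            f_trial = fooling_rate(predict, images, labels, trial)
            if f_trial > fit[i]:                                      # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmax(fit)
    return pop[best], fit[best]
```

The point the sketch illustrates is that the optimizer only ever sees predicted labels, never gradients, and that a single K-pixel perturbation is scored by the fraction of images whose label flips; shrinking K trades fooling rate for sparsity and visual quality, as in the 10-pixel setting discussed above.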

