GreedyFool: Distortion-Aware Sparse Adversarial Attack
Modern deep neural networks (DNNs) are vulnerable to adversarial samples. Sparse adversarial samples are a special branch of adversarial samples that can fool the target model by perturbing only a few pixels. The existence of sparse adversarial attacks shows that DNNs are far more vulnerable than previously believed, and offers a new angle for analyzing DNNs. However, current sparse adversarial attack methods still fall short on both sparsity and invisibility. In this paper, we propose a novel two-stage, distortion-aware, greedy-based method dubbed "GreedyFool". Specifically, it first selects the most effective candidate positions to modify by considering both the gradient (for adversarial effect) and the distortion map (for invisibility), then drops the less important points in a reduce stage. Experiments demonstrate that, compared with the state-of-the-art method, we need to modify 3x fewer pixels under the same sparse perturbation setting. For targeted attacks, the success rate of our method is 9.96% higher than the state-of-the-art method under the same pixel budget. Code can be found at https://github.com/LightDXY/GreedyFool.
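The two-stage procedure described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the real GreedyFool derives the distortion map from a trained network and verifies adversariality with model queries, whereas here `greedyfool_select`, the scoring rule, and the `is_adversarial` callback are hypothetical stand-ins.

```python
import numpy as np

def greedyfool_select(grad, distortion_map, k):
    """Stage 1 (sketch): greedily pick the k most effective pixel
    positions, scoring each by gradient magnitude (adversarial effect)
    down-weighted by the distortion map (invisibility).
    grad and distortion_map are 2-D arrays of the same shape."""
    score = np.abs(grad) / (distortion_map + 1e-8)
    top = np.argsort(score.ravel())[::-1][:k]          # highest scores first
    return [tuple(np.unravel_index(i, grad.shape)) for i in top]

def reduce_stage(positions, importance, is_adversarial):
    """Stage 2 (sketch): try dropping positions, least important first,
    keeping a drop only if the perturbation still fools the model
    (here delegated to the caller-supplied is_adversarial predicate)."""
    pruned = list(positions)
    for p in sorted(positions, key=lambda q: importance[q]):
        trial = [q for q in pruned if q != p]
        if trial and is_adversarial(trial):
            pruned = trial
    return pruned
```

In this toy form, stage 1 is a single ranking pass; the paper's method instead adds positions greedily and re-evaluates the gradient, which is what makes the attack distortion-aware at every step rather than only at initialization.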