Adaptive Perturbation for Adversarial Attack

11/27/2021
by   Zheng Yuan, et al.

In recent years, with the rapid development of neural networks, the security of deep learning models has received more and more attention, since these models are vulnerable to adversarial examples. Almost all existing gradient-based attack methods apply the sign function to the gradient when generating adversarial examples, in order to meet the perturbation budget under the L_∞ norm. However, we find that the sign function may be improper for generating adversarial examples, since it distorts the exact gradient direction. We propose to remove the sign function and directly use the exact gradient direction, multiplied by a scaling factor, to generate adversarial perturbations, which improves the attack success rate of adversarial examples even with smaller perturbations. Moreover, considering that the best scaling factor varies across images, we propose an adaptive scaling factor generator that seeks an appropriate scaling factor for each image, avoiding the computational cost of searching for the factor manually. Our method can be integrated with almost all existing gradient-based attack methods to further improve their attack success rates. Extensive experiments on the CIFAR10 and ImageNet datasets show that our method exhibits higher transferability and outperforms state-of-the-art methods.
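To make the contrast concrete, below is a minimal sketch (not the authors' released code) comparing the conventional sign-based step with an exact-gradient-direction step scaled by a factor; the toy model, loss, and the scaling-factor value `lam` are illustrative assumptions, and the per-image rescaling by the maximum absolute gradient is one simple way to keep the step within an L_∞ budget.

```python
# Sketch only: sign-based update vs. exact-gradient-direction update with a scaling factor.
import torch
import torch.nn as nn

def sign_step(x, grad, alpha):
    # Conventional L_inf step: only the sign of the gradient is used.
    return x + alpha * grad.sign()

def scaled_grad_step(x, grad, lam, eps=1e-12):
    # Exact-direction step: keep the gradient direction and rescale per image
    # so the largest per-pixel change equals `lam` (an assumed normalization).
    max_abs = grad.abs().flatten(1).max(dim=1).values.view(-1, 1, 1, 1)
    return x + lam * grad / (max_abs + eps)

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
    x = torch.rand(4, 3, 32, 32, requires_grad=True)                 # toy CIFAR-like batch
    y = torch.randint(0, 10, (4,))
    loss = nn.CrossEntropyLoss()(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_sign = sign_step(x, grad, alpha=2 / 255).clamp(0, 1)
    x_scaled = scaled_grad_step(x, grad, lam=2 / 255).clamp(0, 1)
```

In the paper's adaptive variant, the scaling factor is produced per image by a generator network rather than fixed as above.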


