
Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout

06/03/2021
by Pengfei Xie, et al.

Deep neural networks (DNNs) are vulnerable to adversarial examples, and black-box attacks are the most threatening kind. Current black-box attacks mainly rely on gradient-based iterative methods, which typically constrain the relationship between the iteration step size, the number of iterations, and the maximum perturbation. In this paper, we propose a new gradient iteration framework that redefines the relationship among these three quantities. Under this framework, we readily improve the attack success rate of DI-TI-MIM. In addition, we propose a gradient iterative attack method based on input dropout, which combines well with our framework, and we further extend it to a multi-dropout-rate version. Experimental results show that our best method achieves an average attack success rate of 96.2% against defense models, higher than state-of-the-art gradient-based attacks.
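To make the idea concrete, below is a minimal PyTorch sketch of a momentum iterative attack (MI-FGSM style) with input dropout applied at each step. The function name, the dropout rate, the momentum factor mu, and the conventional coupling alpha = eps / num_iter are illustrative assumptions, not the paper's exact formulation; the paper's framework in fact relaxes that coupling between step size, iteration count, and maximum perturbation.

```python
import torch
import torch.nn.functional as F

def input_dropout_attack(model, x, y, eps=16/255, num_iter=10, mu=1.0, drop_rate=0.1):
    """Sketch of a momentum iterative attack with input dropout.

    Assumes `model` is a PyTorch classifier returning logits and `x` is a
    batch of images in [0, 1]. The step-size rule alpha = eps / num_iter is
    the conventional coupling that the paper's framework redefines.
    """
    alpha = eps / num_iter           # conventional step-size coupling (illustrative)
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)          # accumulated momentum
    for _ in range(num_iter):
        x_adv.requires_grad_(True)
        # Input dropout: randomly zero a fraction of input pixels each step;
        # multiplying by (1 - drop_rate) undoes F.dropout's 1/(1-p) rescaling.
        x_drop = F.dropout(x_adv, p=drop_rate) * (1 - drop_rate)
        loss = F.cross_entropy(model(x_drop), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum accumulation with L1-normalized gradient, as in MI-FGSM.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = (x_adv + alpha * g.sign()).detach()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

The multi-dropout-rate variant mentioned in the abstract could, under the same assumptions, average the gradients obtained at several values of drop_rate within each iteration before the momentum update.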

Related Research
12/01/2020

Improving the Transferability of Adversarial Examples with the Adam Optimizer

Convolutional neural networks have outperformed humans in image recognit...
08/06/2019

Random Directional Attack for Fooling Deep Neural Networks

Deep neural networks (DNNs) have been widely used in many fields such as...
07/14/2020

Patch-wise Attack for Fooling Deep Neural Network

By adding human-imperceptible noise to clean images, the resultant adver...
12/31/2020

Patch-wise++ Perturbation for Adversarial Targeted Attacks

Although great progress has been made on adversarial attacks for deep ne...
09/13/2018

Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks

Deep neural networks (DNNs) are known vulnerable to adversarial attacks....
05/25/2021

OFEI: A Semi-black-box Android Adversarial Sample Attack Framework Against DLaaS

With the growing popularity of Android devices, Android malware is serio...