Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout

06/03/2021
by Pengfei Xie, et al.

Deep neural networks (DNNs) are vulnerable to adversarial examples, and black-box attacks are the most threatening in practice. Current black-box attacks mainly rely on gradient-based iterative methods, which typically couple the iteration step size, the number of iterations, and the maximum perturbation. In this paper, we propose a new gradient iteration framework that redefines the relationship among these three quantities. Under this framework, we readily improve the attack success rate of DI-TI-MIM. In addition, we propose a gradient iterative attack based on input dropout, which combines well with our framework, and we further extend it to a multi-dropout-rate version. Experimental results show that our best method achieves an average attack success rate of 96.2% against defense models, higher than state-of-the-art gradient-based attacks.
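The abstract does not spell out the attack loop, but the core idea of the input-dropout attack, applying dropout to the input before each gradient step of a momentum-based iterative attack, can be sketched as follows. This is a minimal PyTorch sketch assuming an MI-FGSM-style baseline; the function name input_dropout_attack, all hyperparameter values, and the projection details are illustrative assumptions, not the paper's exact method.

import torch
import torch.nn.functional as F

def input_dropout_attack(model, x, y, eps=16/255, num_iter=10,
                         drop_rate=0.1, decay=1.0):
    # Step size conventionally tied to eps and num_iter; the paper's
    # framework argues for relaxing exactly this coupling.
    alpha = eps / num_iter
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)
    for _ in range(num_iter):
        x_adv.requires_grad_(True)
        # Input dropout: randomly zero input pixels before the forward
        # pass, so the perturbation does not overfit the surrogate model.
        x_drop = F.dropout(x_adv, p=drop_rate, training=True)
        loss = F.cross_entropy(model(x_drop), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum accumulation with a normalized gradient, as in MI-FGSM.
        momentum = decay * momentum + grad / grad.abs().mean(
            dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * momentum.sign()
        # Project back into the L-infinity eps-ball and valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

A multi-dropout-rate variant, as the abstract describes, would average the gradients obtained under several drop_rate values before the momentum update.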

