Random Directional Attack for Fooling Deep Neural Networks

08/06/2019
by   Wenjian Luo, et al.

Deep neural networks (DNNs) have been widely used in many fields such as image processing and speech recognition; however, they are vulnerable to adversarial examples, and this is a security issue worthy of attention. Because the training process of DNNs reduces the loss by updating the weights along the gradient descent direction, many gradient-based methods attempt to fool the DNN model by adding perturbations in the gradient direction. Unfortunately, as the model is nonlinear in most cases, adding perturbations in the gradient direction does not necessarily increase the loss. Therefore, in this paper we propose a random directional attack (RDA) for generating adversarial examples. Rather than restricting the attack to the gradient direction, RDA searches for the attack direction via hill climbing and uses multiple strategies to avoid the local optima that cause attack failure. Compared with state-of-the-art gradient-based methods, the attack performance of RDA is very competitive. Moreover, RDA can attack without any internal knowledge of the model, and its performance under black-box attacks is similar to that of white-box attacks in most cases, which is difficult to achieve with existing gradient-based attack methods.
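The core idea, searching for an adversarial direction by hill climbing over randomly sampled perturbation directions instead of following the gradient, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' exact RDA algorithm: the function name random_directional_attack, the loss_fn callable, the hyperparameters, and the simple restart heuristic for escaping local optima are all hypothetical placeholders. It only queries the model for loss values and never uses gradients, which is why this style of search also applies in the black-box setting.

import numpy as np

def random_directional_attack(x, loss_fn, epsilon=0.05, step=0.01,
                              n_iters=500, n_restarts=5, rng=None):
    # Hill-climbing sketch: sample random directions and keep a step only
    # if it increases the loss reported by the (possibly black-box) model.
    # x: clean input as a numpy array; loss_fn: input -> scalar loss.
    rng = np.random.default_rng() if rng is None else rng
    best_adv, best_loss = x.copy(), loss_fn(x)

    for _ in range(n_restarts):                   # restarts help escape local optima
        adv, cur_loss = x.copy(), loss_fn(x)
        for _ in range(n_iters):
            direction = rng.standard_normal(x.shape)
            direction /= np.linalg.norm(direction) + 1e-12
            candidate = np.clip(adv + step * direction,
                                x - epsilon, x + epsilon)  # stay within the L_inf budget
            cand_loss = loss_fn(candidate)
            if cand_loss > cur_loss:              # accept only improving moves
                adv, cur_loss = candidate, cand_loss
        if cur_loss > best_loss:
            best_adv, best_loss = adv, cur_loss
    return best_adv

The paper's method is more elaborate (it uses multiple strategies to avoid local optima that cause attack failure), but the sketch captures the key point made in the abstract: only loss queries, not gradient access, are needed to search for an attack direction.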


Related research

06/03/2021 · Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout
Deep neural networks (DNNs) are vulnerable to attack by adversarial e...

10/21/2020 · Boosting Gradient for White-Box Adversarial Attacks
Deep neural networks (DNNs) are playing key roles in various artificial ...

11/08/2021 · Robust and Information-theoretically Safe Bias Classifier against Adversarial Attacks
In this paper, the bias classifier is introduced, that is, the bias part...

03/03/2017 · Generative Poisoning Attack Method Against Neural Networks
Poisoning attack is identified as a severe security threat to machine le...

06/15/2018 · Random depthwise signed convolutional neural networks
Random weights in convolutional neural networks have shown promising res...

11/27/2021 · Adaptive Perturbation for Adversarial Attack
In recent years, the security of deep learning models achieves more and ...

10/28/2021 · Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework
Despite great success on many machine learning tasks, deep neural networ...
