Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack

12/06/2018
by Jingyang Zhang, et al.

In recent years, deep neural networks have demonstrated state-of-the-art performance on a wide variety of tasks and have therefore been adopted in many applications. At the same time, recent studies have revealed that neural networks are vulnerable to adversarial examples, which are obtained by carefully adding small perturbations to legitimate samples. Based on this observation, many attack methods have been proposed. Among them, the optimization-based CW attack is the most powerful, as the adversarial samples it produces exhibit much less distortion than those of other methods. The stronger attack, however, comes at the cost of more iterations and thus longer computation time to reach desirable results. In this work, we propose to leverage gradient information as guidance during the search for adversaries. More specifically, directly incorporating the gradients into the perturbation can be regarded as a constraint added to the optimization process. We argue intuitively and show empirically that this constraint reduces the search space. Our experiments show that, compared to the original CW attack, the proposed method requires fewer iterations to find adversarial samples, achieves a higher success rate, and yields smaller ℓ_2 distortion.
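To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of a CW-style ℓ_2 attack in which the perturbation is constrained to lie along a fixed direction derived from the input gradient, so the optimizer searches only over per-pixel magnitudes. The function name gradient_guided_cw, the sign-of-gradient direction, the non-negativity constraint on the magnitudes, and all hyperparameters are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def gradient_guided_cw(model, x, label, kappa=0.0, steps=100, lr=1e-2, c=1.0):
        # Assumed reading of "incorporating the gradients into the perturbation":
        # fix the perturbation direction to the sign of the loss gradient at the clean input.
        x = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x), label), x)[0]
        direction = grad.sign().detach()
        x = x.detach()

        # Optimize only a per-pixel magnitude alpha >= 0, so delta = clamp(alpha, 0) * direction.
        # Small positive init acts as a tiny warm-start step along the gradient direction.
        alpha = torch.full_like(x, 1e-3, requires_grad=True)
        opt = torch.optim.Adam([alpha], lr=lr)
        for _ in range(steps):
            delta = torch.clamp(alpha, min=0.0) * direction   # gradient-constrained perturbation
            x_adv = torch.clamp(x + delta, 0.0, 1.0)
            logits = model(x_adv)

            # CW-style margin loss: push the true-class logit below the best other-class logit.
            true_logit = logits.gather(1, label.unsqueeze(1)).squeeze(1)
            masked = logits.scatter(1, label.unsqueeze(1), float("-inf"))
            other_logit = masked.max(dim=1).values
            margin = torch.clamp(true_logit - other_logit + kappa, min=0.0)

            # ℓ_2 distortion plus the weighted attack objective, as in the CW formulation.
            loss = (delta.flatten(1).norm(dim=1) ** 2 + c * margin).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()

        return torch.clamp(x + torch.clamp(alpha, min=0.0).detach() * direction, 0.0, 1.0)

In the original CW attack the full perturbation tensor is the free variable; in this sketch only the magnitudes along one fixed direction are optimized, which is one way the added constraint can shrink the search space.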


Related research

03/18/2019 - Generating Adversarial Examples With Conditional Generative Adversarial Net
10/26/2020 - GreedyFool: Distortion-Aware Sparse Adversarial Attack
02/01/2020 - AdvJND: Generating Adversarial Examples with Just Noticeable Difference
11/19/2021 - Meta Adversarial Perturbations
01/05/2023 - Silent Killer: Optimizing Backdoor Trigger Yields a Stealthy and Powerful Data Poisoning Attack
10/08/2021 - Multidirectional Conjugate Gradients for Scalable Bundle Adjustment
12/13/2019 - Potential adversarial samples for white-box attacks
