GreedyFool: Distortion-Aware Sparse Adversarial Attack

10/26/2020
by Xiaoyi Dong, et al.

Modern deep neural networks (DNNs) are vulnerable to adversarial samples. Sparse adversarial samples are a special branch of adversarial samples that can fool the target model by perturbing only a few pixels. The existence of sparse adversarial attacks shows that DNNs are far more vulnerable than previously believed, and offers a new perspective for analyzing DNNs. However, current sparse adversarial attack methods still fall short on both sparsity and invisibility. In this paper, we propose a novel two-stage, distortion-aware, greedy-based method dubbed "GreedyFool". Specifically, it first selects the most effective candidate positions to modify by considering both the gradient (for adversary) and a distortion map (for invisibility), then drops the less important points in a reduction stage. Experiments demonstrate that, compared with the state-of-the-art method, we need to modify 3× fewer pixels under the same sparse perturbation setting. For targeted attacks, the success rate of our method is 9.96% higher than that of the state-of-the-art method under the same pixel budget. Code can be found at https://github.com/LightDXY/GreedyFool.
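To make the two-stage procedure concrete, below is a minimal, untargeted sketch of the idea described in the abstract, not the authors' released implementation (see the repository linked above). The distortion map is a hypothetical stand-in, local pixel standard deviation, chosen on the assumption that perturbations hide better in textured regions; the function names, the eps step size, and the batch-size-1 restriction are likewise illustrative assumptions.

```python
# A minimal sketch of the two-stage GreedyFool idea, assuming a batch of one
# image in [0, 1] and an untargeted attack. The distortion map (local std)
# and all names/hyperparameters here are illustrative assumptions, not the
# authors' implementation.
import torch
import torch.nn.functional as F


def distortion_map(x, kernel=3):
    """Hypothetical invisibility proxy: per-pixel local standard deviation.

    Higher values mark textured regions where a change should be harder
    to see."""
    mean = F.avg_pool2d(x, kernel, stride=1, padding=kernel // 2)
    sq_mean = F.avg_pool2d(x * x, kernel, stride=1, padding=kernel // 2)
    std = (sq_mean - mean * mean).clamp(min=0).sqrt()
    return std.mean(dim=1, keepdim=True)  # (1, 1, H, W)


def greedyfool_sketch(model, x, label, eps=0.5, max_pixels=200):
    x_adv = x.clone()
    w = x.shape[3]
    dmap = distortion_map(x)  # fixed: depends only on the clean image
    selected = []

    # Stage 1: greedily add the most effective pixels until the model is fooled.
    for _ in range(max_pixels):
        x_adv = x_adv.detach().requires_grad_(True)
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != label.item():
            break  # attack already succeeds
        loss = F.cross_entropy(logits, label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Score combines adversarial effect (gradient magnitude) with how
        # well the local texture can hide a change (distortion map).
        score = (grad.abs().sum(dim=1, keepdim=True) * dmap).view(-1)
        if selected:  # never pick the same pixel twice
            score[torch.tensor(selected)] = -1.0
        idx = score.argmax().item()
        selected.append(idx)
        py, px = divmod(idx, w)
        with torch.no_grad():  # push the chosen pixel along the gradient sign
            x_adv[0, :, py, px] += eps * grad[0, :, py, px].sign()
            x_adv.clamp_(0.0, 1.0)

    # Stage 2 (reduce): drop points whose removal keeps the attack successful.
    with torch.no_grad():
        for idx in list(selected):
            py, px = divmod(idx, w)
            trial = x_adv.detach().clone()
            trial[0, :, py, px] = x[0, :, py, px]
            if model(trial).argmax(dim=1).item() != label.item():
                x_adv = trial
                selected.remove(idx)

    return x_adv.detach(), selected
```

Computing the distortion map once on the clean image keeps each greedy step to a single forward/backward pass; the paper's actual distortion model and its targeted variant differ in detail.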

