Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet

01/16/2020
by   Sizhe Chen, et al.

Adversarial attacks on deep neural networks (DNNs) have been studied for several years, but existing attacks achieve high success rates only when the attacked DNN is well known or can be approximated through structural similarity or massive queries. In this paper, we propose the Attack on Attention (AoA), which targets attention, a semantic feature commonly shared by DNNs. As a result, AoA transfers well: with no more than 10 decision-only queries, it reaches an almost 100% success rate against many popular DNNs, and even without any queries it maintains surprisingly high attack performance. We apply AoA to generate 96,020 adversarial samples from ImageNet that defeat many neural networks, and accordingly name the dataset DAmageNet. Twenty well-trained DNNs are tested on DAmageNet; without adversarial training, most of them exhibit an error rate above 90%. DAmageNet is the first universal adversarial dataset, and it can serve as a benchmark for robustness testing and adversarial training.
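The abstract only sketches the idea, so the snippet below is a rough illustration of attacking an attention-like signal rather than the authors' AoA loss: a channel-averaged activation map from a late convolutional block stands in for the paper's attention measure, and a PGD-style loop under an L-infinity budget pushes the adversarial attention map away from the clean one. The model, the chosen layer, the step sizes, and the loss function are all assumptions made for this sketch.

```python
# Minimal sketch of an attention-style attack (an illustration, NOT the paper's AoA loss).
# Assumptions: images scaled to [0, 1], a channel-averaged activation map as a proxy
# attention measure, and a PGD-style L-infinity perturbation budget.
import torch
import torch.nn.functional as F
import torchvision.models as models

def attention_map(features):
    """Channel-averaged squared activations, normalized: a simple proxy attention map."""
    attn = features.pow(2).mean(dim=1)                  # (N, H, W)
    return attn / (attn.flatten(1).sum(dim=1).view(-1, 1, 1) + 1e-12)

def attention_attack(model, layer, x, eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style loop that drives the proxy attention map away from the clean one."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))

    with torch.no_grad():
        model(x)
    clean_attn = attention_map(feats["out"]).detach()   # attention of the clean image

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)
        adv_attn = attention_map(feats["out"])
        loss = F.l1_loss(adv_attn, clean_attn)          # how far attention has drifted
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()         # push attention further away
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # stay within the L-inf budget
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()

    handle.remove()
    return x_adv

# Hypothetical usage (input normalization omitted for brevity):
# model = models.resnet50(pretrained=True).eval()
# x = torch.rand(1, 3, 224, 224)                        # placeholder image batch
# x_adv = attention_attack(model, model.layer4, x)
```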


Related Research

12/16/2019 · DAmageNet: A Universal Adversarial Dataset
It is now well known that deep neural networks (DNNs) are vulnerable to ...

08/18/2020 · Improving adversarial robustness of deep neural networks by using semantic information
The vulnerability of deep neural networks (DNNs) to adversarial attack, ...

05/11/2018 · Breaking Transferability of Adversarial Samples with Randomness
We investigate the role of transferability of adversarial attacks in the...

07/01/2020 · ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
Deep Neural Networks (DNNs) have been applied successfully in computer v...

04/14/2022 · Q-TART: Quickly Training for Adversarial Robustness and in-Transferability
Raw deep neural network (DNN) performance is not enough; in real-world s...

02/26/2020 · Defending against Backdoor Attack on Deep Neural Networks
Although deep neural networks (DNNs) have achieved a great success in va...

09/04/2021 · Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness
Adversarial attacks have been shown to be highly effective at degrading ...
