Direction-Aggregated Attack for Transferable Adversarial Examples

04/19/2021
by Tianjin Huang, et al.

Deep neural networks are vulnerable to adversarial examples crafted by applying imperceptible perturbations to their inputs. However, such adversarial examples are most successful in white-box settings, where the model and its parameters are available. Crafting adversarial examples that transfer to other models, or that work in a black-box setting, is significantly more difficult. In this paper, we propose the Direction-Aggregated attack, which delivers transferable adversarial examples. Our method aggregates attack directions during the attack process to keep the generated adversarial examples from overfitting to the white-box model. Extensive experiments on ImageNet show that our method significantly improves the transferability of adversarial examples and outperforms state-of-the-art attacks, especially against adversarially robust models. The best average attack success rate of our method reaches 94.6% against three adversarially trained models and 94.8% against five defense methods. Our results also reveal that current defense approaches do not prevent transferable adversarial attacks.
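To make the idea concrete, below is a minimal sketch of an iterative attack that steps along an aggregated direction rather than a single gradient. It assumes, purely for illustration, that the aggregated direction is the average loss gradient over several randomly perturbed copies of the current iterate; the model handle, the sampling scale `sigma`, the sample count `n_samples`, and the step sizes are placeholder choices, not the paper's exact recipe.

```python
# Hypothetical sketch of a direction-aggregated iterative attack (PyTorch).
# Assumption: the "aggregated direction" is approximated by averaging loss
# gradients at several randomly perturbed copies of the current iterate.
import torch
import torch.nn.functional as F

def direction_aggregated_attack(model, x, y, eps=16/255, alpha=2/255,
                                steps=10, n_samples=8, sigma=0.05):
    """Return adversarial examples for input batch x with true labels y."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        grad_sum = torch.zeros_like(x_adv)
        for _ in range(n_samples):
            # Sample a point near the current iterate and accumulate
            # the loss gradient computed at that point.
            x_near = (x_adv + sigma * torch.randn_like(x_adv)).detach()
            x_near.requires_grad_(True)
            loss = F.cross_entropy(model(x_near), y)
            grad_sum += torch.autograd.grad(loss, x_near)[0]
        # Step along the sign of the aggregated direction (FGSM-style),
        # then project back into the eps-ball and the valid pixel range.
        x_adv = x_adv + alpha * grad_sum.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0).detach()
    return x_adv
```

In practice a sketch like this would replace the single-gradient step inside an existing I-FGSM or MI-FGSM pipeline, and transferability would be measured by evaluating the resulting examples against held-out black-box models.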
