Discriminator-Free Generative Adversarial Attack

07/20/2021
by Shaohao Lu, et al.

Deep Neural Networks are vulnerable to adversarial examples (Figure 1): adding inconspicuous perturbations to images can make DNN-based systems collapse. Most existing adversarial attacks are gradient-based and suffer from high latency and a heavy load on GPU memory. Generative adversarial attacks avoid this limitation, and several works have proposed GAN-based approaches. However, because GAN training is difficult to converge, the resulting adversarial examples have either poor attack ability or poor visual quality. In this work, we find that the discriminator may be unnecessary for generative adversarial attacks, and we propose the Symmetric Saliency-based Auto-Encoder (SSAE) to generate the perturbations. SSAE is composed of a saliency map module and an angle-norm disentanglement of features module. The advantage of our proposed method is that it does not depend on a discriminator, and it uses the generated saliency map to pay more attention to label-relevant regions. Extensive experiments across various tasks, datasets, and models demonstrate that the adversarial examples generated by SSAE not only make widely-used models collapse, but also achieve good visual quality. The code is available at https://github.com/BravoLu/SSAE.
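To make the two ingredients concrete, the following is a minimal sketch of the general idea: an auto-encoder whose decoder heads produce a bounded perturbation and a saliency map that concentrates the perturbation on label-relevant regions, plus an angle-norm style loss that decomposes features into direction (angle) and magnitude (norm). All layer sizes, the `SSAESketch` name, and the loss form are illustrative assumptions, not the authors' exact architecture; see the repository above for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSAESketch(nn.Module):
    """Illustrative saliency-based auto-encoder (not the paper's exact design).

    One shared encoder feeds two heads: a perturbation head (bounded by tanh)
    and a saliency head (sigmoid, in [0, 1]) that weights the perturbation.
    """

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.perturb_head = nn.Conv2d(hidden, channels, 3, padding=1)
        self.saliency_head = nn.Sequential(
            nn.Conv2d(hidden, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, eps: float = 8 / 255):
        h = self.encoder(x)
        noise = torch.tanh(self.perturb_head(h)) * eps   # L_inf-bounded perturbation
        saliency = self.saliency_head(h)                 # per-pixel weight in [0, 1]
        adv = (x + saliency * noise).clamp(0.0, 1.0)     # perturb salient regions only
        return adv, saliency

def angle_norm_loss(f_adv: torch.Tensor, f_clean: torch.Tensor) -> torch.Tensor:
    """Assumed angle-norm style objective: disentangle each feature vector into
    its direction and its norm, then push the adversarial feature away from the
    clean one in angle (minimize cosine similarity) while keeping norms close.
    """
    angle_term = F.cosine_similarity(f_adv, f_clean, dim=1).mean()
    norm_term = (f_adv.norm(dim=1) - f_clean.norm(dim=1)).abs().mean()
    return angle_term + norm_term
```

Because the generator is trained only with feature-space objectives like the one above, no discriminator (and hence no adversarial min-max game) is needed, which is what sidesteps the GAN convergence issue the abstract describes.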


