Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense

04/12/2019
by Lingyun Jiang, et al.

In deep learning based image classification, adversarial examples, i.e., inputs crafted by adding small-magnitude perturbations, can mislead deep neural networks (DNNs) into incorrect predictions, which shows that DNNs are vulnerable to them. Various attack and defense strategies have been proposed to better understand the mechanisms of deep learning. However, existing work addresses only one side of the problem, either attack or defense, without considering that attacks and defenses should be interdependent and mutually reinforcing, just like the relationship between spears and shields. In this paper, we propose the Cycle-Consistent Adversarial GAN (CycleAdvGAN) to generate adversarial examples; it learns and approximates the distributions of both clean instances and adversarial examples. Once the two generators of CycleAdvGAN are trained, one can efficiently generate adversarial perturbations for any instance, making DNNs predict incorrectly, while the other recovers adversarial examples to clean instances, making DNNs predict correctly. We evaluate CycleAdvGAN under semi-white-box and black-box settings on two public datasets, MNIST and CIFAR-10. Extensive experiments show that our method achieves state-of-the-art adversarial attack performance and also efficiently improves defense ability, realizing the integration of adversarial attack and defense. In addition, the attack effect improves even when the model is trained only on an adversarial dataset generated by a single kind of adversarial attack.

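Below is a minimal PyTorch-style sketch of how a CycleAdvGAN-like objective could be composed: one generator perturbs clean images into adversarial ones, the other maps adversarial images back to clean ones, and a cycle-consistency term ties the two directions together. The module names (G_adv, G_clean, D_adv, D_clean), the target classifier f, the loss weights, and the toy generator architecture are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of a CycleAdvGAN-style generator objective (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Toy image-to-image generator; the paper's architecture may differ."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def cycleadvgan_generator_loss(G_adv, G_clean, D_adv, D_clean, f, x_clean, y,
                               lambda_cycle=10.0, lambda_cls=1.0):
    """Combined generator loss for one batch (loss weights are assumptions)."""
    # Attack direction: add a generated perturbation to the clean image.
    x_adv = torch.clamp(x_clean + G_adv(x_clean), 0.0, 1.0)
    # Defense direction: recover a clean-looking image from the adversarial one.
    x_rec = torch.clamp(G_clean(x_adv), 0.0, 1.0)

    # GAN terms: each generator tries to fool its discriminator.
    logits_adv, logits_rec = D_adv(x_adv), D_clean(x_rec)
    loss_gan = (F.binary_cross_entropy_with_logits(logits_adv, torch.ones_like(logits_adv))
                + F.binary_cross_entropy_with_logits(logits_rec, torch.ones_like(logits_rec)))

    # Cycle consistency: attack followed by defense should return to the input.
    loss_cycle = F.l1_loss(x_rec, x_clean)

    # Classifier terms: f should misclassify x_adv and correctly classify x_rec.
    loss_attack = -F.cross_entropy(f(x_adv), y)
    loss_defense = F.cross_entropy(f(x_rec), y)

    return loss_gan + lambda_cycle * loss_cycle + lambda_cls * (loss_attack + loss_defense)
```

In practice the discriminators would be updated in alternation with the generators using the usual real/fake objective, and the perturbation magnitude would typically be bounded (for example by scaling the Tanh output); the sketch above only illustrates how the attack, defense, and cycle-consistency terms can be combined.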