APE-GAN: Adversarial Perturbation Elimination with GAN

07/18/2017
by   Shiwei Shen, et al.

Although neural networks can achieve state-of-the-art performance in image recognition, they are highly vulnerable to adversarial examples: inputs crafted by adding imperceptible but intentional perturbations to clean samples. Defending against adversarial examples is an important problem that is well worth researching, yet so far very few methods provide a significant defense. In this paper, a novel idea is proposed and an effective framework based on Generative Adversarial Nets, named APE-GAN, is implemented to defend against adversarial examples. Experimental results on three benchmark datasets, MNIST, CIFAR10 and ImageNet, indicate that APE-GAN effectively resists adversarial examples generated by five attacks.
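The core idea can be illustrated on toy data: an attacker perturbs inputs with FGSM, and a "perturbation eliminator" G is trained on (adversarial, clean) pairs to pull inputs back toward the clean data before classification. The sketch below is a minimal, hedged illustration, not the paper's method: a least-squares affine map stands in for the GAN generator, mimicking only the L2 reconstruction term of the APE-GAN objective (the adversarial loss term and the image-scale networks are omitted), and the target model is a simple logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: 2-D Gaussian blobs (a stand-in for image pixels).
n = 200
x0 = rng.normal(loc=-2.0, scale=1.0, size=(n, 2))
x1 = rng.normal(loc=+2.0, scale=1.0, size=(n, 2))
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Train a logistic-regression "target model" by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# FGSM attack: step each input along the sign of the input gradient.
eps = 1.0
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = (p - y)[:, None] * w[None, :]      # d(loss)/dx for logistic loss
X_adv = X + eps * np.sign(grad_x)

# Stand-in "generator" G: an affine map fit by least squares to pull
# adversarial inputs back toward their clean counterparts (only the L2
# reconstruction part of the APE-GAN training objective is mimicked).
A = np.hstack([X_adv, np.ones((len(X_adv), 1))])  # add a bias column
W, *_ = np.linalg.lstsq(A, X, rcond=None)
X_def = A @ W                                     # "eliminated" inputs

mse_adv = np.mean((X_adv - X) ** 2)
mse_def = np.mean((X_def - X) ** 2)
print(f"reconstruction MSE  adversarial: {mse_adv:.3f}  after G: {mse_def:.3f}")
```

At inference time the defense is simply to classify G(x) instead of x; here the learned map strictly reduces the distance to the clean data, since the identity map is in its hypothesis class and least squares can only improve on it.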


