Customizing an Adversarial Example Generator with Class-Conditional GANs

06/27/2018
by Shih-hong Tsai, et al.

Adversarial examples are intentionally crafted data designed to deceive neural networks into misclassification. Strategies for creating such examples usually refer to perturbation-based methods, which fabricate adversarial examples by applying imperceptible perturbations to normal data. The resulting data preserve their visual appearance to human observers, yet can be totally unrecognizable to DNN models, which in turn leads to completely misleading predictions. In this paper, however, we consider crafting adversarial examples from existing data a limitation on example diversity. We propose a non-perturbation-based framework that generates native adversarial examples from class-conditional generative adversarial networks. As such, the generated data do not resemble any existing data and thus expand example diversity, raising the difficulty of adversarial defense. We then extend this framework to pre-trained conditional GANs, turning an existing generator into an "adversarial-example generator". We conduct experiments on the MNIST and CIFAR10 datasets and obtain satisfactory results, showing that this approach can be a potential alternative to previous attack strategies.
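As a rough illustration of the second idea, repurposing a pre-trained conditional generator as an "adversarial-example generator", the sketch below searches the latent space of a class-conditional generator for class-y samples that a target classifier fails to label as y. The names G, f, latent_dim, and the optimization loop are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: turning a pre-trained class-conditional generator G(z, y)
# into an adversarial-example generator by searching its latent space for
# samples conditioned on class y that a target classifier f misclassifies.
# G, f, and the shapes below are assumptions, not the paper's exact setup.
import torch
import torch.nn.functional as F

def find_native_adversarial(G, f, y, latent_dim=128, steps=200, lr=0.05, device="cpu"):
    """Optimize a latent code z so that G(z, y) is generated as class y
    but is NOT predicted as y by the target classifier f."""
    y = torch.tensor([y], device=device)
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        x = G(z, y)                      # conditionally generated image for class y
        logits = f(x)                    # target classifier's prediction
        # Minimizing the negative cross-entropy maximizes the classifier's loss
        # on the conditioned label, pushing its prediction away from y.
        loss = -F.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

        if logits.argmax(dim=1).item() != y.item():
            break                        # classifier no longer predicts class y

    return G(z, y).detach()
```

The key difference from perturbation-based attacks is that no existing image is modified: the adversarial sample is generated natively from the latent space, so it need not resemble any data point in the original dataset.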
