
Generating Adversarial Examples With Conditional Generative Adversarial Net

03/18/2019
by   Ping Yu, et al.

Recently, deep neural networks have made significant progress and been successfully applied in various fields, but they have been found vulnerable to attack instances, e.g., adversarial examples. State-of-the-art attack methods generate attack images by adding small perturbations to the source image. These attack images can fool the classifier while remaining nearly imperceptible to humans, yet such attack instances are difficult to generate by searching the feature space. How to design an effective and robust generating method has therefore become a spotlight. Inspired by adversarial examples, we propose two novel generative models that produce adaptive attack instances directly, adopting a conditional generative adversarial network with a distinctive training strategy. Compared with a common method such as the Fast Gradient Sign Method, our models reduce the generation cost, improve robustness, and require about one fifth of the running time to produce an attack instance.
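The Fast Gradient Sign Method used as the baseline above takes a single signed step along the gradient of the loss with respect to the input. A minimal sketch of that idea, using an illustrative hand-rolled logistic-regression classifier (the weights and data here are hypothetical, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM for binary logistic regression.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w, so the attack is one signed step
    of size eps in each input dimension.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)     # illustrative classifier weights
b = 0.1
x = rng.normal(size=4)     # source example
y = 1.0                    # true label

x_adv = fgsm(x, y, w, b, eps=0.1)
# The perturbation is bounded by eps in the infinity norm,
# which is why the change can stay nearly imperceptible.
print(np.max(np.abs(x_adv - x)))
```

Note that FGSM must recompute a gradient for every new source image, whereas a trained generative model produces attack instances with a single forward pass, which is the cost advantage the abstract refers to.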


03/04/2020

Type I Attack for Generative Models

Generative models are popular tools with a wide range of applications. N...
03/14/2020

Minimum-Norm Adversarial Examples on KNN and KNN-Based Models

We study the robustness against adversarial examples of kNN classifiers ...
12/06/2018

Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack

In recent years, deep neural networks demonstrated state-of-the-art perf...
06/09/2020

Low Distortion Block-Resampling with Spatially Stochastic Networks

We formalize and attack the problem of generating new images from old on...
03/16/2021

One-Time Pads from the Digits of Pi

I present a method for generating one-time pads from the digits of pi. C...
03/28/2017

Adversarial Transformation Networks: Learning to Generate Adversarial Examples

Multiple different approaches of generating adversarial examples have be...
04/26/2022

Self-recoverable Adversarial Examples: A New Effective Protection Mechanism in Social Networks

Malicious intelligent algorithms greatly threaten the security of social...