GAP++: Learning to generate target-conditioned adversarial examples

06/09/2020
by   Xiaofeng Mao, et al.

Adversarial examples are perturbed inputs that pose a serious threat to machine learning models. Crafting such perturbations is difficult, and traditional attacks rely on costly iterative optimization for each input. For computational efficiency, recent works instead train adversarial generative networks that directly model the distribution of either universal or image-dependent perturbations. However, these methods generate perturbations that depend only on the input image. In this work, we propose a more general-purpose framework that infers target-conditioned perturbations dependent on both the input image and the target label. Unlike previous single-target attack models, our model conducts target-conditioned attacks by learning the relation between the attack target and the semantics of the image. Through extensive experiments on MNIST and CIFAR-10, we show that our method achieves performance superior to single-target attack models and obtains high fooling rates with small perturbation norms.
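
The abstract describes a generator that is conditioned on both the input image and the desired target label. As an illustration only, below is a minimal PyTorch sketch of one way such a target-conditioned perturbation generator could be structured; the network layout, label-embedding size, L_inf budget, and loss function are assumptions for exposition, not the paper's actual GAP++ architecture.

```python
# Illustrative sketch of a target-conditioned perturbation generator.
# Architecture, embedding size, eps budget, and loss are assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetConditionedGenerator(nn.Module):
    """Maps (image, target label) -> bounded adversarial perturbation."""
    def __init__(self, num_classes: int = 10, channels: int = 3, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.label_embed = nn.Embedding(num_classes, 16)
        # The label embedding is broadcast over spatial dimensions and
        # concatenated with the image as extra input channels.
        self.net = nn.Sequential(
            nn.Conv2d(channels + 16, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        cond = self.label_embed(target).view(b, -1, 1, 1).expand(b, -1, h, w)
        delta = torch.tanh(self.net(torch.cat([x, cond], dim=1)))  # values in (-1, 1)
        return delta * self.eps  # scale to the L_inf perturbation budget

def targeted_attack_loss(classifier: nn.Module, x: torch.Tensor,
                         delta: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Encourage the perturbed image to be classified as the chosen target class."""
    x_adv = torch.clamp(x + delta, 0.0, 1.0)
    return F.cross_entropy(classifier(x_adv), target)

# Hypothetical usage, assuming a pretrained classifier `clf` and a batch (x, y_target):
#   gen = TargetConditionedGenerator(num_classes=10, channels=3)
#   loss = targeted_attack_loss(clf, x, gen(x, y_target), y_target)
#   loss.backward()  # optimize the generator so x + delta is classified as y_target
```

Because the target label is an input to the generator, a single trained model can, in principle, produce perturbations for any chosen target class, rather than requiring one generator per target as in single-target attack models.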

