NAG: Network for Adversary Generation

12/09/2017, by Konda Reddy Mopuri, et al.

Adversarial perturbations can pose a serious threat to the deployment of machine learning systems. Recent works have shown the existence of image-agnostic perturbations that can fool classifiers on most natural images. Existing methods craft such perturbations by solving an optimization problem that pairs a fooling objective with an imperceptibility constraint. However, for a given classifier these methods generate one perturbation at a time, a single instance from the manifold of adversarial perturbations. To build robust models, it is essential to explore this manifold more broadly. In this paper, we propose, for the first time, a generative approach to model the distribution of adversarial perturbations. The architecture of the proposed model is inspired by that of GANs and is trained using fooling and diversity objectives. Our trained generator network attempts to capture the distribution of adversarial perturbations for a given classifier and readily generates a wide variety of such perturbations. Our experimental evaluation demonstrates that perturbations crafted by our model (i) achieve state-of-the-art fooling rates, (ii) exhibit a wide variety, and (iii) deliver excellent cross-model generalizability. Our work can be seen as an important step toward inferring the complex manifold of adversarial perturbations.
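The two objectives named in the abstract can be sketched in code. Below is a minimal NumPy illustration, not the paper's implementation: the toy linear classifier, the `tanh`-based generator stub, the L-infinity budget `xi`, and the loss weight are all illustrative assumptions. The fooling term penalizes the confidence the classifier retains in its clean prediction on the perturbed input, and the diversity term rewards distance between the classifier's responses to two different generated perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fooling_loss(clean_logits, adv_logits):
    # Confidence the classifier keeps in its clean-image prediction after
    # the perturbation is added; training drives this down via -log(1 - q_c).
    c = clean_logits.argmax(axis=-1)
    q_c = softmax(adv_logits)[np.arange(len(c)), c]
    return -np.log(1.0 - q_c + 1e-12).mean()

def diversity_loss(feats_a, feats_b):
    # Negative mean distance between responses to the same batch under two
    # different perturbations; minimizing it pushes perturbations apart.
    return -np.linalg.norm(feats_a - feats_b, axis=-1).mean()

# Toy stand-in classifier (a single linear layer, illustrative only).
W = rng.standard_normal((16, 10))
def classify(x):
    return x @ W

# Generator stub: squash latent noise into an image-agnostic perturbation
# that satisfies ||delta||_inf <= xi by construction.
xi = 10.0 / 255.0  # illustrative imperceptibility budget
def generate(z):
    return xi * np.tanh(z)

x = rng.standard_normal((4, 16))  # a batch of flattened "images"
d1 = generate(rng.standard_normal(16))
d2 = generate(rng.standard_normal(16))

lf = fooling_loss(classify(x), classify(x + d1))
ld = diversity_loss(classify(x + d1), classify(x + d2))
total = lf + 0.5 * ld  # 0.5 is an illustrative weighting of the two terms
```

In the actual model, `generate` is a deep network trained by backpropagating `total` through the (frozen) target classifier, so that sampling different latent vectors yields a variety of image-agnostic perturbations.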


