Synthesizing Unrestricted False Positive Adversarial Objects Using Generative Models

05/19/2020
by Martin Kotuliak, et al.

Adversarial examples are data points misclassified by neural networks. Originally, adversarial examples were limited to small perturbations added to a given image. Recent work introduced the generalized concept of unrestricted adversarial examples, which places no limit on the perturbation. In this paper, we introduce a new category of attacks that create unrestricted adversarial examples for object detection. Our key idea is to generate adversarial objects that are unrelated to the classes identified by the target object detector. Unlike previous attacks, we use off-the-shelf Generative Adversarial Networks (GANs) without any further training or modification. Our method searches over the latent normal space of the GAN for adversarial objects that are wrongly identified by the target object detector. We evaluate this method on the commonly used Faster R-CNN ResNet-101, Inception v2, and SSD MobileNet v1 object detectors, using the logo-generating iWGAN-LC and an SNGAN trained on CIFAR-10. The empirical results show that the generated adversarial objects are indistinguishable from non-adversarial objects produced by the GANs, transferable between the object detectors, and robust in the physical world. To our knowledge, this is the first work to study unrestricted false positive adversarial examples for object detection.
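The latent-space search described above can be illustrated with a minimal, self-contained sketch. This is an assumption-laden toy, not the paper's implementation: the real attack uses a pretrained GAN generator (e.g. SNGAN) and a pretrained object detector (e.g. Faster R-CNN), whereas here both are replaced by fixed random linear maps so the search loop is runnable end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the pretrained models (assumptions):
# a 16-dim latent space, a 64-pixel "image", and 3 detector classes.
W_gen = rng.normal(size=(16, 64))   # stand-in for the GAN generator
W_det = rng.normal(size=(64, 3))    # stand-in for the object detector

def generate(z):
    """Map a latent vector to an image (GAN-generator stand-in)."""
    return np.tanh(z @ W_gen)

def detect(img):
    """Return per-class confidence scores (detector stand-in)."""
    logits = img @ W_det
    e = np.exp(logits - logits.max())
    return e / e.sum()

def search_adversarial(target_class, conf_thresh=0.9, n_trials=20000):
    """Random search over the latent normal space for an object that
    the detector confidently assigns to `target_class` -- a false
    positive, since the generator produces no real instance of it."""
    best_z, best_conf = None, 0.0
    for _ in range(n_trials):
        z = rng.normal(size=16)                    # sample z ~ N(0, I)
        conf = detect(generate(z))[target_class]
        if conf > best_conf:
            best_z, best_conf = z, conf
        if best_conf >= conf_thresh:               # adversarial object found
            break
    return best_z, best_conf

z_adv, conf = search_adversarial(target_class=0)
```

Because the candidates are drawn from the GAN's own latent prior, the resulting adversarial objects stay on the generator's output distribution, which is why (per the abstract) they are indistinguishable from non-adversarial GAN samples; gradient-based search over the same latent space is a natural refinement of this random-sampling sketch.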


