Physical Adversarial Examples for Object Detectors

07/20/2018
by Kevin Eykholt, et al.

Deep neural networks (DNNs) are vulnerable to adversarial examples: maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, creating perturbations on physical objects that fool image classifiers under a variety of real-world conditions. Such attacks pose a risk to deep learning models used in safety-critical cyber-physical systems. In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene. Improving upon a previous physical attack on image classifiers, we create perturbed physical objects that are either ignored or mislabeled by object detection models. We implement a Disappearance Attack, in which we cause a Stop sign to "disappear" according to the detector, either by covering the sign with an adversarial Stop sign poster or by adding adversarial stickers onto the sign. In a video recorded in a controlled lab environment, the state-of-the-art YOLOv2 detector failed to recognize these adversarial Stop signs in over 85% of the video frames. In an outdoor experiment, YOLO was fooled by the poster and sticker attacks in 72.5% and 63.5% of the video frames, respectively. We also use Faster R-CNN, a popular object detection model, to demonstrate the transferability of our adversarial perturbations. The created poster perturbation is able to fool Faster R-CNN in 85.9% of the video frames in an outdoor environment. Finally, we present preliminary results with a new Creation Attack, wherein innocuous physical stickers fool a model into detecting nonexistent objects.
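At a high level, a Disappearance Attack of this kind optimizes a physical perturbation so that, averaged over many simulated viewing conditions, the detector no longer assigns a confident Stop-sign score to any candidate box. The Python sketch below illustrates one optimization step of such an objective. It is an illustrative reconstruction under stated assumptions, not the authors' released code: the detector_scores callable, the sticker_mask tensor, and the regularization weight lambda_reg are hypothetical placeholders introduced here for clarity.

import torch

def disappearance_attack_step(perturbation, sticker_mask, sign_image,
                              transforms, detector_scores, optimizer,
                              lambda_reg=1e-3):
    """One optimization step aimed at making a Stop sign vanish for a detector.

    perturbation    - trainable tensor (3, H, W): the adversarial texture
    sticker_mask    - binary tensor (1, H, W): 1 where the poster/stickers may go
    sign_image      - clean Stop sign texture (3, H, W), values in [0, 1]
    transforms      - callables simulating physical conditions (scale, rotation,
                      lighting), sampled each step in an expectation-over-
                      transformation style
    detector_scores - hypothetical callable: image -> 1-D tensor of per-box
                      Stop-sign confidences (assumed differentiable)
    """
    optimizer.zero_grad()
    # Apply the perturbation only inside the allowed sticker/poster region.
    adv_sign = torch.clamp(sign_image + sticker_mask * perturbation, 0.0, 1.0)

    # Average the attack loss over sampled physical-world conditions so the
    # perturbation survives printing and varied viewpoints.
    loss = 0.0
    for t in transforms:
        scene = t(adv_sign)                 # render the sign into a scene
        scores = detector_scores(scene)     # per-box Stop-sign confidence
        loss = loss + scores.max()          # suppress the strongest detection
    loss = loss / len(transforms)

    # Simple stand-in regularizer to keep the perturbation small and printable.
    loss = loss + lambda_reg * perturbation.norm(p=2)

    loss.backward()
    optimizer.step()
    return loss.item()

In practice the perturbation tensor would be created with requires_grad=True and handed to an optimizer such as torch.optim.Adam([perturbation]); once the loss converges, the texture is printed and applied to a real sign for evaluation.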


