Standard detectors aren't (currently) fooled by physical adversarial stop signs

10/09/2017
by Jiajun Lu, et al.

An adversarial example is an example that has been adjusted to produce the wrong label when presented to a system at test time. If adversarial examples existed that could fool a detector, they could be used to (for example) wreak havoc on roads populated with smart vehicles. Recently, we described our difficulties creating physical adversarial stop signs that fool a detector. More recently, Evtimov et al. produced a physical adversarial stop sign that fools a proxy model of a detector. In this paper, we show that these physical adversarial stop signs do not fool two standard detectors (YOLO and Faster RCNN) in their standard configurations. Evtimov et al.'s construction relies on cropping the image to the stop sign; this crop is then resized and presented to a classifier. We argue that this cropping and resizing procedure largely eliminates the effects of rescaling and of view angle, so whether an adversarial attack is robust under rescaling and change of view direction remains an open question. We argue that attacking a classifier is very different from attacking a detector, and that the structure of detectors - which must search for their own bounding box, and which cannot estimate that box very accurately - likely makes it difficult to construct adversarial patterns. Finally, an adversarial pattern on a physical object that could fool a detector would have to be adversarial in the face of a wide family of parametric distortions (scale; view angle; box shift inside the detector; illumination; and so on). Such a pattern would be of great theoretical and practical interest, but there is currently no evidence that such patterns exist.
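To make the contrast concrete, the sketch below (not the authors' code) runs a standard pretrained detector over a whole scene and, separately, applies the crop-and-resize step that the proxy-classifier evaluation relies on. It assumes torchvision's pretrained Faster RCNN as a stand-in for a standard detector, a hypothetical road photo scene.jpg containing the printed sign, COCO label id 13 for 'stop sign', and an arbitrary 0.5 score threshold; the point is only that the detector must cope with scale, shift, and viewpoint variation that the cropped, resized patch never sees.

# Minimal sketch, not the authors' code: contrast a classifier-style
# crop-and-resize evaluation with running a standard full-image detector.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

COCO_STOP_SIGN = 13  # 'stop sign' in the COCO label map used by torchvision detectors

def classifier_style_crop(image, box, size=(224, 224)):
    # Crop the sign region and warp it to a fixed classifier input.
    # This normalises away scale and much of the viewpoint variation,
    # which is the step the paper argues makes the attack easier.
    left, top, right, bottom = (int(v) for v in box)
    return image.crop((left, top, right, bottom)).resize(size)

def detect_stop_signs(image, score_thresh=0.5):
    # A standard detector proposes its own boxes, so the printed pattern
    # is seen at many scales, shifts and aspect ratios.
    # (On torchvision < 0.13, use pretrained=True instead of weights=.)
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    tensor = transforms.ToTensor()(image)
    with torch.no_grad():
        out = model([tensor])[0]
    keep = (out["labels"] == COCO_STOP_SIGN) & (out["scores"] > score_thresh)
    return out["boxes"][keep], out["scores"][keep]

if __name__ == "__main__":
    scene = Image.open("scene.jpg").convert("RGB")  # hypothetical test photo
    boxes, scores = detect_stop_signs(scene)
    print("stop-sign detections:", scores.tolist())
    if len(scores) > 0:
        # The classifier-style path discards exactly the variation the
        # detector had to handle before it could be fooled.
        classifier_style_crop(scene, boxes[0].tolist()).save("cropped_sign.png")

Feeding the same printed sign through both paths reproduces the paper's observation in spirit: the cropped, resized patch can be handed to any classifier, while the detector's score comes from boxes it proposed itself.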


