NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles

07/12/2017
by Jiajun Lu, et al.

It has been shown that most machine learning algorithms are susceptible to adversarial perturbations. Slightly perturbing an image in a carefully chosen direction in image space can cause a trained neural network model to misclassify it. Recently, it was shown that physical adversarial examples exist: printing perturbed images and then taking pictures of them still results in misclassification. This raises security and safety concerns. However, these experiments ignore a crucial property of physical objects: a camera can view them from different distances and at different angles. In this paper, we present experiments suggesting that current constructions of physical adversarial examples do not disrupt object detection from a moving platform. Instead, a trained neural network correctly classifies most pictures of a perturbed image taken from different distances and angles. We believe this is because the adversarial property of the perturbation is sensitive to the scale at which the perturbed picture is viewed, so (for example) an autonomous car will misclassify a stop sign only from a small range of distances. Our work raises an important question: can one construct examples that are adversarial for many or most viewing conditions? If so, the construction should offer very significant insights into the internal representation of patterns by deep networks. If not, there is a good prospect that adversarial examples can be reduced to a curiosity with little practical impact.
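The experiment the abstract describes can be sketched in a few lines: craft a perturbation with a standard one-step attack (FGSM, which is not necessarily the construction used in the paper), then re-render the perturbed image at several scales as a rough stand-in for photographing it from different distances. The model choice, the file name stop_sign.jpg, the epsilon value, and the use of plain rescaling instead of real photographs are illustrative assumptions, not the authors' setup.

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Any ImageNet classifier works here; ResNet-50 is only a stand-in for the
# network studied in the paper.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval().to(device)
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
preprocess = transforms.Compose([transforms.Resize(256),
                                 transforms.CenterCrop(224),
                                 transforms.ToTensor()])

@torch.no_grad()
def predict(x):
    # Classify a batch of images kept in [0, 1] pixel space.
    return model(normalize(x)).argmax(dim=1)

def fgsm_perturb(x, label, eps=0.03):
    # One-step FGSM: nudge the image in the direction that increases the loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(normalize(x)), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# "stop_sign.jpg" is an illustrative placeholder for a photo of a road sign.
image = preprocess(Image.open("stop_sign.jpg").convert("RGB")).unsqueeze(0).to(device)
label = predict(image)

adv = fgsm_perturb(image, label)
print("fooled at native scale:", bool(predict(adv) != label))

# Re-render the perturbed image at several scales, imitating pictures taken
# from different distances, and check how often the attack still works.
for scale in (0.4, 0.6, 0.8, 1.0, 1.25, 1.5):
    size = max(32, int(224 * scale))
    rescaled = F.interpolate(adv, size=size, mode="bilinear", align_corners=False)
    back = F.interpolate(rescaled, size=224, mode="bilinear", align_corners=False)
    print(f"scale {scale:.2f}: still misclassified = {bool(predict(back) != label)}")

In the paper's physical experiments the rescaling step is replaced by printing the perturbed image and photographing it from varying distances and angles; the sketch only captures the scale sensitivity the abstract points to.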


Related research

Physical Adversarial Examples for Object Detectors (07/20/2018)
Deep neural networks (DNNs) are vulnerable to adversarial examples-malic...

Robust Physical Adversarial Attack on Faster R-CNN Object Detector (04/16/2018)
Given the ability to directly manipulate image pixels in the digital inp...

Adversarial Examples Target Topological Holes in Deep Networks (01/28/2019)
It is currently unclear why adversarial examples are easy to construct f...

Resilience of Autonomous Vehicle Object Category Detection to Universal Adversarial Perturbations (07/10/2021)
Due to the vulnerability of deep neural networks to adversarial examples...

Synthesizing Robust Adversarial Examples (07/24/2017)
Neural network-based classifiers parallel or exceed human-level accuracy...

Taking a machine's perspective: Humans can decipher adversarial images (09/11/2018)
How similar is the human mind to the sophisticated machine-learning syst...

DARTS: Deceiving Autonomous Cars with Toxic Signs (02/18/2018)
Sign recognition is an integral part of autonomous cars. Any misclassifi...
