Robust Physical Adversarial Attack on Faster R-CNN Object Detector

04/16/2018
by Shang-Tse Chen, et al.

Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we tackle the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes at different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. Our approach can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.
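The key technique here is Expectation over Transformation (EOT): rather than optimizing the perturbation against a single rendering of the scene, the attacker optimizes it against the expected detector output under a distribution of random physical transformations (viewing angle, scale, lighting). The sketch below illustrates this idea in PyTorch; it is not the paper's implementation. In particular, `detection_score` is a hypothetical differentiable function returning the detector's aggregate stop-sign confidence over all proposals, and the transformation distribution, step count, and learning rate are illustrative assumptions.

```python
# Minimal EOT sketch (assumptions: a differentiable detection_score(img),
# illustrative transformation ranges and optimizer settings).
import torch

def random_transform(img):
    """Apply a random viewpoint/lighting change to a (C, H, W) image."""
    angle = (torch.rand(1) - 0.5) * 0.5               # rotation, roughly +/-14 degrees
    scale = 0.7 + 0.6 * torch.rand(1)                 # scale in [0.7, 1.3]
    cos, sin = torch.cos(angle) * scale, torch.sin(angle) * scale
    theta = torch.stack([torch.cat([cos, -sin, torch.zeros(1)]),
                         torch.cat([sin,  cos, torch.zeros(1)])]).unsqueeze(0)
    grid = torch.nn.functional.affine_grid(
        theta, img.unsqueeze(0).shape, align_corners=False)
    out = torch.nn.functional.grid_sample(
        img.unsqueeze(0), grid, align_corners=False).squeeze(0)
    brightness = 0.8 + 0.4 * torch.rand(1)            # lighting variation
    return (out * brightness).clamp(0, 1)

def eot_attack(stop_sign, detection_score, steps=500, lr=0.01, n_samples=8):
    """Optimize a perturbation whose effect survives random transformations."""
    delta = torch.zeros_like(stop_sign, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_samples):                    # Monte Carlo estimate of E[.]
            adv = (stop_sign + delta).clamp(0, 1)
            loss = loss + detection_score(random_transform(adv))
        (loss / n_samples).backward()                 # minimize expected confidence
        opt.step()
        opt.zero_grad()
    return (stop_sign + delta).detach().clamp(0, 1)
```

Minimizing the expected detection score over the transformation distribution, rather than the score on one fixed image, is what makes the resulting perturbation robust to the real-world distortions the abstract describes.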


Related research

07/20/2018
Physical Adversarial Examples for Object Detectors
Deep neural networks (DNNs) are vulnerable to adversarial examples-malic...

06/05/2020
Pick-Object-Attack: Type-Specific Adversarial Attack for Object Detection
Many recent studies have shown that deep neural models are vulnerable to...

07/12/2017
NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles
It has been shown that most machine learning algorithms are susceptible ...

12/28/2022
Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles
The deep neural network (DNN) models for object detection using camera i...

07/24/2023
Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations
Camera-based autonomous systems that emulate human perception are increa...

01/09/2018
Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos
We propose a new real-world attack against the computer vision based sys...

07/08/2020
SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations
Whilst significant research effort into adversarial examples (AE) has em...
