Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems

01/17/2022
by Wei Jia, et al.

Adversarial Examples (AEs) can deceive Deep Neural Networks (DNNs) and have received a lot of attention recently. However, the majority of research on AEs is in the digital domain, and the adversarial patches are static, which is very different from many real-world DNN applications such as Traffic Sign Recognition (TSR) systems in autonomous vehicles. In TSR systems, object detectors use DNNs to process streaming video in real time. From the view of the object detector, the traffic sign's position and the quality of the video are continuously changing, rendering digital AEs ineffective in the physical world. In this paper, we propose a systematic pipeline to generate robust physical AEs against real-world object detectors. Robustness is achieved in three ways. First, we simulate in-vehicle cameras by extending the distribution of image transformations with a blur transformation and a resolution transformation. Second, we design single and multiple bounding box filters to improve the efficiency of perturbation training. Third, we consider four representative attack vectors: Hiding Attack (HA), Appearance Attack (AA), Non-Target Attack (NTA), and Target Attack (TA). We perform a comprehensive set of experiments under a variety of environmental conditions, covering illumination in sunny and cloudy weather as well as at night. The experimental results show that the physical AEs generated by our pipeline are effective and robust when attacking the YOLO v5 based TSR system. The attacks also transfer well and can deceive other state-of-the-art object detectors. We launched HA and NTA on a brand-new 2021 model vehicle; both attacks succeeded in fooling its TSR system, which could be life-threatening for autonomous vehicles. Finally, we discuss three defense mechanisms based on image preprocessing, AE detection, and model enhancement.
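To make the first robustness step concrete, below is a minimal sketch of how the extended transformation distribution could be sampled during perturbation training. It is written in PyTorch purely for illustration; the function name `sample_camera_transform` and all probabilities and parameter ranges are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch (an assumption, not the paper's code): sample camera-like
# transformations -- including the blur and resolution transformations the
# pipeline adds -- so a candidate adversarial sign is trained to survive
# the changing video quality an in-vehicle camera produces.
import random

import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

def sample_camera_transform(img: torch.Tensor) -> torch.Tensor:
    """Randomly blur and/or resample a batch of images.

    `img` is an (N, C, H, W) float tensor in [0, 1]. All probabilities and
    parameter ranges below are illustrative assumptions.
    """
    h, w = img.shape[-2:]
    # Blur transformation: approximate defocus/motion blur of the camera.
    if random.random() < 0.5:
        img = GaussianBlur(kernel_size=7, sigma=random.uniform(0.5, 3.0))(img)
    # Resolution transformation: approximate a distant, low-resolution sign
    # by downsampling, then upsampling back to the detector's input size.
    if random.random() < 0.5:
        scale = random.uniform(0.25, 1.0)
        img = F.interpolate(img, scale_factor=scale, mode="bilinear",
                            align_corners=False)
        img = F.interpolate(img, size=(h, w), mode="bilinear",
                            align_corners=False)
    return img.clamp(0.0, 1.0)
```

In an Expectation-over-Transformation-style training loop, the detector loss would then be averaged over many such sampled transformations, so the perturbation that gets optimized is one that remains effective under blur and resolution changes.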

research · 01/09/2018
Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos
We propose a new real-world attack against the computer vision based sys...

research · 10/09/2020
Targeted Attention Attack on Deep Learning Models in Road Sign Recognition
Real world traffic sign recognition is an important step towards buildin...

research · 08/14/2023
ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal and Robust Vehicle Evasion
Adversarial camouflage has garnered attention for its ability to attack ...

research · 02/18/2018
DARTS: Deceiving Autonomous Cars with Toxic Signs
Sign recognition is an integral part of autonomous cars. Any misclassifi...

research · 04/27/2023
Detection of Adversarial Physical Attacks in Time-Series Image Data
Deep neural networks (DNN) have become a common sensing modality in auto...

research · 05/09/2021
Learning Image Attacks toward Vision Guided Autonomous Vehicles
While adversarial neural networks have been shown successful for static ...

research · 03/02/2022
Clean-Annotation Backdoor Attack against Lane Detection Systems in the Wild
We present the first backdoor attack against the lane detection systems ...
