DARTS: Deceiving Autonomous Cars with Toxic Signs

02/18/2018
by Chawin Sitawarin, et al.

Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine realistic security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). Leveraging the concept of adversarial examples, we modify innocuous signs and advertisements in the environment in such a way that they seem normal to human observers but are interpreted as the adversary's desired traffic sign by autonomous cars. Further, we pursue a fundamentally different perspective on attacking autonomous cars, motivated by the observation that the driver and the vehicle-mounted camera see the environment from different angles (the camera commonly views the road from a higher angle, e.g., from the top of the car). We propose a novel attack against vehicular sign recognition systems: we create signs that change as they are viewed from different angles, and thus can be interpreted differently by the driver and the sign recognition system. We extensively evaluate the proposed attacks under various conditions: different distances, lighting conditions, and camera angles. We first examine our attacks virtually, i.e., we check whether digital images of toxic signs can deceive the sign recognition system. We then investigate the effectiveness of the attacks in real-world settings: we print toxic signs, install them in the environment, capture videos using a vehicle-mounted camera, and process them using our sign recognition pipeline.
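The core mechanism the abstract describes, perturbing a benign sign or advertisement until a classifier reads it as an attacker-chosen traffic sign, follows the standard targeted adversarial-example recipe. Below is a minimal sketch (not the authors' code) of a targeted projected-gradient-descent attack against a hypothetical PyTorch sign classifier; `craft_toxic_sign`, `classifier`, and the hyperparameters are illustrative assumptions, and the paper's physical attack additionally accounts for real-world variation such as distance, viewing angle, and lighting.

```python
# Illustrative sketch only: targeted PGD against a hypothetical
# pre-trained sign classifier. Not the DARTS authors' implementation.
import torch
import torch.nn.functional as F

def craft_toxic_sign(classifier, image, target_class,
                     eps=8 / 255, steps=40, step_size=1 / 255):
    """Targeted PGD: nudge `image` toward `target_class` inside an L-inf ball."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(classifier(x_adv), target_class)
        grad, = torch.autograd.grad(loss, x_adv)
        # Step *down* the loss so the prediction moves toward the target class.
        x_adv = x_adv.detach() - step_size * grad.sign()
        # Keep the change inconspicuous: project back into the eps-ball
        # around the original image, then into the valid pixel range.
        x_adv = image + (x_adv - image).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

For example, given a batched advertisement image `ad` of shape (1, 3, H, W) and the index of the "stop sign" class, `craft_toxic_sign(model, ad, torch.tensor([stop_idx]))` returns an image that still looks like the advertisement to a human but that the model classifies as a stop sign, mirroring the "toxic sign" idea in the abstract.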

