Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos

01/09/2018
by Chawin Sitawarin, et al.

We propose a new real-world attack against the computer-vision-based systems of autonomous vehicles (AVs). Our novel Sign Embedding attack exploits the concept of adversarial examples to modify innocuous signs and advertisements in the environment such that they are classified as the adversary's desired traffic sign with high confidence. Our attack greatly expands the scope of the threat posed to AVs, since adversaries are no longer restricted to modifying existing traffic signs as in previous work. Our attack pipeline generates adversarial samples that are robust to the environmental conditions and noisy image transformations present in the physical world. We ensure this by including a variety of possible image transformations in the optimization problem used to generate adversarial samples. We verify the robustness of the adversarial samples by printing them out and carrying out drive-by tests that simulate the conditions under which image capture would occur in a real-world scenario. We experimented with physical attack samples at different distances, lighting conditions, and camera angles. In addition, extensive evaluations were carried out in the virtual setting for a variety of image transformations. The adversarial samples generated using our method have adversarial success rates in excess of 95%.
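To make the optimization step concrete, below is a minimal sketch of how adversarial samples can be made robust to physical-world conditions by averaging the attack loss over randomly sampled image transformations (an Expectation-over-Transformation-style objective). This is not the authors' released code: the model, the transformation ranges, and all hyperparameters are illustrative assumptions chosen for readability.

```python
# Illustrative sketch only; transformation ranges, steps, lr, eps, and
# n_samples are assumptions, not the paper's reported values.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def random_transform(img):
    """Apply random physical-world-like transformations (rotation and
    brightness jitter) to a (C, H, W) image tensor in [0, 1]."""
    angle = float(torch.empty(1).uniform_(-15.0, 15.0))
    img = TF.rotate(img, angle)
    factor = float(torch.empty(1).uniform_(0.7, 1.3))
    img = TF.adjust_brightness(img, factor)
    return img


def sign_embedding_attack(model, base_sign, target_class,
                          steps=500, lr=0.01, eps=0.3, n_samples=8):
    """Optimize a bounded perturbation on an innocuous sign or logo so
    that the classifier assigns it the target traffic-sign class under
    many random transformations."""
    delta = torch.zeros_like(base_sign, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        loss = 0.0
        # Average the targeted loss over randomly transformed copies so
        # the perturbation works across viewing conditions.
        for _ in range(n_samples):
            adv = (base_sign + delta).clamp(0, 1)
            logits = model(random_transform(adv).unsqueeze(0))
            loss = loss + F.cross_entropy(logits, target)
        loss = loss / n_samples
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation within an L-infinity budget.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (base_sign + delta).clamp(0, 1).detach()


# Hypothetical usage (placeholders, not the paper's setup):
#   model = ...  # a trained traffic-sign classifier, e.g. on GTSRB
#   logo = ...   # (3, H, W) tensor in [0, 1] of an innocuous ad/logo
#   adv_sign = sign_embedding_attack(model, logo, target_class=14)
```

The key design point is that the loss is an average over a distribution of transformations rather than a single clean image, which is what makes the printed sample survive the distance, lighting, and angle variations tested in the drive-by experiments.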


