Fooling a Real Car with Adversarial Traffic Signs

06/30/2019
by   Nir Morgulis, et al.

Attacks on neural-network-based classifiers using adversarial images have gained a lot of attention recently. An adversary can purposely generate an image that is indistinguishable from an innocent image to a human being but is incorrectly classified by a neural network. Adversarial images do not need to be tuned to a particular classifier architecture: an image that fools one network can fool another one with a certain success rate. The published works mostly concentrate on the use of modified image files for attacks against classifiers trained on model databases. Although there is a general understanding that such attacks can be carried out in the real world as well, works considering real-world attacks are scarce. Moreover, to the best of our knowledge, there have been no reports on attacks against real production-grade image classification systems. In our work we present a robust pipeline for reproducible production of adversarial traffic signs that can fool a wide range of classifiers, both open-source and production-grade, in the real world. The efficiency of the attacks was checked both with neural-network-based classifiers and with legacy computer vision systems. Most of the attacks were performed in black-box mode, i.e., the adversarial signs produced for a particular classifier were used to attack a variety of other classifiers. The efficiency was confirmed in drive-by experiments with the production-grade traffic sign recognition system of a real car.
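The abstract refers to gradient-based generation of adversarial images. As a hedged illustration only (the paper's actual pipeline, models, and parameters are not shown here), the sketch below applies the classic fast gradient sign method (FGSM) to a toy linear softmax classifier: the input is nudged by a small epsilon in the direction that increases the loss of the predicted class. All weights, inputs, and the epsilon value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(x, W, b, true_label, epsilon):
    """Shift x by epsilon along the sign of the loss gradient w.r.t. the
    input, then clip back into the valid pixel range [0, 1]."""
    p = softmax(W @ x + b)
    onehot = np.eye(len(b))[true_label]
    # Cross-entropy gradient w.r.t. the input of a linear model:
    # dL/dx = W^T (softmax(Wx + b) - onehot(true_label))
    grad = W.T @ (p - onehot)
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))    # 3 toy "sign classes", 8 input features
b = np.zeros(3)
x = rng.uniform(size=8)        # a toy "image" with pixels in [0, 1]
y = int(np.argmax(W @ x + b))  # use the model's own prediction as the label

x_adv = fgsm_perturb(x, W, b, y, epsilon=0.3)
print("confidence before:", softmax(W @ x + b)[y])
print("confidence after: ", softmax(W @ x_adv + b)[y])
```

For a linear model the loss is convex in the input, so moving along the gradient sign direction is guaranteed not to increase the true-class confidence; on deep networks the same one-step perturbation often flips the prediction outright, which is what transfer (black-box) attacks exploit.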


