Targeted Attention Attack on Deep Learning Models in Road Sign Recognition

10/09/2020
by Xinghao Yang, et al.

Real-world traffic sign recognition is an important step towards building autonomous vehicles, most of which depend heavily on Deep Neural Networks (DNNs). Recent studies have demonstrated that DNNs are surprisingly susceptible to adversarial examples. Many attack methods have been proposed to understand and generate adversarial examples, such as gradient-based, score-based, decision-based, and transfer-based attacks. However, most of these algorithms are ineffective for real-world road sign attacks, because (1) iteratively learning perturbations for each frame is not realistic for a fast-moving car, and (2) most optimization algorithms traverse all pixels equally without considering their diverse contributions. To alleviate these problems, this paper proposes the Targeted Attention Attack (TAA) method for real-world road sign attacks. Specifically, we make the following contributions: (1) we leverage a soft attention map to highlight important pixels and skip zero-contribution areas, which also helps to generate natural perturbations; (2) we design an efficient universal attack that optimizes a single perturbation/noise over a set of training images under the guidance of the pre-trained attention map; (3) we design a simple objective function that can be easily optimized; and (4) we evaluate the effectiveness of TAA on real-world data sets. Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method. Additionally, TAA exhibits desirable properties such as transferability and generalization capability. We provide code and data to ensure reproducibility: https://github.com/AdvAttack/RoadSignAttack.
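To make the attention-guided universal attack concrete, below is a minimal PyTorch sketch of the idea described in the abstract. It is not the authors' implementation (see the linked repository for that): `model`, `images`, `attention_map`, and `target_class` are assumed inputs, the attention map is taken as precomputed with values in [0, 1], and a plain targeted cross-entropy loss with an L2 penalty stands in for the paper's objective function.

    # Illustrative sketch only, not the TAA reference code. A single
    # perturbation is shared by all training images and is applied only
    # where the (precomputed) soft attention map is non-zero, so
    # zero-contribution pixels are skipped.
    import torch
    import torch.nn.functional as F

    def attention_guided_universal_attack(model, images, attention_map,
                                          target_class, steps=200,
                                          lr=0.01, reg_weight=0.05):
        """Optimize one universal perturbation, masked by attention."""
        # One noise tensor for the whole image set (broadcast over the batch).
        delta = torch.zeros_like(images[0], requires_grad=True)
        optimizer = torch.optim.Adam([delta], lr=lr)
        target = torch.full((images.size(0),), target_class,
                            dtype=torch.long, device=images.device)

        for _ in range(steps):
            # Mask the perturbation with the attention map, then keep
            # adversarial images in the valid [0, 1] pixel range.
            adv = torch.clamp(images + attention_map * delta, 0.0, 1.0)
            logits = model(adv)
            # Simple objective: push all images toward the target class
            # while an L2 term keeps the perturbation small and natural.
            loss = F.cross_entropy(logits, target) + reg_weight * delta.norm(p=2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        return delta.detach()

Because a single `delta` is optimized over many frames, the resulting perturbation does not need to be relearned per frame, which is the property that makes this style of attack plausible against a moving vehicle.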


Related research

01/17/2022
Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems
Adversarial Examples (AEs) can deceive Deep Neural Networks (DNNs) and h...

08/17/2019
Nesterov Accelerated Gradient and Scale Invariance for Improving Transferability of Adversarial Examples
Recent evidence suggests that deep neural networks (DNNs) are vulnerable...

07/17/2023
Adversarial Attacks on Traffic Sign Recognition: A Survey
Traffic sign recognition is an essential component of perception in auto...

03/08/2022
Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon
Estimating the risk level of adversarial examples is essential for safel...

11/27/2021
Adaptive Perturbation for Adversarial Attack
In recent years, the security of deep learning models achieves more and ...

04/20/2021
Staircase Sign Method for Boosting Adversarial Attacks
Crafting adversarial examples for the transfer-based attack is challengi...

01/29/2019
RED-Attack: Resource Efficient Decision based Attack for Machine Learning
Due to data dependency and model leakage properties, Deep Neural Network...
