Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles

08/06/2021
by Jindi Zhang, et al.

In recent years, many deep learning models have been adopted in autonomous driving. At the same time, these models introduce new vulnerabilities that may compromise the safety of autonomous vehicles. Specifically, recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models. Although driving safety is the ultimate concern for autonomous driving, there is no comprehensive study on the linkage between the performance of deep learning models and the driving safety of autonomous vehicles under adversarial attacks. In this paper, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than the detection precision of deep learning models. In particular, we consider two state-of-the-art models in vision-based 3D object detection, Stereo R-CNN and DSGN. To evaluate driving safety, we propose an end-to-end evaluation framework with a set of driving safety performance metrics. By analyzing the results of our extensive evaluation experiments, we find that (1) the attack's impact on the driving safety of autonomous vehicles and the attack's impact on the precision of 3D object detectors are decoupled, and (2) the DSGN model demonstrates stronger robustness to adversarial attacks than the Stereo R-CNN model. In addition, we further investigate the causes behind the two findings with an ablation study. The findings of this paper provide a new perspective to evaluate adversarial attacks and guide the selection of deep learning models in autonomous driving.
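To make the first attack type concrete, the sketch below shows a standard L-infinity projected gradient descent (PGD) perturbation attack against a generic detector. This is a minimal illustration under assumptions, not the paper's implementation: the `model(images, targets)` interface returning a scalar detection loss is a hypothetical stand-in, and the attacks evaluated in the paper operate on the stereo-image inputs of Stereo R-CNN and DSGN.

```python
import torch

def pgd_perturbation(model, images, targets, eps=8/255, alpha=2/255, steps=10):
    """L_inf PGD perturbation attack sketch.

    Assumes `model(images, targets)` returns a scalar detection loss
    (a hypothetical interface for illustration). Maximizes that loss
    while keeping the perturbation within an eps ball around the input.
    """
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = model(adv, targets)           # detection loss to maximize
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()  # ascend the loss surface
            adv = images + (adv - images).clamp(-eps, eps)  # L_inf projection
            adv = adv.clamp(0.0, 1.0)        # stay in valid pixel range
        adv = adv.detach()
    return adv
```

In the paper's framing, the key question is not how far such a perturbation degrades detection precision, but whether the resulting misdetections change driving decisions enough to compromise safety, which is what the proposed end-to-end evaluation framework measures.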


