Too Afraid to Drive: Systematic Discovery of Semantic DoS Vulnerability in Autonomous Driving Planning under Physical-World Attacks

01/12/2022
by Ziwen Wan, et al.

In high-level Autonomous Driving (AD) systems, behavioral planning is in charge of making high-level driving decisions such as cruising and stopping, and is thus highly security-critical. In this work, we perform the first systematic study of semantic security vulnerabilities specific to overly-conservative AD behavioral planning, i.e., behaviors that can cause failed or significantly-degraded mission performance, which can be critical for AD services such as robo-taxi/delivery. We call them semantic Denial-of-Service (DoS) vulnerabilities, which we envision to be most generally exposed in practical AD systems due to the tendency toward conservativeness to avoid safety incidents. To achieve high practicality and realism, we assume that the attacker can only introduce seemingly-benign external physical objects into the driving environment, e.g., off-road dumped cardboard boxes. To systematically discover such vulnerabilities, we design PlanFuzz, a novel dynamic testing approach that addresses various problem-specific design challenges. Specifically, we propose and identify planning invariants as novel testing oracles, and design new input generation to systematically enforce problem-specific constraints on attacker-introduced physical objects. We also design a novel behavioral planning vulnerability distance metric to effectively guide the discovery. We evaluate PlanFuzz on 3 planning implementations from practical open-source AD systems and find that it can effectively discover 9 previously-unknown semantic DoS vulnerabilities without false positives. We find all our new designs necessary, as without each design, statistically significant performance drops are generally observed. We further perform exploitation case studies using simulation and real-vehicle traces. We discuss root causes and potential fixes.
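At a high level, the approach described in the abstract couples constrained input generation (placements of benign-looking physical objects) with a planning-invariant oracle and a vulnerability distance metric that steers the search. The Python sketch below is only a conceptual illustration of that loop under assumed semantics: every name (PhysicalObject, run_planner, violates_planning_invariant, vulnerability_distance, mutate) and every constraint value is a hypothetical stand-in, not the paper's actual implementation or API.

# Conceptual sketch of a PlanFuzz-style testing loop (hypothetical names and
# constraints; not the authors' implementation). Idea: mutate seemingly-benign
# off-road physical objects under placement constraints, run the behavioral
# planner, and use a planning-invariant oracle plus a vulnerability distance
# metric to guide the search toward semantic DoS behaviors.
import random
from dataclasses import dataclass

@dataclass
class PhysicalObject:
    x: float     # longitudinal offset along the road (m)
    y: float     # lateral offset; larger means farther off-road (m)
    size: float  # bounding-box edge length (m)

def satisfies_constraints(obj: PhysicalObject) -> bool:
    """Problem-specific constraints (illustrative values): the object must stay
    off the drivable lane and be physically realizable, e.g., box-sized."""
    return obj.y >= 1.5 and 0.2 <= obj.size <= 1.0

def run_planner(scenario_objects) -> dict:
    """Placeholder for invoking the behavioral planner on a scenario.
    A real harness would replay the scenario through the planner module
    (e.g., in simulation) and report the resulting driving decision."""
    return {"decision": "cruise", "progress": 1.0}  # stub result

def violates_planning_invariant(result: dict) -> bool:
    """Oracle: the ego vehicle should not stop or stall when no object
    actually blocks its path (a planning invariant)."""
    return result["decision"] == "stop" and result["progress"] < 0.1

def vulnerability_distance(result: dict) -> float:
    """Guidance metric: how close the planner is to an overly-conservative
    decision; smaller means closer to violating the invariant."""
    return result["progress"]

def mutate(obj: PhysicalObject) -> PhysicalObject:
    """Small random perturbation of an existing object placement."""
    return PhysicalObject(
        x=obj.x + random.uniform(-2, 2),
        y=max(1.5, obj.y + random.uniform(-0.5, 0.5)),
        size=min(1.0, max(0.2, obj.size + random.uniform(-0.1, 0.1))))

def planfuzz_like_loop(iterations: int = 1000):
    # Start from a random seed placement, then greedily keep the candidate
    # with the smallest vulnerability distance (closest to a violation).
    seed = PhysicalObject(x=random.uniform(0, 50), y=random.uniform(1.5, 4.0),
                          size=random.uniform(0.2, 1.0))
    best_dist = vulnerability_distance(run_planner([seed]))
    for _ in range(iterations):
        cand = mutate(seed)
        if not satisfies_constraints(cand):
            continue
        result = run_planner([cand])
        if violates_planning_invariant(result):
            return cand  # candidate semantic DoS vulnerability found
        dist = vulnerability_distance(result)
        if dist < best_dist:
            seed, best_dist = cand, dist  # distance metric guides the search
    return None

if __name__ == "__main__":
    print(planfuzz_like_loop())

In the paper's setting, the planner under test would be one of the three open-source planning implementations, exercised through simulation or trace replay; the stub above only marks where such a harness would plug in.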


Related research

07/26/2023  Lateral-Direction Localization Attack in High-Level Autonomous Driving: Domain-Specific Defense Opportunity via Lane Detection
Localization in high-level Autonomous Driving (AD) systems is highly sec...

06/18/2020  Drift with Devil: Security of Multi-Sensor Fusion based Localization in High-Level Autonomous Driving under GPS Spoofing (Extended Version)
For high-level Autonomous Vehicles (AV), localization is highly security...

10/25/2022  DriveFuzz: Discovering Autonomous Driving Bugs through Driving Quality-Guided Fuzzing
Autonomous driving has become real; semi-autonomous driving vehicles in ...

12/12/2021  Multi-Agent Vulnerability Discovery for Autonomous Driving with Hazard Arbitration Reward
Discovering hazardous scenarios is crucial in testing and further improv...

08/23/2023  Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack
In autonomous driving (AD), accurate perception is indispensable to achi...

05/14/2023  Systematic Meets Unintended: Prior Knowledge Adaptive 5G Vulnerability Detection via Multi-Fuzzing
The virtualization and softwarization of 5G and NextG are critical enabl...

12/04/2018  Verlässliche Software im 21. Jahrhundert
Software is the main innovation driver in many different areas, like clo...
