Multi-Agent Vulnerability Discovery for Autonomous Driving with Hazard Arbitration Reward

12/12/2021
by Weilin Liu, et al.

Discovering hazardous scenarios is crucial for testing and further improving driving policies. However, efficient driving-policy testing faces two key challenges. On the one hand, the probability of naturally encountering hazardous scenarios is low when testing a well-trained autonomous driving strategy, so discovering these scenarios through purely real-world road testing is extremely costly. On the other hand, a proper determination of accident responsibility is necessary for this task: collecting scenarios with wrongly attributed responsibility leads to an overly conservative autonomous driving strategy. More specifically, we aim to discover hazardous scenarios that are autonomous-vehicle responsible (AV-responsible), i.e., the vulnerabilities of the under-test driving policy. To this end, this work proposes a Safety Test framework by finding AV-Responsible Scenarios (STARS) based on multi-agent reinforcement learning. STARS guides other traffic participants to produce AV-responsible scenarios and make the under-test driving policy misbehave by introducing a Hazard Arbitration Reward (HAR). HAR enables our framework to discover diverse, complex, and AV-responsible hazardous scenarios. Experimental results against four different driving policies in three environments demonstrate that STARS can effectively discover AV-responsible hazardous scenarios. These scenarios indeed correspond to vulnerabilities of the under-test driving policies and are therefore meaningful for their further improvement.
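
To give a rough sense of the idea behind HAR, the sketch below rewards adversarial traffic agents only for hazards that a responsibility check attributes to the AV under test, and penalizes crashes the adversaries cause themselves. This is a minimal illustration, not the paper's formulation: the field names, thresholds, and the simplified arbitration rule are all assumptions introduced here.

```python
# Illustrative sketch of a Hazard Arbitration Reward (HAR) for adversarial
# traffic agents. The responsibility rule is a simplified placeholder, not the
# paper's actual arbitration logic.

from dataclasses import dataclass

@dataclass
class StepOutcome:
    collision: bool             # did the AV collide this step?
    av_had_right_of_way: bool   # simplified right-of-way flag from the simulator
    npc_cut_in_distance: float  # gap (m) left by the adversarial NPC when cutting in

def av_responsible(outcome: StepOutcome, safe_gap: float = 5.0) -> bool:
    """Simplified arbitration: blame the AV only if the hazard was avoidable,
    i.e. the NPC left a reasonable gap and the AV did not hold right of way."""
    if not outcome.collision:
        return False
    npc_behaved_reasonably = outcome.npc_cut_in_distance >= safe_gap
    return npc_behaved_reasonably and not outcome.av_had_right_of_way

def hazard_arbitration_reward(outcome: StepOutcome,
                              hazard_bonus: float = 10.0,
                              nuisance_penalty: float = -5.0,
                              step_cost: float = -0.01) -> float:
    """Reward for the adversarial NPC agents:
    + bonus for AV-responsible hazards (true vulnerabilities of the policy),
    - penalty for hazards the NPCs caused themselves (uninteresting crashes),
    and a small per-step cost to encourage finding hazards quickly."""
    if outcome.collision:
        return hazard_bonus if av_responsible(outcome) else nuisance_penalty
    return step_cost
```

Penalizing NPC-caused crashes is what steers the adversarial search away from trivially ramming the AV and toward scenarios that expose genuine vulnerabilities of the policy under test.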

Related research

- Adversarial Deep Reinforcement Learning for Trustworthy Autonomous Driving Policies (12/22/2021): Deep reinforcement learning is widely used to train autonomous cars in a...
- CARLA Real Traffic Scenarios – novel training ground and benchmark for autonomous driving (12/16/2020): This work introduces interactive traffic scenarios in the CARLA simulato...
- DriveFuzz: Discovering Autonomous Driving Bugs through Driving Quality-Guided Fuzzing (10/25/2022): Autonomous driving has become real; semi-autonomous driving vehicles in ...
- Failure-Scenario Maker for Rule-Based Agent using Multi-agent Adversarial Reinforcement Learning and its Application to Autonomous Driving (03/26/2019): We examine the problem of adversarial reinforcement learning for multi-a...
- Too Afraid to Drive: Systematic Discovery of Semantic DoS Vulnerability in Autonomous Driving Planning under Physical-World Attacks (01/12/2022): In high-level Autonomous Driving (AD) systems, behavioral planning is in...
- Learning to falsify automated driving vehicles with prior knowledge (01/25/2021): While automated driving technology has achieved a tremendous progress, t...
- Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving (10/11/2016): Autonomous driving is a multi-agent setting where the host vehicle must ...
