Minimizing Safety Interference for Safe and Comfortable Automated Driving with Distributional Reinforcement Learning

07/15/2021
by   Danial Kamran, et al.

Despite recent advances in reinforcement learning (RL), its application in safety-critical domains like autonomous vehicles remains challenging. Although punishing RL agents for risky situations can help them learn safe policies, it may also lead to highly conservative behavior. In this paper, we propose a distributional RL framework to learn adaptive policies that can tune their level of conservativity at run-time based on the desired comfort and utility. Using a proactive safety verification approach, the proposed framework can guarantee that actions generated by RL are fail-safe under worst-case assumptions. Concurrently, the policy is encouraged to minimize safety interference and generate more comfortable behavior. We trained and evaluated the proposed approach and baseline policies using a high-level simulator with a variety of randomized scenarios, including several corner cases that rarely happen in reality but are crucial. Our experiments show that policies learned with distributional RL behave adaptively at run-time and are robust to environment uncertainty. Quantitatively, the learned distributional RL agent drives on average 8 seconds faster than the standard DQN policy and requires 83% less safety interference than the rule-based policy, while only slightly increasing the average crossing time. We also study the sensitivity of the learned policy in environments with higher perception noise and show that our algorithm learns policies that can still drive reliably when the perception noise is twice as high as in the training configuration, for automated merging and crossing at occluded intersections.
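The run-time tunable conservativity described above can be illustrated with a minimal sketch: a distributional agent keeps a set of return quantiles per action, and a risk parameter selects how much of the lower tail to average (a CVaR-style risk measure) before a safety-verification mask vetoes any action that is not fail-safe. The function names, the CVaR-style measure, and the boolean safety mask below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cvar_q_values(quantiles, alpha):
    """Average the lowest alpha-fraction of return quantiles per action.

    quantiles: (n_actions, n_quantiles) array of predicted return quantiles.
    alpha=1.0 recovers the mean (risk-neutral); smaller alpha is more
    conservative, since only the worst part of the return distribution
    is considered. alpha can be changed at run-time without retraining.
    """
    n_quantiles = quantiles.shape[1]
    k = max(1, int(np.ceil(alpha * n_quantiles)))
    tail = np.sort(quantiles, axis=1)[:, :k]  # worst k quantiles per action
    return tail.mean(axis=1)

def select_action(quantiles, safe_mask, alpha):
    """Pick the best action by CVaR among actions verified as fail-safe.

    safe_mask: boolean array from a (hypothetical) proactive safety
    verifier; unsafe actions are vetoed regardless of their value.
    """
    q = cvar_q_values(quantiles, alpha)
    q = np.where(safe_mask, q, -np.inf)  # safety interference: veto unsafe actions
    return int(np.argmax(q))
```

With a risky-but-lucrative action (e.g. quantiles `[-10, 8, 9, 10]`) against a safe one (`[2, 3, 4, 5]`), the risk-neutral setting `alpha=1.0` prefers the first, while a conservative `alpha=0.25` prefers the second; this is the knob that trades utility against conservativity at run-time.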

