
A New Approach for Tactical Decision Making in Lane Changing: Sample Efficient Deep Q Learning with a Safety Feedback Reward

by   M. Ugur Yavas, et al.

Automated lane change is one of the most challenging tasks for highly automated vehicles due to its safety-critical, uncertain and multi-agent nature. This paper presents a novel deployment of a state-of-the-art Q-learning method, namely Rainbow DQN, that uses a new safety-driven rewarding scheme to tackle these issues in a dynamic and uncertain simulation environment. We present various comparative results to show that our novel approach of feeding reward signals back from the safety layer dramatically increases both the agent's performance and its sample efficiency. Furthermore, through the deployment of Rainbow DQN, we show that additional intuition about the agent's actions can be extracted by examining the distributions of the Q values it generates. The proposed algorithm shows superior performance to the baseline algorithm in challenging scenarios with only 200,000 training steps (equivalent to roughly 55 hours of driving).
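The core idea the abstract describes is a reward shaped by feedback from a safety layer: when the safety layer has to override the agent's proposed lane-change action, the agent is penalized. The paper's exact rewarding scheme is not reproduced here, so the following is a minimal sketch under assumed names and a hypothetical penalty weight, not the authors' implementation:

```python
def safety_feedback_reward(base_reward, proposed_action, executed_action,
                           override_penalty=1.0):
    """Shape the task reward using feedback from a safety layer.

    base_reward:     task reward from the environment (e.g. progress, speed)
    proposed_action: lane-change decision output by the DQN agent
    executed_action: action actually applied after the safety-layer check
    override_penalty: assumed penalty magnitude for an unsafe proposal
    """
    # If the safety layer vetoed the agent's action, subtract a penalty so the
    # agent learns to avoid proposing unsafe lane changes in the first place.
    overridden = proposed_action != executed_action
    return base_reward - (override_penalty if overridden else 0.0)


# Usage: the agent proposes a lane change (action 1), but the safety layer
# forces lane keeping (action 0), so the shaped reward drops.
shaped = safety_feedback_reward(base_reward=0.5,
                                proposed_action=1,
                                executed_action=0)
```

Penalizing vetoed proposals gives the agent a dense learning signal about safety violations without ever letting an unsafe action reach the vehicle, which is one plausible mechanism behind the sample-efficiency gains the abstract reports.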




Lane-Change Initiation and Planning Approach for Highly Automated Driving on Freeways

Quantifying and encoding occupants' preferences as an objective function...

Continuous Control for Automated Lane Change Behavior Based on Deep Deterministic Policy Gradient Algorithm

Lane change is a challenging task which requires delicate actions to ens...

Positive Trust Balance for Self-Driving Car Deployment

The crucial decision about when self-driving cars are ready to deploy is...