In a traditional autonomous vehicle (AV) system, after receiving the processed observations coming from the perception system, the ego vehicle performs behavior planning to deal with different scenarios or environments. At the behavior planning level, algorithms generate high-level decisions such as Go, Stop, Follow front vehicle, etc. After that, a lower-level trajectory planning system maps those high-level decisions to trajectories according to map and dynamic object information. Then a lower-level controller outputs the detailed pedal or brake inputs to allow the vehicle to follow these trajectories.
Such hand-engineered, rule-based pipelines appear to describe human-like decision processes well. However, estimating other vehicles' behaviors accurately and adjusting the corresponding decisions to account for changes in the environment is difficult if the decisions of the ego car are completely hand-engineered. This is because the environment can vary across many different dimensions, all relevant to the task of driving, and the number of rules necessary for planning in this nuanced setting can be untenable.
In recent works, RL has been used to solve particular problems by designing states, actions and reward functions in a simulated environment. Related applications within the autonomous vehicle domain include learning an output controller for lane-following, merging into a roundabout, traversing an intersection and lane changing. However, low stability and large computational requirements make RL difficult to apply widely to more general tasks with multiple sub-goals. Applying RL to learn the behavior planning system from scratch not only increases the difficulty of adding or deleting sub-functions within the existing behavior planning system, but also makes it harder to debug problems. A hierarchical structure, structurally similar to the heuristic-based algorithms, is more feasible and can save computation time by learning different functions or tasks separately.
Reinforcement learning (RL) has proven capable of solving for an optimal policy that maps various observations to corresponding actions in complicated scenarios. In traditional RL approaches it is often necessary to train a unique policy for each task the agent may face: to solve a new task, the entire policy must be relearned regardless of how similar the two tasks may be. Our goal in this work is to construct a single planning algorithm based on hierarchical deep reinforcement learning (HRL) which can accomplish behavior planning in an environment where the agent must pursue multiple sub-goals, and to do so in a way in which any sub-goal policy can be reused for subsequent tasks in a modular fashion (see Figure 1). The main contributions of the work are:
A state attention model-based HRL structure.
A hybrid reward function mechanism which can efficiently evaluate the performance among actions of different hierarchical levels.
A hierarchical prioritized experience replay designed for HRL.
II Related Work
This section introduces previous work related to this paper, which can be categorized as follows: 1) papers that address reinforcement learning (RL) and hierarchical reinforcement learning algorithms; 2) papers that propose self-driving behavior planning algorithms.
II-A Reinforcement Learning
Within the context of reinforcement learning, algorithms extending RL and HRL have been proposed. Kulkarni et al. proposed the idea of a meta controller, which defines a policy governing when the lower-level action policy is initialized and terminated. Dietterich introduced a hierarchical Q-learning method called MAXQ, proved its convergence mathematically, and showed experimentally that it can be computed faster than the original Q-learning. Jong and Stone proposed an improved MAXQ method by combining the R-MAX algorithm with MAXQ; it offers both the efficient model-based exploration of R-MAX and the opportunities for abstraction provided by the MAXQ framework. Hausknecht and Stone carried the hierarchical idea over to parameterized action representations, using a DRL algorithm to train high-level parameterized actions and low-level actions together in order to obtain more stable results than learning continuous actions directly.
II-B Behavior Planning of Autonomous Vehicles
Previous work applied heuristic-based and learning-based algorithms to the behavior planning of autonomous vehicles in different scenarios. For example, Baker and Dolan proposed a slot-based approach to check whether a situation is safe for merging into lanes or crossing an intersection with moving traffic. This method is based on information about slots available for merging behavior, which may include the size of the slot in the target lane and the distance between the ego vehicle and the front vehicle. Time-to-collision (TTC) is a heuristic-based algorithm that has normally been applied in intersection scenarios as a baseline. Fuzzy logic is also a popular heuristic-based approach for modeling decision making and behavior planning for autonomous vehicles; in contrast to vanilla heuristic-based algorithms, fuzzy logic allows the uncertainty of the results to enter the decision process. Milanés et al. used a fuzzy logic method to control the traffic flow in urban intersection scenarios, where the vehicles access environment information via a vehicle-to-vehicle (V2V) system. However, the V2V system has only been deployed on a small number of public roads, and few vehicle manufacturers have added V2V functionality to their vehicles. Rastelli and Peñas developed a fuzzy logic method for steering control in roundabout scenarios.
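As a concrete illustration of the TTC baseline mentioned above, here is a minimal sketch; the 3-second braking threshold is an illustrative placeholder, not a value from the cited work:

```python
def time_to_collision(gap_m, ego_speed_mps, front_speed_mps):
    """Seconds until the ego vehicle closes the gap to the front vehicle,
    assuming both hold their current speeds; infinite if not closing."""
    closing_speed = ego_speed_mps - front_speed_mps
    if closing_speed <= 0:
        return float("inf")
    return gap_m / closing_speed

def ttc_decision(gap_m, ego_speed_mps, front_speed_mps, threshold_s=3.0):
    """TTC-style heuristic: brake when TTC falls below a threshold."""
    ttc = time_to_collision(gap_m, ego_speed_mps, front_speed_mps)
    return "BRAKE" if ttc < threshold_s else "GO"
```

Such a rule is cheap to evaluate but, as the next paragraph notes, each new scenario demands more hand-tuned rules of this kind.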
Heuristic-based algorithms require substantial manual effort to design the various rules needed to handle different scenarios in urban environments. As a result, learning-based algorithms, especially reinforcement learning, have been applied to distill multiple rules into a single mapping function or neural network.
Brechtel et al. formulated the decision-making problem for autonomous vehicles under uncertain environments as a POMDP and trained a Bayesian Network representation to deal with a T-shaped intersection merging problem. Sadigh et al. modeled the interaction between autonomous vehicles and human drivers using Inverse Reinforcement Learning (IRL) in a simulated environment; they simulated autonomous vehicles to provoke human drivers' reactions and acquired reward functions in order to plan better decisions while controlling autonomous vehicles. Isele et al. dealt with the traversing problem via Deep Q-Networks combined with a long-term memory component, training a state-action function Q that allows an autonomous vehicle to traverse intersections with moving traffic. Other work used a Deep Recurrent Q-Network (DRQN) with states from a bird's-eye view of the intersection to learn a policy for traversing the intersection, and Isele et al. proposed an efficient strategy for navigating intersections with occlusion using DRL, showing better performance than several heuristic methods.
In our work, the main idea is to combine the heuristic-based decision-making structure with HRL-based approaches in order to integrate the advantages of both methods. We built the HRL structure according to the heuristic method (see Figure 1), so that individual functions in the system are easier to validate than in a single whole-network black box.
This section describes the preliminary background of the problem and introduces the fundamental algorithms: Deep Q-Learning, Double Deep Q-Learning, and Hierarchical Deep Reinforcement Learning (HRL).
III-1 Deep Q-learning
Since being proposed, Deep Q-Networks and Double Deep Q-Networks have been widely applied to reinforcement learning problems. In Q-learning, an action-value function is learned to obtain the optimal policy, i.e., the policy that maximizes the optimal action-value function $Q^*(s,a)$. Hence, a parameterized action-value function $Q(s,a;\theta)$ is used with a discount factor $\gamma$, as in Equation 1:

$Q^*(s,a) = \mathbb{E}\big[\, r + \gamma \max_{a'} Q^*(s',a') \,\big|\, s,a \,\big] \quad (1)$
III-2 Double Deep Q-learning
For the setting of Deep Q-learning, the network parameter $\theta$ is optimized by minimizing the loss function, defined as the difference between the predicted action-value $Q(s,a;\theta)$ and the target action-value $y = r + \gamma \max_{a'} Q(s',a';\theta)$. $\theta$ can be updated with a learning rate $\alpha$, as shown in Equation 2:

$\theta \leftarrow \theta + \alpha\,\big(y - Q(s,a;\theta)\big)\,\nabla_\theta Q(s,a;\theta) \quad (2)$
For the Double Deep Q-learning setting, the target action-value is revised according to a separate target Q-network with parameters $\theta^-$:

$y = r + \gamma\, Q\big(s',\, \operatorname*{argmax}_{a'} Q(s',a';\theta);\, \theta^-\big)$
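The revised target can be sketched analogously; the key point is that action selection uses the online parameters while evaluation uses the target parameters:

```python
import numpy as np

def double_dqn_target(reward, q_online_next, q_target_next, done, gamma=0.99):
    """Double DQN target: the online network (theta) selects the next action,
    the target network (theta^-) evaluates it, curbing overestimation bias."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))            # selection under theta
    return reward + gamma * float(q_target_next[a_star])  # evaluation under theta^-
```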
III-3 Hierarchical Reinforcement Learning
For an HRL model with sequential sub-goals, a meta controller generates the sub-goal for the following steps, and a controller outputs actions conditioned on this sub-goal until the meta controller generates the next sub-goal.
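This meta controller / controller interaction can be sketched as the following loop; the object names (`env`, `meta_controller`, `controller`) are illustrative stand-ins, not the paper's API:

```python
def run_hrl_episode(env, meta_controller, controller, max_steps=1000):
    """Sequential sub-goal HRL: the meta controller picks a sub-goal;
    the controller acts until that sub-goal terminates, then a new
    sub-goal is chosen. Returns the number of environment steps taken."""
    state = env.reset()
    done = False
    steps = 0
    while not done and steps < max_steps:
        goal = meta_controller.select_goal(state)          # option level
        while not done and not env.goal_reached(state, goal) and steps < max_steps:
            action = controller.select_action(state, goal)  # action level
            state, reward, done = env.step(action)
            steps += 1
    return steps
```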
In this section we present our proposed model, which is a hierarchical RL network with an explicit attention model, hybrid reward mechanism and a hierarchical prioritized experience replay training schema. We will refer to this model as Hybrid HRL throughout the paper.
IV-A Hierarchical RL with Attention
Hierarchical structures based on RL can be applied to learn a task with multiple sub-goals. For a hierarchical structure with two levels, an option set is assigned to the first level, whose objective is to select among sub-goals. The option-level weights are updated according to Equation 5.
After selecting an option, the corresponding action set represents the action candidates that can be executed on the second level of the hierarchical structure with respect to the selected option. Some previous work proposed the Hierarchical Markov Decision Process (MDP), which either shares the state set among different hierarchical levels during the MDP or designs different states for changing sub-goals and applies initiation and termination condition sets to transfer from one state set to another.
In many situations, the portion of the state set and the amount of abstraction needed to choose actions can vary widely across levels of this hierarchy. In order to avoid designing a myriad of state representations, one per hierarchy level and sub-goal, we share one state set across the whole hierarchical structure. Meanwhile, an attention model is applied to define the importance of each state element with respect to each hierarchical level and sub-goal, and these weights are then used to reconstruct the state. The corresponding weights are updated according to Equation 6.
When implementing the attention-based HRL, we construct the option network and the action network (Figure 2), which include the attention mechanism as a layer in the action-value network.
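A minimal sketch of such an attention layer follows, assuming fixed per-option logits for illustration; in the paper the attention weights are learned jointly with the action-value network:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def attended_state(state, option_id, attention_logits):
    """Reconstruct the shared state for a given option/sub-goal: a softmax
    over per-option logits scores each state element's importance, and the
    scores reweight the state element-wise."""
    scores = softmax(attention_logits[option_id])
    return scores * np.asarray(state, dtype=float)
```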
IV-B Hybrid Reward Mechanism
For an HRL model with sequential sub-goals, the reward function is designed separately for the sub-goals and the main task: the extrinsic meta reward drives the option-level task, while the intrinsic reward drives the action-level sub-goals. For HRL with parameterized actions, an integrated reward is designed to evaluate the option level and the action level together.
In our work, instead of generating one reward function that jointly evaluates the final outputs of both the option and the action at each step, we designed a reward mechanism that evaluates the quality of the option and the action separately during learning. This hybrid reward mechanism ensures that: 1) the algorithm knows which reward function should be triggered to deliver rewards or penalties; and 2) a positive reward that benefits both the option reward and the action reward occurs if and only if the whole task and the sub-goals in the hierarchical structure have all been completed. Figure 3 illustrates the hybrid reward mechanism.
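A minimal sketch of this mechanism, with hypothetical penalty and bonus magnitudes (the paper's constants are not reproduced here):

```python
def hybrid_reward(option_ok, action_ok, task_done, subgoals_done,
                  option_penalty=-1.0, action_penalty=-1.0, success_bonus=10.0):
    """Hybrid reward sketch: option and action are penalized separately,
    so the learner knows which level caused a failure; a shared positive
    reward fires only when the whole task AND all sub-goals are complete."""
    r_option = 0.0 if option_ok else option_penalty
    r_action = 0.0 if action_ok else action_penalty
    if task_done and subgoals_done:
        r_option += success_bonus
        r_action += success_bonus
    return r_option, r_action
```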
IV-C Hierarchical Prioritized Experience Replay
Schaul et al. propose a framework for replaying experience more efficiently during DQN training: stored transitions with higher TD-error in the previous training iteration have a higher probability of being selected in the mini-batch for the current iteration. However, in the HRL structure, the rewards received from the whole system not only depend on the current level but are also affected by interactions among the hierarchical levels.
For the transitions stored during the HRL process, the central observation is that if the option-value network chooses wrongly, i.e., there is a large error between the predicted option-value and the target option-value, then the success or failure of the corresponding action-value network is inconsequential for the current transition. We therefore propose hierarchical prioritized experience replay (HPER), in which option-level priorities are based directly on the option-level error, while action-level priorities are based on the difference between the errors from the two levels: higher priority is assigned to an action-level transition when the corresponding option-level transition has lower priority. According to Equations 5 and 6, the transition priorities for the option and action levels are given in Equation 7.
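Under these assumptions, the two-level priorities might be computed as follows; this is a sketch of the stated rule, not the paper's exact Equation 7:

```python
def hper_priorities(option_td_error, action_td_error, eps=1e-6):
    """Hierarchical PER sketch: option-level priority follows its own
    TD-error; action-level priority is the action error beyond what the
    option level already explains, so action transitions are sampled most
    when the option choice was sound."""
    p_option = abs(option_td_error) + eps
    p_action = max(abs(action_td_error) - abs(option_td_error), 0.0) + eps
    return p_option, p_action
```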
In this section, we apply the proposed algorithm to the behavior planning of a self-driving car and make comparisons with competing methods.
(Table I reports, for each method: average option and action rewards, steps, step penalties for unsmoothness and unsafe behavior, and performance rates for collision, not stopping, timeout, and success.)
We tested our algorithm in MSC's VIRES VTD, a complete simulation tool-chain for driving applications. We designed a task in which an autonomous vehicle (green box) intends to stop at the stop-line behind a random number of front vehicles (pink boxes) with random initial positions and behavior profiles (see Figure 4). The two sub-goals in this scenario are STOP AT STOP-LINE (SSL) and FOLLOW FRONT VEHICLE (FFV).
The state which is used to formulate the hierarchical deep reinforcement learning includes the information of the ego car, which is useful for both sub-goals, and the related information that is needed for each sub-goal.
Equation 8 describes our state space, where $v$, $a$, and $j$ are respectively the velocity, acceleration, and jerk of the ego car, while $d_{fv}$ and $d_{sl}$ denote the distances from the ego car to the nearest front vehicle and to the stop-line, respectively. A safety-distance parameter $d_{safe}$ is introduced as a nominal distance behind the target object, which can improve safety under different sub-goals.
Here $a_{max}$ and $d_{min}$ denote the ego car's maximum deceleration and minimum allowable distance to the front vehicle, respectively, and $d_{fv} - d_{safe}$ and $d_{sl} - d_{safe}$ are the distances that the ego car can close (the distance to the target minus the target's safety distance). The initial positions of the front vehicles and the ego car are randomly selected.
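A sketch of how the Equation 8 state vector might be assembled; the variable names and element ordering here are assumptions for illustration:

```python
def build_state(v, a, j, d_front, d_stop, d_safe):
    """Illustrative state vector: ego velocity, acceleration and jerk,
    plus safety-adjusted distances to the nearest front vehicle and
    to the stop-line."""
    return [v, a, j, d_front - d_safe, d_stop - d_safe]
```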
V-B2 Option and Action
The option network in the scenario outputs the selected sub-goal: SSL or FFV. Then, according to the option result, the action network generates the throttle or brake choices.
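This option-then-action dispatch can be sketched as follows; the network objects are illustrative stand-ins for the trained option and action networks:

```python
def plan_step(state, option_net, action_nets):
    """One planning step: the option network picks the sub-goal ("SSL" or
    "FFV"), then the corresponding action network maps the state to a
    throttle/brake command."""
    option = option_net(state)           # sub-goal selection
    action = action_nets[option](state)  # e.g. a throttle or brake level
    return option, action
```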
V-B3 Reward Functions
Assume that for one step the selected option is denoted as $o_t$, $o_t \in \{\text{SSL}, \text{FFV}\}$. The reward function is given by:
For each step:
Time penalty.
Unsmoothness penalty if the jerk is too large.
Unsafe penalty.
For the termination conditions:
Collision penalty.
Penalty for not stopping at the stop-line.
where the penalty magnitudes are constants and each condition enters through an indicator function, which equals 1 if and only if the corresponding condition is satisfied and 0 otherwise.
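Putting the step penalties together, a sketch with hypothetical coefficient values (the paper's constants are not reproduced here):

```python
def step_reward(jerk_too_large, unsafe, collided, missed_stop,
                c_time=-0.01, c_jerk=-0.1, c_unsafe=-0.5,
                c_collision=-10.0, c_not_stop=-5.0):
    """Per-step reward assembled from indicator terms; every coefficient
    is a hypothetical placeholder. Each indicator contributes its penalty
    iff its condition holds; terminal penalties fire at episode end."""
    r = c_time                                # time penalty every step
    r += c_jerk * float(jerk_too_large)       # unsmoothness
    r += c_unsafe * float(unsafe)             # unsafe following distance
    r += c_collision * float(collided)        # terminal: collision
    r += c_not_stop * float(missed_stop)      # terminal: ran the stop-line
    return r
```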
Assume that for one step the selected option is denoted as $o_t$ and the unselected option as $\bar{o}_t$, with $o_t, \bar{o}_t \in \{\text{SSL}, \text{FFV}\}$:
where the shared term represents the portion of the reward that is common to the option-level and action-level rewards.
For comparison, we also formulate the problem without a hierarchical model, via Double DQN; a single reward then evaluates the task over this flattened action space.
We compare the proposed algorithm with four rule-based algorithms and some traditional RL algorithms mentioned before. Table I shows the quantitative results for testing the average performance of each algorithm over 100 cases.
(Ablation components compared: Hybrid Reward, Hierarchical PER, Attention Model.)
The competing methods include:
Rule 1: stick to the option Follow Front Vehicle (FFV).
Rule 2: stick to the option Stop at Stop-line (SSL).
Rule 3: if its hand-tuned switching condition on the state holds, select FFV; otherwise SSL.
Rule 4: same as Rule 3, with an alternative switching condition.
Figure 5 compares the Hybrid HRL method with different setups of the HRL algorithm. The results show that the hybrid reward mechanism performs better with the help of the hierarchical PER approach.
Figure 6 depicts a typical case of the relative speed and position of the ego vehicle with respect to the nearest front vehicle as they both approach the stop-line. In the bottom graph we see that the ego vehicle tends to close the distance to the front vehicle until a certain threshold (about 5 meters) before lowering its speed relative to the front vehicle to keep a buffer between them. In the top graph we see that during this time the front vehicle begins to slow rapidly for the stop-line at around 25 meters out before coasting to a stop. Simultaneously, the ego vehicle opts to focus on stopping for the stop-line until it is within a certain threshold of the front vehicle, at which point it attends to the front vehicle instead. Finally, after a pause, the front vehicle accelerates through the stop-line, and at this point the ego vehicle immediately begins focusing on the stop sign once again, as desired.
Figure 7 shows the results extracted from the attention softmax layer; only the two state elements with the highest attention weights are visualized. The upper sub-figure shows the relationship between the distance to the nearest front vehicle (y-axis) and the distance to the stop-line (x-axis); the lower sub-figure shows the attention values. When the ego car is approaching the front vehicle, the attention is mainly focused on the distance to the front vehicle. When the front vehicle leaves without stopping at the stop-line, the ego car shifts more and more attention to the distance to the stop-line as it approaches the stop-line.
For the scenario of approaching the intersection with front vehicles, one option is to manually design all the rules. Another is to design a rule-based policy for stopping at the stop-line, which is relatively easy to model, and then train a DDQN model (see Figure 8 for the training process) as the policy for following front vehicles. Based on these two action-level models, we train another DDQN model (see Figure 9 for the training process) as the policy governing which option is needed for approaching the stop-line with front vehicles. During training, after every training epoch, the simulation tests 500 epochs without action exploration using the trained network. By applying the proposed hybrid HRL, all option-level and action-level policies can be trained together (see Figure 10 for the training process), and the trained policies can be separated if the target task only needs one of the sub-goals. For example, the action-value network of Following Front Vehicle can be used alone with the corresponding option input to the network; the ego car then follows the front vehicle without stopping at the stop-line.
In this paper, we proposed three extensions to hierarchical deep reinforcement learning aimed at improving convergence speed, sample efficiency and scalability over traditional RL approaches. Preliminary results suggest our algorithm is a promising candidate for future research as it is able to outperform a suite of hand-engineered rules on a simulated autonomous driving task in which the agent must pursue multiple sub-goals in order to succeed.
The authors would like to thank S. Bilal Mehdi of General Motors Research & Development for his assistance in implementing the VTD simulation environment used in our experiments.
-  S. Jin, Z.-y. Huang, P.-f. Tao, and D.-h. Wang, “Car-following theory of steady-state traffic flow using time-to-collision,” Journal of Zhejiang University-SCIENCE A, vol. 12, no. 8, pp. 645–654, 2011.
-  D. N. Lee, “A theory of visual control of braking based on information about time-to-collision,” Perception, vol. 5, no. 4, pp. 437–459, 1976.
-  V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.
-  T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” arXiv preprint arXiv:1509.02971, 2015.
-  D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., “Mastering the game of go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, 2016.
-  T. D. Kulkarni, K. Narasimhan, A. Saeedi, and J. Tenenbaum, “Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation,” in Advances in neural information processing systems, 2016, pp. 3675–3683.
-  T. G. Dietterich, “The maxq method for hierarchical reinforcement learning.” in ICML, vol. 98. Citeseer, 1998, pp. 118–126.
-  N. K. Jong and P. Stone, “Hierarchical model-based reinforcement learning: R-max + MAXQ,” in Proceedings of the 25th International Conference on Machine Learning. ACM, 2008, pp. 432–439.
-  R. I. Brafman and M. Tennenholtz, “R-max-a general polynomial time algorithm for near-optimal reinforcement learning,” Journal of Machine Learning Research, vol. 3, no. Oct, pp. 213–231, 2002.
-  W. Masson, P. Ranchod, and G. Konidaris, “Reinforcement learning with parameterized actions,” in AAAI, 2016, pp. 1934–1940.
-  C. R. Baker and J. M. Dolan, “Traffic interaction in the urban challenge: Putting boss on its best behavior,” in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2008, pp. 1752–1758.
-  V. Milanés, J. Pérez, E. Onieva, and C. González, “Controller for urban intersections based on wireless communications and fuzzy logic,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 1, pp. 243–248, 2009.
-  J. P. Rastelli and M. S. Peñas, “Fuzzy logic steering control of autonomous vehicles inside roundabouts,” Applied Soft Computing, vol. 35, pp. 662–669, 2015.
-  S. Brechtel, T. Gindele, and R. Dillmann, “Probabilistic decision-making under uncertainty for autonomous driving using continuous pomdps,” in 17th International IEEE Conference on Intelligent Transportation Systems (ITSC). IEEE, 2014, pp. 392–399.
-  D. Sadigh, S. Sastry, S. A. Seshia, and A. D. Dragan, “Planning for autonomous cars that leverage effects on human actions.” in Robotics: Science and Systems, vol. 2. Ann Arbor, MI, USA, 2016.
-  A. Y. Ng, S. J. Russell et al., “Algorithms for inverse reinforcement learning,” in ICML, vol. 1, 2000, p. 2.
-  D. Isele, A. Cosgun, and K. Fujimura, “Analyzing knowledge transfer in deep q-networks for autonomously handling multiple intersections,” arXiv preprint arXiv:1705.01197, 2017.
-  D. Isele, A. Cosgun, K. Subramanian, and K. Fujimura, “Navigating intersections with autonomous vehicles using deep reinforcement learning,” arXiv preprint arXiv:1705.01196, 2017.
-  D. Isele, R. Rahimi, A. Cosgun, K. Subramanian, and K. Fujimura, “Navigating occluded intersections with autonomous vehicles using deep reinforcement learning,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 2034–2039.
-  H. Van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double Q-learning,” in Thirtieth AAAI Conference on Artificial Intelligence, 2016.
-  V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, p. 529, 2015.
-  T. Schaul, J. Quan, I. Antonoglou, and D. Silver, “Prioritized experience replay,” arXiv preprint arXiv:1511.05952, 2015.
-  M. Hausknecht and P. Stone, “Deep reinforcement learning in parameterized action space,” arXiv preprint arXiv:1511.04143, 2015.
-  “VTD homepage,” 2019. [Online]. Available: https://vires.com/vtd-vires-virtual-test-drive