Hierarchical Reinforcement Learning Method for Autonomous Vehicle Behavior Planning

11/09/2019, by Zhiqian Qiao et al.

In this work, we propose a hierarchical reinforcement learning (HRL) structure which is capable of performing autonomous vehicle planning tasks in simulated environments with multiple sub-goals. In this hierarchical structure, the network is capable of 1) learning one task with multiple sub-goals simultaneously; 2) extracting attention over the states according to the changing sub-goals during the learning process; 3) reusing the well-trained networks of sub-goals for other similar tasks with the same sub-goals. The states are defined as processed observations transmitted from the perception system of the autonomous vehicle. A hybrid reward mechanism is designed for the different hierarchical layers in the proposed HRL structure. Compared to traditional RL methods, our algorithm is more sample-efficient, since its modular design allows reusing the policies of sub-goals across similar tasks. The results show that the proposed method converges to an optimal policy faster than traditional RL methods.


I Introduction

In a traditional autonomous vehicle (AV) system, after receiving the processed observations coming from the perception system, the ego vehicle performs behavior planning to deal with different scenarios or environments. At the behavior planning level, algorithms generate high-level decisions such as Go, Stop, Follow front vehicle, etc. After that, a lower-level trajectory planning system maps those high-level decisions to trajectories according to map and dynamic object information. Then a lower-level controller outputs the detailed pedal or brake inputs to allow the vehicle to follow these trajectories.

At first glance, among algorithms generating behavior decisions, rule-based algorithms [1][2] appear to describe human-like decision processes well. However, estimating other vehicles' behaviors accurately and adjusting the corresponding decisions to account for changes in the environment is difficult if the decisions of the ego car are completely hand-engineered. This is because the environment can vary across many different dimensions, all relevant to the task of driving, and the number of rules necessary for planning in this nuanced setting can be untenable.

An alternative method is reinforcement learning (RL) [3][4][5]. In recent works, RL has been used to solve particular problems by designing states, actions and reward functions in a simulated environment. For example, related applications within the autonomous vehicle domain include learning an output controller for lane-following, merging into a roundabout, traversing an intersection and lane changing. However, low stability and large computational requirements make RL difficult to apply widely to more general tasks with multiple sub-goals. Applying RL to learn the behavior planning system from scratch not only increases the difficulty of adding or deleting sub-functions within the existing behavior planning system, but also makes it harder to debug problems. A hierarchical structure that is structurally similar to the heuristic-based algorithms is more feasible and can save computation time by learning different functions or tasks separately.

Fig. 1: Heuristic-based structure vs. HRL-based structure

Reinforcement learning (RL) has proven capable of finding an optimal policy that maps various observations to corresponding actions in complicated scenarios. In traditional RL approaches it is often necessary to train a unique policy for each task the agent may face. In order to solve a new task, the entire policy must be relearned, regardless of how similar the two tasks may be. Our goal in this work is to construct a single planning algorithm based on hierarchical deep reinforcement learning (HRL) which can accomplish behavior planning in an environment where the agent must pursue multiple sub-goals, and to do so in a way in which any sub-goal policies can be reused for subsequent tasks in a modular fashion (see Figure 1). The main contributions of this work are:

  • A state attention model-based HRL structure.

  • A hybrid reward function mechanism which can efficiently evaluate the performance among actions of different hierarchical levels.

  • A hierarchical prioritized experience replay designed for HRL.

II Related Work

This section introduces previous work related to this paper, which can be categorized as follows: 1) papers that address reinforcement learning (RL) and hierarchical reinforcement learning algorithms; 2) papers that propose self-driving behavior planning algorithms.

II-A Reinforcement Learning

Within the context of reinforcement learning, algorithms extending RL and HRL have been proposed. [6] proposed the idea of a meta controller, which defines a policy governing when the lower-level action policy is initialized and terminated. [7] introduced a hierarchical Q-learning method called MAXQ, proved its convergence mathematically, and showed experimentally that it can be computed faster than original Q-learning. [8] proposed an improved MAXQ method by combining the R-MAX [9] algorithm with MAXQ; it has both the efficient model-based exploration of R-MAX and the opportunities for abstraction provided by the MAXQ framework. [10] transferred the idea of the hierarchical model into parameterized action representations, using a DRL algorithm to train high-level parameterized actions and low-level actions together in order to obtain more stable results than learning the continuous actions directly.

II-B Behavior Planning of Autonomous Vehicles

Previous work has applied heuristic-based and learning-based algorithms to the behavior planning of autonomous vehicles in different scenarios. For example, [11] proposed a slot-based approach to check whether a situation is safe for merging into a lane or crossing an intersection with moving traffic. This method is based on information about the slots available for the merging behavior, which may include the size of the slot in the target lane and the distance between the ego vehicle and the front vehicle. Time-to-collision (TTC) [2] is a heuristic-based algorithm which is commonly applied in intersection scenarios as a baseline. Fuzzy logic is also a popular heuristic-based approach for modeling decision making and behavior planning for autonomous vehicles. In contrast to vanilla heuristic-based algorithms, fuzzy logic allows the uncertainty of the results to be incorporated into the decision process. [12] used a fuzzy logic method to control the traffic flow in urban intersection scenarios, where the vehicles have access to environment information via vehicle-to-vehicle (V2V) communication. However, the V2V system has only been deployed on a small number of public roads, and few vehicle manufacturers have added V2V functionality to their vehicles. In [13], the researchers developed a fuzzy logic method for steering control in roundabout scenarios.

Heuristic-based algorithms require substantial human effort to design the various rules needed to deal with different scenarios in urban environments. As a result, learning-based algorithms, especially reinforcement learning, have been applied to distill multiple rules into a single mapping function or neural network.

[14] formulated the decision-making problem for autonomous vehicles under uncertain environments as a POMDP and trained a Bayesian Network representation to deal with a T-shaped intersection merging problem.

[15] modeled the interaction between autonomous vehicles and human drivers using Inverse Reinforcement Learning (IRL) [16] in a simulated environment. The work used simulated autonomous vehicles to elicit human drivers' reactions and acquired reward functions in order to plan better decisions when controlling autonomous vehicles. [17] dealt with the traversing problem via Deep Q-Networks combined with a long-term memory component, training a state-action function Q that allows an autonomous vehicle to traverse intersections with moving traffic. [18] used a Deep Recurrent Q-Network (DRQN) with states from a bird's-eye view of the intersection to learn a policy for traversing it. [19] proposed an efficient strategy to navigate through intersections with occlusion using a DRL method; their results showed better performance compared to several heuristic methods.

In our work, the main idea is to combine the heuristic-based decision-making structure with HRL-based approaches in order to integrate the advantages of both methods. We build the HRL structure according to the heuristic method (see Figure 1), so that individual functions in the system can be validated more easily than in a monolithic neural-network black box.

III Preliminaries

In this section, the preliminary background of the problem is described. The fundamental algorithms, including Deep Q-Learning [3], Double Deep Q-Learning [20] and Hierarchical Deep Reinforcement Learning (HRL) [6], are introduced.

III-A Deep Q-Learning

Since being proposed, Deep Q-Networks and Double Deep Q-Networks have been widely applied to reinforcement learning problems. In Q-learning, an action-value function $Q(s, a)$ is learned in order to obtain the optimal policy, which maximizes the action-value function. Hence, a parameterized action-value function $Q(s, a; \theta)$ is used with a discount factor $\gamma$, as in Equation 1.

$$Q(s_t, a_t; \theta) = \mathbb{E}\left[\, r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta) \;\middle|\; s_t, a_t \,\right] \qquad (1)$$

III-B Double Deep Q-Learning

In the Deep Q-learning setting, the network parameter $\theta$ is optimized by minimizing the loss function $L(\theta)$, which is defined as the difference between the predicted action-value $Q(s_t, a_t; \theta)$ and the target action-value $y_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta)$. The parameter $\theta$ can be updated with a learning rate $\alpha$, as shown in Equation 2.

$$\theta \leftarrow \theta + \alpha \left[\, y_t - Q(s_t, a_t; \theta) \,\right] \nabla_{\theta} Q(s_t, a_t; \theta) \qquad (2)$$

In the Double Deep Q-learning setting, the target action-value $y_t$ is revised according to a separate target Q-network with parameter $\theta^-$:

$$y_t = r_t + \gamma\, Q\!\left(s_{t+1}, \arg\max_{a'} Q(s_{t+1}, a'; \theta);\, \theta^-\right) \qquad (3)$$

During the training procedure, techniques such as the $\epsilon$-greedy approach [21] and prioritized experience replay [22] can be applied to improve training performance.
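As an illustration of Equations 2 and 3, the following minimal NumPy sketch computes Double DQN targets and an $\epsilon$-greedy action choice. The function and argument names (q_online, q_target, etc.) are placeholders for this example rather than names from the paper.

```python
import numpy as np

def double_dqn_targets(rewards, next_states, dones, q_online, q_target, gamma=0.99):
    """Double DQN target (Equation 3): the online network selects the argmax
    action, while the target network evaluates its value."""
    q_next_online = q_online(next_states)            # shape: (batch, n_actions)
    q_next_target = q_target(next_states)            # shape: (batch, n_actions)
    best_actions = np.argmax(q_next_online, axis=1)
    evaluated = q_next_target[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated

def epsilon_greedy(q_values, epsilon, rng):
    """Epsilon-greedy exploration: random action with probability epsilon,
    otherwise the greedy action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```

Here `rng` is expected to be a `numpy.random.Generator` (e.g. `np.random.default_rng(0)`), and `q_online` / `q_target` are any callables returning per-action value estimates.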

III-C Hierarchical Reinforcement Learning

In the HRL model of [6] with sequential sub-goals, a meta controller generates a sub-goal $g_t$ for the following steps, and a controller outputs actions based on this sub-goal until the next sub-goal is generated by the meta controller. The meta controller's option-value function can be written as in Equation 4, where $N$ is the number of steps executed under sub-goal $g_t$.

$$Q^{meta}(s_t, g_t) = \mathbb{E}\left[\, \sum_{k=0}^{N-1} r_{t+k} + \gamma \max_{g'} Q^{meta}(s_{t+N}, g') \;\middle|\; s_t, g_t \,\right] \qquad (4)$$
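The interaction between the two levels can be summarized by the following hedged Python sketch; `env`, `meta_controller` and `controller` and their methods are hypothetical stand-ins used only to illustrate the control flow described above.

```python
def run_episode(env, meta_controller, controller):
    """One episode of hierarchical control: the meta controller picks a sub-goal,
    the controller acts on (state, sub-goal) until that sub-goal terminates,
    then control returns to the meta controller."""
    state, done = env.reset(), False
    while not done:
        sub_goal = meta_controller.select(state)            # epsilon-greedy over Q(s, g)
        extrinsic_return = 0.0
        while not done and not env.sub_goal_terminated(state, sub_goal):
            action = controller.select(state, sub_goal)     # epsilon-greedy over Q(s, g, a)
            state, extrinsic_r, intrinsic_r, done = env.step(action)
            controller.observe(state, sub_goal, action, intrinsic_r)   # trains the controller
            extrinsic_return += extrinsic_r
        meta_controller.observe(state, sub_goal, extrinsic_return)     # trains the meta controller
    return state
```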

IV Methodology

In this section we present our proposed model, a hierarchical RL network with an explicit attention model, a hybrid reward mechanism and a hierarchical prioritized experience replay training scheme. We refer to this model as Hybrid HRL throughout the paper.

IV-A Hierarchical RL with Attention

Fig. 2: Hierarchical RL Option and Action Q-Network. FC stands for a fully connected layer. Linear activation functions are used in the last layers of both the Option-Value and Action-Value networks; ReLU activation functions are applied in the remaining layers.

Hierarchical structures based on RL can be applied to learn a task with multiple sub-goals. For a hierarchical structure with two levels, an option set $O$ is assigned to the first level, whose objective is to select among sub-goals. The weight $\theta^o$ of the option-value network $Q^o$ is updated according to Equation 5.

$$\theta^o \leftarrow \theta^o + \alpha \left[\, r^o_t + \gamma \max_{o'} Q^o(s_{t+1}, o'; \theta^{o-}) - Q^o(s_t, o_t; \theta^o) \,\right] \nabla_{\theta^o} Q^o(s_t, o_t; \theta^o) \qquad (5)$$

After selecting an option $o_t$, the corresponding action set $A_{o_t}$ represents the action candidates that can be executed on the second level of the hierarchical structure with respect to the selected option. Some previous work proposed hierarchical Markov Decision Processes (MDPs) which either share one state set $S$ among the different hierarchical levels, or design different states for the changing sub-goals and apply initiation and termination condition sets to transfer from one state set to another.

In many situations, the portion of the state set and the amount of abstraction needed to choose actions at different levels of this hierarchy can vary widely. In order to avoid designing a myriad of state representations corresponding to each hierarchy level and sub-goal, we share one state set for the whole hierarchical structure. Meanwhile, an attention model is applied to define the importance of each state element with respect to each hierarchical level and sub-goal, and these weights are used to reconstruct the state $\tilde{s}_t$. The weight $\theta^a$ of the action-value network $Q^a$ is updated according to Equation 6.

$$\theta^a \leftarrow \theta^a + \alpha \left[\, r^a_t + \gamma \max_{a'} Q^a(\tilde{s}_{t+1}, o_t, a'; \theta^{a-}) - Q^a(\tilde{s}_t, o_t, a_t; \theta^a) \,\right] \nabla_{\theta^a} Q^a(\tilde{s}_t, o_t, a_t; \theta^a) \qquad (6)$$

When implementing the attention-based HRL, we construct the option network $Q^o$ and the action network $Q^a$ (Figure 2), with the attention mechanism included as a layer inside the action-value network $Q^a$.
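As a rough illustration of this architecture, the PyTorch sketch below conditions an attention layer on the selected option and uses the resulting weights to rescale the shared state before the action-value head. The layer sizes, names and the exact way the option is injected are assumptions for this example, not the paper's reported design.

```python
import torch
import torch.nn as nn

class AttentionActionQNet(nn.Module):
    """Action-value network Q_a(s~, o, a): an option-conditioned attention layer
    weights the elements of the shared state, and the reweighted state is fed
    through fully connected layers with a linear output layer (cf. Figure 2)."""
    def __init__(self, state_dim, n_options, n_actions, hidden=64):
        super().__init__()
        self.attention = nn.Linear(state_dim + n_options, state_dim)
        self.fc1 = nn.Linear(state_dim + n_options, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, n_actions)     # linear activation on the last layer

    def forward(self, state, option_onehot):
        x = torch.cat([state, option_onehot], dim=-1)
        weights = torch.softmax(self.attention(x), dim=-1)   # importance of each state element
        weighted_state = weights * state                      # reconstructed state s~
        h = torch.relu(self.fc1(torch.cat([weighted_state, option_onehot], dim=-1)))
        h = torch.relu(self.fc2(h))
        return self.out(h)                                    # Q-value per action
```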

IV-B Hybrid Reward Mechanism

For a sequential sub-goals HRL model [6], the reward function is designed separately for the sub-goals and the main task: the extrinsic meta reward is responsible for the option-level task, while the intrinsic reward is responsible for the action-level sub-goals. For HRL with parameterized actions [23], an integrated reward is designed to evaluate both the option level and the action level together.

Fig. 3: Hybrid Reward Mechanism

In our work, instead of generating one reward function that jointly evaluates the final outputs of both options and actions in a single step, we design a reward mechanism that evaluates the goodness of the option and the action separately during the learning procedure. The resulting hybrid reward mechanism ensures that: 1) the algorithm knows which reward function should be triggered to receive rewards or penalties; and 2) a positive reward that benefits both the option reward and the action reward occurs if and only if the whole task and the sub-goals in the hierarchical structure have all been completed. Figure 3 illustrates the hybrid reward mechanism.
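A minimal Python sketch of this logic is given below, assuming the failure-attribution rule of Algorithm 2 (a failed sub-goal penalizes the option level when the option choice was wrong, and the action level otherwise); the penalty and bonus constants are placeholders.

```python
def hybrid_reward(step_penalty, failed_subgoals, task_success,
                  wrong_option_penalty=-1.0, bad_action_penalty=-1.0, success_bonus=1.0):
    """Return (option_reward, action_reward) for one step.

    failed_subgoals: list of (subgoal, option_was_correct) pairs for the
    sub-goals that failed at this step."""
    r_option = r_action = step_penalty              # shared per-step penalties (e.g. time)
    for _subgoal, option_was_correct in failed_subgoals:
        if not option_was_correct:
            r_option += wrong_option_penalty        # blame the option-level choice
        else:
            r_action += bad_action_penalty          # blame the low-level action
    if task_success:                                # only full success rewards both levels
        r_option += success_bonus
        r_action += success_bonus
    return r_option, r_action
```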

IV-C Hierarchical Prioritized Experience Replay

In [22] the authors propose a framework for replaying experience more efficiently during DQN training, so that stored transitions with a higher TD error in the previous training iteration have a higher probability of being selected for the mini-batch in the current iteration. However, in the HRL structure the rewards received from the whole system not only rely on the current level, but are also affected by the interactions among the different hierarchical levels.

For the transitions stored during the HRL process, the central observation is that if the output of the option-value network is chosen wrongly, due to a high error between the predicted option-value and the target option-value, then the success or failure of the corresponding action-value network is inconsequential for the current transition. As a result, we propose a hierarchical prioritized experience replay (HPER) in which the priorities at the option level are based on the option-level error directly, while the priorities at the lower level are based on the difference between the errors coming from the two levels. Higher priority is assigned to the action-level experience replay when the corresponding option-level transition has lower priority. With $\delta^o_t$ and $\delta^a_t$ denoting the TD errors of the option-value and action-value networks in Equations 5 and 6, the transition priorities for the option and action levels are given in Equation 7.

$$p^o_t = \left|\delta^o_t\right|, \qquad p^a_t = \left|\delta^a_t\right| - \left|\delta^o_t\right| \qquad (7)$$
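Under this reading of Equation 7, the priority computation can be sketched as follows; the clamping to positive values mirrors Algorithm 3, and the exponent handling follows standard prioritized replay rather than any paper-specific constants.

```python
import numpy as np

def hierarchical_priorities(option_td_errors, action_td_errors, eps=1e-3):
    """Option-level priority follows its own TD error; action-level priority is the
    action error minus the option error, so transitions whose option was already
    chosen poorly receive less weight at the action level. Both are clamped > 0."""
    p_option = np.abs(option_td_errors) + eps
    p_action = np.maximum(np.abs(action_td_errors) - np.abs(option_td_errors), 0.0) + eps
    return p_option, p_action

def sampling_probabilities(priorities, alpha=0.6):
    """Convert priorities into mini-batch sampling probabilities, as in
    prioritized experience replay."""
    scaled = np.asarray(priorities, dtype=float) ** alpha
    return scaled / scaled.sum()
```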

Based on the aforementioned approaches, Hybrid HRL is summarized in Algorithms 1, 2 and 3.

1:procedure HRL-AHR()
2:      Initialize the option and action networks $Q^o$, $Q^a$ with weights $\theta^o$, $\theta^a$, and the target option and action networks with weights $\theta^{o-}$, $\theta^{a-}$.
3:      Construct an empty replay buffer $D$ with maximum memory length $N_D$.
4:      for episode $= 1$ to the number of training epochs do
5:            Get the initial state $s_t$.
6:            while $s_t$ is not the terminal state do
7:                 Select option $o_t$ based on $\epsilon$-greedy; $o_t$ is the selected sub-goal that the lower-level action will execute.
8:                 Apply the attention model to state $s_t$ based on the selected option $o_t$: $\tilde{s}_t = \mathrm{attention}(s_t, o_t)$.
9:                 Select action $a_t$ based on $\epsilon$-greedy.
10:                 Execute $a_t$ in simulation to get the rewards $r^o_t$, $r^a_t$ and the next state $s_{t+1}$.
11:                 $s_t \leftarrow s_{t+1}$.
12:                 Store the transition $(s_t, o_t, a_t, r^o_t, r^a_t, s_{t+1})$ into $D$.
13:            Train the networks from the buffer $D$ (Algorithm 3).
14:            if episode mod $K = 0$ then
15:                 Test without action exploration, using the weights from the training results, for several epochs and save the average rewards.
Algorithm 1 Hierarchical RL with Attention State
1:procedure HybridReward()
2:      Penalize $r^o$ and $r^a$ with the regular step penalties (e.g., the time penalty).
3:      for each sub-goal $g$ in the sub-goal candidates do
4:            if $g$ fails then
5:                 if the selected option $o \neq g$ then
6:                       Penalize the option reward $r^o$
7:                 else
8:                       Penalize the action reward $r^a$
9:      if the task succeeds (all sub-goals succeed) then
10:            Reward both $r^o$ and $r^a$.
Algorithm 2 Hybrid Reward Mechanism
1:procedure ReplayBuffer()
2:      Given: mini-batch size $k$, training size $T$, and the prioritization and importance-sampling exponents.
3:      Sample transitions for the option and action mini-batches according to the priorities $p^o$, $p^a$.
4:      Compute the importance-sampling weights $w^o$, $w^a$.
5:      Update the transition priorities according to Equation 7.
6:      Adjust the transition priorities to be greater than 0.
7:      Perform gradient descent to update $\theta^o$, $\theta^a$ according to the sample weights $w^o$, $w^a$.
8:      Update the target network weights $\theta^{o-}$, $\theta^{a-}$.
Algorithm 3 Hierarchical Prioritized Experience Replay

V Experiment

In this section, we apply the proposed algorithm to the behavior planning of a self-driving car and make comparisons with competing methods.

| Policy | Option Reward | Action Reward | Step | Unsmoothness | Unsafe | Collision | Not Stop | Timeout | Success |
|---|---|---|---|---|---|---|---|---|---|
| Rule 1 | -36.82 | -9.11 | 112 | 0.38 | 8.05 | 18% | 82% | 0% | 0% |
| Rule 2 | -28.69 | 0.33 | 53 | 0.32 | 6.41 | 89% | 0% | 0% | 11% |
| Rule 3 | 26.42 | 13.62 | 128 | 0.54 | 13.39 | 31% | 0% | 0% | 69% |
| Rule 4 | 40.02 | 17.20 | 149 | 0.58 | 16.50 | 14% | 0% | 0% | 86% |
| Hybrid HRL | 43.52 | 28.87 | 178 | 5.32 | 1.23 | 0% | 7% | 0% | 93% |

TABLE I: Results comparisons among different behavior policies. Option Reward and Action Reward are rewards; Unsmoothness and Unsafe are step penalties; Collision, Not Stop, Timeout and Success are performance rates.

V-A Scenario

We tested our algorithm in MSC's VIRES VTD, a complete simulation tool-chain for driving applications [24]. We designed a task in which an autonomous vehicle (green box) intends to stop at the stop-line behind a random number of front vehicles (pink boxes) which have random initial positions and behavior profiles (see Figure 4). The two sub-goals in this scenario are STOP AT STOP-LINE (SSL) and FOLLOW FRONT VEHICLE (FFV).

V-B Transitions

Fig. 4: Autonomous vehicle (green box) approaching a stop-sign intersection

V-B1 State

The state used to formulate the hierarchical deep reinforcement learning problem includes information about the ego car, which is useful for both sub-goals, together with the related information needed for each sub-goal.

$$s = \left[\, v,\; a,\; j,\; d_f,\; d_s \,\right] \qquad (8)$$

Equation 8 describes our state space, where $v$, $a$ and $j$ are respectively the velocity, acceleration and jerk of the ego car, while $d_f$ and $d_s$ denote the distance from the ego car to the nearest front vehicle and to the stop-line, respectively. A safety distance $d_{safe}$ is introduced as a nominal distance behind the target object, which improves safety under the different sub-goals.

$$\hat{d}_f = d_f - d_{safe}, \qquad \hat{d}_s = d_s - d_{safe} \qquad (9)$$

Here $a_{max}$ and $d_{min}$ denote the ego car's maximum deceleration and minimum allowable distance to the front vehicle, respectively, which determine $d_{safe}$, and $\hat{d}_f$ and $\hat{d}_s$ are the distances that can be chased by the ego car (the distances to the target minus its safety distance). The initial positions of the front vehicles and the ego car are randomly selected.
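For concreteness, the sketch below assembles such a state vector; the safety-distance formula (braking distance at maximum deceleration plus a minimum gap) and the exact composition of the vector are assumptions made for this illustration, not the paper's exact definitions.

```python
def safety_distance(v, max_decel, d_min):
    """Assumed nominal safety gap: distance needed to stop at maximum
    deceleration plus a minimum allowable gap to the target object."""
    return v * v / (2.0 * max_decel) + d_min

def build_state(v, accel, jerk, dist_front, dist_stopline, max_decel=4.0, d_min=2.0):
    """State of Equation 8 combined with the 'chasable' distances of Equation 9
    (raw distances minus the safety distance)."""
    d_safe = safety_distance(v, max_decel, d_min)
    return [v, accel, jerk, dist_front - d_safe, dist_stopline - d_safe]
```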

V-B2 Option and Action

The option network in the scenario outputs the selected sub-goal: SSL or FFV. Then, according to the option result, the action network generates the throttle or brake choices.

V-B3 Reward Functions

Assume that for one step, the selected option is denoted as $o$, with $o \in \{\mathrm{SSL}, \mathrm{FFV}\}$. The reward function is composed as follows.

For each step:

  • Time penalty: $-c_1$.

  • Unsmoothness penalty if the jerk is too large: $-c_2 \, \mathbb{1}[\text{unsmooth}]$.

  • Unsafe penalty: $-c_3 \, \mathbb{1}[\text{unsafe}]$.

For the termination conditions:

  • Collision penalty: $-c_4 \, \mathbb{1}[\text{collision}]$.

  • Not-stop-at-stop-line penalty: $-c_5 \, \mathbb{1}[\text{not stop}]$.

  • Timeout penalty: $-c_6 \, \mathbb{1}[\text{timeout}]$.

  • Success reward: $c_7 \, \mathbb{1}[\text{success}]$,

where $c_1, \ldots, c_7$ are constants and $\mathbb{1}[\cdot]$ are indicator functions: $\mathbb{1}[x] = 1$ if and only if the condition $x$ is satisfied, otherwise $\mathbb{1}[x] = 0$.

Assume that for one step, the selected option is denoted as $o$ and the unselected option as $\bar{o}$, with $o, \bar{o} \in \{\mathrm{SSL}, \mathrm{FFV}\}$. The option-level and action-level rewards are then assembled as:

$$r^o_t = r_{\mathrm{common}} + r_{\bar{o}}, \qquad r^a_t = r_{\mathrm{common}} + r_{o} \qquad (10)$$

where $r_{\mathrm{common}}$ represents the portion of the reward common to both levels, $r_{\bar{o}}$ collects the penalties triggered by the unselected sub-goal $\bar{o}$, and $r_{o}$ collects the penalties triggered by the selected sub-goal $o$.

For comparison, we also formulate the problem without a hierarchical model, using Double DQN directly; in this flattened action space a single reward evaluates the achievement of the overall task.

V-C Results

We compare the proposed algorithm with four rule-based algorithms and some traditional RL algorithms mentioned before. Table I shows the quantitative results for testing the average performance of each algorithm over 100 cases.

Fig. 5: Training results
TABLE II: Different HRL-based policies, distinguished by which of the three components they use: the hybrid reward mechanism, hierarchical PER and the attention model. The four HRL baselines use subsets of these components, while Hybrid HRL uses all three.

The competing methods include:

  • Rule 1: stick to the option Follow Front Vehicle (FFV).

  • Rule 2: stick to the option Stop at Stop-line (SSL).

  • Rule 3: if a hand-designed switching condition holds, select FFV; otherwise SSL.

  • Rule 4: if an alternative hand-designed switching condition holds, select FFV; otherwise SSL.

Table II explains the different HRL-based algorithms whose results are shown in Figure 5.

Figure 5 compares the Hybrid HRL method with the different HRL setups. The results show that the hybrid reward mechanism performs better with the help of the hierarchical PER approach.

Fig. 6: Velocities of ego car and front vehicles

Figure 6 depicts a typical case of the relative speed and position of the ego vehicle with respect to the nearest front vehicle as they both approach the stop-line. In the bottom graph we see that the ego vehicle tends to close the distance to the front vehicle until a certain threshold (about 5 meters) before lowering its speed relative to the front vehicle to keep a buffer between them. In the top graph we see that during this time the front vehicle begins to slow rapidly for the stop-line at around 25 meters out before coming to a stop. Simultaneously, the ego vehicle opts to focus on stopping for the stop-line until it is within a certain threshold of the front vehicle, at which point it attends to the front vehicle instead. Finally, after a pause the front vehicle accelerates through the stop-line, and at this point the ego vehicle immediately begins focusing on the stop sign once again, as desired.

Fig. 7: Attention values extracted from the attention layer in the model; the two visualized elements correspond to the distance to the nearest front vehicle ($d_f$) and the distance to the stop-line ($d_s$) in the introduced state.
Fig. 8: Performance rate when training only the Follow Front Vehicle policy. Training results include the random actions taken during exploration; testing results show the average performance over 200 cases using the network trained up to that epoch.

Figure 7 shows the results extracted from the attention softmax layer. Only the two state elements with the highest attention have been visualized. The upper sub-figure shows the relationship between the distance to the nearest front vehicle (y-axis) and the distance to the stop-line (x-axis); the lower sub-figure shows the attention values. When the ego car is approaching the front vehicle, the attention is mainly focused on $d_f$. When the front vehicle leaves without stopping at the stop-line, the ego car transfers more and more attention to $d_s$ as it approaches the stop-line.

Fig. 9: Performance rate when training only the option-level choice between FFV and SSL, on top of the designed rule-based or trained action-level policies. Testing results show the average performance over 100 cases using the network trained up to that epoch.

For the scenario of approaching an intersection with front vehicles, one method is to manually design all the rules. Another possibility is to design a rule-based policy for stopping at the stop-line, which is relatively easy to model, and then train a DDQN model (see Figure 8 for the training process) as the policy for following front vehicles. Based on these two action-level models, we train another DDQN model (see Figure 9 for the training process) as the policy governing which option is needed for approaching the stop-line with front vehicles. During the training process, after every training epoch, the simulation tests 500 epochs without action exploration based on the trained network. By applying the proposed Hybrid HRL, all the option-level and action-level policies can be trained together (see Figure 10 for the training process), and the trained policy can be separated if the target task only needs to achieve one of the sub-goals. For example, the action-value network of Follow Front Vehicle can be used alone with the corresponding option input to the network; the ego car then follows the front vehicle without stopping at the stop-line.

Fig. 10: Performance rate of Hybrid HRL training process. Results from testing show average performance by testing 500 cases based on the trained network after that training epoch.
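The modular reuse described above could look like the following sketch, where the trained action-value network from Hybrid HRL is queried with a fixed FFV option instead of the option network's output; the option index and function names are illustrative assumptions.

```python
import numpy as np

FFV_OPTION = 1   # illustrative index of the Follow Front Vehicle sub-goal
N_OPTIONS = 2    # SSL and FFV

def ffv_only_policy(action_q_net, state):
    """Reuse the trained action-value network as a standalone car-following policy
    by always feeding the fixed FFV option instead of the option network's choice."""
    option_onehot = np.eye(N_OPTIONS)[FFV_OPTION]
    q_values = action_q_net(state, option_onehot)   # Q(s~, o=FFV, a) from the trained network
    return int(np.argmax(q_values))                 # greedy throttle/brake choice
```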

VI Conclusions

In this paper, we proposed three extensions to hierarchical deep reinforcement learning aimed at improving convergence speed, sample efficiency and scalability over traditional RL approaches. Preliminary results suggest our algorithm is a promising candidate for future research as it is able to outperform a suite of hand-engineered rules on a simulated autonomous driving task in which the agent must pursue multiple sub-goals in order to succeed.

Acknowledgments

The authors would like to thank S. Bilal Mehdi of General Motors Research & Development for his assistance in implementing the VTD simulation environment used in our experiments.

References