Offline Reinforcement Learning for Autonomous Driving with Safety and Exploration Enhancement

10/13/2021
by   Tianyu Shi, et al.
Michigan State University

Reinforcement learning (RL) is a powerful data-driven control method that has been largely explored in autonomous driving tasks. However, conventional RL approaches learn control policies through trial-and-error interactions with the environment and therefore may cause disastrous consequences such as collisions when testing in real-world traffic. Offline RL has recently emerged as a promising framework to learn effective policies from previously collected, static datasets without the requirement of active interactions, making it especially appealing for autonomous driving applications. Despite their promise, existing offline RL algorithms such as Batch-Constrained deep Q-learning (BCQ) generally lead to rather conservative policies with limited exploration efficiency. To address such issues, this paper presents an enhanced BCQ algorithm by employing a learnable parameter noise scheme in the perturbation model to increase the diversity of observed actions. In addition, a Lyapunov-based safety enhancement strategy is incorporated to constrain the explorable state space within a safe region. Experimental results in highway and parking traffic scenarios show that our approach outperforms the conventional RL method, as well as state-of-the-art offline RL algorithms.


1 Introduction

Autonomous driving has received intense research interest over the past two decades as it offers the promise of relieving drivers of exhausting driving tasks. While great advances have been achieved in path planning, perception, and control, high-level decision-making remains a challenge, especially in mixed traffic with complex and dynamic driving environments. Recently, numerous reinforcement learning (RL) approaches have been applied to autonomous driving tasks with promising results sallab2017deep; wang2018automated; shi2019driving; chen2020autonomous; chen2021interpretable; wu2021human. However, conventional RL algorithms learn through interacting with the environment, sometimes via trial-and-error exploratory actions that make the vehicles vulnerable to accidents in real-world traffic.

Offline RL (also known as batch RL) has been proposed as a promising framework to address the safety issue, where agents learn from pre-collected datasets without interacting with the real-world environment. As such, it has received increasing interest in safety-critical applications such as decision making in healthcare, robotics, and autonomous driving levine2020offline. In particular, the Batch-Constrained deep Q-learning (BCQ) algorithm is proposed in fujimoto2019off, where a state-dependent generative model is used to restrict predicted actions to be similar to previously observed ones, tackling the extrapolation error caused by erroneously estimating unseen state-action pairs. In addition, the authors in wu2019behavior exploit a value penalty factor and policy regularization in the value and policy objective functions to regularize the learned policy towards the behavior policy, obtaining worthwhile performance gains over recently proposed offline RL methods. The aforementioned behavior-constrained approaches essentially restrict the learned policy distribution to resemble the dataset so as to mitigate the effects of extrapolation error, which in turn generally drives the agents to act conservatively without efficiently exploring the state and action space fujimoto2019off. This tends to result in poor diversity of seen state-action pairs, which impairs the learning performance.

Learning to explore is an emerging paradigm to address the issue of insufficient exploration fujimoto2019off; lillicrap2015continuous; plappert2017parameter; xu2018learning. For instance, plappert2017parameter has shown improved exploratory behavior by adding additive Gaussian noise to the parameter vectors of three off-policy deep RL algorithms. Deep Deterministic Policy Gradient (DDPG) lillicrap2015continuous has likewise been used to train an exploration policy by adding an auto-correlated noise process to the actor policy. Despite promising results, these approaches apply state-independent noise to enhance exploration, which may not adapt satisfactorily to more diverse environments such as autonomous driving.

In this paper, we build upon the state-of-the-art offline RL algorithm, BCQ, and develop a more efficient RL framework with learnable parameter noise in the perturbation model to enhance exploration and increase the diversity of seen actions. Furthermore, Lyapunov-based safety regulation is adopted to enhance safety during exploration. The main contributions and technical advancements of this paper are summarized as follows.

  1. We build upon BCQ and develop a more efficient and safety-enhanced offline RL framework that is applicable to many safety-critical real-world applications.

  2. A novel learnable parameter noise scheme is employed to enhance the diversity of seen actions and a Lyapunov-based risk factor is constructed to restrict the exploratory state space within the safe region.

  3. We conduct comprehensive experiments on autonomous driving in both highway and parking traffic scenarios, and the results show that our approach consistently outperforms standard RL and several state-of-the-art offline RL algorithms in terms of driving safety and efficiency.

The remainder of this paper is organized as follows. Section 2 briefly introduces the preliminaries of RL, offline RL and Lyapunov stability theory. The proposed offline RL framework with enhanced safety and exploration efficiency is described in Section 3 whereas experiments, results, and discussions are presented in Section 4. Finally, we conclude the paper and discuss future works in Section 5.

2 Background

2.1 Preliminaries of Reinforcement Learning

In an RL setting, the objective is to learn an optimal policy that maximizes the accumulated return $R_t = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k}$, where $r_t$ is the reward at time step $t$ and $\gamma \in [0, 1)$ is the discount factor. More specifically, the agent observes the state $s_t$ of the environment at each time step $t$, and interacts with the environment by performing an action $a_t$ according to a policy $\pi$. The state-action value function (or Q-function) $Q^{\pi}(s, a)$ of a policy $\pi$ is the expected return when following $\pi$ after taking action $a$ in state $s$. The optimal value function $Q^{*}(s, a)$, representing the return of taking action $a$ in state $s$ and thereafter following the optimal policy through greedy action choices, satisfies the following Bellman equation:

$$Q^{*}(s, a) = \mathcal{T} Q^{*}(s, a) = \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\Big[ r(s, a) + \gamma \max_{a'} Q^{*}(s', a') \Big] \qquad (1)$$

where $\mathcal{T}$ denotes the Bellman operator and $p(s' \mid s, a)$ is the transition probability. Off-policy algorithms like Q-learning sutton2018reinforcement; mnih2013playing fit the Q-function with a parametric model $Q_{\theta}(s, a)$ and update the parameters $\theta$ with data sampled from the experience buffer dataset lin1992self. Actor-critic methods like DDPG lillicrap2015continuous adopt two networks: an actor network $\pi_{\phi}$ for policy learning and a critic network $Q_{\theta}$ to reduce variance, where the policy network is updated as:

$$\nabla_{\phi} J(\phi) = \mathbb{E}_{s \sim \mathcal{B}}\Big[ \nabla_{a} Q_{\theta}(s, a)\big|_{a = \pi_{\phi}(s)} \, \nabla_{\phi} \pi_{\phi}(s) \Big] \qquad (2)$$
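As an illustration of these preliminaries, the following minimal PyTorch sketch performs one actor-critic update in the spirit of Eqns. (1)-(2); the network sizes, dimensions, and hyperparameters are illustrative rather than those used in the paper.

```python
import torch
import torch.nn as nn

# Illustrative dimensions and networks (not the paper's exact architecture).
state_dim, action_dim, gamma = 8, 2, 0.99
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def q_value(s, a):
    return critic(torch.cat([s, a], dim=-1))

def update(s, a, r, s_next, done):
    """One actor-critic update on a sampled mini-batch (tensors of shape (N, .))."""
    # Critic: regress Q(s, a) toward the one-step Bellman target (cf. Eqn. (1)).
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * q_value(s_next, actor(s_next))
    critic_loss = ((q_value(s, a) - target) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: deterministic policy gradient, ascend Q(s, pi(s)) (cf. Eqn. (2)).
    actor_loss = -q_value(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```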

2.2 Offline Reinforcement Learning

Offline reinforcement learning is essentially a type of off-policy RL that works on a pre-collected, static dataset $\mathcal{B}$ without the requirement of continuous interactions with the environment levine2020offline; ardoinextracting. Typically, a dataset of unknown quality is first obtained. Batch-Constrained deep Q-learning (BCQ) fujimoto2019off is a state-of-the-art offline RL method that enforces the learned policy to be similar to the behavior policy exhibited in the data. BCQ aims to solve a key challenge in offline RL: the values of unseen state-action pairs are often erroneously estimated (also known as the extrapolation error phenomenon). Towards that end, BCQ samples candidate actions from a generative model (i.e., a VAE kingma2013auto), which is then used to train the policy by producing actions similar to the ones in the observed data batch:

$$\pi(s) = \underset{a_i + \xi_{\phi}(s, a_i, \Phi)}{\operatorname{arg\,max}} \; Q_{\theta}\big(s, a_i + \xi_{\phi}(s, a_i, \Phi)\big), \quad a_i \sim G_{\omega}(s), \; i = 1, \dots, n \qquad (3)$$

where $a_i$ is an action generated from the generative model $G_{\omega}(s)$ and $\xi_{\phi}(s, a_i, \Phi)$ is a perturbation model (with output bounded in $[-\Phi, \Phi]$) added to increase the diversity of seen actions fujimoto2019off. The perturbation model is updated as:

$$\phi \leftarrow \underset{\phi}{\operatorname{arg\,max}} \sum_{(s, a) \in \mathcal{B}} Q_{\theta}\big(s, a + \xi_{\phi}(s, a, \Phi)\big), \quad a \sim G_{\omega}(s) \qquad (4)$$

and the critic network is updated as:

$$\theta \leftarrow \underset{\theta}{\operatorname{arg\,min}} \sum_{(s, a, r, s') \in \mathcal{B}} \big( y - Q_{\theta}(s, a) \big)^{2} \qquad (5)$$

where the target $y$ is a combination of the two target Q-values, $Q_{\theta'_1}$ and $Q_{\theta'_2}$, from the target networks and is defined as:

$$y = r + \gamma \max_{a_i} \Big[ \lambda \min_{j=1,2} Q_{\theta'_j}(s', a_i) + (1 - \lambda) \max_{j=1,2} Q_{\theta'_j}(s', a_i) \Big] \qquad (6)$$

where $\lambda$ is a parameter that controls the uncertainty introduced from future time steps.
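The following sketch illustrates how BCQ's action selection (Eqn. (3)) and the soft-clipped double-Q target (Eqn. (6)) could be computed; `decode` (the VAE decoder), `perturb` (the perturbation model), and the critics `q1`, `q1_t`, `q2_t` are assumed callables, and bounding the perturbation with a scaled tanh is a simplification rather than the exact original implementation.

```python
import torch

def bcq_select_action(s, decode, perturb, q1, n=10, phi_max=0.05):
    """Sample n candidate actions from the generative model, perturb them,
    and pick the one with the highest Q-value (cf. Eqn. (3)). `s` is a single state."""
    s_rep = s.unsqueeze(0).repeat(n, 1)                 # (n, state_dim)
    a = decode(s_rep)                                   # candidate actions from the VAE decoder
    a = a + phi_max * torch.tanh(perturb(s_rep, a))     # bounded perturbation xi
    q = q1(s_rep, a).squeeze(-1)
    return a[q.argmax()]

def bcq_target(s_next, r, done, decode, perturb, q1_t, q2_t,
               n=10, phi_max=0.05, gamma=0.99, lam=0.75):
    """Soft-clipped double-Q target of Eqn. (6); r and done have shape (B, 1)."""
    B = s_next.shape[0]
    s_rep = s_next.repeat_interleave(n, dim=0)          # (B * n, state_dim)
    a = decode(s_rep)
    a = a + phi_max * torch.tanh(perturb(s_rep, a))
    q1, q2 = q1_t(s_rep, a), q2_t(s_rep, a)             # (B * n, 1) each
    q = lam * torch.min(q1, q2) + (1.0 - lam) * torch.max(q1, q2)
    q = q.view(B, n).max(dim=1, keepdim=True).values    # max over sampled candidate actions
    return r + gamma * (1.0 - done) * q
```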

2.3 Lyapunov Stability

Consider the following dynamical system:

$$\dot{x}(t) = f\big(x(t), u(t)\big) \qquad (7)$$

where $x \in \mathcal{D} \subseteq \mathbb{R}^{n}$ is the state vector with $\mathcal{D}$ being the domain, and $u \in \mathbb{R}^{m}$ is the control input vector. The closed-loop system is stable at the origin if for any $\epsilon > 0$ there exists $\delta > 0$ such that if $\|x(0)\| < \delta$ then $\|x(t)\| < \epsilon$ for all $t \geq 0$. Furthermore, the system is asymptotically stable if it is stable and the state goes to zero asymptotically, i.e., $\lim_{t \to \infty} \|x(t)\| = 0$ for all $x(0) \in \mathcal{D}$ chang2019neural.

Lyapunov theory Khalil2002NLsys is a well-studied method to characterize stability conditions. Specifically, the origin of the closed-loop system is asymptotically stable if there exists a continuously differentiable function $V: \mathcal{D} \to \mathbb{R}$ such that

$$V(0) = 0, \quad V(x) > 0 \;\; \forall x \in \mathcal{D} \setminus \{0\}, \quad \mathcal{L}_{f} V(x) < 0 \;\; \forall x \in \mathcal{D} \setminus \{0\} \qquad (8)$$

Here $\mathcal{L}_{f} V$ is the Lie derivative of $V$ along $f$ and is defined as

$$\mathcal{L}_{f} V(x) = \nabla V(x)^{\top} f(x) = \sum_{i=1}^{n} \frac{\partial V}{\partial x_{i}} f_{i}(x) \qquad (9)$$
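The Lyapunov conditions in Eqns. (8)-(9) can be checked numerically with automatic differentiation, as in the following sketch; `V` and `f` are assumed to be differentiable callables (e.g., neural networks) mapping batches of states to values and state derivatives.

```python
import torch

def lie_derivative(V, f, x):
    """L_f V(x) = grad V(x) . f(x), computed with autograd (cf. Eqn. (9))."""
    x = x.clone().requires_grad_(True)
    v = V(x).sum()
    grad_v = torch.autograd.grad(v, x, create_graph=True)[0]   # (N, d)
    return (grad_v * f(x)).sum(dim=-1)                         # (N,)

def check_lyapunov(V, f, x_samples, tol=1e-6):
    """Empirically verify V(x) > 0 and L_f V(x) < 0 on sampled nonzero states (cf. Eqn. (8))."""
    with torch.no_grad():
        positive = (V(x_samples) > tol).all()
    decreasing = (lie_derivative(V, f, x_samples) < -tol).all()
    return bool(positive) and bool(decreasing)
```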

3 Methodology

3.1 Learning to Explore

In BCQ, a perturbation model $\xi_{\phi}$ parameterized by $\phi$ is used to generate a noise signal, which is added to the VAE-generated action to facilitate exploration and increase the diversity of the seen actions. As reported in plappert2017parameter, injecting parameter noise into traditional RL methods can generally promote exploration. As such, we extend the BCQ algorithm by adding learnable parameter noise fortunato2017noisy to the perturbation model, yielding a noisy perturbation model $\xi_{\tilde{\phi}}$. Take a fully-connected layer $y = W x + b$ as an example, where $x$ and $y$ are the input and output features, respectively, and $(W, b)$ are the network parameters. The corresponding layer with perturbation parameter noise is modified as:

$$y = \big( \mu^{w} + \sigma^{w} \odot \varepsilon^{w} \big) x + \big( \mu^{b} + \sigma^{b} \odot \varepsilon^{b} \big) \qquad (10)$$

where the parameters $\mu^{w}, \mu^{b}$ and $\sigma^{w}, \sigma^{b}$ are learnable parameters of the perturbation network. Here, $\varepsilon^{w}$ and $\varepsilon^{b}$ are zero-mean noise random variables, so the scale of the injected noise is learned through back-propagation. The modified perturbation model is thus updated as:

$$\tilde{\phi} \leftarrow \underset{\tilde{\phi}}{\operatorname{arg\,max}} \sum_{(s, a) \in \mathcal{B}} Q_{\theta}\big(s, a + \xi_{\tilde{\phi}}(s, a, \Phi)\big), \quad a \sim G_{\omega}(s) \qquad (11)$$

where $\tilde{\phi}$ denotes the parameters of the new perturbation model after incorporating the learnable noise parameters.
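A minimal PyTorch sketch of a noisy fully-connected layer implementing Eqn. (10) is given below; it uses independent Gaussian noise and an illustrative initialization, so it is a simplified stand-in for the noisy perturbation network rather than the exact implementation.

```python
import math
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer with learnable noise scales (cf. Eqn. (10)):
    y = (mu_w + sigma_w * eps_w) x + (mu_b + sigma_b * eps_b)."""
    def __init__(self, in_features, out_features, sigma_init=0.017):
        super().__init__()
        self.mu_w = nn.Parameter(torch.empty(out_features, in_features))
        self.sigma_w = nn.Parameter(torch.full((out_features, in_features), sigma_init))
        self.mu_b = nn.Parameter(torch.empty(out_features))
        self.sigma_b = nn.Parameter(torch.full((out_features,), sigma_init))
        bound = 1.0 / math.sqrt(in_features)
        nn.init.uniform_(self.mu_w, -bound, bound)
        nn.init.uniform_(self.mu_b, -bound, bound)
        # Sampled (non-trainable) noise variables eps_w, eps_b.
        self.register_buffer("eps_w", torch.zeros(out_features, in_features))
        self.register_buffer("eps_b", torch.zeros(out_features))

    def reset_noise(self):
        self.eps_w.normal_()
        self.eps_b.normal_()

    def forward(self, x):
        w = self.mu_w + self.sigma_w * self.eps_w
        b = self.mu_b + self.sigma_b * self.eps_b
        return nn.functional.linear(x, w, b)
```

During training, `reset_noise()` is called before each forward pass so that a fresh perturbation is drawn while the means and noise scales are updated by back-propagation.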

3.2 Learning to Provide Safety Guarantee

We consider the case where the operation space is defined and restricted based upon the states and actions observed within the static dataset $\mathcal{B}$, and we aim to enhance the BCQ algorithm with guaranteed safety. Towards that end, we adopt a joint learning framework that obtains the system dynamics in Eqn. (7) together with its Lyapunov function; this collective learning scheme ensures system stability according to the Lyapunov stability criterion introduced in Section 2.3. Specifically, we define a “nominal” closed-loop system dynamics $\hat{f}_{\theta}$ and the corresponding Lyapunov function $V$ as two neural networks. From manek2020learning, it follows that:

$$f(x) = \hat{f}_{\theta}(x) - \nabla V(x) \, \frac{\sigma\big( \nabla V(x)^{\top} \hat{f}_{\theta}(x) + \alpha V(x) \big)}{\| \nabla V(x) \|_{2}^{2}} \qquad (12)$$

where the structure of $\hat{f}_{\theta}$ can be conveniently chosen as an arbitrary fully-connected network, whereas the network for Lyapunov function learning is generally chosen as an Input Convex Neural Network (ICNN) amos2017input. Here $\alpha > 0$ is an assigned parameter, and $\sigma$ is a smoothed ReLU activation with a quadratic region on $[0, d]$:

$$\sigma(x) = \begin{cases} 0 & x \leq 0 \\ x^{2} / (2d) & 0 < x < d \\ x - d/2 & x \geq d \end{cases} \qquad (13)$$

By enforcing that $f(x)$ has no positive component along the direction of $\nabla V(x)$, the stability of the learned dynamics $f$ is guaranteed according to the aforementioned Lyapunov stability theory.
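A sketch of this stable-dynamics construction (Eqns. (12)-(13)), following the projection idea of manek2020learning, is shown below; `f_nominal` and `V` are assumed differentiable callables with `V` returning shape (N, 1), and the small constant in the denominator is added only for numerical stability.

```python
import torch

def smoothed_relu(x, d=0.1):
    """Smoothed ReLU with a quadratic region on [0, d] (cf. Eqn. (13))."""
    quad = x ** 2 / (2 * d)
    lin = x - d / 2
    return torch.where(x <= 0, torch.zeros_like(x), torch.where(x < d, quad, lin))

def stable_dynamics(f_nominal, V, x, alpha=0.1, d=0.1):
    """Project the nominal dynamics so that L_f V(x) <= -alpha V(x) (cf. Eqn. (12))."""
    x = x.clone().requires_grad_(True)
    v = V(x).reshape(-1, 1)                                         # (N, 1)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]  # (N, state_dim)
    fx = f_nominal(x)                                               # (N, state_dim)
    excess = smoothed_relu((grad_v * fx).sum(-1, keepdim=True) + alpha * v, d)
    return fx - grad_v * excess / (grad_v.pow(2).sum(-1, keepdim=True) + 1e-8)
```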

Furthermore, in addition to system stability, we also seek to provide safety guarantees for the optimized solution obtained from the exploration policy. According to Eqn. (8), an extended Lyapunov function design can be formulated as the following mini-max based cost function chang2019neural:

$$L(V) = \mathbb{E}_{x \sim \rho(\mathcal{B})} \Big[ \max\big(0, -V(x)\big) + \max\big(0, \mathcal{L}_{f} V(x)\big) \Big] + V(0)^{2} \qquad (14)$$

Note that even though the convexity of the ICNN ensures that $V$ has only a single global optimum amos2017input, it does not guarantee that the optimum is at $x = 0$. To address this issue while avoiding increased computational burden and maintaining convexity, we perform an internal kernel function shift manek2020learning to achieve $V(0) = 0$. In the meantime, a small positive term is added to ensure strict positive-definiteness:

$$V(x) = \sigma\big( g(x) - g(0) \big) + \epsilon \| x \|_{2}^{2} \qquad (15)$$

where $\epsilon$ is a small positive constant and $g$ is an ICNN.
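The following sketch shows one way to realize the Lyapunov function of Eqn. (15); a plain MLP `g` stands in for the ICNN used in the paper, and the hidden size, `eps`, and `d` are illustrative values.

```python
import torch
import torch.nn as nn

class LyapunovFunction(nn.Module):
    """Candidate Lyapunov function V(x) = sigma(g(x) - g(0)) + eps * ||x||^2 (cf. Eqn. (15)).
    A plain MLP g stands in for the ICNN used in the paper."""
    def __init__(self, state_dim, hidden=64, eps=1e-3, d=0.1):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(state_dim, hidden), nn.Softplus(),
                               nn.Linear(hidden, 1))
        self.eps, self.d = eps, d

    def _smoothed_relu(self, z):
        # Smoothed ReLU of Eqn. (13): quadratic on [0, d], linear beyond.
        return torch.where(z <= 0, torch.zeros_like(z),
                           torch.where(z < self.d, z ** 2 / (2 * self.d), z - self.d / 2))

    def forward(self, x):
        shift = self.g(torch.zeros_like(x[:1]))                 # g(0): internal kernel shift
        v = self._smoothed_relu(self.g(x) - shift)
        return v + self.eps * x.pow(2).sum(-1, keepdim=True)    # strict positive-definiteness
```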

In practice, Eqn. (14) can be evaluated as the following empirical Lyapunov risk through Monte Carlo estimation:

$$L_{N}(V) = \frac{1}{N} \sum_{i=1}^{N} \Big[ \max\big(0, -V(x_{i})\big) + \max\big(0, \mathcal{L}_{f} V(x_{i})\big) \Big] + V(0)^{2} \qquad (16)$$

where $x_{i}$ is a state sampled according to the state distribution $\rho$ of the data batch $\mathcal{B}$. Finally, the Lyapunov risk is added to the critic objective as:

$$\theta \leftarrow \underset{\theta}{\operatorname{arg\,min}} \sum_{(s, a, r, s') \in \mathcal{B}} \big( y - Q_{\theta}(s, a) \big)^{2} + L_{N}(V) \qquad (17)$$
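A sketch of the empirical Lyapunov risk of Eqn. (16) and the augmented critic objective of Eqn. (17) is given below; `V`, `f_hat` (the learned closed-loop dynamics), and `q_theta` are assumed callables, and the `weight` on the penalty is an assumption since the exact weighting is not shown here.

```python
import torch

def lyapunov_risk(V, f_hat, x):
    """Empirical Lyapunov risk (cf. Eqn. (16)): penalize V(x) <= 0 and L_f V(x) >= 0."""
    x = x.clone().requires_grad_(True)
    v = V(x).reshape(-1, 1)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    lie = (grad_v * f_hat(x)).sum(-1, keepdim=True)
    zero = torch.zeros(1, x.shape[-1], device=x.device)
    return (torch.relu(-v) + torch.relu(lie)).mean() + V(zero).pow(2).mean()

def critic_loss(q_theta, target_y, s, a, V, f_hat, weight=1.0):
    """Bellman error plus Lyapunov risk penalty (cf. Eqn. (17)); `weight` is illustrative."""
    bellman = (q_theta(s, a) - target_y).pow(2).mean()
    return bellman + weight * lyapunov_risk(V, f_hat, s)
```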

Pseudo-code of the proposed offline RL algorithm with enhanced safety and promoted exploration is summarized in Algorithm 1, and the major changes from the BCQ algorithm are highlighted in blue.

1:  Input: batch of data $\mathcal{B}$, horizon $T$, target network update rate $\tau$, mini-batch size $N$, number of sampled actions $n$, minimum weighting $\lambda$. Initialize Q-networks $Q_{\theta_1}$, $Q_{\theta_2}$, noisy perturbation network $\xi_{\tilde{\phi}}$, VAE $G_{\omega} = \{E_{\omega_1}, D_{\omega_2}\}$ and Lyapunov function $V$ with random parameters, and target networks $Q_{\theta'_1}$, $Q_{\theta'_2}$, $\xi_{\tilde{\phi}'}$ with $\theta'_1 \leftarrow \theta_1$, $\theta'_2 \leftarrow \theta_2$, $\tilde{\phi}' \leftarrow \tilde{\phi}$.
2:  for episode $t = 1$ to $T$ do
3:     Sample a mini-batch of $N$ transitions $(s, a, r, s')$ from $\mathcal{B}$
4:     $\mu, \sigma = E_{\omega_1}(s, a)$, $\tilde{a} = D_{\omega_2}(s, z)$, $z \sim \mathcal{N}(\mu, \sigma)$
5:     $\omega \leftarrow \operatorname{arg\,min}_{\omega} \sum (a - \tilde{a})^{2} + D_{\mathrm{KL}}\big(\mathcal{N}(\mu, \sigma) \,\|\, \mathcal{N}(0, 1)\big)$
6:     Sample $n$ actions: $\{ a_i \sim G_{\omega}(s') \}_{i=1}^{n}$
7:     (Explore efficiently) Generate perturbed actions: $\{ a_i + \xi_{\tilde{\phi}}(s', a_i, \Phi) \}_{i=1}^{n}$
8:     (Guarantee safety) Compute the Lyapunov risk $L_{N}$ according to Eqn. (16)
9:     Compute the value target $y$ (Eqn. (6))
10:     $\theta \leftarrow \operatorname{arg\,min}_{\theta} \sum \big( y - Q_{\theta}(s, a) \big)^{2} + L_{N}(V)$ (Eqn. (17))
11:     $\tilde{\phi} \leftarrow \operatorname{arg\,max}_{\tilde{\phi}} \sum Q_{\theta_1}\big(s, a + \xi_{\tilde{\phi}}(s, a, \Phi)\big)$, $a \sim G_{\omega}(s)$ (Eqn. (11))
12:     Update target networks: $\theta'_j \leftarrow \tau \theta_j + (1 - \tau) \theta'_j$, $\tilde{\phi}' \leftarrow \tau \tilde{\phi} + (1 - \tau) \tilde{\phi}'$
13:  end for
* The major changes from the BCQ algorithm are highlighted in blue.
Algorithm 1 Improved BCQ with safety and exploration enhancement

4 Experiments

4.1 Experimental Setup

We apply our new offline RL framework to autonomous driving tasks, where the open-sourced gym-based highway-env simulator (https://highway-env.readthedocs.io/en/latest/) is adopted as our simulation platform. In this platform, vehicle trajectories are generated based on the kinematic bicycle model polack2017kinematic, and the vehicles take continuous-valued actions for steering and throttle control as defined in highway-env. To collect data for offline RL training, a DDPG agent is trained over 5,000 time steps and its experience buffer is stored as the static dataset. We use the DDPG implementation from stable-baselines (https://stable-baselines.readthedocs.io/en/master/). The proposed approach is evaluated on the following two traffic scenarios.
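A minimal sketch of the data-collection step is shown below, assuming the classic gym API (4-tuple `step`); the environment id and the `behavior_policy` placeholder are illustrative, and in the paper the behavior data come from the trained DDPG agent.

```python
import gym
import highway_env  # noqa: F401  (importing registers the highway-env environments)

def collect_offline_dataset(env_id="parking-v0", n_steps=5000, behavior_policy=None):
    """Roll out a behavior policy and store (s, a, r, s', done) tuples as the
    static dataset B used for offline training. `behavior_policy` stands in for
    the trained DDPG agent; it defaults to random actions for illustration."""
    env = gym.make(env_id)
    buffer = []
    obs = env.reset()
    for _ in range(n_steps):
        if behavior_policy is None:
            action = env.action_space.sample()
        else:
            action = behavior_policy(obs)
        next_obs, reward, done, _ = env.step(action)
        buffer.append((obs, action, reward, next_obs, done))
        obs = env.reset() if done else next_obs
    return buffer
```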

4.1.1 Highway scenario

The highway environment is illustrated in Fig. 1, where the autonomous vehicle (AV, blue) intends to navigate as fast as possible without colliding with the human-driven vehicles (HDVs, green). The AV is expected to make lane changes to overtake slow-moving vehicles whenever possible to achieve higher speed. The reward function is defined as:

$$r = w_{1} \, \frac{v - v_{\min}}{v_{\max} - v_{\min}} - w_{2} \, r_{\text{collision}} \qquad (18)$$

where $v$, $v_{\min}$ and $v_{\max}$ are the current, minimum and maximum speeds of the ego-vehicle, respectively, $r_{\text{collision}}$ indicates whether a collision has occurred, and $w_{1}$ and $w_{2}$ are two weighting coefficients.
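A small sketch of this reward, with illustrative speed limits and weights, could look as follows.

```python
def highway_reward(speed, collided, v_min=20.0, v_max=30.0, w1=0.4, w2=1.0):
    """Reward of Eqn. (18): normalized speed term minus a collision penalty.
    The speed limits and weights here are illustrative values."""
    speed_term = (speed - v_min) / (v_max - v_min)
    return w1 * speed_term - w2 * float(collided)
```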

4.1.2 Parking scenario

Fig. 1 shows the parking scenario, where the objective of the AV is to park within a desired space with an appropriate heading while not colliding with the obstacles (dark green boxes). In this scenario, the reward is defined as:

$$r = -w_{1} \, \| s - s_{g} \|_{1} - w_{2} \, r_{\text{violation}} \qquad (19)$$

where $s = [x, y, \psi]$ represents the current state of the AV whereas $s_{g}$ is the goal position and orientation. The violation term $r_{\text{violation}}$ represents the penalty for hitting obstacles. Here $\psi$ is the heading angle, and $w_{1}$ and $w_{2}$ are two weighting coefficients.
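Assuming the form of Eqn. (19) above, a corresponding sketch with illustrative weights is shown below.

```python
import numpy as np

def parking_reward(state, goal, collided, w1=1.0, w2=5.0):
    """Reward of Eqn. (19): negative distance between the current state [x, y, psi]
    and the goal, minus a penalty for hitting obstacles. Weights are illustrative."""
    distance = np.abs(np.asarray(state) - np.asarray(goal)).sum()
    return -w1 * distance - w2 * float(collided)
```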

Figure 1: Two traffic scenarios: (a) lane-change (highway) scenario; (b) parking scenario.

4.2 Baselines

To demonstrate the effectiveness of our proposed approach, we compare it with a state-of-the-art conventional off-policy RL algorithm (DDPG), as well as BCQ, a state-of-the-art offline RL algorithm, and its noisy variant:

  1. Deep Deterministic Policy Gradient (DDPG) lillicrap2015continuous: DDPG is an off-policy, deterministic, model-free RL algorithm that can handle continuous action spaces. We adapt the implementation from stable-baselines (https://stable-baselines.readthedocs.io/en/master/).

  2. Batch-Constrained deep Q-learning (BCQ) fujimoto2019benchmarking: BCQ is a state-of-the-art offline RL algorithm for continuous control with a state-dependent generative model used to restrict predicted actions to be similar to previously observed ones.

  3. Noisy BCQ: In this version, we extend BCQ by adding only the exploration-promotion strategy to the policy as detailed in Section 3.1, without employing any safety-enhancement scheme.

  4. Ours: The framework extends BCQ by incorporating a new perturbation model with learnable parameter noise as well as a Lyapunov-based safety-enhancement scheme.

For this comparison, we train all algorithms for 200 episodes and evaluate the models every 10 episodes with 5 different random seeds, where the same random seeds are shared among the models. We set the discount factor to 0.7.

4.3 Performance Comparison

4.3.1 Comparison with state-of-the-art benchmarks

The comparisons between the proposed algorithm and the state-of-the-art off-policy and offline algorithms are shown in Fig. 2(a) and Fig. 2(b) for the highway and parking scenarios, respectively. It is clear that our proposed approach consistently outperforms the benchmark algorithms in terms of evaluation returns and training efficiency, which results from the proposed parameter noise injection and safety guarantee schemes that facilitate exploration and enhance system safety. It is also noted that Noisy BCQ outperforms standard BCQ in both traffic scenarios, which demonstrates that adding parameter noise to the perturbation model can promote efficient exploration in BCQ.

Figure 2: Comparison of evaluation returns between the proposed approach and state-of-the-art benchmarks: (a) returns in the highway scenario; (b) returns in the parking scenario.

To show the correlation between state and action pairs, we plot the state-action density for the parking scenario in Fig. 3, where we transform the multi-dimensional state and action features into one-dimensional vectors using principal component analysis (PCA) to visualize the diversity of the observed state-action pairs. It can be seen that BCQ explores rather “cautiously”, covering a very limited portion of the state and action space. In contrast, Noisy BCQ exhibits more efficient and “aggressive” exploration, surveying a much larger state-action space. This demonstrates that the proposed parameter noise injection scheme can effectively promote exploration in BCQ. With the additional Lyapunov-based safety enhancement, our proposed approach visits the same range of the action space as Noisy BCQ but restricts the state space to a reasonable range, striking a good balance between exploration and safety as shown next.

Figure 3: State-action density contours in the parking scenario. Darker colors represent more frequent state-action pairs.
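The density plot could be reproduced along the following lines; a hexbin plot stands in for the contour rendering of Fig. 3, and the helper name is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

def plot_state_action_density(states, actions):
    """Project multi-dimensional states and actions to 1-D with PCA and draw
    a 2-D density of the visited state-action pairs."""
    s1d = PCA(n_components=1).fit_transform(np.asarray(states)).ravel()
    a1d = PCA(n_components=1).fit_transform(np.asarray(actions)).ravel()
    plt.hexbin(s1d, a1d, gridsize=40, cmap="Blues")
    plt.xlabel("state (first principal component)")
    plt.ylabel("action (first principal component)")
    plt.colorbar(label="visit frequency")
    plt.show()
```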

4.3.2 Performance of safety enhancement

To evaluate the proposed safety scheme, we compare our approach with Noisy BCQ, which only has the parameter noise injection scheme without safety enhancement. Fig. 4 shows the minimum distance to the surrounding vehicles in the highway scenario for the two approaches. Our approach maintains a much larger minimum distance than Noisy BCQ, which frequently leads to distances smaller than 5 m. This is because Noisy BCQ only promotes exploration without considering safety.

Figure 4: Comparison of minimum distance between our method and Noisy BCQ: (a) minimum distance with our approach; (b) minimum distance with Noisy BCQ.

Furthermore, we compare our approach with Noisy BCQ in the parking scenario in terms of steering angle, acceleration, and success rate. As shown in Fig. 5, our proposed approach produces a smoother steering angle than Noisy BCQ, whose sharp steering changes are risky and lead to poor ride comfort in real-world driving. The acceleration plots in Fig. 5 indicate that our approach also yields lower accelerations than Noisy BCQ; higher and more oscillatory accelerations cause poor ride comfort and reduce vehicle lifespan. Above all, our approach achieves higher success rates than both BCQ and Noisy BCQ in the parking scenario, as shown in Fig. 6.

Figure 5: Performance comparison on steering and acceleration: (a) steering; (b) acceleration.

Figure 6: Comparison of success rates in the parking scenario.

5 Conclusion and Future Work

In this paper, we developed an efficient and safety-enhanced offline RL framework with application to autonomous driving in highway and parking traffic scenarios. To facilitate exploration, we improved the BCQ algorithm by exploiting learnable parameter noise in the perturbation model. A novel safety scheme was developed using Lyapunov stability theory to enhance safety during exploration. Comprehensive autonomous driving experiments were conducted to compare our approach with several state-of-the-art algorithms, demonstrating that the proposed approach consistently outperforms the benchmark approaches in terms of training efficiency and safety. In our future work, we plan to collect and employ more diverse data, such as data from conventional control methods and real-world data from autonomous vehicles, to further improve the performance.

References