Safety-Guided Deep Reinforcement Learning via Online Gaussian Process Estimation

03/06/2019
by   Jiameng Fan, et al.
Boston University

An important facet of reinforcement learning (RL) has to do with how the agent goes about exploring the environment. Traditional exploration strategies typically focus on efficiency and ignore safety. However, for practical applications, ensuring safety of the agent during exploration is crucial since performing an unsafe action or reaching an unsafe state could result in irreversible damage to the agent. The main challenge of safe exploration is that characterizing the unsafe states and actions is difficult for large continuous state or action spaces and unknown environments. In this paper, we propose a novel approach to incorporate estimations of safety to guide exploration and policy search in deep reinforcement learning. By using a cost function to capture trajectory-based safety, our key idea is to formulate the state-action value function of this safety cost as a candidate Lyapunov function and extend control-theoretic results to approximate its derivative using online Gaussian Process (GP) estimation. We show how to use these statistical models to guide the agent in unknown environments to obtain high-performance control policies with provable stability certificates.


1 Introduction

Deep reinforcement learning (RL) algorithms (Sutton & Barto, 2018) have achieved impressive results in game environments such as those on the Atari platform (Mnih et al.). However, they are rarely applied to real-world, physical systems. The main reason is that, besides the goal of optimizing for performance, there often exist safety requirements that make RL challenging in actual applications. In particular, these safety requirements might be imposed in deployment (Amodei et al., 2016; Garcıa & Fernández, 2015) or during exploration and training (Leike et al., 2017; Berkenkamp et al., 2017; Chow et al., 2018). For example, an intermediate, learned policy exercised by a robot during training should not break the system or harm the environment. The importance of safety is well recognized by the community and safe reinforcement learning has recently emerged as an important subfield within RL (for an extensive survey, see Garcıa & Fernández (2015)). In general, the goal of safe RL is to maximize system performance while minimizing safety violations (or meeting safety constraints) during the learning and/or deployment processes.

In this paper, we consider a notion of safety that is defined over executions of the agent (i.e., trajectories). It has been observed that, in many safety-critical applications such as robot exploration (Moldovan & Abbeel, 2012), portfolio planning (Tamar et al., 2012) and resource allocation (Tesauro et al., 2006), it is often more natural to define safety over the whole trajectory, as opposed to over particular states or state-action pairs. We associate a real-valued safety cost with each state-action pair. A policy is thus deemed safe if its cumulative safety cost (distinct from the reward return) over the length of the trajectory is below a certain threshold. In general, this threshold might not be known a priori, so our goal is to keep the cumulative safety cost as low as possible. Compared with approaches that guarantee safety over state-action pairs by relying on human oversight and intervention (Saunders et al., 2018) or by blocking unsafe actions using so-called shields (Alshiekh et al., 2018), trajectory-based safety is more suitable for evaluating the safety of a given policy when the environment model is unknown. Moreover, characterizing unsafe states and actions can be intractable or infeasible in high-dimensional and continuous settings.

Figure 1: The safety-guided RL framework: the parameterized policy generates tuples consisting of the current state, current action, next state, reward, and safety cost along the trajectory; these values are used to fit the models that estimate the expected reward and the safety cost, respectively; the GP estimation is updated in every iteration given the new tuples and the measurements from the safety estimator; the parameterized policy is then optimized based on an objective function that combines the reward return and the safety estimations.

In trajectory-based safety, in order to minimize the cumulative safety costs, it is important for the agent to be able to recover from states with high safety cost. This ability to recover is known as asymptotic stability in control theory (Bhatia & Szegö, 2002), which provides a powerful paradigm for translating global properties of the system to local ones and vice versa. While the main challenge of Lyapunov-based methods (Berkenkamp et al., 2016; Bhatia & Szegö, 2002) is to design an appropriate Lyapunov function candidate, our idea is to formulate the state-action value function for the safety costs as the candidate Lyapunov function and to model its change with a Gaussian Process, which provides statistical guarantees. By combining this estimate with the original value function, our approach steers the policy in a direction that both decreases the future cumulative safety cost and increases the expected total reward. Fig. 1 shows the overall framework.

In short, we propose a model-free RL algorithm that can provide high-probability trajectory-based safety guarantees for unknown environments with continuous state spaces. The main contributions of our paper are four-fold.

  • We propose a novel Lyapunov-based approach to guide the exploration process of deep RL.

  • We propose to use Gaussian Processes to model the evolution of stability as policies get updated during training, in order to cope with unknown environments and large continuous state/action spaces.

  • We show that adjusting the GP estimation online is needed to effectively and safely guide policy search.

  • We demonstrate the effectiveness of the approach in significantly reducing the number of catastrophes (e.g., falling) during training and exploration in a high-dimensional locomotion task with continuous states and actions. In addition, we show that our approach can attain higher performance in fewer iterations and less wall-clock time compared to the Deep Deterministic Policy Gradient method.

2 Related Work

Safety is an important issue in RL and safe RL has emerged as an active research topic in recent years (Pecka & Svoboda, 2014; Garcıa & Fernández, 2015). Below, we discuss metrics of safety, representative approaches in model-based and model-free RL, and recent works on safe RL.

Safety Metrics. The concept of safety, or dually, risk, has taken various forms in the RL literature. In Sato et al. (2001), the authors show that variability induced by the trained policy can lead to risky or undesirable situations. This characterization unfortunately does not generalize to settings where a policy with a small variance produces significant risks. In general, the safety metric should be easily generalizable to any safety-critical domain and independent of the nature of the task.

Torrey & Taylor (2012) propose a safety-level metric based on the distance between the known and the unknown space. However, this metric relies on constant monitoring by humans to provide the necessary guidance. In Gehring & Precup (2013), the authors measure safety as state controllability based on the notion of temporal difference. The weighted sum of an entropy measurement and the expected return is used to evaluate safety in Law et al. (2005). While these metrics seem suitable for finite MDPs, they are computationally intractable for MDPs with large state and action spaces. This paper considers trajectory-based safety with respect to the executed policy and uses function approximators to estimate safety instead of relying on human monitoring or assuming that the MDP model is given.

Model-based and Model-free RL. In the model-based setting, research has focused on estimating the true model of the environment by interacting with it. Model-based methods typically cannot cope with continuous or large state/action spaces and have trouble scaling due to the curse of dimensionality (Abbeel & Ng, 2005). In continuous state/action spaces, model-free policy search algorithms have been shown to be successful. These approaches update the policies without knowing the system model by repeatedly executing the same task (Lillicrap et al., 2015). Achiam et al. (2017) introduce safety guarantees in terms of constraint satisfaction that hold in expectation. However, safety is only considered by disallowing large steps along the gradient into areas of the parameter space that have not been explored before. Existing works use Gaussian Process models (Rasmussen, 2004) along with Bayesian optimization (Mockus, 2012) to approximate the value function (Chowdhary et al., 2014). On the downside, these methods are limited to simple and low-dimensional systems.

Safe RL. There are primarily two types of approaches to the safe RL problem: approaches that modify the optimization criterion with a safety component, and approaches that modify the exploration process through the incorporation of external knowledge (Garcıa & Fernández, 2015).

In RL, maximizing the long-term reward does not necessarily avoid the rare occurrences of large negative outcomes. In risk-sensitive RL, the optimization criterion is transformed into an exponential utility function (Howard & Matheson, 1972), or a linear combination of return and risk, where risk can be defined as the variance of the return (Sato et al., 2001). Geibel & Wysotzki (2005) define risk as the probability of driving the agent to a set of known but undesirable states. The optimization objective is then transformed to include minimizing the probability of visiting those states.

Other works instead change the exploration process directly. Most exploration methods are based on heuristics and have a random exploratory component, which can result in the exploration being risk-blind. Both Moldovan & Abbeel (2012) and Berkenkamp et al. (2017) introduce algorithms to safely explore the state-action space so that the agent never gets stuck. However, these two methods require an accurate probabilistic or approximated statistical model of the system. The common shortcoming of these methods is that they are limited to small and simple systems where exact control synthesis is possible. Eysenbach et al. (2017) propose to learn both forward and reset policies simultaneously with two action-value functions using deep RL. Although the reset policy can move the agent back to the initial state after early aborts, there are no performance guarantees for the reset policy, and the switching mechanism may result in very conservative behavior of the agent.

It is worth noting that the first type of approach, which modifies the optimization objective, also modifies the exploration process indirectly (Garcıa & Fernández, 2015). The common thread across the two types of approaches is transforming the optimization criterion or changing the exploration process to incorporate a notion of risk. In this paper, we propose a novel risk/safety-evaluation-guided training technique that significantly improves safety during training and exploration.

3 Background

We consider a model-free RL setup, where an agent interacts with the environment in discrete timesteps. RL is a sequential decision problem with state space $\mathcal{S}$, action space $\mathcal{A}$, transition dynamics $p(s_{t+1} \mid s_t, a_t)$, an initial state distribution, and an immediate scalar reward $r(s_t, a_t)$. We need to specify a deterministic policy $\mu : \mathcal{S} \rightarrow \mathcal{A}$ that, given the current state, determines the appropriate action so as to maximize the expected sum of $\gamma$-discounted returns, $J = \mathbb{E}\big[\sum_t \gamma^t r(s_t, a_t)\big]$.

Typically, RL training routines involve iteratively sampling from the current policy to explore the state-action space without considering safety. As a result, in practical applications, hard-coded termination or human intervention is required to stop the agent from entering unsafe states. Our work aims to enable safe exploration even when the environment is unknown or only partially known to us. Similar to the notion of reward, we define an additional function, the safety cost $c(s_t, a_t)$, to capture the cost of performing action $a_t$ in state $s_t$ with respect to safety. In the trajectory-based setting, the agent should aim to minimize the future accumulated safety cost in a way similar to maximizing the expected return. The safety requirement is defined over the whole trajectory. This means that, during training, the agent will try to avoid increasing the total safety cost, and will pick exploratory actions that can drive the system away from trajectories that violate the safety requirement.
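As a concrete formalization of this requirement (a sketch in our own notation; the undiscounted sum over a length-$T$ trajectory and the threshold symbol $d$ are assumptions, not the paper's definitions):

\sum_{t=0}^{T} c(s_t, a_t) \leq d,

where $c(s_t, a_t)$ is the per-step safety cost and $d$ is the (possibly unknown) threshold; when $d$ is unknown, the agent simply tries to keep the left-hand side as small as possible.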

Deep Deterministic Policy Gradient (DDPG). Lillicrap et al. (2015) proposed a model-free algorithm for deterministic policy gradient problems with continuous action spaces. Let $\mu(s \mid \theta^\mu)$ denote the deterministic policy. Since the expectation depends only on the environment, it is possible to learn off-policy, using transitions generated by a different stochastic behavior policy $\beta$. Let $\rho^\beta$ be the state visitation distribution induced by $\beta$. DDPG combines the greedy policy commonly used in Q-learning (Watkins & Dayan, 1992) with function approximators of the Q-function and the policy, parameterized by $\theta^Q$ and $\theta^\mu$ respectively, under the actor-critic framework.

Then, we can compute the gradient of the greedy policy by applying the chain rule to the expected return $J$ from the start distribution with respect to the actor parameters $\theta^\mu$ (Lillicrap et al., 2015):

\nabla_{\theta^\mu} J \approx \mathbb{E}_{s \sim \rho^\beta}\Big[\nabla_a Q(s, a \mid \theta^Q)\big|_{a = \mu(s \mid \theta^\mu)} \, \nabla_{\theta^\mu} \mu(s \mid \theta^\mu)\Big]   (1)
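For concreteness, a minimal PyTorch-style sketch of this actor update; the names `actor`, `critic`, and `states` are illustrative stand-ins for $\mu(\cdot \mid \theta^\mu)$, $Q(\cdot, \cdot \mid \theta^Q)$, and a batch sampled from the replay buffer, not identifiers from the paper:

```python
import torch

def ddpg_actor_update(actor, critic, actor_optimizer, states):
    """One gradient-ascent step on E[Q(s, mu(s))] w.r.t. the actor parameters."""
    actions = actor(states)                       # a = mu(s | theta_mu)
    actor_loss = -critic(states, actions).mean()  # minimize -Q, i.e. maximize Q
    actor_optimizer.zero_grad()
    actor_loss.backward()                         # chain rule: dQ/da * da/dtheta_mu
    actor_optimizer.step()
    return actor_loss.item()
```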

Lyapunov function. To satisfy the specified safety requirement for safe exploration, we need a tool to determine the safety of a trajectory that follows the current policy into the future. In control theory, such safety is usually certified for a fixed policy using Lyapunov functions.

Definition 1.

A Lyapunov function is a continuously differentiable function $v$ with $v = 0$ on the origin set and $v > 0$ elsewhere. The origin set is the set of terminal states.
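For reference, the textbook discrete-time conditions under which such a function certifies asymptotic stability of the origin set $\mathcal{O}$ (a standard statement added here for context, not a quotation of the paper's definition) are:

v(x) = 0 \;\; \forall x \in \mathcal{O}, \qquad v(x) > 0 \;\; \forall x \notin \mathcal{O}, \qquad v(x_{t+1}) - v(x_t) < 0 \;\; \text{along closed-loop trajectories with } x_t \notin \mathcal{O}.

The last (decrease) condition is the one the approach below monitors, via the change of the safety value function between consecutive timesteps.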

In our experiments, we exploit the fact that action-value functions of the accumulated safety costs, $G^\pi$, are Lyapunov functions if the cost function is strictly negative away from the origin. This follows directly from the definition of the action-value function, where

G^\pi(s_t, a_t) = c(s_t, a_t) + \mathbb{E}_{s_{t+1}}\big[G^\pi(s_{t+1}, \pi(s_{t+1}))\big]   (2)

Safety Evaluation. The key idea is to use the Lyapunov function to provide measurements of trajectory-based safety. In the recent literature, trajectory-based properties are evaluated over a set of policies (Achiam et al., 2017; Chow et al., 2018), which requires the function to express the evaluation of a given policy over the state-action space. Thus, we design the Lyapunov function as the accumulated safety cost of the policy with respect to the state-action pair.

We show that the role of the state-action value function of the safety cost is similar to that of gradient ascent on strictly quasiconcave functions: if one can show that, under a given policy $\pi$, the agent obtains strictly larger values of $G^\pi$ at each successive step ('going uphill'), then the state will eventually converge to the equilibrium points at the origin. We can thus achieve safe exploration if the one-step difference $\Delta(s_t, a_t) := G^\pi(s_{t+1}, \pi(s_{t+1})) - G^\pi(s_t, a_t)$ is positive for the given policy. However, this difference between the values at two consecutive timesteps is not known a priori. Our idea is to use a Gaussian Process to approximate the difference as a function of the state-action pair. During the training phase, the GP model is fed with approximated measurements of $\Delta$ at the visited state-action pairs. In order to bound the safety evaluation, we make the following assumption.

Assumption 1.

The function $\Delta$ has bounded Reproducing Kernel Hilbert Space (RKHS) norm with respect to a continuously differentiable, bounded kernel $k$; that is, $\|\Delta\|_k \leq B$ for some $B > 0$.

4 Safe Exploration with GP Guidance

We choose DDPG (Lillicrap et al., 2015) as the baseline RL algorithm, since its off-policy learning allows experience to be shared between the estimation of the expected reward return and the estimation of the safety costs.

4.1 Approximate Lyapunov Function

We consider an additional function approximator, the Guard Network $G(s, a \mid \theta^G)$, parameterized by $\theta^G$, which minimizes the following loss:

(3)
(4)
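A plausible TD-style form of this loss, analogous to the DDPG critic update with the safety cost in place of the reward, is sketched below; the target construction and the sign convention (chosen to match the later use of the negated cost as a measurement) are assumptions rather than the paper's exact equations, and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def guard_update(guard, guard_target, actor_target, guard_optimizer,
                 states, actions, costs, next_states, gamma=0.99):
    """TD-style update of the Guard Network G(s, a | theta_G) on safety costs.

    Assumed target: y = -c(s, a) + gamma * G'(s', mu'(s')), mirroring the DDPG
    critic loss but with the (negated) safety cost in place of the reward.
    """
    with torch.no_grad():
        next_actions = actor_target(next_states)
        targets = -costs + gamma * guard_target(next_states, next_actions)
    loss = F.mse_loss(guard(states, actions), targets)
    guard_optimizer.zero_grad()
    loss.backward()
    guard_optimizer.step()
    return loss.item()
```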

4.2 Gaussian Process

In GP regression, we use the outputs of the Guard Network as noisy observations of the true safety estimation. Let $z = (s, a)$ denote a state-action pair observed by the GP. Specifically, we can obtain the posterior distribution of the function value $\Delta(z)$ at an arbitrary state-action pair by conditioning the GP distribution of $\Delta$ on a set of $n$ past measurements $\hat{y}_n = (\hat{\Delta}_1, \ldots, \hat{\Delta}_n)$ with $\sigma$-bounded noise, taken at state-action pairs $D_n = \{z_1, \ldots, z_n\}$. The measurements are provided by the Guard Network approximation given the current policy, the current state-action pair, and the next state:

\hat{\Delta}(s_t, a_t) = G(s_{t+1}, \pi(s_{t+1}) \mid \theta^G) - G(s_t, a_t \mid \theta^G)   (5)

To bound the noise of the observations, we only select measurements that lie within the prescribed noise balls. The posterior over $\Delta$ is again a GP distribution, with mean $\mu_n(z)$, covariance $k_n(z, z')$, and variance $\sigma_n^2(z)$:

\mu_n(z) = k_n(z)^\top (K_n + \sigma^2 I)^{-1} \hat{y}_n   (6)
k_n(z, z') = k(z, z') - k_n(z)^\top (K_n + \sigma^2 I)^{-1} k_n(z')   (7)
\sigma_n^2(z) = k_n(z, z)   (8)

where $k_n(z) = (k(z, z_1), \ldots, k(z, z_n))^\top$ contains the covariances between the new input $z$ and the points in $D_n$, $K_n$ is the positive-definite covariance matrix with entries $[K_n]_{ij} = k(z_i, z_j)$, and $I$ is the $n \times n$ identity matrix.
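A minimal NumPy sketch of these posterior formulas (Eqs. 6-8); the squared-exponential kernel and the noise level are illustrative choices, not specified by the paper:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between row-stacked inputs A and B."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior(Z, y, Z_star, noise_std=0.1):
    """Posterior mean and variance of the GP at test inputs Z_star (Eqs. 6-8)."""
    K = rbf_kernel(Z, Z)                           # K_n
    K_star = rbf_kernel(Z, Z_star)                 # covariances k_n(z) for each test point
    K_inv = np.linalg.inv(K + noise_std**2 * np.eye(len(Z)))
    mean = K_star.T @ K_inv @ y                    # Eq. (6)
    cov = rbf_kernel(Z_star, Z_star) - K_star.T @ K_inv @ K_star  # Eq. (7)
    var = np.diag(cov)                             # Eq. (8)
    return mean, var
```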

With Assumption 1, we can obtain the following result for $\Delta$ (Chowdhury & Gopalan, 2017):

Lemma 1.

Suppose that $\|\Delta\|_k \leq B$ and that the observation noise is uniformly bounded by $\sigma$. Choose $\beta_n = B + \sigma \sqrt{2(\gamma_{n-1} + 1 + \ln(1/\delta))}$, where $\gamma_{n-1}$ is the information capacity. Then, for all $n \geq 1$ and all state-action pairs $(s, a)$, it holds with probability at least $1 - \delta$ that

|\Delta(s, a) - \mu_{n-1}(s, a)| \leq \beta_n \sigma_{n-1}(s, a)   (9)

Lemma 1 allows us to make high-probability statements about the true function values of $\Delta$. The information capacity $\gamma_n$ is the maximal mutual information that can be obtained about the GP prior from $n$ noisy samples at the state-action pairs in $D_n$. As a result, we are able to learn the true values of $\Delta$ over time by making appropriate choices of state-action pairs.

4.3 Initialization

In order to incorporate new data, we maximize the marginal likelihood of the observations after every iteration by adjusting the hyperparameters of the GP model. The term marginal likelihood refers to the marginalization over the function values of $\Delta$. Under the Gaussian Process model, the prior is Gaussian, $\Delta \sim \mathcal{N}(0, K_n)$, and the likelihood is a factorized Gaussian, $\hat{y}_n \mid \Delta \sim \mathcal{N}(\Delta, \sigma^2 I)$. We can then obtain the log marginal likelihood as follows (Rasmussen, 2004):

\log p(\hat{y}_n \mid D_n) = -\tfrac{1}{2} \hat{y}_n^\top (K_n + \sigma^2 I)^{-1} \hat{y}_n - \tfrac{1}{2} \log |K_n + \sigma^2 I| - \tfrac{n}{2} \log 2\pi   (10)

The hyperparameters of the GP model, such as the kernel function's parameters, can be optimized to fit the current dataset and measurements with high probability. This step is aimed at addressing the inaccuracy of the initial estimation.

To prevent our model from converging too quickly to an incorrect estimate of $\Delta$ in high-dimensional tasks, we introduce a single safe trajectory, with state-action pairs at each timestep, as initial knowledge to initialize the GP model, the Q approximator, and the G approximator. This trajectory is required to be safe in the sense that the cost measurement in each state is less than some threshold determined by the system requirements; hence, we discard any state-action pair that exceeds the cost threshold. These demonstrations are added to the replay buffers of the Q and G approximators together with their associated rewards. The initial GP dataset contains the state-action pairs from the safe trajectory, and the measurements are given by the negation of the cost function for each state-action pair, $-c(s, a)$.
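Maximizing the log marginal likelihood of Eq. 10 over the kernel hyperparameters is standard GP model selection. A minimal sketch using scikit-learn, which performs exactly this optimization when fit is called, is shown below; the kernel choice and noise level are illustrative, not the paper's:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def refit_gp(Z, y, noise_std=0.1):
    """Refit GP hyperparameters by maximizing the log marginal likelihood (Eq. 10)."""
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(Z.shape[1]))
    gp = GaussianProcessRegressor(kernel=kernel, alpha=noise_std**2,
                                  n_restarts_optimizer=5, normalize_y=True)
    gp.fit(Z, y)  # internally maximizes the log marginal likelihood over the kernel parameters
    return gp
```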

4.4 Online GP Estimation

As the agent continues to collect new measurements during the execution of its policies, the set of samples grows. A state-action pair is stored in the GP dataset $D$ only if its measurement lies outside a small ball around zero; we use this to prevent overfitting at the origin set. After each run, the singularity of the covariance matrix based on $D$ is checked by QR decomposition to eliminate highly correlated data.

In order to maintain a dataset of fixed size, a natural and simple way to decide whether to delete a point from the dataset is to check how well it is approximated by the remaining elements of $D$. This is known as the kernel linear independence test (Csató & Opper, 2002). For GPs, the linear independence score of the $i$-th element of $D$ is computed as

\gamma_i = k(z_i, z_i) - k_{-i}(z_i)^\top K_{-i}^{-1} k_{-i}(z_i)   (11)

which is the variance of $\Delta(z_i)$ conditioned on the remaining elements without observation noise, where $K_{-i}$ and $k_{-i}(z_i)$ are formed from $D \setminus \{z_i\}$. Csató & Opper (2002) show that the diagonal values of $K^{-1}$ correspond to $1/\gamma_i$ for each element. Hence, we can delete the element with the lowest value of $\gamma_i$, since it has the least impact on the GP prediction, and thereby keep the size of the dataset fixed.

Remark.

When the full dataset encounters a new data point, the kernel linear independence test measures the length of the component of each data basis vector, in kernel space, that is perpendicular to the linear subspace spanned by the remaining bases. For GPs, the vector of linear independence scores for the elements of $D$ can therefore be computed from the diagonal of $K^{-1}$.
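A small NumPy sketch of this pruning step, using the relation between the independence scores and the diagonal of the inverse kernel matrix; the function and variable names are illustrative:

```python
import numpy as np

def prune_gp_dataset(Z, y, kernel, max_size):
    """Drop the least linearly independent points until the dataset fits max_size.

    Uses gamma_i = 1 / [K^{-1}]_{ii} (Eq. 11 via the Schur complement), so the
    point with the smallest gamma_i is the most redundant one.
    """
    while len(Z) > max_size:
        K = kernel(Z, Z)
        scores = 1.0 / np.diag(np.linalg.inv(K))   # gamma_i for each stored point
        worst = int(np.argmin(scores))             # least independent element
        Z = np.delete(Z, worst, axis=0)
        y = np.delete(y, worst, axis=0)
    return Z, y
```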

4.5 Safety-Guided Exploration

Given the result of Lemma 1, we can derive the lower and upper bounds of the confidence interval after $n$ measurements of $\Delta$ from Eq. 5:

l_n(s, a) := \mu_{n-1}(s, a) - \beta_n \sigma_{n-1}(s, a)   (12)
u_n(s, a) := \mu_{n-1}(s, a) + \beta_n \sigma_{n-1}(s, a)   (13)

respectively. In the following, we assume that $\beta_n$ is chosen according to Lemma 1, which allows us to state that $\Delta(s, a)$ takes values within $[l_n(s, a), u_n(s, a)]$ with high probability (at least $1 - \delta$).
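As a small illustration, the bounds in Eqs. 12-13 can be computed directly from the GP posterior; the sketch below assumes a fitted scikit-learn GaussianProcessRegressor as produced by the hypothetical refit_gp helper sketched in Section 4.3:

```python
def confidence_bounds(gp, Z_star, beta_n):
    """Lower/upper confidence bounds l_n, u_n (Eqs. 12-13) at test inputs Z_star."""
    mean, std = gp.predict(Z_star, return_std=True)  # mu_{n-1}, sigma_{n-1}
    return mean - beta_n * std, mean + beta_n * std
```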

Based on the confidence interval, we can adapt our policy search to maximize the Q-value while ensuring that the lower bound $l_n$, i.e., the worst-case increase of the Lyapunov function, is larger than zero with high probability. We pick a positive scalar weight and modify the policy update to

(14)

where the weight is large enough to force the agent to choose safe actions satisfying $l_n(s, \mu(s)) > 0$.

To obtain more accurate GP models, we need to both satisfy the safety requirements and reduce the uncertainty of the GP. We select the policy that

(15)
s.t. (16)

and use the resulting state-action pair to evaluate trajectory-based safety. These two objectives turn safe exploration into a multi-objective optimization problem: on one hand, the agent should take a safe action to maximize the return; on the other, the chosen action should provide as much information as possible to the GP estimation in order to reduce its uncertainty. From the above formulation, the optimal action satisfies the following property.

(17)
Algorithm 1 Safety-Guided DDPG
Initialize the GP model and the bound on the observation noise.
if the state-action space is high-dimensional then
     Initialize the GP dataset and the replay buffers of Q and G with the initial knowledge (a single safe trajectory).
end if
repeat
     for each timestep of the episode do
          Take a step in the environment with the current policy and record the transition.
          Compute the safety measurement from the Guard Network.
          if the measurement lies outside the ball around zero then
               Store the data element in the GP dataset.
          end if
     end for
     Update Q and G.
     Concatenate the new data with the GP dataset.
     while the dataset exceeds its maximum size do
          Remove the element with the lowest linear-independence score (Eq. 11).
     end while
     Update the actor policy via SGD on Eq. 18.
until convergence

With this property, we can combine the two objectives and the constraint by adding a term that penalizes actions resulting in a negative lower bound and rewards actions whose lower bound is positive and close to zero. Thus, we design this term as a Gaussian with zero mean over the lower bound $l_n$. We can then rewrite the multi-objective policy optimization problem using the weighted-sum method:

(18)

The policy optimization objective thus has three components: maximizing the reward return as given by the Q-value, penalizing violations of safety, and reducing the uncertainty of the GP.

The overall algorithm is summarized in Algorithm 1.

5 Experiments

In this section, we evaluate Algorithm 1 on two different tasks in simulation, inverted pendulum and half cheetah from the OpenAI Gym (Brockman et al., 2016). We assume that the dynamics of the system and the environment are both unknown. We consider the performance of the trained DDPG policy after million steps as the baseline. We first validate our approach on a benchmark swing-up problem in the inverted pendulum environment. Then, we extend our experiment to a more complex and safety-critical locomotion task where the goal is to make a half cheetah move forward as fast as possible. Both environments are in continuous state/action space and initialized randomly for each run. The safety goal is that the number of catastrophes, as defined in each experiment, should be minimized during training.

For all of our examples, we represent the Q function, the G function, and the policy as three feed-forward neural networks with two hidden layers and varying numbers of neurons in the different environments. The settings are similar to Lillicrap et al. (2015).

Figure 2: Comparison between DDPG with GP and the pure DDPG on executing a swing-up task of an inverted pendulum. Both performance and the number of training-time catastrophes are plotted against timesteps. The average return achieved by DDPG after steps is .

5.1 Inverted Pendulum

The state of the inverted pendulum consists of the angle and angular velocity of the pendulum. There is a single continuous action, the applied torque, which is bounded. The limited torque makes the task harder, since the maximum applied torque cannot swing up the pendulum directly; the goal is to swing up and balance the pendulum in an upright position. We define a negative reward function, quadratic in the angular position, angular velocity, and action, whose negative-definite weights penalize large values of each. The cost function is the same as the reward function. To approximate the Q function and the G function, we use feed-forward neural networks with two hidden layers of 64 neurons each; the hidden layers use ReLU activations, and the output layer has no activation. For the policy, we use a feed-forward neural network with two hidden layers of 64 neurons each, with ReLU for the hidden layers and tanh for the output layer. We optimize the policy via stochastic gradient descent on Eq. 18.
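A sketch of a reward/cost of this shape, following the standard Gym pendulum conventions; the specific weights are illustrative and not taken from the paper:

```python
def pendulum_reward(theta, theta_dot, torque, w=(1.0, 0.1, 0.001)):
    """Negative quadratic reward penalizing angle, angular velocity, and torque."""
    return -(w[0] * theta**2 + w[1] * theta_dot**2 + w[2] * torque**2)

def pendulum_cost(theta, theta_dot, torque):
    """Safety cost, identical to the reward in this experiment."""
    return pendulum_reward(theta, theta_dot, torque)
```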

To improve computational efficiency, we fix the scaling factor of the confidence interval in this experiment. In this case, a catastrophe is defined as the pendulum passing through the vertically downward position within an episode (200 timesteps per episode). The experimental result (video of the training in the pendulum environment: https://youtu.be/etYqt15sGRY) is shown in Fig. 2. Starting from a random initial state, the policy derived from DDPG with GP can avoid catastrophe entirely during training. The pendulum achieves the baseline performance after far fewer steps than pure DDPG needs.

5.2 Locomotion Task

We further validate our approach on a 6-DOF planar half cheetah model with 17 continuous state components in MuJoCo (Todorov et al., 2012). Typically, in more complex tasks, it will be harder to encode both safety and performance in the same function. Also, the initial GP estimation will be very unreliable. Hence, we design different functions to represent reward and safety cost respectively, and assume some initial knowledge is given.

We define a reward function that rewards positive forward velocity and penalizes large control actions. The cost function here is related to the body rotation: the larger the rotation, the more likely the cheetah is to fall. A catastrophe is therefore considered to have occurred when the half cheetah falls down somewhere along the trajectory. We cap the dataset for GP estimation at a fixed number of elements and initialize it with a single safe trajectory. Since, in the high-dimensional space, it would be too conservative to use a constant to approximate the scaling factor $\beta_n$ of the confidence intervals, we instead compute an approximate scaling factor from the samples in the current dataset. The mutual information can be computed as

\gamma_n = \tfrac{1}{2} \log \big| I + \sigma^{-2} K_n \big|   (19)

and the RKHS bound can be approximated through the kernel function as

B \approx \sqrt{\hat{y}_n^\top K_n^{-1} \hat{y}_n}   (20)

Thus, according to Lemma 1, we can compute $\beta_n$ online.
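A NumPy sketch of this online computation, combining Eqs. 19-20 with the choice of $\beta_n$ in Lemma 1; the noise bound and $\delta$ are illustrative inputs:

```python
import numpy as np

def compute_beta_online(K, y, noise_std=0.1, delta=0.05):
    """Approximate beta_n from the current GP dataset (Eqs. 19-20, Lemma 1)."""
    n = K.shape[0]
    # Information gain of the observed set (Eq. 19).
    _, logdet = np.linalg.slogdet(np.eye(n) + K / noise_std**2)
    info_gain = 0.5 * logdet
    # Approximate RKHS norm of the fitted function (Eq. 20); jitter for stability.
    rkhs_bound = np.sqrt(y @ np.linalg.solve(K + 1e-8 * np.eye(n), y))
    # Scaling factor beta_n from Lemma 1 (Chowdhury & Gopalan, 2017).
    return rkhs_bound + noise_std * np.sqrt(2 * (info_gain + 1 + np.log(1.0 / delta)))
```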

The Q function and the G function are represented by two separate feed-forward neural networks, each with two hidden layers of 64 neurons; the hidden layers use ReLU activations, and no activation function is applied at the output layers. The policy network has two hidden layers with the same sizes as used in Lillicrap et al. (2015); its hidden layers use ReLU activations and its output layer uses tanh.

Figure 3: (a) Comparison between DDPG with GP and pure DDPG on half cheetah; performance and the number of training-time catastrophes are plotted against discrete timesteps. Pure DDPG achieves a much lower average return for the same number of steps. (b) Performance and the number of training-time catastrophes plotted against wall time.
Figure 4: We compare DDPG with online GP and DDPG with fixed GP given the same initial knowledge. Both have similar performance initially. As training progresses, online GP outperforms fixed GP significantly.

5.3 Safety and Performance Comparison.

For a fair comparison, we feed the same initial knowledge into the replay buffer of pure DDPG before training. Using our method, the agent can safely explore the environment and reaches the baseline performance in far fewer steps than the pure DDPG policy requires. We compare our method with DDPG trained on the same amount of samples in Fig. 3(a). The result (video of the training in the cheetah environment: https://youtu.be/CcNIrLlbijU) shows that our method obtains higher return and fewer training-time catastrophes than DDPG. Although the prediction and data elimination of the online GP model add computational overhead, DDPG with GP is still able to achieve higher performance and a safer policy within the same amount of wall time (Fig. 3(b)). Our approach is in line with recent results on learning acceleration when a small amount of demonstration data is available at the beginning (Večerík et al., 2017; Hester et al., 2018).

5.4 Validating the Role of Online GP.

We compare safety-guided learning using online GP estimation against one that uses a fixed GP model. We initialize both models with the same initial knowledge. In Fig. 4, we can see that the initial performances of the two models are similar. However, as training goes on, for DDPG with fixed GP, the accumulated reward drops and the number of training-time catastrophes increases (due to inaccuracies in the GP estimation). For the same number of timesteps, DDPG with fixed GP has lower performance than DDPG with online GP. This result shows that adjusting the GP model online is critical as policies get updated during training.

6 Conclusion

In this paper, we propose to tackle the safe RL problem with the notion of Lyapunov function and trajectory-based safety to learn policies that are both safe and have low accumulated safety cost during exploration. We have shown how to incorporate estimation of trajectory-based safety in deep reinforcement learning algorithms such as DDPG. Specifically, we show how to safely optimize policies and give stability certificates based on Gaussian Process models of trajectory-based safety evaluation. On a simple control benchmark and a more complex locomotion task, we demonstrate the effectiveness of our approach in significantly reducing catastrophes and accelerating training.

In terms of future work, we want to understand better what role initial knowledge plays in influencing the efficacy of our method. One direction is to come up with statistical characterization of initial knowledge which can give statistical guarantees on the safety of the training process. On the computational side, as safety evaluation inevitably adds an overhead to the training process, we plan to investigate more efficient ways to estimate trajectory-based safety and to incorporate these estimates in policy optimization.

References

Appendix A Additional Experimental Results

In Fig. 5, we investigate the choice of initial knowledge. Two full trajectories with different accumulated rewards are considered: a low-performance trajectory and a high-performance trajectory, which differ in both return and accumulated safety cost. Both initialization settings ensure safety during training. However, the high-performance trajectory tends to guide the policy search closer to the optimal policy and results in lower performance variance.

Figure 5: Comparison between DDPG with GP initialized by the high performance trajectory and the low performance trajectory for 7 runs.