1 Introduction
Deep reinforcement learning (RL) algorithms (Sutton & Barto, 2018) have achieved impressive results in game environments such as those on the Atari platform (Mnih et al., 2015). However, they are rarely applied to real-world, physical systems. The main reason is that, besides the goal of optimizing for performance, there often exist safety requirements that make RL challenging in actual applications. In particular, these safety requirements might be imposed in deployment (Amodei et al., 2016; García & Fernández, 2015) or during exploration and training (Leike et al., 2017; Berkenkamp et al., 2017; Chow et al., 2018). For example, an intermediate, learned policy exercised by a robot during training should not break the system or harm the environment. The importance of safety is well recognized by the community, and safe reinforcement learning has recently emerged as an important subfield within RL (for an extensive survey, see García & Fernández (2015)). In general, the goal of safe RL is to maximize system performance while minimizing safety violations (or meeting safety constraints) during the learning and/or deployment processes.
In this paper, we consider a notion of safety that is defined over executions of the agent (i.e., trajectories). It has been observed that, in many safety-critical applications such as robot exploration (Moldovan & Abbeel, 2012), portfolio planning (Tamar et al., 2012) and resource allocation (Tesauro et al., 2006), it is often more natural to define safety over the whole trajectory, as opposed to over particular states or state-action pairs. We associate a real-valued safety cost with each state-action pair. A policy is thus deemed safe if its cumulative safety cost (distinct from the reward return) over the length of the trajectory is below a certain threshold. In general, this threshold might not be known a priori. Thus, our goal is to keep the cumulative safety cost as low as possible. Compared with approaches that guarantee safety over state-action pairs by relying on human oversight and intervention (Saunders et al., 2018) or by blocking unsafe actions using so-called shields (Alshiekh et al., 2018), trajectory-based safety is more suitable for evaluating the safety of a given policy when the environment model is unknown. Besides, characterizing unsafe states and unsafe actions can be intractable or infeasible in high-dimensional and continuous settings.

In trajectory-based safety, in order to minimize the cumulative safety cost, it is important for the agent to be able to recover from states with high safety cost. This ability to recover is known as asymptotic stability in control theory (Bhatia & Szegö, 2002), which provides a powerful paradigm to translate global properties of the system to local ones and vice versa. While the main challenge of Lyapunov-based methods (Berkenkamp et al., 2016; Bhatia & Szegö, 2002) is to design an appropriate Lyapunov function candidate, our idea is to formulate the state-action value function of the safety costs as the candidate Lyapunov function and to model its derivative with a Gaussian Process, which provides statistical guarantees. By combining it with the original value function, our approach steers the policy in a direction that both decreases the future cumulative safety cost and increases the expected total reward. Fig. 1 shows the overall framework.
In short, we propose a model-free RL algorithm that can provide high-probability trajectory-based safety guarantees for unknown environments with continuous state spaces:
- We propose a novel Lyapunov-based approach to guide the exploration process of deep RL.
- We propose to use Gaussian Processes to model the evolution of stability as policies get updated during training, in order to cope with unknown environments and large continuous state/action spaces.
- We show that adjusting the GP estimation online is needed to effectively and safely guide policy search.
- We demonstrate the effectiveness of the approach in significantly reducing the number of catastrophes (e.g., falling) during training and exploration in a high-dimensional locomotion task with continuous states and actions. In addition, we show that our approach can attain higher performance in fewer iterations and in a shorter amount of time compared to the Deep Deterministic Policy Gradient method.
2 Related Work
Safety is an important issue in RL and safe RL has emerged as an active research topic in recent years (Pecka & Svoboda, 2014; García & Fernández, 2015). Below, we discuss metrics of safety, representative approaches in model-based and model-free RL, and recent works on safe RL.
Safety Metrics. The concept of safety, or dually, risk, has taken various forms in the RL literature. In Sato et al. (2001), the authors show that variability induced by the trained policy can lead to risky or undesirable situations. This characterization unfortunately does not generalize to settings where a policy with a small variance produces significant risks. In general, the safety metric should be easily generalizable to any safety-critical domain and independent of the nature of the task.
Torrey & Taylor (2012) propose a level metric based on the distance between the known and the unknown space. However, this metric relies on constant monitoring by humans to provide the necessary guidance. In Gehring & Precup (2013), the authors measure safety as state controllability based on the notion of temporal difference. The weighted sum of an entropy measurement and the expected return is used to evaluate safety in Law et al. (2005). While these metrics seem suitable for finite MDPs, for MDPs with large state and action spaces these measurements are computationally intractable. This paper considers trajectory-based safety with respect to the executed policy and uses function approximators to estimate safety, instead of relying on human monitoring or assuming that the MDP model is given.

Model-based and Model-free RL. In the model-based setting, research has focused on estimating the true model of the environment by interacting with it. Model-based methods typically cannot cope with continuous or large state/action spaces and have trouble scaling due to the curse of dimensionality (Abbeel & Ng, 2005). In continuous state/action spaces, model-free policy search algorithms have been shown to be successful. These approaches update the policies without knowing the system model by repeatedly executing the same task (Lillicrap et al., 2015). Achiam et al. (2017) introduce safety guarantees in terms of constraint satisfaction that hold in expectation. However, safety is only considered by disallowing large steps along the gradient into areas of the parameter space that have not been explored before. Existing works use Gaussian Process models (Rasmussen, 2004) along with Bayesian optimization (Mockus, 2012) to approximate the value function (Chowdhary et al., 2014). On the downside, these methods are limited to simple and low-dimensional systems.

Safe RL. There are primarily two types of approaches to the safe RL problem: approaches that modify the optimization criterion with a safety component, and approaches that modify the exploration process through the incorporation of external knowledge (García & Fernández, 2015).
In RL, maximizing the long-term reward does not necessarily avoid the rare occurrences of large negative outcomes. In risk-sensitive RL, the optimization criterion is transformed into an exponential utility function (Howard & Matheson, 1972), or a linear combination of return and risk, where risk can be defined as the variance of the return (Sato et al., 2001). Geibel & Wysotzki (2005) define risk as the probability of driving the agent to a set of known but undesirable states. The optimization objective is then transformed to include minimizing the probability of visiting those states.
Other works instead change the exploration process directly. Most exploration methods are based on heuristics and have a random exploratory component, which can result in the exploration being risk-blind. Both Moldovan & Abbeel (2012) and Berkenkamp et al. (2017) introduce algorithms to safely explore the state-action space so that the agent never gets stuck. However, these two methods require an accurate probabilistic or approximated statistical model of the system. The common shortcoming of these methods is that they are limited to small and simple systems where exact control synthesis is possible. Eysenbach et al. (2017) propose to learn both forward and reset policies simultaneously with two action-value functions using deep RL. Although the reset policy can move the agent back to the initial state after early aborts, there are no performance guarantees for the reset policy, and the switching mechanism may result in very conservative behavior of the agent.

It is worth noting that the first type of approach, which modifies the optimization objective, will also modify the exploration process indirectly (García & Fernández, 2015). The vital component across these two types of approaches is transforming the optimization criterion or changing the exploration process to include a form of risk. In this paper, we propose a novel risk/safety evaluation-guided training technique that significantly improves safety during training and exploration.
3 Background
We consider a model-free RL setup, where an agent interacts with the environment in discrete timesteps. RL is a sequential decision problem with state space $\mathcal{S}$, action space $\mathcal{A}$, transition dynamics $p(s_{t+1} \mid s_t, a_t)$, an initial state distribution $p(s_0)$, and an immediate scalar reward $r(s_t, a_t)$. We need to specify a deterministic policy $\mu: \mathcal{S} \to \mathcal{A}$ that, given the current state, determines the appropriate action maximizing the expected sum of $\gamma$-discounted returns, $J = \mathbb{E}\big[\sum_{t} \gamma^{t} r(s_t, a_t)\big]$.
Typically, RL training routines involve iteratively sampling from the current policy to explore the state-action space without considering safety. As a result, in practical applications, hard-coded termination or human intervention is required to stop the agent from entering unsafe states. Our work aims to enable safe exploration even when the environment is unknown or only partially known to us. Similar to the notion of reward, we define an additional function, the safety cost $c(s_t, a_t)$, to capture the cost of performing action $a_t$ in state $s_t$ with respect to safety. In the trajectory-based setting, the agent should aim to minimize the future accumulated safety cost in a way similar to maximizing the expected return. The safety requirement is defined over the whole trajectory. This means that, during training, the agent will try to avoid increasing the total safety cost, and will pick exploratory actions that can drive the system away from trajectories that violate the safety requirement.
Deep Deterministic Policy Gradient (DDPG). Lillicrap et al. (2015) proposed a model-free algorithm for solving deterministic policy gradient problems with continuous action spaces. Let $\mu(s \mid \theta^\mu)$ represent the deterministic policy. Since the expectation depends only on the environment, it is possible to learn off-policy with respect to another policy $\beta$ with different stochastic behavior. Let $\rho^\beta$ be the state visitation distribution generated by $\beta$. DDPG combines the greedy policy commonly used in Q-learning (Watkins & Dayan, 1992) with function approximators of the $Q$-function and the policy, parameterized by $\theta^Q$ and $\theta^\mu$ respectively, under the actor-critic framework.
Then, we can compute the gradient of the greedy policy by applying the chain rule to the expected return from the start distribution with respect to the actor parameters (Lillicrap et al., 2015):

$$\nabla_{\theta^\mu} J \approx \mathbb{E}_{s_t \sim \rho^\beta}\Big[\nabla_a Q(s, a \mid \theta^Q)\big|_{s=s_t,\, a=\mu(s_t)}\; \nabla_{\theta^\mu} \mu(s \mid \theta^\mu)\big|_{s=s_t}\Big]. \tag{1}$$
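To make the update concrete, the following is a minimal PyTorch-style sketch of the actor step implied by Eq. 1; the `actor`, `critic`, and `actor_optimizer` objects are illustrative placeholders, not the authors' implementation.

```python
import torch

def ddpg_actor_update(actor, critic, actor_optimizer, state_batch):
    """One deterministic policy-gradient step (Eq. 1), with the critic held fixed."""
    actions = actor(state_batch)                       # a = mu(s | theta_mu)
    actor_loss = -critic(state_batch, actions).mean()  # ascend Q  <=>  descend -Q
    actor_optimizer.zero_grad()
    actor_loss.backward()                              # chain rule: dQ/da * da/dtheta_mu
    actor_optimizer.step()
    return actor_loss.item()
```

In practice, the critic parameters are updated separately with the usual temporal-difference loss.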
Lyapunov function. To satisfy the specified safety requirement for safe exploration, we need a tool to determine safety of a trajectory that follows the current policy into the future. In control theory, this safety is usually computed for a fixed policy using Lyapunov functions.
Definition 1.
Lyapunov functions are continuously differentiable functions $L: \mathcal{S} \to \mathbb{R}$ with $L(s) = 0$ for all $s$ in the origin set $\mathcal{S}_0$ and $L(s) < 0$ for all $s \notin \mathcal{S}_0$. The origin set $\mathcal{S}_0$ is the set of terminal states.
In our experiments, we exploit the fact that action-value functions of the accumulated safety costs, $L(s_t, a_t)$, are Lyapunov functions if the cost function is strictly negative away from the origin. This follows directly from the definition of the action-value function, where

$$L(s_t, a_t) = \mathbb{E}\Big[\textstyle\sum_{\tau = t}^{T} c(s_\tau, a_\tau) \,\Big|\, s_t, a_t\Big] = c(s_t, a_t) + \mathbb{E}\big[L(s_{t+1}, \mu(s_{t+1}))\big]. \tag{2}$$
Safety Evaluation. The key idea is to use the Lyapunov function to provide measurements of trajectory-based safety. In the recent literature, trajectory-based properties are evaluated on a set of policies (Achiam et al., 2017; Chow et al., 2018), which requires the function to express the evaluation of a given policy over the state-action space. Thus, we design the Lyapunov function $L(s_t, a_t)$ as the accumulated safety cost of policy $\mu$ with respect to the cost function $c$, as in Eq. 2.
We show that the state-action value function of the safety cost behaves similarly to gradient ascent on strictly quasiconcave functions: if one can show that, given a policy $\mu$, the agent obtains strictly larger values of $L$ at each step ('going uphill'), then the state will eventually converge to the equilibrium points in the origin set. We can thus achieve safe exploration if $\Delta L(s_t, a_t) := L(s_{t+1}, \mu(s_{t+1})) - L(s_t, a_t) > 0$ for the given policy $\mu$. However, this difference between the values at the two timesteps is not known a priori. Our idea is to use a Gaussian Process to approximate the difference given the state-action pair. During the training phase, the GP model of $\Delta L$ will be fed with approximated measurements at $(s_t, a_t)$. In order to bound the safety evaluation, we make the following assumption.
Assumption 1.
The function $\Delta L$ has bounded Reproducing Kernel Hilbert Space (RKHS) norm with respect to a continuously differentiable, bounded kernel $k$; that is, $\|\Delta L\|_k \le B$ for some $B < \infty$.
4 Safe Exploration with GP Guidance
We choose DDPG (Lillicrap et al., 2015) as the baseline RL algorithm, since its off-policy learning allows the experience to be shared between the estimation of the expected reward return and of the safety cost.
4.1 Approximate Lyapunov Function
We consider an additional function approximator, the Guard Network, parameterized by $\theta^L$, which minimizes the following loss:

$$\mathcal{L}(\theta^L) = \mathbb{E}_{s_t \sim \rho^\beta}\Big[\big(L(s_t, a_t \mid \theta^L) - y_t\big)^2\Big], \tag{3}$$

with the target

$$y_t = c(s_t, a_t) + L\big(s_{t+1}, \mu(s_{t+1} \mid \theta^{\mu}) \mid \theta^{L}\big). \tag{4}$$
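This loss mirrors the DDPG critic update with the safety cost in place of the reward. A rough sketch of one Guard Network update is shown below, assuming the TD-style target of Eq. 4; the module names and the use of target networks are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def guard_network_update(guard, guard_target, policy_target, guard_optimizer, batch):
    """One update of the Guard Network L(s, a | theta_L) toward the target of Eq. 4.

    `batch` is assumed to contain tensors (states, actions, costs, next_states).
    """
    states, actions, costs, next_states = batch
    with torch.no_grad():
        next_actions = policy_target(next_states)                  # mu(s_{t+1})
        target = costs + guard_target(next_states, next_actions)   # Eq. 4 target
    loss = F.mse_loss(guard(states, actions), target)              # Eq. 3
    guard_optimizer.zero_grad()
    loss.backward()
    guard_optimizer.step()
    return loss.item()
```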
4.2 Gaussian Process
In GP regression, we use the outputs of the Guard Network as noisy observations of the true safety estimation. Let $z = (s, a)$ denote the state-action pair observed by the GP. Specifically, we can obtain the posterior distribution of the function value $\Delta L(z)$ at an arbitrary state-action pair $z$ by conditioning the GP distribution of $\Delta L$ on a set of $n$ past measurements $y_n = [\hat{\Delta L}(z_1), \ldots, \hat{\Delta L}(z_n)]^\top$ with $\sigma$-bounded noise, for state-action pairs $\mathcal{D}_n = \{z_1, \ldots, z_n\}$. The measurements are provided by the Guard Network approximation given the current policy, the current state-action pair, and the next state:

$$\hat{\Delta L}(s_t, a_t) = L\big(s_{t+1}, \mu(s_{t+1}) \mid \theta^L\big) - L\big(s_t, a_t \mid \theta^L\big). \tag{5}$$
To bound the noise of the observations, we only select measurements that fall within bounded balls. The posterior over $\Delta L(z)$ is again a GP distribution, with mean $m_n(z)$, covariance $k_n(z, z')$, and variance $\sigma_n^2(z)$:

$$m_n(z) = k_n(z)^\top \big(K_n + \sigma^2 I\big)^{-1} y_n, \tag{6}$$
$$k_n(z, z') = k(z, z') - k_n(z)^\top \big(K_n + \sigma^2 I\big)^{-1} k_n(z'), \tag{7}$$
$$\sigma_n^2(z) = k_n(z, z), \tag{8}$$

where $k_n(z) = [k(z, z_1), \ldots, k(z, z_n)]^\top$ contains the covariances between the new input $z$ and the inputs in $\mathcal{D}_n$, $K_n = [k(z_i, z_j)]_{i,j}$ is the positive-definite covariance matrix, and $I$ is the identity matrix.
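For reference, Eqs. 6-8 can be computed directly; the following NumPy sketch assumes a user-supplied kernel function and is not tied to any particular GP library.

```python
import numpy as np

def gp_posterior(kernel, Z, y, z_new, noise=1e-2):
    """Posterior mean and variance at z_new (Eqs. 6-8).

    kernel(a, b) -> float; Z: (n, d) array of past state-action pairs;
    y: (n,) array of Guard Network measurements; z_new: (d,) query point.
    """
    n = len(Z)
    K = np.array([[kernel(Z[i], Z[j]) for j in range(n)] for i in range(n)])
    k_vec = np.array([kernel(z_new, Z[i]) for i in range(n)])
    A = K + noise**2 * np.eye(n)                    # K_n + sigma^2 I
    mean = k_vec @ np.linalg.solve(A, y)            # Eq. 6
    var = kernel(z_new, z_new) - k_vec @ np.linalg.solve(A, k_vec)  # Eqs. 7-8
    return mean, var
```

For example, a squared-exponential kernel can be passed as `kernel = lambda a, b: np.exp(-0.5 * np.sum((a - b) ** 2))`.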
Lemma 1.
Suppose that $\|\Delta L\|_k \le B$ and that the observation noise is uniformly bounded by $\sigma$. Choose $\beta_n = B + 4\sigma\sqrt{\gamma_n + 1 + \ln(1/\delta)}$, where $\gamma_n$ is the information capacity. Then, for all $n \ge 1$ and all $z \in \mathcal{S} \times \mathcal{A}$, it holds with probability at least $1 - \delta$ that

$$\big|\Delta L(z) - m_{n}(z)\big| \le \beta_n\, \sigma_{n}(z). \tag{9}$$
Lemma 1 allows us to make high-probability statements about the true function values of $\Delta L$. The information capacity $\gamma_n$ is the maximal mutual information that can be obtained about the GP prior from $n$ noisy samples at the state-action pairs in $\mathcal{D}_n$. As a result, we are able to learn the true values of $\Delta L$ over time by making appropriate choices of state-action pairs.
4.3 Initialization
In order to incorporate new data, we maximize the marginal likelihood of the observations $y_n$ after every iteration by adjusting the hyperparameters of the GP model. The term marginal likelihood refers to the marginalization over the latent function values $f$. Under the Gaussian Process model, the prior is Gaussian, i.e., $f \mid \mathcal{D}_n \sim \mathcal{N}(0, K_n)$, and the likelihood is a factorized Gaussian, i.e., $y_n \mid f \sim \mathcal{N}(f, \sigma^2 I)$. We can then obtain the log marginal likelihood as follows (Rasmussen, 2004):

$$\log p(y_n \mid \mathcal{D}_n) = -\tfrac{1}{2}\, y_n^\top \big(K_n + \sigma^2 I\big)^{-1} y_n - \tfrac{1}{2}\log\big|K_n + \sigma^2 I\big| - \tfrac{n}{2}\log 2\pi. \tag{10}$$
The hyperparameters of the GP model, such as the kernel function's parameters, can be optimized so that the model fits the current dataset and measurements with high probability. This step is aimed at addressing the inaccuracy of the initial estimation.
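As a sketch of this hyperparameter adjustment, the snippet below maximizes Eq. 10 for an RBF kernel with a single length-scale; the actual kernel and optimizer used in the paper are not specified, so this is only an assumed instantiation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_params, Z, y, noise=1e-2):
    """Negative of Eq. 10 for an RBF kernel with one length-scale (assumed form)."""
    lengthscale = np.exp(log_params[0])
    sq_dists = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    K = np.exp(-0.5 * sq_dists / lengthscale**2)
    A = K + noise**2 * np.eye(len(Z))
    _, logdet = np.linalg.slogdet(A)
    quad = y @ np.linalg.solve(A, y)
    return 0.5 * quad + 0.5 * logdet + 0.5 * len(Z) * np.log(2 * np.pi)

def fit_hyperparameters(Z, y):
    """Maximize the marginal likelihood over the kernel length-scale."""
    res = minimize(neg_log_marginal_likelihood, x0=np.zeros(1), args=(Z, y))
    return np.exp(res.x[0])
```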
To prevent our model from converging too quickly to an incorrect estimate of $\Delta L$ in high-dimensional tasks, we introduce a single safe trajectory, with state-action pairs at each timestep, as initial knowledge to initialize the GP model, the $Q$ approximator, and the $L$ approximator. This trajectory is required to be safe in the sense that the cost measurement in each state is less than some threshold depending on the system requirement. Hence, we discard any state-action pair that exceeds the cost threshold. These demonstrations are added to the replay buffers of the $Q$ and $L$ approximators together with their associated rewards. The initial GP dataset contains the state-action pairs from the safe trajectory, and the measurements are given by the negation of the cost function for each state-action pair, i.e., $\hat{\Delta L}(s_t, a_t) = -c(s_t, a_t)$.
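A minimal sketch of this initialization step, assuming the safe demonstration is available as a list of (state, action, cost) tuples and that the cost threshold is chosen by the user (both are assumptions about the interface, not the authors' code):

```python
import numpy as np

def init_gp_dataset(safe_trajectory, cost_threshold):
    """Build the initial GP dataset from a single safe demonstration.

    safe_trajectory: iterable of (state, action, cost) tuples of 1-D arrays/floats.
    State-action pairs whose cost exceeds the threshold are discarded;
    the GP measurement for each remaining pair is the negated cost.
    """
    Z, y = [], []
    for state, action, cost in safe_trajectory:
        if cost > cost_threshold:
            continue                                 # drop unsafe samples
        Z.append(np.concatenate([state, action]))    # z = (s, a)
        y.append(-cost)                              # measurement = -c(s, a)
    return np.array(Z), np.array(y)
```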
4.4 Online GP Estimation
As the agent continues to collect new measurements during the execution of policies, the set of samples increases in size. The state-action pair $(s_t, a_t)$ is stored in $\mathcal{D}_n$ only if the measurement $\hat{\Delta L}(s_t, a_t)$ lies outside a small ball around zero, i.e., $|\hat{\Delta L}(s_t, a_t)| > \epsilon$. We use this to prevent overfitting at the origin set. After each run, the singularity of the covariance matrix based on $\mathcal{D}_n$ is checked by QR decomposition to eliminate highly correlated data.
In order to maintain a dataset of fixed size, a natural and simple way to determine whether to delete a point from the dataset is to check how well it is approximated by the rest of the elements in $\mathcal{D}_n$. This is known as the kernel linear independence test (Csató & Opper, 2002). For GPs, the linear independence test for the $i$-th element of $\mathcal{D}_n$ is computed as

$$\gamma_i = k(z_i, z_i) - k_{-i}(z_i)^\top K_{-i}^{-1}\, k_{-i}(z_i), \tag{11}$$

which is the variance of $z_i$ conditioned on the rest of the elements, without observation noise; here $K_{-i}$ and $k_{-i}(z_i)$ are the covariance matrix and covariance vector computed with the $i$-th element removed. Csató & Opper (2002) show that the diagonal values of $K_n^{-1}$ correspond to $1/\gamma_i$ for each element. Hence, we can delete the element with the lowest value of $\gamma_i$, so that the deletion has the least impact on the GP prediction, and keep the size of the dataset fixed.
Remark.
When the full dataset encounters a new data point and grows by one element, the kernel linear independence test measures, for each data point, the length of the component of its feature-space basis vector that lies outside the span of the remaining points.
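A sketch of this pruning step, assuming the Gram matrix of the current dataset is available and using the relation $\gamma_i = 1/[K_n^{-1}]_{ii}$ from Csató & Opper (2002); the function removes one point at a time.

```python
import numpy as np

def prune_gp_dataset(K, Z, y, max_size):
    """Drop the least linearly independent point when the dataset exceeds max_size.

    K: (n, n) kernel (Gram) matrix of the current dataset, assumed positive definite.
    gamma_i = 1 / [K^{-1}]_{ii} measures how well point i is explained by the others;
    the point with the smallest gamma is removed (Eq. 11, noise-free convention).
    """
    if len(Z) <= max_size:
        return K, Z, y
    K_inv = np.linalg.inv(K)
    gamma = 1.0 / np.diag(K_inv)
    i = int(np.argmin(gamma))                 # least informative element
    keep = np.ones(len(Z), dtype=bool)
    keep[i] = False
    return K[np.ix_(keep, keep)], Z[keep], y[keep]
```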
4.5 Safety-Guided Exploration
Given the result of Lemma 1, we can derive the lower and upper bounds of the confidence interval after $n$ measurements of $\hat{\Delta L}$ from Eq. 5 as

$$l_n(z) := m_n(z) - \beta_n\, \sigma_n(z), \tag{12}$$
$$u_n(z) := m_n(z) + \beta_n\, \sigma_n(z), \tag{13}$$
respectively. In the following, we assume that $\beta_n$ is chosen according to Lemma 1, which allows us to state that $\Delta L(z)$ takes values within $[l_n(z), u_n(z)]$ with high probability (at least $1-\delta$).
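Given the GP posterior and the scaling factor, Eqs. 12-13 reduce to one line each; a trivial helper for completeness:

```python
def confidence_bounds(mean, var, beta):
    """Lower and upper confidence bounds of Eqs. 12-13 from the GP posterior."""
    std = var ** 0.5
    return mean - beta * std, mean + beta * std
```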
Based on the confidence interval, we can adapt our policy search to maximize the Q-value while ensuring that the lower bound of $\Delta L$, i.e., the worst-case increase of the Lyapunov function, is larger than zero with high probability. We pick a positive scalar $\lambda$ and modify the policy update to

$$\max_{\theta^\mu}\; \mathbb{E}_{s_t \sim \rho^\beta}\Big[ Q\big(s_t, \mu(s_t \mid \theta^\mu) \mid \theta^Q\big) + \lambda\, l_n\big(s_t, \mu(s_t \mid \theta^\mu)\big)\Big], \tag{14}$$

where $\lambda$ is large enough to force the agent to choose safe actions satisfying $l_n(s_t, \mu(s_t)) > 0$.
To obtain more accurate GP models, we need to both satisfy the safety requirements and reduce the uncertainty of the GP. We select the policy that solves

$$\max_{\theta^\mu}\; \sigma_n\big(s_t, \mu(s_t \mid \theta^\mu)\big) \tag{15}$$
$$\text{s.t.}\quad l_n\big(s_t, \mu(s_t \mid \theta^\mu)\big) > 0 \tag{16}$$
and use the resulting state-action pair as the next point at which to evaluate trajectory-based safety. These two objectives turn the safe exploration problem into a multi-objective optimization problem. On one hand, the agent should take a safe action that maximizes the return. On the other hand, the chosen action should provide as much information as possible to the GP estimation in order to reduce uncertainty. From the above formulation, we can derive that the optimal action $a^*$ has the following property:
$$l_n\big(s_t, a^*\big) \rightarrow 0. \tag{17}$$
With this property, we can combine the two objectives and the constraint by introducing a term that penalizes actions resulting in a negative lower bound and rewards actions whose lower bound is positive but close to zero. Thus, we design this term as a zero-mean Gaussian of $l_n$. We can then rewrite the multi-objective policy optimization problem using the weighted-sum method:

$$\max_{\theta^\mu}\; \mathbb{E}_{s_t \sim \rho^\beta}\Big[ w_1\, Q\big(s_t, \mu(s_t \mid \theta^\mu) \mid \theta^Q\big) + w_2\, \lambda\, l_n\big(s_t, \mu(s_t \mid \theta^\mu)\big) + w_3\, \mathcal{N}\big(l_n(s_t, \mu(s_t \mid \theta^\mu));\, 0, \sigma_e^2\big)\Big], \tag{18}$$

where $w_1 + w_2 + w_3 = 1$ and $w_i \ge 0$. So far, we have three components in the policy optimization objective: maximizing the reward return as given by the $Q$-value, penalizing violations of safety, and reducing the uncertainty of the GP.
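Since the exact form of Eq. 18 is reconstructed here, the following PyTorch-style actor loss should be read as a sketch of the intended combination rather than the authors' objective; `gp_lower_bound` is assumed to return a differentiable estimate of $l_n(s, \mu(s))$, and the weights are illustrative.

```python
import torch

def safety_guided_actor_loss(actor, critic, gp_lower_bound, states,
                             w_q=1.0, w_safe=1.0, w_explore=0.1, sigma=0.1):
    """Combined objective: maximize Q, keep the GP lower bound positive,
    and reward actions whose lower bound is near zero (most informative)."""
    actions = actor(states)
    q_term = critic(states, actions).mean()
    lower = gp_lower_bound(states, actions)              # l_n(s, mu(s)), differentiable
    safe_term = lower.mean()                             # push worst-case Delta L above 0
    explore_term = torch.exp(-0.5 * (lower / sigma) ** 2).mean()  # Gaussian bonus near 0
    # Negate because optimizers minimize.
    return -(w_q * q_term + w_safe * safe_term + w_explore * explore_term)
```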
The overall algorithm is summarized in Algorithm 1.
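Algorithm 1 itself is not reproduced in this text; the following high-level Python sketch reflects our reading of the method, with all objects (`env`, `actor`, `critic`, `guard`, `gp`, `replay_buffer`) as placeholders and an environment wrapper assumed to return the safety cost alongside the reward.

```python
def train(env, actor, critic, guard, gp, replay_buffer,
          episodes, epsilon, max_dataset_size):
    """High-level sketch of safety-guided DDPG training (not the authors' code)."""
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = actor.act(state)                          # safety-guided policy
            next_state, reward, cost, done = env.step(action)  # wrapper also returns cost
            replay_buffer.add(state, action, reward, cost, next_state)
            # Guard Network provides a noisy measurement of Delta L for the GP (Eq. 5).
            measurement = guard(next_state, actor.act(next_state)) - guard(state, action)
            if abs(measurement) > epsilon:                     # skip points near the origin set
                gp.add(state, action, measurement)
            critic.update(replay_buffer)                       # standard DDPG critic step
            guard.update(replay_buffer)                        # Guard Network step (Eqs. 3-4)
            actor.update(replay_buffer, gp)                    # weighted-sum objective (Eq. 18)
            state = next_state
        gp.fit_hyperparameters()                               # maximize marginal likelihood (Eq. 10)
        gp.prune(max_dataset_size)                             # kernel independence test (Eq. 11)
```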
5 Experiments
In this section, we evaluate Algorithm 1 on two different tasks in simulation, inverted pendulum and half cheetah, from the OpenAI Gym (Brockman et al., 2016). We assume that the dynamics of the system and the environment are both unknown. We consider the performance of the DDPG policy trained for a fixed budget of steps (on the order of a million) as the baseline. We first validate our approach on a benchmark swing-up problem in the inverted pendulum environment. Then, we extend our experiment to a more complex and safety-critical locomotion task where the goal is to make a half cheetah move forward as fast as possible. Both environments have continuous state/action spaces and are initialized randomly for each run. The safety goal is that the number of catastrophes, as defined in each experiment, should be minimized during training.
For all of our examples, we represent the $Q$ function, the $L$ function, and the policy as three feed-forward neural networks with two hidden layers and varying numbers of neurons in the different environments. The settings are similar to Lillicrap et al. (2015).
5.1 Inverted Pendulum
The state of the inverted pendulum contains the angle $\theta$ and angular velocity $\dot\theta$ of the pendulum. It has a single continuous action, the applied torque, which is bounded in magnitude. The limited torque makes the task harder, since the maximum applied torque is not enough to swing up the pendulum directly. The goal is to swing up and balance the pendulum in an upright position. We define the reward function as the quadratic form $r(s_t, a_t) = s_t^\top P s_t + a_t^\top R a_t$, where the negative-definite $P$ and $R$ penalize large angular position $\theta$, angular velocity $\dot\theta$, and action $a$. The cost function is the same as the reward function, $c(s_t, a_t) = r(s_t, a_t)$. To approximate the $Q$ function and $L$ function, we use feed-forward neural networks with two hidden layers of 64 neurons each; the hidden layers use ReLU as the activation function, and the output layer uses no activation function. For the policy, we use a feed-forward neural network with two hidden layers of 64 neurons each, with ReLU for the hidden layers and tanh for the output layer. We optimize the policy via stochastic gradient descent on Eq. 18.

To improve computational efficiency, we fix the confidence-interval scaling factor $\beta_n$ to a constant in this experiment. Here, a catastrophe is defined as passing through the vertically downward position within an episode (200 timesteps per episode). The experimental result (video: https://youtu.be/etYqt15sGRY) is shown in Fig. 2. Starting from a random initial state, the policy derived from DDPG with GP avoids catastrophes entirely during training. The pendulum achieves the baseline performance after far fewer steps than pure DDPG needs.
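For illustration only, a possible instantiation of the quadratic reward/cost defined above is sketched below; the actual weight matrices are not given in the text, so the values here are hypothetical.

```python
import numpy as np

# Hypothetical negative-definite weights; the paper's actual values are not specified.
P = -np.diag([1.0, 0.1])     # penalize angle and angular velocity
R = -np.diag([0.001])        # penalize torque magnitude

def pendulum_reward(state, action):
    """Quadratic reward r(s, a) = s^T P s + a^T R a; the safety cost is identical."""
    return float(state @ P @ state + action @ R @ action)
```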
5.2 Locomotion Task
We further validate our approach on a 6-DOF planar half cheetah model with 17 continuous state components in MuJoCo (Todorov et al., 2012). Typically, in more complex tasks, it will be harder to encode both safety and performance in the same function. Also, the initial GP estimation will be very unreliable. Hence, we design different functions to represent reward and safety cost respectively, and assume some initial knowledge is given.
We define a reward function that rewards positive forward velocity and penalizes large control actions. The cost function here is related to the body rotation angle: the larger its magnitude, the more likely the cheetah is to fall. A catastrophe is therefore considered to have occurred when the half cheetah falls down somewhere along the trajectory. We cap the dataset for GP estimation at a fixed number of elements and initialize it with a single safe trajectory. In this high-dimensional setting, it would be too conservative to approximate the scaling factor $\beta_n$ of the confidence intervals with a constant. Thus, we compute an approximation of the scaling factor from the samples in the current dataset. The mutual information can be computed as
$$\gamma_n = \tfrac{1}{2}\log\big|I + \sigma^{-2} K_n\big|, \tag{19}$$

and the RKHS bound can be obtained through the kernel function as

$$B \approx \sqrt{y_n^\top K_n^{-1} y_n}. \tag{20}$$

Thus, according to Lemma 1, we can compute $\beta_n$ online.
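A sketch of this online computation, following Eqs. 19-20 and the standard form of $\beta_n$ from Chowdhury & Gopalan (2017); the constants below are assumptions.

```python
import numpy as np

def beta_online(K, y, noise, delta=0.05):
    """Approximate the confidence-interval scaling factor from the current dataset.

    K: kernel matrix of the dataset; y: measurements; noise: observation-noise bound.
    """
    n = len(y)
    # Eq. 19: information capacity of the noisy observations.
    _, logdet = np.linalg.slogdet(np.eye(n) + K / noise**2)
    gamma_n = 0.5 * logdet
    # Eq. 20: RKHS-norm bound of the minimum-norm interpolant of the data.
    B = np.sqrt(y @ np.linalg.solve(K + 1e-8 * np.eye(n), y))
    # Lemma 1 (standard form): beta_n = B + 4 * sigma * sqrt(gamma_n + 1 + ln(1/delta)).
    return B + 4.0 * noise * np.sqrt(gamma_n + 1.0 + np.log(1.0 / delta))
```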
The $Q$ function and $L$ function are represented by two separate feed-forward neural networks with two hidden layers of 64 neurons each. The hidden layers use ReLU as the activation function, and no activation function is applied at the output layers. The policy network has two hidden layers, with the same layer sizes as used in Lillicrap et al. (2015). Its hidden layers use the ReLU activation function and its output layer uses the tanh activation function.
5.3 Safety and Performance Comparison.
For a fair comparison, we feed the same initial knowledge into the replay buffer of pure DDPG before training. Using our method, the agent can safely explore the environment and reaches the baseline performance in far fewer steps than the pure DDPG policy requires. We compare our method with DDPG trained with the same amount of samples in Fig. 2(a). The result (video: https://youtu.be/CcNIrLlbijU) shows that our method obtains a higher return and fewer training-time catastrophes than DDPG. Although the prediction and data elimination of the online GP model add computational overhead, DDPG with GP is still able to achieve higher performance and a safer policy within the same amount of wall-clock time (Fig. 2(b)). Our approach is in line with recent results on learning acceleration when a small amount of demonstration data is available at the beginning (Večerík et al., 2017; Hester et al., 2018).
5.4 Validate the Role of Online GP.
We compare safety-guided learning using online GP estimation with one that uses a fixed GP model. We initialize both models with the same initial knowledge. In Fig. 4, we can see that the initial performance of both models is similar. However, as training goes on, for DDPG with fixed GP, the accumulated reward drops and the number of training-time catastrophes increases (due to inaccuracies in the GP estimation). For the same number of timesteps, DDPG with fixed GP has lower performance than DDPG with online GP. This result shows that adjusting the GP model online is critical as policies get updated during training.
6 Conclusion
In this paper, we propose to tackle the safe RL problem with the notion of Lyapunov function and trajectory-based safety to learn policies that are both safe and have low accumulated safety cost during exploration. We have shown how to incorporate estimation of trajectory-based safety in deep reinforcement learning algorithms such as DDPG. Specifically, we show how to safely optimize policies and give stability certificates based on Gaussian Process models of trajectory-based safety evaluation. On a simple control benchmark and a more complex locomotion task, we demonstrate the effectiveness of our approach in significantly reducing catastrophes and accelerating training.
In terms of future work, we want to understand better what role initial knowledge plays in influencing the efficacy of our method. One direction is to come up with statistical characterization of initial knowledge which can give statistical guarantees on the safety of the training process. On the computational side, as safety evaluation inevitably adds an overhead to the training process, we plan to investigate more efficient ways to estimate trajectory-based safety and to incorporate these estimates in policy optimization.
References
- Abbeel & Ng (2005) Pieter Abbeel and Andrew Y Ng. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning, pp. 1–8. ACM, 2005.
- Achiam et al. (2017) Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. arXiv preprint arXiv:1705.10528, 2017.
- Alshiekh et al. (2018) Mohammed Alshiekh, Roderick Bloem, Rüdiger Ehlers, Bettina Könighofer, Scott Niekum, and Ufuk Topcu. Safe reinforcement learning via shielding. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- Amodei et al. (2016) Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
- Berkenkamp et al. (2016) Felix Berkenkamp, Riccardo Moriconi, Angela P Schoellig, and Andreas Krause. Safe learning of regions of attraction for uncertain, nonlinear systems with gaussian processes. In Decision and Control (CDC), 2016 IEEE 55th Conference on, pp. 4661–4666. IEEE, 2016.
- Berkenkamp et al. (2017) Felix Berkenkamp, Matteo Turchetta, Angela Schoellig, and Andreas Krause. Safe model-based reinforcement learning with stability guarantees. In Advances in Neural Information Processing Systems, pp. 908–918, 2017.
- Bhatia & Szegö (2002) Nam Parshad Bhatia and Giorgio P Szegö. Stability theory of dynamical systems. Springer Science & Business Media, 2002.
- Brockman et al. (2016) Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
- Chow et al. (2018) Yinlam Chow, Ofir Nachum, Edgar Duenez-Guzman, and Mohammad Ghavamzadeh. A Lyapunov-based Approach to Safe Reinforcement Learning. arXiv preprint arXiv:1805.07708, 2018.
- Chowdhary et al. (2014) Girish Chowdhary, Miao Liu, Robert Grande, Thomas Walsh, Jonathan How, and Lawrence Carin. Off-policy reinforcement learning with Gaussian processes. IEEE/CAA Journal of Automatica Sinica, 1(3):227–238, 2014.
- Chowdhury & Gopalan (2017) Sayak Ray Chowdhury and Aditya Gopalan. On kernelized multi-armed bandits. arXiv preprint arXiv:1704.00445, 2017.
- Csató & Opper (2002) Lehel Csató and Manfred Opper. Sparse on-line Gaussian processes. Neural computation, 14(3):641–668, 2002.
- Eysenbach et al. (2017) Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, and Sergey Levine. Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning. arXiv preprint arXiv:1711.06782, 2017.
- García & Fernández (2015) Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437–1480, 2015.
- Gehring & Precup (2013) Clement Gehring and Doina Precup. Smart exploration in reinforcement learning using absolute temporal difference errors. In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems, pp. 1037–1044. International Foundation for Autonomous Agents and Multiagent Systems, 2013.
- Geibel & Wysotzki (2005) Peter Geibel and Fritz Wysotzki. Risk-sensitive reinforcement learning applied to control under constraints. Journal of Artificial Intelligence Research, 24:81–108, 2005.
- Hester et al. (2018) Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. Deep q-learning from demonstrations. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- Howard & Matheson (1972) Ronald A Howard and James E Matheson. Risk-sensitive Markov decision processes. Management science, 18(7):356–369, 1972.
- Law et al. (2005) Edith LM Law, Melanie Coggan, Doina Precup, and Bohdana Ratitch. Risk-directed Exploration in Reinforcement Learning. Planning and Learning in A Priori Unknown or Dynamic Domains, pp. 97, 2005.
- Leike et al. (2017) Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. Ai safety gridworlds. arXiv preprint arXiv:1711.09883, 2017.
- Lillicrap et al. (2015) Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
- Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
- Mockus (2012) Jonas Mockus. Bayesian approach to global optimization: theory and applications, volume 37. Springer Science & Business Media, 2012.
- Moldovan & Abbeel (2012) Teodor Mihai Moldovan and Pieter Abbeel. Safe exploration in Markov decision processes. arXiv preprint arXiv:1205.4810, 2012.
- Pecka & Svoboda (2014) Martin Pecka and Tomas Svoboda. Safe exploration techniques for reinforcement learning–an overview. In International Workshop on Modelling and Simulation for Autonomous Systems, pp. 357–375. Springer, 2014.
- Rasmussen (2004) Carl Edward Rasmussen. Gaussian processes in machine learning. In Advanced lectures on machine learning, pp. 63–71. Springer, 2004.
- Sato et al. (2001) Makoto Sato, Hajime Kimura, and Shibenobu Kobayashi. TD algorithm for the variance of return and mean-variance reinforcement learning. Transactions of the Japanese Society for Artificial Intelligence, 16(3):353–362, 2001.
- Saunders et al. (2018) William Saunders, Girish Sastry, Andreas Stuhlmueller, and Owain Evans. Trial without error: Towards safe reinforcement learning via human intervention. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 2067–2069. International Foundation for Autonomous Agents and Multiagent Systems, 2018.
- Sutton & Barto (2018) Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
- Tamar et al. (2012) Aviv Tamar, Dotan Di Castro, and Shie Mannor. Policy Gradients with Variance Related Risk Criteria. In Proceedings of the 29th International Coference on International Conference on Machine Learning, ICML’12, pp. 1651–1658, USA, 2012. Omnipress. ISBN 978-1-4503-1285-1.
- Tesauro et al. (2006) Gerald Tesauro, Nicholas K Jong, Rajarshi Das, and Mohamed N Bennani. A hybrid reinforcement learning approach to autonomic resource allocation. In Autonomic Computing, 2006. ICAC’06. IEEE International Conference on, pp. 65–73. IEEE, 2006.
- Todorov et al. (2012) Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. IEEE, 2012.
- Torrey & Taylor (2012) Lisa Torrey and Matthew E Taylor. Help an agent out: Student/teacher learning in sequential decision tasks. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-12), 2012.
- Večerík et al. (2017) Matej Večerík, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, and Martin Riedmiller. Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. arXiv preprint arXiv:1707.08817, 2017.
- Watkins & Dayan (1992) Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279–292, 1992.
Appendix A Additional Experimental Results
In Fig. 5, we investigate the choice of initial knowledge. Two full trajectories with different accumulated rewards are considered here; they obtain different returns and accumulated safety costs. Both initialization settings ensure safety during training. However, the high-performance trajectory tends to guide the policy search closer to the optimal policy and results in less performance variance.