A Lyapunov-based Approach to Safe Reinforcement Learning

05/20/2018 · by Yinlam Chow et al. (Google)

In many real-world reinforcement learning (RL) problems, besides optimizing the main objective function, an agent must concurrently avoid violating a number of constraints. In particular, besides optimizing performance it is crucial to guarantee the safety of an agent during training as well as deployment (e.g. a robot should avoid taking actions - exploratory or not - which irrevocably harm its hardware). To incorporate safety in RL, we derive algorithms under the framework of constrained Markov decision problems (CMDPs), an extension of the standard Markov decision problems (MDPs) augmented with constraints on expected cumulative costs. Our approach hinges on a novel Lyapunov method. We define and present a method for constructing Lyapunov functions, which provide an effective way to guarantee the global safety of a behavior policy during training via a set of local, linear constraints. Leveraging these theoretical underpinnings, we show how to use the Lyapunov approach to systematically transform dynamic programming (DP) and RL algorithms into their safe counterparts. To illustrate their effectiveness, we evaluate these algorithms in several CMDP planning and decision-making tasks on a safety benchmark domain. Our results show that our proposed method significantly outperforms existing baselines in balancing constraint satisfaction and performance.


1 Introduction

Reinforcement learning (RL) has shown exceptional successes in a variety of domains such as video games [24] and recommender systems [37], where the main goal is to optimize a single return. However, in many real-world problems, besides optimizing the main objective (the return), there can exist several conflicting constraints that make RL challenging. In particular, besides optimizing performance it is crucial to guarantee the safety of an agent in deployment [5] as well as during training [2]. For example, a robot agent should avoid taking actions which irrevocably harm its hardware; a recommender system must avoid presenting harmful or offensive items to users.

Sequential decision-making in non-deterministic environments has been extensively studied in the literature under the framework of Markov decision problems (MDPs). To incorporate safety into the RL process, we are particularly interested in deriving algorithms under the framework of constrained Markov decision problems (CMDPs), an extension of MDPs with expected cumulative constraint costs. The additional constraint component of CMDPs increases the flexibility of modeling problems with trajectory-based constraints, compared with approaches that customize immediate costs in MDPs to handle constraints [31]. As shown in numerous applications ranging from robot motion planning [29, 25, 10] and resource allocation [23, 17] to financial engineering [1, 38], it is more natural to define safety over the whole trajectory than over particular state and action pairs. Under this framework, we say an agent's behavior policy is safe if it satisfies the cumulative cost constraints of the CMDP.

Despite the capabilities of CMDPs, they have not been very popular in RL. One main reason is that, although optimal policies of finite CMDPs are Markov and stationary, and the CMDP can be solved by linear programming (LP) when the model is known [3], it is unclear how to extend this approach to cases where the model is unknown or the state and action spaces are large or continuous. A well-known approach to solving CMDPs is the Lagrangian method [4, 14], which augments the standard expected reward objective with a penalty on constraint violation. With a fixed Lagrange multiplier, one can use standard dynamic programming (DP) or RL algorithms to solve for an optimal policy. With a learnable Lagrange multiplier, one must solve the resulting saddle-point problem. However, several studies [20] showed that iteratively solving for the saddle point is apt to run into numerical stability issues. More importantly, the Lagrangian policy is only safe asymptotically and provides few guarantees about the safety of the behavior policy at each training iteration.

Motivated by these observations, several recent works have derived surrogate algorithms for solving CMDPs, which transform the original constraint into a more conservative one that yields an easier problem to solve. A straightforward approach is to replace the cumulative constraint cost with a conservative step-wise surrogate constraint [9] that depends only on the current state and action pair. Since this surrogate constraint can be easily embedded into the admissible control set, the resulting formulation is an MDP with a restricted set of admissible actions. Another surrogate algorithm was proposed by [13], in which one first computes a uniform super-martingale constraint value function surrogate w.r.t. all policies, and then computes a CMDP-feasible policy by optimizing the surrogate problem using the lexicographical ordering method [36]. These methods are advantageous in the sense that (i) RL algorithms are available to handle the surrogate problems (see, e.g., [12] for the step-wise surrogate and [26] for the super-martingale surrogate), and (ii) the policy returned by these methods is safe, even during training. However, their main drawback is conservativeness, and characterizing the sub-optimality of the resulting policy remains a challenging task. On the policy-gradient side, [2] recently proposed the constrained policy optimization (CPO) method, which extends trust-region policy optimization (TRPO) to handle CMDP constraints. While this algorithm is scalable and its policy is safe during training, analyzing its convergence is challenging, and applying the methodology to other RL algorithms is non-trivial.

Lyapunov functions have been extensively used in control theory to analyze the stability of dynamic systems [19, 27]. A Lyapunov function is a scalar potential function that keeps track of the energy that a system continually dissipates. Besides modeling physical energy, Lyapunov functions can also represent abstract quantities, such as the steady-state performance of a Markov process [15]. In many fields, Lyapunov functions provide a powerful paradigm for translating global properties of a system into local ones, and vice versa. The use of Lyapunov functions in RL was first studied by [30], where they were used to guarantee the closed-loop stability of an agent. Recently, [6] used Lyapunov functions to guarantee a model-based RL agent's ability to re-enter an "attraction region" during exploration. However, no previous work has used a Lyapunov approach to explicitly model constraints in a CMDP. Furthermore, one major drawback of these approaches is that the Lyapunov functions are hand-crafted, and there are no principled guidelines for designing Lyapunov functions that can guarantee an agent's performance.

The contribution of this paper is four-fold. First, we formulate the problem of safe reinforcement learning as a CMDP and propose a novel Lyapunov approach for solving it. While the main challenge of other Lyapunov-based methods is designing a Lyapunov function candidate, we propose an LP-based algorithm that constructs Lyapunov functions w.r.t. generic CMDP constraints. We also show that our method is guaranteed to always return a feasible policy and, under certain technical assumptions, achieves optimality. Second, leveraging the theoretical underpinnings of the Lyapunov approach, we present two safe DP algorithms, safe policy iteration (SPI) and safe value iteration (SVI), and analyze their feasibility and performance. Third, to handle unknown environment models and large state/action spaces, we develop two scalable, safe RL algorithms: (i) safe DQN, an off-policy fitted iteration method, and (ii) safe DPI, an approximate policy iteration method. Fourth, to illustrate the effectiveness of these algorithms, we evaluate them in several tasks on a benchmark 2D planning problem and show that they outperform common baselines in balancing performance and constraint satisfaction.

2 Preliminaries

We consider the RL problem in which the agent's interaction with the system is modeled as a Markov decision process (MDP). An MDP is a tuple $(\mathcal{X}, \mathcal{A}, c, P, x_0)$, where $\mathcal{X} = \mathcal{X}' \cup \{x_{\mathrm{Term}}\}$ is the state space, with transient state space $\mathcal{X}'$ and terminal state $x_{\mathrm{Term}}$; $\mathcal{A}$ is the action space; $c(x,a)$ is the immediate cost function (negative reward); $P(\cdot \mid x, a)$ is the transition probability distribution; and $x_0 \in \mathcal{X}'$ is the initial state. Our results easily generalize to random initial states and random costs, but for simplicity we focus on the case of a deterministic initial state and immediate cost. In the more general setting where cumulative constraints are taken into account, we define a constrained Markov decision process (CMDP), which extends the MDP model by introducing additional costs and associated constraints. A CMDP is defined by $(\mathcal{X}, \mathcal{A}, c, d, P, x_0, d_0)$, where the first components are the same as in the unconstrained MDP; $d(x)$ is the immediate constraint cost; and $d_0 \in \mathbb{R}_{\ge 0}$ is an upper bound on the expected cumulative (through time) constraint cost. To formalize the optimization problem associated with CMDPs, let $\Delta$ be the set of Markov stationary policies, i.e., distributions $\pi(\cdot \mid x)$ over actions for any state $x$. Also let $T^*$ be a random variable corresponding to the first-hitting time of the terminal state $x_{\mathrm{Term}}$ induced by policy $\pi$. In this paper, we follow the standard notion of transient MDPs and assume that the first-hitting time is uniformly bounded by an upper bound $T_{\max}$. This assumption can be justified by the fact that sample trajectories collected in most RL algorithms have a finite stopping time (also known as a time-out); the assumption may also be relaxed in cases where a discount is applied to future costs. For notational convenience, at each state $x \in \mathcal{X}'$, we define the generic Bellman operator w.r.t. policy $\pi$ and generic cost function $h$:

$$T_{\pi, h}[V](x) = \sum_{a \in \mathcal{A}} \pi(a \mid x)\Big[ h(x, a) + \sum_{x' \in \mathcal{X}'} P(x' \mid x, a)\, V(x') \Big].$$

Given a policy $\pi \in \Delta$ and an initial state $x_0$, the cost function is defined as $\mathcal{C}_{\pi}(x_0) = \mathbb{E}\big[\sum_{t=0}^{T^*-1} c(x_t, a_t) \mid \pi, x_0\big]$, and the safety constraint is defined as $\mathcal{D}_{\pi}(x_0) \le d_0$, where the safety constraint function is given by $\mathcal{D}_{\pi}(x_0) = \mathbb{E}\big[\sum_{t=0}^{T^*-1} d(x_t) \mid \pi, x_0\big]$. In general, the CMDP problem we wish to solve is given as follows:

Problem $\mathcal{OPT}$: Given an initial state $x_0$ and a threshold $d_0$, solve
$$\min_{\pi \in \Delta} \ \mathcal{C}_{\pi}(x_0) \quad \text{s.t.} \quad \mathcal{D}_{\pi}(x_0) \le d_0.$$
If this problem has a non-empty feasible set, the optimal policy is denoted by $\pi^*$.

Under the transient CMDP assumption, Theorem 8.1 in [3] shows that if the feasibility set is non-empty, then there exists an optimal policy in the class of stationary Markovian policies $\Delta$. To motivate the CMDP formulation studied in this paper, we include in Appendix A two real-life examples of modeling safety using (i) a reachability constraint and (ii) a constraint that limits the agent's visits to undesirable states. Recently there have been a number of works on CMDP algorithms; their details can be found in Appendix B.
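To make the notation concrete, the following minimal sketch (our own illustration, not code from the paper) evaluates the cumulative cost and the cumulative constraint cost of a stationary policy in a small tabular transient CMDP; the array names and shapes are assumptions for the example.

```python
import numpy as np

def policy_values(P, c, d, pi):
    """Evaluate C_pi and D_pi for a tabular transient CMDP.

    P  : (S, A, S) transition probabilities over the transient states; any missing
         probability mass corresponds to moving to the terminal (absorbing) state.
    c  : (S, A) immediate objective costs.
    d  : (S,)   immediate constraint costs (state-based, as in the text).
    pi : (S, A) Markov stationary policy; each row sums to 1.
    """
    S = P.shape[0]
    P_pi = np.einsum('sa,sap->sp', pi, P)          # state-to-state transitions under pi
    c_pi = np.einsum('sa,sa->s', pi, c)            # expected one-step objective cost
    # Fixed points of the Bellman operator T_{pi,h}[V] = h + P_pi V (terminal value 0).
    C = np.linalg.solve(np.eye(S) - P_pi, c_pi)    # cumulative objective cost C_pi
    D = np.linalg.solve(np.eye(S) - P_pi, d)       # cumulative constraint cost D_pi
    return C, D

# A policy is feasible for problem OPT if D[x0] <= d0 at the initial state x0.
```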

3 A Lyapunov Approach for Solving CMDPs

In this section we develop a novel methodology for solving CMDPs using the Lyapunov approach. To start, assume without loss of generality that we have access to a baseline feasible policy of problem $\mathcal{OPT}$, namely $\pi_B$ (one example of $\pi_B$ is a policy that minimizes the constraint, i.e., $\pi_B \in \arg\min_{\pi \in \Delta} \mathcal{D}_{\pi}(x_0)$). We define the non-empty set of Lyapunov functions w.r.t. initial state $x_0$ and constraint threshold $d_0$ as

$$\mathcal{L}_{\pi_B}(x_0, d_0) = \Big\{ L : \mathcal{X} \to \mathbb{R}_{\ge 0} \ : \ T_{\pi_B, d}[L](x) \le L(x), \ \forall x \in \mathcal{X}'; \quad L(x_0) \le d_0 \Big\}. \tag{1}$$

(This set is non-empty because the constraint cost function $\mathcal{D}_{\pi_B}$ is itself a valid Lyapunov function: it satisfies $T_{\pi_B, d}[\mathcal{D}_{\pi_B}](x) = \mathcal{D}_{\pi_B}(x)$ for $x \in \mathcal{X}'$, and $\mathcal{D}_{\pi_B}(x_0) \le d_0$ by the feasibility of $\pi_B$.)

For any arbitrary Lyapunov function $L \in \mathcal{L}_{\pi_B}(x_0, d_0)$, denote by $\mathcal{F}_L(x) = \{\pi \in \Delta : T_{\pi, d}[L](x) \le L(x)\}$ the set of $L$-induced Markov stationary policies. Since $T_{\pi, d}$ is a contraction mapping [7], any induced policy $\pi \in \mathcal{F}_L$ has the following property: $\mathcal{D}_{\pi}(x) \le L(x)$ for all $x \in \mathcal{X}'$. Together with the property $L(x_0) \le d_0$, this further implies that any induced policy is a feasible policy of problem $\mathcal{OPT}$. However, in general the set $\mathcal{F}_L$ does not necessarily contain an optimal policy of problem $\mathcal{OPT}$, and our main contribution is to design a Lyapunov function (w.r.t. a baseline policy) that provides this guarantee. In other words, our main goal is to construct a Lyapunov function $L$ such that

$$T_{\pi^*, d}[L](x) \le L(x), \quad \forall x \in \mathcal{X}', \qquad \text{i.e., } \pi^* \in \mathcal{F}_L. \tag{2}$$

Before getting into the main results, we consider the following important technical lemma, which states that with appropriate cost shaping, one can always transform the constraint value function w.r.t. the optimal policy $\pi^*$ into a Lyapunov function in $\mathcal{L}_{\pi_B}(x_0, d_0)$, i.e., one that is induced by the baseline policy $\pi_B$. The proof of this lemma can be found in Appendix C.1.

Lemma 1.

There exists an auxiliary constraint cost $\epsilon^* : \mathcal{X}' \to \mathbb{R}_{\ge 0}$ such that the Lyapunov function $L_{\epsilon^*}$ is given by $L_{\epsilon^*}(x) = T_{\pi_B, d+\epsilon^*}[L_{\epsilon^*}](x)$ for $x \in \mathcal{X}'$ and $L_{\epsilon^*}(x) = 0$ for $x = x_{\mathrm{Term}}$. Moreover, $L_{\epsilon^*}$ is equal to the constraint value function w.r.t. $\pi^*$, i.e., $L_{\epsilon^*}(x) = \mathcal{D}_{\pi^*}(x)$.

From the structure of $L_{\epsilon^*}$, one can see that the auxiliary constraint cost function $\epsilon^*$ is uniformly bounded by a quantity proportional to the total variation distance between $\pi_B$ and $\pi^*$ (the total variation distance is given by $D_{TV}(\pi \| \pi')(x) = \frac{1}{2}\sum_{a} |\pi(a \mid x) - \pi'(a \mid x)|$). However, in general it is unclear how to construct such a cost-shaping term without explicitly knowing $\pi^*$ a priori. Rather, inspired by this result, we use this bound as an auxiliary cost $\tilde\epsilon$ that upper-bounds $\epsilon^*$, and propose the Lyapunov function candidate $L_{\tilde\epsilon}(x) = \mathbb{E}\big[\sum_{t=0}^{T^*-1} d(x_t) + \tilde\epsilon(x_t) \mid \pi_B, x\big]$. Immediately from its definition, this function has the following properties:

$$\mathcal{D}_{\pi_B}(x) \le L_{\tilde\epsilon}(x) \quad \text{and} \quad \mathcal{D}_{\pi^*}(x) \le L_{\tilde\epsilon}(x), \qquad \forall x \in \mathcal{X}'. \tag{3}$$

The first property is due to the facts that (i) $\tilde\epsilon$ is a non-negative cost function and (ii) $T_{\pi_B, d+\tilde\epsilon}$ is a contraction mapping, which by the fixed-point theorem [7] implies that $L_{\tilde\epsilon}$ is its unique fixed point; one then concludes that the Lyapunov function is a uniform upper bound on the constraint cost, i.e., $L_{\tilde\epsilon}(x) \ge \mathcal{D}_{\pi_B}(x)$, because the constraint cost w.r.t. policy $\pi_B$ is the unique solution of the fixed-point equation $T_{\pi_B, d}[V](x) = V(x)$, $x \in \mathcal{X}'$. For the second property, note that by construction $\tilde\epsilon$ is an upper bound on the cost-shaping term $\epsilon^*$; therefore Lemma 1 implies that the Lyapunov function $L_{\tilde\epsilon}$ is also a uniform upper bound on the constraint cost w.r.t. the optimal policy $\pi^*$, i.e., $L_{\tilde\epsilon}(x) \ge \mathcal{D}_{\pi^*}(x)$.
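As a concrete check of the conditions in (1), the hedged sketch below tests whether a candidate function L is a Lyapunov function w.r.t. a baseline policy in the tabular setting of the previous sketch; all names are assumptions.

```python
import numpy as np

def is_lyapunov(P, d, pi_b, L, x0, d0, eps=None, tol=1e-8):
    """Check T_{pi_b, d+eps}[L](x) <= L(x) for all transient x, and L(x0) <= d0."""
    S = P.shape[0]
    eps = np.zeros(S) if eps is None else eps
    P_pi = np.einsum('sa,sap->sp', pi_b, P)
    backup = d + eps + P_pi @ L            # one Bellman backup of L under pi_b
    return bool(np.all(backup <= L + tol) and L[x0] <= d0 + tol)

# Example: with eps = 0, the baseline constraint value function D_{pi_b} satisfies the
# backup condition with equality, so it is a valid Lyapunov function whenever pi_b is
# feasible, i.e., whenever D_{pi_b}(x0) <= d0.
```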

To show that $L_{\tilde\epsilon}$ is a Lyapunov function that satisfies (2), we propose the following condition, which enforces the baseline policy $\pi_B$ to be sufficiently close to an optimal policy $\pi^*$.

Assumption 1.

The feasible baseline policy $\pi_B$ satisfies the following condition: the maximum total-variation distance between $\pi^*$ and $\pi_B$, $\max_{x \in \mathcal{X}'} D_{TV}(\pi^* \| \pi_B)(x)$, is bounded by a constant that depends on the constraint margin $d_0 - \mathcal{D}_{\pi_B}(x_0)$.

This condition characterizes the maximum allowable distance between $\pi_B$ and $\pi^*$ such that the set of $L_{\tilde\epsilon}$-induced policies contains an optimal policy. To formalize this claim, we have the following main result, which shows that $L_{\tilde\epsilon} \in \mathcal{L}_{\pi_B}(x_0, d_0)$ and that the set of policies $\mathcal{F}_{L_{\tilde\epsilon}}$ contains an optimal policy.

Theorem 1.

Suppose the baseline policy $\pi_B$ satisfies Assumption 1. Then, on top of the properties in (3), the Lyapunov function candidate $L_{\tilde\epsilon}$ also satisfies the property in (2), and therefore its induced set of feasible policies $\mathcal{F}_{L_{\tilde\epsilon}}$ contains an optimal policy.

The proof of this theorem is given in Appendix C.2. Suppose the distance between the baseline policy and the optimal policy can be estimated effectively. Using the above result, one can then immediately determine whether the set of $L_{\tilde\epsilon}$-induced policies contains an optimal policy. Equipped with the set of induced feasible policies, consider the following safe Bellman operator:

$$T^*[V](x) = \min_{\pi \in \mathcal{F}_{L_{\tilde\epsilon}}(x)} T_{\pi, c}[V](x). \tag{4}$$

Using standard analysis of Bellman operators, one can show that $T^*$ is a monotonic contraction operator (see Appendix C.3 for the proof). This further implies that the solution of the fixed-point equation $T^*[V](x) = V(x)$, $x \in \mathcal{X}'$, is unique. Let $V^*$ be this value function. The following theorem shows that, under Assumption 1, $V^*(x_0)$ is a solution to problem $\mathcal{OPT}$.

Theorem 2.

Suppose the baseline policy $\pi_B$ satisfies Assumption 1. Then the fixed-point solution at $x_0$, i.e., $V^*(x_0)$, is equal to the optimal value of problem $\mathcal{OPT}$. Furthermore, an optimal policy can be constructed by $\pi^*(\cdot \mid x) \in \arg\min_{\pi \in \mathcal{F}_{L_{\tilde\epsilon}}(x)} T_{\pi, c}[V^*](x)$, $\forall x \in \mathcal{X}'$.

The proof of this theorem can be found in Appendix C.4. This shows that, under Assumption 1, an optimal policy of problem $\mathcal{OPT}$ can be computed using standard DP algorithms. Notice that verifying whether $\pi_B$ satisfies this assumption is still challenging, because it requires a good estimate of the distance between $\pi_B$ and $\pi^*$. Yet, to the best of our knowledge, this is the first result that connects the optimality of CMDPs to Bellman's principle of optimality. Another key observation is that, in practice, we will explore ways of approximating $\tilde\epsilon$ via bootstrapping, and we empirically show that this approach achieves good performance while guaranteeing safety at each iteration. In particular, in the next section we illustrate how to systematically construct a Lyapunov function using an LP in the planning scenario, and using function approximation in RL, for guaranteeing safety during learning.
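For intuition, the sketch below applies one sweep of the safe Bellman operator (4) in the tabular setting: at every state it minimizes the objective backup over action distributions that keep the Lyapunov backup below L(x), using a small per-state LP. It is an illustrative sketch under the conventions of the earlier snippets, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def safe_bellman_backup(P, c, d, eps, V, L):
    """One application of the safe Bellman operator: per-state constrained greedy step."""
    S, A = c.shape
    V_new, pi_new = np.zeros(S), np.zeros((S, A))
    for x in range(S):
        q_c = c[x] + P[x] @ V              # objective state-action backup
        q_L = d[x] + eps[x] + P[x] @ L     # Lyapunov state-action backup
        res = linprog(q_c,
                      A_ub=q_L[None, :], b_ub=[L[x]],    # stay inside F_L(x)
                      A_eq=np.ones((1, A)), b_eq=[1.0],  # p is a distribution over actions
                      bounds=[(0.0, 1.0)] * A)
        assert res.success, "F_L(x) is empty: L is not a valid Lyapunov function here"
        pi_new[x], V_new[x] = res.x, res.fun
    return V_new, pi_new
```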

4 Safe Reinforcement Learning Using Lyapunov Functions

Motivated by the challenge of computing a Lyapunov function whose induced set of policies contains $\pi^*$, in this section we approximate $\tilde\epsilon$ with an auxiliary constraint cost $\hat\epsilon$, the largest auxiliary cost that satisfies the Lyapunov condition $T_{\pi_B, d+\hat\epsilon}[L_{\hat\epsilon}](x) \le L_{\hat\epsilon}(x)$, $\forall x \in \mathcal{X}'$, and the safety condition $L_{\hat\epsilon}(x_0) \le d_0$. The larger $\hat\epsilon$, the larger the set of induced policies $\mathcal{F}_{L_{\hat\epsilon}}$. Thus, by choosing the largest such auxiliary cost, we hope to have a better chance of including the optimal policy in the set of feasible policies. We therefore consider the following LP problem:

$$\hat\epsilon \in \arg\max_{\epsilon : \mathcal{X}' \to \mathbb{R}_{\ge 0}} \Big\{ \sum_{x \in \mathcal{X}'} \epsilon(x) \ : \ \mathcal{D}_{\pi_B}(x_0) + \sum_{x \in \mathcal{X}'} \lambda_{\pi_B}(x_0, x)\, \epsilon(x) \le d_0 \Big\}, \tag{5}$$

where $\lambda_{\pi_B}(x_0, x) = \mathbb{E}\big[\sum_{t=0}^{T^*-1} \mathbf{1}\{x_t = x\} \mid \pi_B, x_0\big]$, and $\mathbf{1}\{x_t = x\}$ represents a one-hot vector whose non-zero element is located at $x$.

Whenever $\pi_B$ is a feasible policy, the problem in (5) always has a non-empty solution (this is because $\epsilon = 0$ satisfies the constraint, since $\mathcal{D}_{\pi_B}(x_0) \le d_0$, and is therefore a feasible solution). Furthermore, notice that $\lambda_{\pi_B}(x_0, x)$ represents the total visiting probability from the initial state $x_0$ to state $x$, which is a non-negative quantity. Therefore, using the extreme-point argument in LP [22], one can conclude that the maximizer of problem (5) is an indicator function whose non-zero element is located at the state with the minimum total visiting probability from $x_0$, i.e., $\hat\epsilon(x) \propto \mathbf{1}\{x = \underline{x}\}$, where $\underline{x} \in \arg\min_{x \in \mathcal{X}'} \lambda_{\pi_B}(x_0, x)$. On the other hand, suppose we further restrict the structure of $\hat\epsilon$ to be a constant function, i.e., $\hat\epsilon(x) = \bar\epsilon$ for all $x \in \mathcal{X}'$. Then one can show that the maximizer is given by $\bar\epsilon = (d_0 - \mathcal{D}_{\pi_B}(x_0)) / \mathbb{E}[T^* \mid \pi_B, x_0]$, where $\mathbb{E}[T^* \mid \pi_B, x_0]$ is the expected stopping time of the transient MDP. In cases where computing the expected stopping time is expensive, one reasonable approximation is to replace the denominator of $\bar\epsilon$ with the upper bound $T_{\max}$.
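Under the constant-cost restriction just described, the auxiliary cost reduces to a single scalar. The sketch below computes it for the tabular setting, together with the resulting Lyapunov candidate L(x) = D_{pi_B}(x) + eps_bar * E[T* | pi_B, x]; names follow the earlier snippets and are assumptions.

```python
import numpy as np

def constant_auxiliary_cost(P, d, pi_b, x0, d0, T_max=None):
    """Constant auxiliary cost eps_bar = (d0 - D_{pi_B}(x0)) / E[T* | pi_B, x0]."""
    S = P.shape[0]
    P_pi = np.einsum('sa,sap->sp', pi_b, P)
    D = np.linalg.solve(np.eye(S) - P_pi, d)              # constraint value function
    T = np.linalg.solve(np.eye(S) - P_pi, np.ones(S))     # expected stopping time per state
    denom = T_max if T_max is not None else T[x0]         # optional cheaper T_max bound
    eps_bar = max(0.0, (d0 - D[x0]) / denom)
    L = D + eps_bar * T                                   # Lyapunov candidate L_{eps_bar}
    return eps_bar, L
```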

Using this Lyapunov function $L_{\hat\epsilon}$, we propose safe policy iteration (SPI) in Algorithm 1, in which the Lyapunov function is updated via bootstrapping, i.e., at each iteration $L_{\hat\epsilon}$ is re-computed using (5) w.r.t. the current baseline policy. The properties of SPI are summarized in the following proposition.

  Input: Initial feasible policy $\pi_0$;
  for $k = 0, 1, 2, \ldots$ do
     Step 0: With baseline policy $\pi_B = \pi_k$, evaluate the Lyapunov function $L_{\epsilon_k}$, where $\epsilon_k$ is a solution of (5);
     Step 1: Evaluate the cost value function $V_{\pi_k}(x) = \mathcal{C}_{\pi_k}(x)$; then update the policy by solving the following problem: $\pi_{k+1}(\cdot \mid x) \in \arg\min_{\pi \in \mathcal{F}_{L_{\epsilon_k}}(x)} T_{\pi, c}[V_{\pi_k}](x)$, $\forall x \in \mathcal{X}'$;
  end for
  Return Final policy $\pi_{k+1}$
Algorithm 1 Safe Policy Iteration (SPI)
Proposition 1.

Algorithm 1 has the following properties: (i) Consistent feasibility: if the current policy $\pi_k$ is feasible, then the updated policy $\pi_{k+1}$ is also feasible, i.e., $\mathcal{D}_{\pi_k}(x_0) \le d_0$ implies $\mathcal{D}_{\pi_{k+1}}(x_0) \le d_0$. (ii) Monotonic policy improvement: the cumulative cost induced by $\pi_{k+1}$ is lower than or equal to that induced by $\pi_k$, i.e., $\mathcal{C}_{\pi_{k+1}}(x) \le \mathcal{C}_{\pi_k}(x)$ for any $x \in \mathcal{X}'$. (iii) Convergence: if a strictly concave regularizer is added to the optimization problem (5) and a strictly convex regularizer is added to the policy optimization step, then the policy sequence asymptotically converges.

The proof of this proposition is given in Appendix C.5, and the sub-optimality performance bound of SPI can be found in Appendix C.6. Analogous to SPI, we also propose safe value iteration (SVI), in which the Lyapunov function estimate is updated at every iteration via bootstrapping, using the current optimal value estimate. Details of SVI are given in Algorithm 2, and its properties are summarized in the following proposition (whose proof is given in Appendix C.7).

  Input: Initial value function $V_0$; initial Lyapunov function $L_{\epsilon_0}$ w.r.t. an auxiliary cost function $\epsilon_0$;
  for $k = 0, 1, 2, \ldots$ do
     Step 0: Compute the value function $V_{k+1}(x) = \min_{\pi \in \mathcal{F}_{L_{\epsilon_k}}(x)} T_{\pi, c}[V_k](x)$ and the greedy policy $\pi_{k+1}(\cdot \mid x) \in \arg\min_{\pi \in \mathcal{F}_{L_{\epsilon_k}}(x)} T_{\pi, c}[V_k](x)$;
     Step 1: With baseline policy $\pi_B = \pi_{k+1}$, construct the Lyapunov function $L_{\epsilon_{k+1}}$, where $\epsilon_{k+1}$ is a solution of (5);
  end for
  Return Final policy $\pi_{k+1}$
Algorithm 2 Safe Value Iteration (SVI)
Proposition 2.

Algorithm 2 has the following properties: (i) consistent feasibility and (ii) convergence.

To justify the notion of bootstrapping: in both SVI and SPI, the Lyapunov function is updated based on the best baseline policy, i.e., the policy that is feasible and has the lowest cumulative cost so far. Once the current baseline policy is sufficiently close to an optimal policy $\pi^*$, Theorem 1 implies that the induced set of policies contains an optimal policy. Although these algorithms do not have optimality guarantees, empirically they often return a near-optimal policy.

In each iteration, the policy optimization step in SPI and SVI requires solving one LP sub-problem per state, each with a decision variable of the dimension of the action space. While worst-case iteration bounds are known for value iteration [7] and policy iteration [35], in practice the number of iterations required is much smaller. Therefore, even with the additional cost of policy evaluation in SPI, or of updating the value function in SVI, the overall complexity of these methods is in practice much lower than that of the dual LP method, which solves a single LP whose size scales with the product of the state and action space sizes (see Appendix B for details).
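Putting the pieces together, a hedged sketch of the SPI loop in the tabular case is shown below. It reuses the illustrative helpers policy_values, constant_auxiliary_cost, and safe_bellman_backup sketched earlier; it is not the paper's implementation, and the constant-auxiliary-cost shortcut replaces a full solve of (5).

```python
import numpy as np

def safe_policy_iteration(P, c, d, pi0, x0, d0, n_iters=50):
    """Sketch of SPI: bootstrap the Lyapunov function from the current (feasible) policy,
    then improve the policy inside the induced feasible set F_L."""
    S, A = c.shape
    pi = pi0.copy()
    for _ in range(n_iters):
        # Step 0: build the Lyapunov function w.r.t. the current baseline policy.
        eps_bar, L = constant_auxiliary_cost(P, d, pi, x0, d0)
        # Step 1: evaluate the objective value function of the current policy ...
        C, _ = policy_values(P, c, d, pi)
        # ... and update the policy with the safe (constrained) greedy step.
        _, pi = safe_bellman_backup(P, c, d, np.full(S, eps_bar), C, L)
    return pi
```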

4.1 Lyapunov-based Safe RL Algorithms

In order to improve the scalability of SVI and SPI, we develop two off-policy safe RL algorithms, namely safe DQN and safe DPI, which replace the value and policy updates in safe DP with function approximations. Their pseudo-code can be found in Appendix D. Before going into the details, we first introduce the policy distillation method, which will later be used in the safe RL algorithms.

Policy Distillation:

Consider the following LP problem for policy optimization in SVI and SPI at a state $x$:

$$\min_{\pi(\cdot \mid x) \in \Delta_{\mathcal{A}}} \ \sum_{a} \pi(a \mid x)\, Q_V(x, a) \quad \text{s.t.} \quad \sum_{a} \pi(a \mid x)\, Q_L(x, a) \le L_{\hat\epsilon}(x), \tag{6}$$

where $Q_V(x, a) = c(x, a) + \sum_{x'} P(x' \mid x, a)\, V(x')$ is the state-action value function and $Q_L(x, a) = d(x) + \hat\epsilon(x) + \sum_{x'} P(x' \mid x, a)\, L_{\hat\epsilon}(x')$ is the state-action Lyapunov function. When the state space is large (or continuous), explicitly solving for a policy at every state becomes impossible without function approximation. Consider a parameterized policy $\pi_\theta$ with weights $\theta$. Utilizing the distillation concept [33], after computing the optimal action probabilities $\{\pi^*(\cdot \mid x)\}$ from (6) w.r.t. a batch of states $\mathcal{B}$, the policy is updated by solving $\theta' \in \arg\min_\theta \sum_{x \in \mathcal{B}} \mathrm{JSD}\big(\pi^*(\cdot \mid x) \,\|\, \pi_\theta(\cdot \mid x)\big)$, where $\mathrm{JSD}$ is the Jensen-Shannon divergence. The pseudo-code of distillation is given in Algorithm 3.
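For concreteness, a minimal sketch of the distillation objective is given below: it computes the mean Jensen-Shannon divergence between the target action probabilities obtained from the per-state problem (6) and a simple linear-softmax policy, which stands in for a neural-network policy. All names are assumptions; in practice the loss would be minimized over the policy weights with an autodiff library.

```python
import numpy as np

def jensen_shannon(p, q, tiny=1e-12):
    """Row-wise JSD between two batches of action distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * (np.log(a + tiny) - np.log(b + tiny)), axis=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def distillation_loss(theta, state_features, target_probs):
    """Mean JSD between LP-derived target action probabilities and pi_theta."""
    logits = state_features @ theta                   # (B, A) logits from (B, F) features
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(np.mean(jensen_shannon(target_probs, pi)))
```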

Safe Q-learning (SDQN):

Here we sample an off-policy mini-batch of (state, action, costs, next state) samples from the replay buffer and use it to update value function estimates that minimize the MSE losses of the Bellman residuals. Specifically, we first construct the state-action Lyapunov function estimate $Q_L$ by learning a constraint value network and a stopping-time value network, respectively. With a current baseline policy $\pi_B$, one can use function approximation to approximate the auxiliary constraint cost (the solution of (5)). Equipped with the Lyapunov function, at each iteration one can do a standard DQN update, except that the optimal action probabilities are computed by solving (6). Details of SDQN are given in Algorithm 4.
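To make the construction of the state-action Lyapunov estimate concrete, the hedged sketch below assembles it from the two learned value estimates under the constant-auxiliary-cost approximation from Section 4; the argument names and the specific form of the approximation are assumptions.

```python
import numpy as np

def state_action_lyapunov(q_d, q_t, d_pib_x0, d0, T_max):
    """Q_L(x,a) = Q_D(x,a) + eps_hat * Q_T(x,a).

    q_d      : (B, A) constraint state-action values under the baseline policy
    q_t      : (B, A) expected stopping-time state-action values under the baseline
    d_pib_x0 : scalar estimate of D_{pi_B}(x0)
    Assumes the constant auxiliary cost eps_hat = (d0 - D_{pi_B}(x0)) / T_max.
    """
    eps_hat = max(0.0, (d0 - d_pib_x0) / T_max)
    return q_d + eps_hat * q_t
```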

Safe Policy Improvement (SDPI):

Similar to SDQN, in this algorithm we first sample an off-policy mini-batch of samples from the replay buffer and use it to update the value function estimates (for the objective, the constraint, and the stopping time) by minimizing MSE losses. Different from SDQN, in SDPI the value estimation is done using policy evaluation, which means that the objective value function is trained to minimize the Bellman residual w.r.t. actions generated by the current policy $\pi_\theta$, instead of the greedy actions. Using the same construction as in SDQN for the auxiliary cost $\hat\epsilon$ and the state-action Lyapunov function $Q_L$, we then perform a policy improvement step by computing a set of greedy action probabilities from (6) and constructing an updated policy using policy distillation. Assuming the function approximations (for both value and policy) have low errors, SDPI inherits several appealing properties of SPI, such as maintaining safety during training and improving the policy monotonically. To improve learning stability, instead of the full policy update one can further consider a partial update $\pi_{k+1} = (1 - \zeta)\,\pi_k + \zeta\,\pi'$, where $\pi'$ is the distilled greedy policy and $\zeta \in (0, 1]$ is a mixing constant that trades off safety and exploration [2, 18]. Details of SDPI are summarized in Algorithm 5.
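As a small illustration of the partial update mentioned above (the mixing constant and array layout are our own assumptions):

```python
import numpy as np

def partial_policy_update(pi_current, pi_greedy, zeta=0.1):
    """Conservative mixture pi <- (1 - zeta) * pi_current + zeta * pi_greedy.
    A small zeta keeps the behavior policy close to the last known-safe policy."""
    assert 0.0 <= zeta <= 1.0
    return (1.0 - zeta) * np.asarray(pi_current) + zeta * np.asarray(pi_greedy)
```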

In terms of practical implementations, in Appendix E we include techniques to improve stability during training, to handle continuous action space, and to scale up policy optimization step in (6).

5 Experiments

Figure 1: Results of various planning algorithms on the grid-world environment with obstacles, with the x-axis showing the obstacle density. From the leftmost column, the first figure illustrates the 2D planning domain example. The second and third figures show the average return and the average cumulative constraint cost of the CMDP methods, respectively. The fourth figure lists all the methods used in the experiment. The shaded regions indicate confidence intervals. Clearly, the safe DP algorithms compute policies that are safe and perform well.

Motivated by the safety issues of RL discussed in [21], we validate our safe RL algorithms on a stochastic 2D grid-world motion planning problem. In this domain, an agent (e.g., a robotic vehicle) starts in a safe region and its objective is to travel to a given destination. At each time step the agent can move to any of its four neighboring states. Due to sensing and control noise, however, with probability $\delta$ a move to a random neighboring state occurs instead. To account for fuel usage, a stage-wise cost is incurred for each move until the destination is reached, at which point a terminal reward is received. Thus, we would like the agent to reach the destination in the smallest possible number of moves. Between the starting point and the destination there are a number of obstacles that the agent may pass through but should avoid for safety; each time the agent is on an obstacle it incurs a constraint cost. Thus, in the CMDP setting, the agent's goal is to reach the destination in the smallest possible number of moves while passing through obstacles at most $d_0$ times in expectation. For demonstration purposes, we choose a grid-world of moderate size (see Figure 1). We also use a density ratio $\rho$ that sets the obstacle-to-terrain ratio: when $\rho$ is close to $0$ the problem is nearly obstacle-free, and as $\rho$ approaches $1$ the problem becomes more challenging. In the normal problem setting, we fix the density $\rho$, the error probability $\delta$, the constraint threshold $d_0$, and the maximum horizon. The initial state is located at a fixed position, and the goal is placed at a position determined by a uniform random variable. To account for statistical significance, the results of each experiment are averaged over multiple trials.
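A hedged sketch of the environment just described is shown below; it is not the authors' code, and the grid size, noise level, obstacle density, and cost values are placeholders rather than the settings used in the experiments.

```python
import numpy as np

class GridWorldCMDP:
    """2D grid: reach the goal quickly (objective cost per move) while limiting the
    expected number of visits to obstacle cells (constraint cost)."""

    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

    def __init__(self, size=25, rho=0.3, delta=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.size, self.delta = size, delta
        self.obstacles = self.rng.random((size, size)) < rho   # obstacle-to-terrain ratio
        self.start, self.goal = (0, 0), (size - 1, size - 1)
        self.obstacles[self.start] = self.obstacles[self.goal] = False
        self.pos = self.start

    def reset(self):
        self.pos = self.start
        return self.pos

    def step(self, action):
        # With probability delta, the commanded move is replaced by a random one.
        if self.rng.random() < self.delta:
            action = int(self.rng.integers(len(self.MOVES)))
        dr, dc = self.MOVES[action]
        r = int(np.clip(self.pos[0] + dr, 0, self.size - 1))
        c = int(np.clip(self.pos[1] + dc, 0, self.size - 1))
        self.pos = (r, c)
        done = self.pos == self.goal
        cost = 1.0                                          # fuel cost per move
        constraint_cost = float(self.obstacles[self.pos])   # 1 if standing on an obstacle
        return self.pos, cost, constraint_cost, done
```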

CMDP Planning: In this task we have explicit knowledge of the reward function and the transition probabilities. The main goal is to compare our safe DP algorithms (SPI and SVI) with the following common CMDP baseline methods: (i) Step-wise Surrogate, (ii) Super-martingale Surrogate, (iii) Lagrangian, and (iv) Dual LP. Since the methods in (i) and (ii) are surrogate algorithms, we evaluate each of them with both value iteration and policy iteration. To illustrate the level of sub-optimality, we also compare the returns and constraint costs of these methods with baselines generated by maximizing the return or minimizing the constraint cost of two separate (unconstrained) MDPs. The main objective here is to show that the safe DP algorithms are less conservative than other surrogate methods, more numerically stable than the Lagrangian method, and more computationally efficient than the Dual LP method (see Appendix F), without using function approximation.

Figure 1 presents the returns and cumulative constraint costs of the aforementioned CMDP methods over a spectrum of obstacle densities $\rho$. For each method, the initial policy is a conservative baseline policy that minimizes the constraint cost. Although the policies generated by the four surrogate algorithms are feasible, they do not achieve significant policy improvement, i.e., their return values remain close to that of the initial baseline policy. Over all density settings, the SPI algorithm consistently computes a solution that is feasible and has good performance. The solution policy returned by SVI is always feasible, and it has near-optimal performance when the obstacle density is low; however, due to numerical instability its performance degrades as $\rho$ grows. Similarly, the Lagrangian methods return a near-optimal solution over most settings, but due to numerical issues their solutions start to violate the constraint as $\rho$ grows.

Safe Reinforcement Learning:

In this section we present the results of the RL algorithms on this safety task. We evaluate their learning performance on two variants: one in which the observation is a one-hot encoding of the agent's location, and the other in which the observation is a 2D image representation of the grid map. In each of these, we evaluate performance under two different problem settings. We compare our proposed safe RL algorithms, SDPI and SDQN, to their unconstrained counterparts, DPI and DQN, as well as to the Lagrangian approach to safe RL, in which the Lagrange multiplier is optimized via an extensive grid search. Details of the experimental setup are given in Appendix F. To make the tasks more challenging, we initialize the RL algorithms with a randomized baseline policy.

Figure 2 shows the results of these methods across all task variants. Clearly, SDPI and SDQN can adequately solve the tasks and produce agents with good return performance (similar to that of DQN and DPI in some cases) while guaranteeing safety. Another interesting observation is that, once SDQN or SDPI finds a safe policy, all subsequently updated policies remain safe throughout training. On the contrary, the Lagrangian approaches often achieve worse rewards and are more apt to violate the constraints during training (in Appendix F, we also report results for the Lagrangian method in which the Lagrange multiplier is learned via gradient ascent [11], and we observe similar or even worse behavior), and their performance is very sensitive to initial conditions. Furthermore, in some cases (for example, with discrete observations) the Lagrangian method cannot guarantee safety throughout training.


Figure 2: Results of various RL algorithms on the grid-world environment with obstacles, with the x-axis in thousands of episodes. We include runs using discrete observations (a one-hot encoding of the agent's position) and image observations (the entire RGB 2D map of the world). The Lyapunov-based approaches perform safe learning even though the environment dynamics are unknown and deep function approximation is necessary.

6 Conclusion

In this paper we formulated the problem of safe RL as a CMDP and proposed a novel Lyapunov approach to solve CMDPs. We also derived an effective LP-based method for generating Lyapunov functions, such that the corresponding algorithm guarantees feasibility and, under certain conditions, optimality. Leveraging these theoretical underpinnings, we showed how Lyapunov approaches can be used to transform DP (and RL) algorithms into their safe counterparts, requiring only straightforward modifications to the algorithm implementations. Empirically, we validated our theoretical findings by using the Lyapunov approach to guarantee safety and robust learning in RL. In general, our work represents a step forward in deploying RL to real-world problems in which guaranteeing safety is of paramount importance. Future research will focus on two directions. On the algorithmic side, one major extension is to apply the Lyapunov approach to policy gradient algorithms and compare its performance with CPO on continuous RL problems. On the practical side, future work includes evaluating the Lyapunov-based RL algorithms on several real-world testbeds.

References

Appendix A Safety Constraints in Planning Problems

To motivate the CMDP formulation studied in this paper, in this section we include two real-life examples of modeling safety using the reachability constraint, and the constraint that limits the agent’s visits to undesirable states.

a.1 Reachability Constraint

Reachability is a common concept in motion planning and engineering applications, where for any given policy $\pi$ and initial state $x_0$, the following constraint function is considered:

$$\mathcal{D}_{\pi}(x_0) = \mathbb{P}\big(\exists\, t \in \{0, \ldots, T^*-1\} : (x_t, a_t) \in \mathcal{U} \mid \pi, x_0\big).$$

Here $\mathcal{U} \subset \mathcal{X}' \times \mathcal{A}$ represents the subset of hazardous regions for the states and actions. Therefore, the constraint cost represents the probability of reaching an unsafe region at any time before the state reaches the terminal state. To further analyze this constraint function, one notices that it can be written as the expectation of an indicator over sample trajectories, $\mathcal{D}_{\pi}(x_0) = \mathbb{E}\big[\mathbf{1}\{\exists\, t < T^* : (x_t, a_t) \in \mathcal{U}\} \mid \pi, x_0\big]$. In this case, a policy $\pi$ is deemed safe if the reachability probability to the unsafe region is bounded by the threshold $d_0$, i.e.,

$$\mathbb{P}\big(\exists\, t \in \{0, \ldots, T^*-1\} : (x_t, a_t) \in \mathcal{U} \mid \pi, x_0\big) \le d_0. \tag{7}$$

To transform the reachability constraint into a standard CMDP constraint, we define an additional binary state $s_t \in \{0, 1\}$ that keeps track of the reachability status at time $t$. Here $s_t = 0$ indicates that the system has never visited a hazardous region up to time $t$, and $s_t = 1$ otherwise. Letting $\bar x_t = (x_t, s_t)$ denote the augmented state, the flag evolves according to the deterministic transition $s_{t+1} = \max\big(s_t, \mathbf{1}\{(x_t, a_t) \in \mathcal{U}\}\big)$, and the augmented constraint cost charges a unit cost exactly once, at the first time the trajectory enters the hazardous region. Collectively, with the state augmentation $\bar x = (x, s)$, one defines the augmented CMDP $(\bar{\mathcal{X}}, \mathcal{A}, c, \bar d, \bar P, \bar x_0, d_0)$, where $\bar{\mathcal{X}} = \mathcal{X} \times \{0, 1\}$ is the augmented state space, $\bar d$ is the augmented constraint cost, $\bar P$ is the augmented transition probability, and $\bar x_0 = (x_0, 0)$ is the initial (augmented) state. By using this augmented CMDP, the reachability constraint (7) is immediately equivalent to a standard CMDP constraint of the form $\mathbb{E}\big[\sum_{t=0}^{T^*-1} \bar d(\bar x_t, a_t) \mid \pi, \bar x_0\big] \le d_0$.
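A small illustrative helper (our own, with hypothetical names) makes the augmentation concrete: the binary flag records whether the trajectory has ever entered the unsafe region, and the augmented constraint cost charges 1 exactly once, on first entry, so its expected sum equals the reachability probability.

```python
def augmented_step(x, a, s, unsafe):
    """Update the augmentation for one step.

    x, a   : current environment state and action
    s      : flag so far (0 = never visited the unsafe region, 1 = visited)
    unsafe : predicate on (state, action) pairs defining the hazardous region
    Returns the next flag and the augmented constraint cost incurred this step."""
    entered_now = (s == 0) and unsafe(x, a)
    s_next = 1 if (s == 1 or entered_now) else 0
    d_bar = 1.0 if entered_now else 0.0     # charged exactly once per trajectory
    return s_next, d_bar
```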

a.2 Constraint w.r.t. Undesirable Regions of States

Consider the notion of safety in which one restricts the total visiting frequency of an agent to an undesirable region of states. This notion of safety appears in applications such as system maintenance, in which the system can only tolerate its state visiting (in expectation) a hazardous region, namely $\mathcal{U} \subset \mathcal{X}'$, a fixed number of times. Specifically, for a given initial state $x_0$, consider the following constraint that bounds the total frequency of visiting $\mathcal{U}$ by a pre-defined threshold $d_0$:
$$\mathbb{E}\Big[\sum_{t=0}^{T^*-1} \mathbf{1}\{x_t \in \mathcal{U}\} \,\Big|\, \pi, x_0\Big] \le d_0.$$
To model this notion of safety using a CMDP, one can rewrite the above constraint using the constraint immediate cost $d(x) = \mathbf{1}\{x \in \mathcal{U}\}$ and the constraint threshold $d_0$. To study the connection between the reachability constraint and the above constraint w.r.t. the undesirable region, notice that
$$\mathbb{P}\big(\exists\, t < T^* : x_t \in \mathcal{U} \mid \pi, x_0\big) \;\le\; \mathbb{E}\Big[\sum_{t=0}^{T^*-1} \mathbf{1}\{x_t \in \mathcal{U}\} \,\Big|\, \pi, x_0\Big].$$

This clearly indicates that any policy which satisfies the constraint w.r.t. the undesirable region (with threshold $d_0$) also satisfies the corresponding reachability constraint.

Appendix B Existing Approaches for Solving CMDPs

Before presenting the main results, we first revisit several existing CMDP algorithms from the literature, which later serve as baselines for comparison with our safe CMDP algorithms. For the sake of brevity, we only provide an overview of these approaches here and defer their details to Appendix B.1.

The Lagrangian Based Algorithm:

The standard way of solving problem $\mathcal{OPT}$ is to apply the Lagrangian method. To start, consider the following minimax problem: $\min_{\pi \in \Delta} \max_{\lambda \ge 0} \, \mathcal{L}(\pi, \lambda)$, where $\lambda$ is the Lagrange multiplier w.r.t. the CMDP constraint and $\mathcal{L}(\pi, \lambda) = \mathcal{C}_{\pi}(x_0) + \lambda\,(\mathcal{D}_{\pi}(x_0) - d_0)$. According to Theorem 9.9 and Theorem 9.10 in [3], an optimal policy of problem $\mathcal{OPT}$ can be calculated by minimizing the Lagrangian function at $\lambda = \lambda^*$, where $\lambda^*$ is the optimal Lagrange multiplier. Utilizing this result, one can compute the saddle-point pair $(\pi^*, \lambda^*)$ using primal-dual iteration. Specifically, for a given $\lambda$, solve the policy minimization problem using standard dynamic programming with the Bellman operator parameterized by the combined cost $c + \lambda d$; for a given policy $\pi$, solve the linear (in $\lambda$) optimization problem $\max_{\lambda \ge 0} \, \lambda\,(\mathcal{D}_{\pi}(x_0) - d_0)$. Based on Theorem 9.10 in [3], this procedure asymptotically converges to the saddle-point solution. However, this algorithm presents several major challenges. (i) In general there are no known convergence-rate guarantees, and several studies [20] have also shown that using a primal-dual first-order iterative method to find the saddle point may run into numerical instability issues. (ii) Choosing a good initial estimate of the Lagrange multiplier is not intuitive. (iii) Following the same arguments as [2], during the iterations the policy may be infeasible w.r.t. problem $\mathcal{OPT}$, and feasibility is only guaranteed after the algorithm converges. This is hazardous in RL when one needs to execute the intermediate (possibly unsafe) policy during training.
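A hedged sketch of this primal-dual procedure in the tabular case is shown below; it reuses the illustrative policy_values helper from Section 2, and the step size, iteration counts, and stopping tolerances are placeholders.

```python
import numpy as np

def lagrangian_solve(P, c, d, x0, d0, lr=0.5, n_iters=200):
    """Alternate a DP solve for the penalized cost c + lam * d with an ascent step on lam."""
    S, A = c.shape
    lam = 0.0
    for _ in range(n_iters):
        # (i) policy minimization for the combined cost via value iteration
        V = np.zeros(S)
        for _ in range(1000):
            Q = c + lam * d[:, None] + P @ V      # (S, A) penalized backups
            V_new = Q.min(axis=1)
            if np.max(np.abs(V_new - V)) < 1e-8:
                break
            V = V_new
        pi = np.eye(A)[Q.argmin(axis=1)]          # greedy deterministic policy
        # (ii) multiplier update from the constraint violation at x0
        _, D = policy_values(P, c, d, pi)
        lam = max(0.0, lam + lr * (D[x0] - d0))
    return pi, lam
```

As discussed above, the intermediate policies produced by such an iteration may be unsafe, which is precisely the behavior the Lyapunov-based algorithms are designed to avoid.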

The Dual LP Based Algorithm:

Another method for solving problem $\mathcal{OPT}$ is based on computing the occupation measures of the optimal policy. In transient MDPs, for any given policy $\pi$ and initial state $x_0$, the state-action occupation measure is $\rho_{\pi}(x, a) = \mathbb{E}\big[\sum_{t=0}^{T^*-1} \mathbf{1}\{x_t = x, a_t = a\} \mid \pi, x_0\big]$, which characterizes the total visiting probability of the state-action pair $(x, a)$ induced by policy $\pi$ and initial state $x_0$. Utilizing this quantity, Theorem 9.13 in [3] shows that problem $\mathcal{OPT}$ can be reformulated as a linear programming (LP) problem (see Equations (8) to (9) in Appendix B.1), whose decision variable has one entry per transient state-action pair. Let $\rho^*$ be the solution of this LP; the optimal Markov stationary policy is then given by $\pi^*(a \mid x) = \rho^*(x, a) / \sum_{a'} \rho^*(x, a')$. To solve this LP, one can apply a standard algorithm such as an interior point method, which runs in polynomial time [8]. While this is a straightforward methodology, it can only handle CMDPs with finite state and action spaces. Furthermore, this approach is computationally expensive when these spaces are large. To the best of our knowledge, it is also unclear how to extend this approach to RL, where the transition probabilities and the immediate cost/constraint cost functions are unknown.
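For completeness, a hedged sketch of the occupation-measure LP for the tabular case is given below, using scipy.optimize.linprog. The flow-conservation form shown is the standard one for transient MDPs; variable names are our own, and the exact LP in Appendix B.1 may differ in presentation.

```python
import numpy as np
from scipy.optimize import linprog

def dual_lp_solve(P, c, d, x0, d0):
    """Solve the CMDP via an LP over state-action occupation measures rho(x, a)."""
    S, A = c.shape
    n = S * A
    # Flow conservation: sum_a rho(x,a) - sum_{x',a'} P(x|x',a') rho(x',a') = 1{x = x0}
    A_eq = np.zeros((S, n))
    for x in range(S):
        A_eq[x, x * A:(x + 1) * A] = 1.0
    A_eq -= P.transpose(2, 0, 1).reshape(S, n)    # column index x'*A + a' holds P(x|x',a')
    b_eq = np.eye(S)[x0]
    # Expected cumulative constraint cost: sum_{x,a} rho(x,a) * d(x) <= d0
    A_ub = np.repeat(d, A)[None, :]
    res = linprog(c.reshape(n), A_ub=A_ub, b_ub=[d0],
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    assert res.success, "CMDP is infeasible (or the LP solver failed)"
    rho = res.x.reshape(S, A)
    pi = rho / np.maximum(rho.sum(axis=1, keepdims=True), 1e-12)
    return pi, res.fun
```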

Step-wise Constraint Surrogate Approach:

This approach transforms the multi-stage CMDP constraint into a sequence of step-wise constraints, where each step-wise constraint can be directly embedded into the set of admissible actions of the Bellman operator. To start, for any state $x$, consider the feasible set of policies whose expected one-step constraint cost is at most $d_0 / T_{\max}$, where $T_{\max}$ is the upper bound on the MDP stopping time. Based on (10) in Appendix B.1, one deduces that every policy in this set is a feasible policy w.r.t. problem $\mathcal{OPT}$. Motivated by this observation, a solution policy can be obtained by minimizing the objective over this restricted set. One benefit of studying this surrogate problem is that its solution satisfies a Bellman optimality condition w.r.t. a step-wise Bellman operator whose admissible actions are restricted accordingly. In particular, this operator is a contraction, which implies that there exists a unique solution to the corresponding fixed-point equation, and that this fixed point is a solution to the surrogate problem. Therefore, the surrogate problem can be solved by standard DP methods such as value iteration or policy iteration. Furthermore, by construction, any surrogate policy is feasible w.r.t. problem $\mathcal{OPT}$. However, the major drawback is that the step-wise constraint can be much more stringent than the original safety constraint of problem $\mathcal{OPT}$.

Super-martingale Constraint Surrogate Approach:

This surrogate algorithm was originally proposed by [13], where the CMDP constraint is reformulated via a surrogate value function evaluated at the initial state $x_0$. It has been shown that an arbitrary policy $\pi$ is a feasible policy of the CMDP if and only if its surrogate value at $x_0$ satisfies the corresponding threshold condition. The surrogate is known as a super-martingale surrogate, due to the inequality it satisfies with respect to the contractive Bellman operator of the constraint value function. However, for an arbitrary policy $\pi$ it is in general non-trivial to compute this value function; instead, one can easily compute an upper-bound value function, which is the solution of a fixed-point equation, using standard dynamic programming techniques. To better understand how this surrogate value function guarantees feasibility in problem $\mathcal{OPT}$, consider at each state the optimal value function of the associated minimization problem; whenever it satisfies the threshold condition, the corresponding solution policy is a feasible policy of problem $\mathcal{OPT}$. Now define the set of refined feasible policies induced by this surrogate. If the threshold condition holds, then all the policies in this set are feasible w.r.t. problem $\mathcal{OPT}$. Utilizing this observation, a surrogate solution policy of problem $\mathcal{OPT}$ can be found by computing the solution of a fixed-point equation restricted to this refined feasible set; since the corresponding operator is a contraction, this procedure can also be carried out using standard DP methods. The major benefit of this two-step approach is that the computation of the feasibility set is decoupled from solving the optimization problem. This allows us to apply approaches such as the lexicographical ordering method from multi-objective stochastic optimal control [32] to solve the CMDP, in which the constraint value function has a higher lexicographical order than the objective value function. However, since the refined set of feasible policies is constructed prior to policy optimization, it might still be overly conservative. Furthermore, even if there exists a non-trivial solution policy to the surrogate problem, characterizing its sub-optimality remains a challenging task.

b.1 Details of Existing Solution Algorithms

In this section, we provide the details of the existing algorithms for solving CMDPs.

The Lagrangian Based Algorithm:

The standard way of solving problem $\mathcal{OPT}$ is to apply the Lagrangian method. To start, consider the following minimax problem:

$$\min_{\pi \in \Delta} \max_{\lambda \ge 0} \ \mathcal{L}(\pi, \lambda),$$

where $\lambda \ge 0$ is the Lagrange multiplier of the CMDP constraint, and the Lagrangian function is given by $\mathcal{L}(\pi, \lambda) = \mathcal{C}_{\pi}(x_0) + \lambda\,(\mathcal{D}_{\pi}(x_0) - d_0)$.