Temporal Logic Guided Safe Reinforcement Learning Using Control Barrier Functions

03/23/2019 · by Xiao Li, et al.

Using reinforcement learning to learn control policies is a challenge when the task is complex with potentially long horizons. Ensuring adequate but safe exploration is also crucial for controlling physical systems. In this paper, we use temporal logic to facilitate specification and learning of complex tasks. We combine temporal logic with control Lyapunov functions to improve exploration. We incorporate control barrier functions to safeguard the exploration and deployment process. We develop a flexible and learnable system that allows users to specify task objectives and constraints in different forms and at various levels. The framework is also able to take advantage of known system dynamics and handle unknown environmental dynamics by integrating model-free learning with model-based planning.







I Introduction

Our aim in this work is to address two challenges in current reinforcement learning (RL) methods - complex task specification and safe exploration. While much effort has been made in the development of sample efficient RL algorithms, the abilities to accurately specify the desired behavior and ensure safety during learning are paramount to the deployment of learning agents on physical systems.

Because reinforcement learning agents aim to maximize the expected return, they often exploit loopholes in the reward definition to gain an edge in achieving this goal, which results in undesired behavior. This phenomenon is often referred to as reward hacking [1] (some examples are provided in [2]). For tasks of higher complexity, accurately defining a reward function is challenging. For example, the authors of [3] shaped a reward function for a peg insertion task consisting of nested min/max functions that requires tuning to perform well. In this work, we use the robustness degree of temporal logic formulas as the reward function, which is also constructed from nested min/max functions. However, the user only needs to specify the task as a logic formula; the transformation to a reward function is performed automatically. The robustness reward also has rigorous semantics: a value greater than zero indicates completion of the task.

Safety is a critical aspect of reinforcement learning when physical systems that potentially operate around humans are involved. This issue is often addressed by constructing a similar environment in simulation, performing training in that simulated environment, and transferring the learned policy to the real world [4][5]. However, it is difficult for the dynamics of the simulation to exactly match reality, and some fine-tuning is usually necessary for the sim-to-real transfer to succeed. Therefore, it is often effective to incorporate a shielding controller both during learning and at deployment to ensure safety as well as to guide exploration [6][7].

In this paper, we propose a system that integrates temporal logic guided reinforcement learning with control barrier functions and control Lyapunov functions [8]. Our main contributions are as follows:

  • we propose three ways to use temporal logic and its equivalent finite state automaton (FSA) in our system - providing reward for the RL agent, performing goal selection for the control Lyapunov function to guide exploration, and defining safe sets for the control barrier functions.

  • we extend the FSA augmented MDP framework [9] to handle hard constraints and violation of specifications.

  • we integrate the system into a quadratic program that can be solved efficiently.

  • we demonstrate the applicability of our framework in a simulated continuous control task with safety constraints, known system dynamics and unknown environmental dynamics, and study the use and effectiveness of each component in the system.

II Related Work

Using temporal logic in an RL setting has been explored in the past. The authors of [10] and [11] use temporal logic and automata to solve the non-Markovian reward decision process (NMRDP). In [12], the authors take advantage of the robustness degree of signal temporal logic to guide the learning process in discrete MDPs. In [13], the authors propose a reward machine, which in effect is an FSA; however, the user is required to design the reward machine manually. In [14] and [15], linear temporal logic (LTL) and geometric linear temporal logic (GLTL) are combined with MDPs to learn policies in discrete environments.

Safe exploration and deployment are critical for applying learning agents to physical systems. The authors of [7] and [16] approach the problem in the context of constrained policy optimization and show minimal violation of safety constraints during learning. The authors of [17] and [18] utilize control barrier functions to safeguard exploration. Temporal logic is used in conjunction with an abstraction of the system dynamics to shield the learning process from unsafe actions in [6]. The authors of [19] provide a comprehensive survey on safe reinforcement learning. We incorporate hard constraints in a TL specification with control barrier functions such that these constraints can be strictly enforced (in addition to other user specified constraints) during learning. Compared to previous work, our method is relatively easy to implement, handles continuous state and action spaces as well as nonlinear constraints and dynamics, and can be executed efficiently.

III Preliminaries

III-A Reinforcement Learning

We start with the definition of a Markov Decision Process.

Definition 1.

An MDP is defined as a tuple $\mathcal{M} = \langle S, A, p, r \rangle$, where $S \subseteq \mathbb{R}^n$ is the state space; $A \subseteq \mathbb{R}^m$ is the action space ($S$ and $A$ can also be discrete sets); $p: S \times A \times S \to [0, 1]$ is the transition function, with $p(s'|s, a)$ being the conditional probability density of taking action $a \in A$ at state $s \in S$ and ending up in state $s' \in S$; $r: S \times A \times S \to \mathbb{R}$ is the reward function, with $r(s, a, s')$ being the reward obtained by executing action $a$ at state $s$ and transitioning to $s'$.

We define a task to be the process of finding the optimal policy $\pi^\star: S \to A$ (or $\pi^\star: S \times A \to [0, 1]$ for stochastic policies) that maximizes the expected return, i.e.

$$\pi^\star = \arg\max_{\pi} \, \mathbb{E}^{\pi}\Big[\sum_{t=0}^{T-1} \gamma^t\, r(s_t, a_t, s_{t+1})\Big]. \tag{1}$$

The horizon of a task (denoted $T$) is defined as the maximum allowable number of time-steps of each execution of $\pi$ and hence the maximum length of a trajectory. In Equation (1), $\mathbb{E}^{\pi}[\cdot]$ is the expectation following $\pi$. The state-action value function is defined as

$$Q^{\pi}(s_t, a_t) = \mathbb{E}^{\pi}\Big[\sum_{t'=t}^{T-1} \gamma^{t'-t}\, r(s_{t'}, a_{t'}, s_{t'+1}) \,\Big|\, s_t, a_t\Big], \tag{2}$$

i.e. it is the expected return of choosing action $a_t$ at state $s_t$ and following $\pi$ onwards. For off-policy actor critic methods such as deep deterministic policy gradient [20], $Q^{\pi}$ is used to evaluate the quality of policy $\pi$. Parameterized policy $\pi_{\theta}$ and value function $Q^{\pi}_{\phi}$ ($\theta$ and $\phi$ are learnable parameters) are optimized alternately to obtain $\pi^\star_{\theta}$.
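As a concrete illustration of the return inside the expectation in Equation (1), a single rollout yields one Monte Carlo sample of it; the sketch below (our illustration, not the paper's implementation) accumulates the discounted sum backwards over a recorded reward sequence:

```python
def discounted_return(rewards, gamma=0.99):
    """Return sum_t gamma^t * r_t for one sampled trajectory.

    A single rollout gives one Monte Carlo sample of the expected
    return in Equation (1); averaging many rollouts estimates it.
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Example: a three-step trajectory with rewards 1, 0, 2 and gamma = 0.5
print(discounted_return([1.0, 0.0, 2.0], gamma=0.5))  # 1 + 0.5*0 + 0.25*2 = 1.5
```

Averaging this quantity over a batch of trajectories is exactly what policy-gradient estimators such as the one used later in the paper operate on.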

III-B scTLTL and Finite State Automata

We consider tasks specified with syntactically co-safe truncated linear temporal logic (scTLTL), which is a restricted version of truncated linear temporal logic (TLTL) [21]. In particular, we exclude the $\Box$ (always) operator. By doing so, we can establish a correspondence between an scTLTL formula and an FSA.

Due to space constraints, we do not provide the complete set of definitions for the Boolean and quantitative semantics of scTLTL (refer to [21]). Examples of scTLTL include $\Diamond(a \wedge \Diamond b)$, which entails that eventually $a$ and then eventually $b$ become true (sequencing). Another example, $(a \Rightarrow \Diamond b)\,\mathcal{U}\,c$, means that until $c$ becomes true, $a$ being true implies that eventually $b$ is true.

We denote by $s_t$ the MDP state at time $t$, and by $s_{t:t'}$ a sequence of states (state trajectory) from time $t$ to $t'$, i.e., $s_{t:t'} = s_t s_{t+1} \cdots s_{t'}$. scTLTL provides a set of real-valued functions that quantify the degree of satisfaction of a given $s_{t:t'}$ with respect to a formula $\phi$. This measure is also referred to as robustness degree or simply robustness ($\rho(s_{t:t'}, \phi)$ maps a state trajectory and a formula to a real number). For example, $\rho(s_{t:t'}, \Diamond(f(s) < c)) = \max_{t'' \in [t, t']}\big(c - f(s_{t''})\big)$. Here, if $\rho > 0$, then the formula is satisfied. Because $\Diamond(f(s) < c)$ requires $f(s) < c$ to be true at least once in the trajectory, we take the max over the time horizon. In general, a robustness of greater than zero implies that $s_{t:t'}$ satisfies $\phi$ and vice versa.
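A minimal sketch of how such a robustness value can be computed for the eventually operator over a recorded trajectory (the function, predicate, and trajectory below are illustrative, not the paper's code):

```python
def rho_eventually_less(traj, f, c):
    """Robustness of ◊(f(s) < c) over a finite state trajectory:
    max over time of (c - f(s_t)).  Positive iff f(s_t) < c holds
    at least once along the trajectory."""
    return max(c - f(s) for s in traj)

# Scalar states for illustration; the predicate is simply s < 10
traj = [12.0, 11.0, 9.5, 13.0]
print(rho_eventually_less(traj, lambda s: s, 10.0))  # 0.5 > 0, so satisfied
```

The nesting of min/max mirrors the logic: conjunction maps to min, disjunction and "eventually" map to max.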

Definition 2.

An FSA corresponding to an scTLTL formula $\phi$ is defined as a tuple $\mathcal{A}_\phi = \langle Q_\phi, \Psi_\phi, q_0, p_\phi, F_\phi \rangle$, where $Q_\phi$ is a set of automaton states; $\Psi_\phi$ is the input alphabet (a set of first order logic formulas); $q_0 \in Q_\phi$ is the initial state; $p_\phi$ is a conditional probability defined as

$$p_\phi(q_j \mid q_i) = \begin{cases} 1 & \rho(s_t, \psi_{q_i, q_j}) > 0 \\ 0 & \text{otherwise}; \end{cases} \tag{3}$$

$F_\phi$ is a set of final automaton states.

Here $q_i$ is the $i$th automaton state of $\mathcal{A}_\phi$. $\psi_{q_i, q_j} \in \Psi_\phi$ is the predicate guarding the transition from $q_i$ to $q_j$. Because $\psi_{q_i, q_j}$ is a predicate without temporal operators, its robustness is only evaluated at $s_t$; therefore, we use the shorthand $\rho(s_t, \psi_{q_i, q_j}) = \rho(s_{t:t}, \psi_{q_i, q_j})$. The FSA corresponding to an scTLTL formula can be generated automatically with available packages like Lomap [22] (refer to [23] for details on the generation procedure).

Iii-C FSA Augmented MDP

The FSA augmented MDP [9] establishes a connection between the TL specification and the standard reinforcement learning problem. A policy learned on the FSA augmented MDP has implicit knowledge of the FSA through the automaton state $q \in Q_\phi$.

Definition 3.

An FSA augmented MDP corresponding to an scTLTL formula $\phi$ (constructed from FSA $\langle Q_\phi, \Psi_\phi, q_0, p_\phi, F_\phi \rangle$ and MDP $\langle S, A, p, r \rangle$) is defined as $\mathcal{M}_\phi = \langle \tilde{S}, A, \tilde{p}, \tilde{r}, F_\phi \rangle$, where $\tilde{S} \subseteq S \times Q_\phi$; $\tilde{p}(\tilde{s}' \mid \tilde{s}, a)$ is the probability of transitioning to $\tilde{s}'$ given $\tilde{s}$ and $a$,

$$\tilde{p}(\tilde{s}' \mid \tilde{s}, a) = p\big((s', q') \mid (s, q), a\big) = \begin{cases} p(s' \mid s, a) & p_\phi(q' \mid q) = 1 \\ 0 & \text{otherwise}; \end{cases} \tag{4}$$

$p_\phi$ is defined in Equation (3). $\tilde{r}: \tilde{S} \times \tilde{S} \to \mathbb{R}$ is the FSA augmented reward function, defined by

$$\tilde{r}(\tilde{s}, \tilde{s}') = \rho\big(s', D_q^\phi\big), \tag{5}$$

where $D_q^\phi = \bigvee_{q' \in \Omega_q} \psi_{q, q'}$ represents the disjunction of all predicates guarding the outgoing transitions that originate from $q$ ($\Omega_q$ is the set of automata states that are connected with $q$ through outgoing edges).

Note that in Equation (5), the robustness is calculated from the next MDP state $s'$. It is a measure of the progress made toward satisfying $\phi$ by taking action $a$ at state $s$ (the outcome of which is encapsulated by $s'$).

Intuitively, the reward function in Equation (5) encourages the system to exit the current automaton state and move on to the next, and by doing so eventually reach the final state which satisfies the TL specification (property of FSA).
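Since disjunction corresponds to max in the quantitative semantics, the reward in Equation (5) can be sketched as below; the guard predicates are assumed to be supplied as robustness functions of the next MDP state, and the two circular-region guards are hypothetical examples, not taken from the paper:

```python
import math

def fsa_reward(s_next, outgoing_guards):
    """Sketch of the FSA augmented reward (Equation (5)): the robustness of
    the disjunction of all outgoing guard predicates, evaluated at the next
    MDP state.  Disjunction -> max over the individual guards' robustness."""
    return max(rho(s_next) for rho in outgoing_guards)

# Hypothetical automaton state with two outgoing edges guarded by
# "inside region A" and "inside region B" (robustness = radius - distance):
guard_a = lambda s: 1.0 - math.dist(s, (0.0, 0.0))   # region A: center (0,0), r=1
guard_b = lambda s: 1.0 - math.dist(s, (5.0, 0.0))   # region B: center (5,0), r=1
print(fsa_reward((0.5, 0.0), [guard_a, guard_b]))    # 0.5 -> progress toward A
```

A positive reward here means the agent has activated one of the outgoing guards and the automaton advances.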

III-D Zeroing Control Barrier Functions (CBF) and Control Lyapunov Functions (CLF)

Define an affine control system as

$$\dot{s} = f(s) + g(s)\,a, \tag{6}$$

where $f$ and $g$ are locally Lipschitz continuous, $s \in S$ is the MDP state, and $a \in A$ is the control. Here we use the same notation for state and action as the MDP in Definition 1. As will become clear in the next section, we embed the control system as part of the MDP transition dynamics. Following [24] and [8], we provide the definitions of the zeroing control barrier function and the control Lyapunov function.

Definition 4.

Given a set $\mathcal{C} = \{s \in \mathbb{R}^n : h(s) \ge 0\}$ for a continuously differentiable function $h: \mathbb{R}^n \to \mathbb{R}$, the function $h$ is a zeroing control barrier function defined on a set $\mathcal{D}$ with $\mathcal{C} \subseteq \mathcal{D} \subset \mathbb{R}^n$, if there exists an extended class $\mathcal{K}$ function $\alpha$ such that

$$\sup_{a \in A} \big[ L_f h(s) + L_g h(s)\,a + \alpha(h(s)) \big] \ge 0, \quad \forall s \in \mathcal{D}. \tag{7}$$

Definition 5.

A continuously differentiable function $V$ is an exponentially stabilizing control Lyapunov function [8] if there exist positive constants $c_1, c_2, c_3 > 0$ such that

$$c_1 \|s\|^2 \le V(s) \le c_2 \|s\|^2, \qquad \inf_{a \in A} \big[ L_f V(s) + L_g V(s)\,a + c_3 V(s) \big] \le 0. \tag{8}$$

Proposition 1.

A controller that meets the objectives specified by the CLF and renders the safe set forward invariant (satisfies the condition in Equation (7)) can be found by solving the quadratic program (CLF-CBF-QP)

$$\begin{aligned} \min_{a,\,\epsilon} \quad & \tfrac{1}{2}\, a^T H a + K \epsilon^2 \\ \text{s.t.} \quad & L_f V(s) + L_g V(s)\,a + c_3 V(s) \le \epsilon \\ & L_f h(s) + L_g h(s)\,a + \alpha(h(s)) \ge 0, \end{aligned} \tag{9}$$

where $\epsilon$ is a relaxation variable that grows when there is a conflict between the CBF and CLF constraints, $H$ is a positive definite weight matrix, and $K$ is chosen to be a large positive constant so that the relaxation is used only when necessary.
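To make the CLF-CBF-QP concrete, here is a minimal one-dimensional sketch solved with SciPy's general-purpose SLSQP solver rather than a dedicated QP solver; the scalar system, the specific $V$ and $h$, and all constants are our assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def clf_cbf_qp(x, x_goal=2.0, x_max=1.5, c3=1.0, alpha=1.0, K=1e3):
    """CLF-CBF-QP in the spirit of Equation (9) for the scalar system xdot = u.
    V(x) = (x - x_goal)^2 is the CLF; h(x) = x_max - x is the CBF (safe set
    x <= x_max).  Decision variables: z = [u, eps], eps the CLF relaxation."""
    V, dV = (x - x_goal) ** 2, 2.0 * (x - x_goal)
    h, dh = x_max - x, -1.0
    cost = lambda z: 0.5 * z[0] ** 2 + K * z[1] ** 2
    cons = [
        # relaxed CLF: dV*u + c3*V <= eps  ->  eps - dV*u - c3*V >= 0
        {"type": "ineq", "fun": lambda z: z[1] - dV * z[0] - c3 * V},
        # hard CBF:    dh*u + alpha*h >= 0
        {"type": "ineq", "fun": lambda z: dh * z[0] + alpha * h},
    ]
    res = minimize(cost, x0=np.zeros(2), constraints=cons, method="SLSQP")
    return res.x  # (u, eps)

u, eps = clf_cbf_qp(x=1.0)   # the goal sits at 2.0, beyond the barrier at 1.5
print(u <= 0.5 + 1e-6)       # CBF caps the input at u = alpha*h(x) = 0.5 here
```

The CLF would like to drive the state past the barrier toward the goal; the hard CBF constraint wins, and the relaxation $\epsilon$ absorbs the resulting conflict.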

IV Problem Formulation and Approach

Problem 1.

Given an MDP $\langle S, A, p, r \rangle$ where $S$ is the state space; $A$ is the action space; $p$ consists of known system dynamics in the form $\dot{s} = f(s) + g(s)a$ (where $f$ and $g$ are locally Lipschitz continuous) and unknown environmental dynamics; and $r$ is the reward function; given also a safe set $\mathcal{C} = \{s : h(s) \ge 0\}$ and an scTLTL formula $\phi$ over state predicates, find a policy $\pi^\star$ such that:

$$\pi^\star = \arg\max_{\pi} \, \mathbb{E}^{\pi}\Big[\sum_{t=0}^{T-1} \gamma^t \big( \tilde{r}(\tilde{s}_t, \tilde{s}_{t+1}) + \beta\, r(s_t, a_t, s_{t+1}) \big)\Big] \quad \text{s.t.} \quad h(s_t) \ge 0 \;\; \forall t, \tag{10}$$

where $\tilde{r}$ is the FSA augmented MDP reward in Equation (5), $\beta$ is a weighting parameter, $T$ is the time horizon, $\gamma$ is the discount factor, and $h$ is the control barrier function in Definition 4. It is required that the system trajectory always stays within the safe set defined by $h$ and that $\phi$ is not violated at any time during learning.

Equation (10) requires $\pi$ to optimize the expectation of a weighted return that combines satisfying $\phi$ (maximizing $\tilde{r}$) and maximizing the MDP reward $r$, while ensuring safety during learning. Here we distinguish between not satisfying $\phi$ and violating $\phi$. Violation means that a state is entered such that $\phi$ can never be satisfied afterwards.

To solve Problem 1, we first translate $\phi$ to its equivalent FSA and construct an FSA augmented MDP using Definition 3. The FSA is used in three ways - to provide reward for the RL agent, to provide goal states for the CLF to guide exploration, and to provide the safe set for the CBF. The actions from the RL agent are then integrated with the CLF-CBF-QP to solve for the final safe action that is transmitted to the system and environment. A diagram of our workflow is provided in Figure 1.


Fig. 1 : Flow diagram for TL guided safe reinforcement learning

V TL Guided Safe Reinforcement Learning

In this section, we discuss in detail the components of Figure 1. We will use a running example for explanation purposes. We define three circular regions $E_1$, $E_2$, $E_3$, each of the form $(x - x_e)^2 + (y - y_e)^2 < r_e^2$, where $(x_e, y_e)$ is the center coordinate of the region and $r_e$ is its radius. A specification $\phi_1 = \Diamond(E_1 \vee E_2) \wedge \Diamond E_3 \wedge \big((\neg E_1 \wedge \neg E_2)\,\mathcal{U}\,E_3\big)$ entails that region $E_1$ or $E_2$, and region $E_3$, are to be eventually visited, and that $E_1$ and $E_2$ should not be visited before $E_3$. The FSA for $\phi_1$ is shown in Figure 2.

One drawback of the FSA definition (Definition 2) is that it does not explicitly handle specification violation. This problem can be solved by adding an additional set of trap states $Q_{trap}$ such that if $q \in Q_{trap}$, then no path exists on the FSA that connects $q$ to $F_\phi$ (i.e. entering $q$ violates the specification). This can be seen in Figure 2 when $E_1$ or $E_2$ is visited before $E_3$.


Fig. 2 : FSA for formula $\phi_1$

V-A FSA as RL Reward

Here we extend the reward definition in Equation (5) for the FSA augmented MDP to handle the trap state. The modified reward is as follows:

$$\tilde{r}(\tilde{s}, \tilde{s}') = \min\big(\rho(s', D_q^\phi),\; \eta\,\rho(s', \neg D_{q,trap}^\phi)\big), \tag{11}$$

where $D_{q,trap}^\phi$ is the disjunction of all predicates guarding transitions from $q$ to a trap state. As an example, in Figure 2, $D_{q_0,trap}^{\phi_1} = E_1 \vee E_2$.

The first term in Equation (11) promotes exiting the current FSA state and entering the next non-trap state. The second term penalizes entering the trap state. The min() function penalizes the more violating term, i.e. the one with lower robustness; $\eta$ is a weighting coefficient. It is important to note that the reward in Equation (11) does not prevent the RL agent from violating $\phi$ during learning. In fact, the agent needs to explore in order to learn, hence the self-loop at the trap state. During testing, the user can choose to terminate an episode once a trap state is entered.
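A sketch of how this trap-aware reward could be computed, again with guards represented as robustness functions of the next state (the 1-D guards below are hypothetical, chosen only to make the arithmetic visible):

```python
def trap_aware_reward(s_next, exit_guards, trap_guards, eta=1.0):
    """Sketch of a trap-aware FSA reward in the spirit of Equation (11).
    The first term rewards progress toward any non-trap outgoing edge
    (disjunction -> max); the second is the robustness of staying OUT of
    every trap guard (negated disjunction -> min of negated robustness).
    min() keeps whichever term is currently more violated."""
    progress = max(rho(s_next) for rho in exit_guards)
    safety = min(-rho(s_next) for rho in trap_guards)  # rho of the negation
    return min(progress, eta * safety)

# Hypothetical 1-D guards: exit edge fires when s > 3, trap when s > 5
exit_g = [lambda s: s - 3.0]
trap_g = [lambda s: s - 5.0]
print(trap_aware_reward(4.0, exit_g, trap_g))   # min(1.0, 1.0) = 1.0
```

Note how the same state feature can simultaneously contribute positive progress and a negative trap penalty; the min() makes the penalty dominate whenever the agent gets too close to the trap.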

V-B FSA for CLF Goal Selection

The intuition behind goal selection using the FSA is to find the outgoing edge (leading to a non-trap state) that is easiest for the agent to activate from its current state, and then select as goal the MDP state that maximizes the robustness of the guarding predicate of that edge. This can be formalized by

$$s_g = \arg\max_{s} \, \rho(s, \psi_{q, q^\star}), \qquad q^\star = \arg\max_{q' \in \Omega_q \setminus Q_{trap}} \, \rho(s_t, \psi_{q, q'}), \tag{12}$$

where $\Omega_q$ is the set of automata states that are connected with $q$ through outgoing edges. Finding $s_g$ using Equation (12) requires solving two optimization problems. The second one is relatively easy since there are only a finite number of outgoing edges to choose from. The first problem can be difficult if the state space is large. One useful heuristic is that if the predicates are defined as regions in the state space, the state with max robustness is the center of that region, or the center of the intersection of multiple regions. For the example in Figure 2, once $E_3$ has been visited, $s_g$ would be the center of region $E_1$ or $E_2$, whichever is closer to the current state $s_t$.
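This heuristic can be sketched as follows, assuming every candidate edge guard is a single circular region given by a (center, radius) pair; this is our simplifying assumption for illustration, and intersections of regions would need the intersection's center instead:

```python
import math

def select_goal(s, outgoing_edges):
    """Goal-selection heuristic in the spirit of Equation (12): among
    non-trap outgoing edges whose guards are circular regions, pick the
    edge whose guard has the highest robustness at the current state s
    (robustness of "inside region" = radius - distance), and return that
    region's center as the CLF goal s_g."""
    best = max(outgoing_edges, key=lambda e: e[1] - math.dist(s, e[0]))
    return best[0]  # the region center

# Two candidate regions with equal radii; the one centered at (1, 0) is closer
print(select_goal((0.0, 0.0), [((1.0, 0.0), 0.5), ((4.0, 0.0), 0.5)]))
```

Because only edge guards (not the full state space) are enumerated, this reduces the first optimization in Equation (12) to a constant-time lookup per edge.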

After finding $s_g$, a simple CLF can be defined as

$$V(s) = \|s - s_g\|^2. \tag{13}$$
Other CLFs that meet the criteria in Equation (8) can also be used. For simplicity, the above CLF will be adopted in our case studies.

V-C FSA for CBF Constraints

We exploit the fact that the guarding predicates in an FSA are propositional logic formulas of inequality constraints over states. They can be naturally interpreted as safe sets and incorporated into the CBF. At state $\tilde{s} = (s, q)$, if there is a trap state $q_{trap}$ connected to $q$, we would like to avoid activating the guarding predicate $\psi_{q, q_{trap}}$. That is, we need to ensure that $\psi_{q, q_{trap}}$ is always false, or equivalently that $\neg\psi_{q, q_{trap}}$ is always true. We first restructure $\neg\psi_{q, q_{trap}}$ into its conjunctive normal form (CNF), i.e. $\bigwedge_i \bigvee_j p_{ij}$ [25] - the conjunction of a set of disjunctive clauses. An example of a CNF is $(p_{11} \vee p_{12}) \wedge (p_{21} \vee p_{22})$, where each $p_{ij}$ is a predicate over the state.

We define the FSA-based safe set at state $\tilde{s} = (s, q)$ with $\neg\psi_{q, q_{trap}} = \bigwedge_i \bigvee_j p_{ij}$ as

$$S_q^{safe} = \bigcap_i \bigcup_j \big\{ s \in S : f_{p_{ij}}(s) < 0 \big\}. \tag{14}$$

Here we assume each predicate $p_{ij}$ is written in the form $f_{p_{ij}}(s) < 0$. The goal of the CBF is to keep the agent from entering the trap state during learning.

For the example in Figure 2, at state $q_0$ we have $\psi_{q_0, q_{trap}} = E_1 \vee E_2$ and $\neg\psi_{q_0, q_{trap}} = \neg E_1 \wedge \neg E_2$. Therefore, $S_{q_0}^{safe}$ is the set of states outside both regions $E_1$ and $E_2$.
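A sketch of the safe-set test in Equation (14): membership holds iff every CNF clause has at least one satisfied predicate (predicates use the $f(s) < 0$ convention; the two-circle example mirrors the running example but with made-up centers and radii):

```python
def in_safe_set(s, cnf_clauses):
    """Membership test for an FSA-based safe set as in Equation (14):
    the negated trap guard in CNF is a conjunction of disjunctive clauses,
    so s is safe iff every clause has at least one satisfied predicate.
    Each predicate is a function f with the convention f(s) < 0 <=> true."""
    return all(any(f(s) < 0 for f in clause) for clause in cnf_clauses)

# Stay outside two unit circles centered at (0,0) and (3,0);
# ¬E_i is written as r^2 - ||s - c_i||^2 < 0 (true outside the circle)
not_e1 = lambda s: 1.0 - (s[0] ** 2 + s[1] ** 2)
not_e2 = lambda s: 1.0 - ((s[0] - 3.0) ** 2 + s[1] ** 2)
cnf = [[not_e1], [not_e2]]           # (¬E1) ∧ (¬E2)
print(in_safe_set((1.5, 0.0), cnf))  # True: outside both circles
print(in_safe_set((0.0, 0.0), cnf))  # False: inside E1
```

In the full system, each such $f_{p_{ij}}$ (negated) doubles as a candidate barrier function $h_i$ for the QP constraints that follow.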

V-D TL-CLF-CBF QP

We modify the known system dynamics to

$$\dot{s} = f(s) + g(s)\big(a_{RL} + a_{CLF} + a_{CBF}\big). \tag{15}$$

$a_{RL}$ is the action provided by the RL agent. $a_{CLF}$ and $a_{CBF}$ can be found by solving the following quadratic program

$$\begin{aligned} \min_{a_{CLF},\, a_{CBF},\, \epsilon} \quad & \tfrac{1}{2}\, \bar{a}^T H \bar{a} + K \epsilon^2 \\ \text{s.t.} \quad & L_f V(s) + L_g V(s)\big(a_{RL} + a_{CLF}\big) + c_3 V(s) \le \epsilon \\ & L_f h_i(s) + L_g h_i(s)\big(a_{RL} + a_{CLF} + a_{CBF}\big) + \alpha(h_i(s)) \ge 0, \quad \forall i, \end{aligned} \tag{16}$$

where $\bar{a} = [a_{CLF}^T, a_{CBF}^T]^T$, the constraint functions $h_i$ capture extra safety constraints defined by the user as well as TL spec related constraints constructed with Equation (14), and $V$ is constructed with Equation (13). A summary of our algorithm is provided in Algorithm 1.

1: Inputs: scTLTL task specification $\phi$, MDP $\mathcal{M}$, episode horizon $T$, RL agent, parameterized policy $\pi_\theta$, action bounds, QP parameters $H$, $K$, $c_3$, $\alpha$, reward weighting parameter $\eta$
2: Construct the FSA augmented MDP $\mathcal{M}_\phi$ using Definition 3
3: Initialize policy $\pi_\theta$, empty episode buffer $\mathcal{D}$
4: for i = 0 to number of training iterations do
5:     Reset environment
6:     Receive initial observation $\tilde{s}_0$
7:     for $t = 0$ to $T - 1$ do
8:         Sample RL action $a_{RL} \sim \pi_\theta(\tilde{s}_t)$
9:         Construct Lyapunov function and safe set using Equations (13) and (14)
10:        Solve for $a_{CLF}$ and $a_{CBF}$ using Equation (16)
11:        Apply $a_t = a_{RL} + a_{CLF} + a_{CBF}$ and step the environment to get the next observation $\tilde{s}_{t+1}$
12:        Construct reward $\tilde{r}_t$ from Equation (11)
13:        Append $(\tilde{s}_t, a_t, \tilde{r}_t, \tilde{s}_{t+1})$ to $\mathcal{D}$
14:     end for
15:     Update the policy with the chosen RL agent
16: end for
17: return optimal policy parameters $\theta^\star$
Algorithm 1 TL Guided Safe RL

This framework provides much flexibility in defining a task. It incorporates prior knowledge about the task domain in the form of temporal logic specifications as well as regular reward functions. The user can also specify hard and soft constraints (encapsulated in the CBF and CLF). The system also combines model-based planning and model-free learning so that knowledge about the system dynamics can be taken advantage of while unknown dynamics can also be handled.

VI Case Study

VI-A Experiment Setup

Figure 3 shows our simulation environment. It consists of one agent with known dynamics; safety constraints in the form of two straight lines forming a channel that the agent has to stay within; three circular goal regions, whose positions are kept fixed within an episode but are randomized between episodes; and two obstacles that move in the vicinity of the channel and whose dynamics are unknown.


Fig. 3 : Simulation environment setup.

VI-B Agent Dynamics

The MDP state of the agent consists of its position and orientation as well as the positions of all goals and moving obstacles (13 continuous dimensions). Combined with 1 discrete dimension for the FSA state (from the construction of the FSA augmented MDP), the total state space is 14 dimensional. The agent's controls are the forward velocity $v$ and the steering speed $\omega$.

The agent moves according to the unicycle model

$$\dot{x} = v \cos\theta, \quad \dot{y} = v \sin\theta, \quad \dot{\theta} = \omega. \tag{17}$$

This underactuated system is uncontrollable. As in [26][27], we use a reference point different from the robot center, with coordinates $(a, 0)$, $a \ne 0$, in the robot frame. It is shown that the relationship below holds

$$\begin{bmatrix} \dot{x}_r \\ \dot{y}_r \end{bmatrix} = \begin{bmatrix} \cos\theta & -a \sin\theta \\ \sin\theta & a \cos\theta \end{bmatrix} \begin{bmatrix} v \\ \omega \end{bmatrix}, \tag{18}$$

where $(x_r, y_r)$ are the coordinates of the reference point (its orientation is the same as the center's). With $a \ne 0$, we can rewrite the above fully actuated dynamics in the form of Equation (15). Having obtained $(x_r, y_r)$, the robot's center coordinates can be calculated via a simple transformation.
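The reference-point trick can be sketched as follows; the offset value a = 0.1 is an arbitrary illustration, and the inverse map is what recovers $(v, \omega)$ from a desired reference-point velocity:

```python
import numpy as np

def unicycle_to_reference(v, omega, theta, a=0.1):
    """Velocity of a reference point a distance a ahead of the robot
    center (along the body x-axis), given unicycle inputs (v, omega).
    The decoupling matrix R is invertible whenever a != 0, which is
    what makes the reference-point dynamics fully actuated."""
    R = np.array([[np.cos(theta), -a * np.sin(theta)],
                  [np.sin(theta),  a * np.cos(theta)]])
    return R @ np.array([v, omega])

def reference_to_unicycle(xy_dot_ref, theta, a=0.1):
    """Inverse map: desired reference-point velocity -> (v, omega)."""
    R = np.array([[np.cos(theta), -a * np.sin(theta)],
                  [np.sin(theta),  a * np.cos(theta)]])
    return np.linalg.solve(R, xy_dot_ref)

# Round trip at heading theta = pi/4 recovers the original inputs
v_w = reference_to_unicycle(unicycle_to_reference(1.0, 0.2, np.pi / 4), np.pi / 4)
print(np.allclose(v_w, [1.0, 0.2]))  # True
```

Note that the matrix determinant is $a$, so a small offset makes the inverse ill-conditioned; in practice $a$ trades off tracking accuracy of the center against numerical robustness.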

VI-C Task Definition

The task is defined as

$$\phi = \Diamond(g_2 \vee g_3) \wedge \Diamond g_1 \wedge \big(\neg g_1\,\mathcal{U}\,(g_2 \vee g_3)\big) \wedge \big((s_1 \wedge s_2)\,\mathcal{U}\,g_1\big).$$

In English, $\phi$ entails that the agent is to eventually visit $g_2$ or $g_3$, and eventually $g_1$; $g_1$ is not to be visited before $g_2$ or $g_3$; and constraints $s_1$ and $s_2$ are to be satisfied at all times before $g_1$ is visited. The goal predicates are of the form $g_i = \big(\|p - p_{g_i}\| < r_g\big)$, where $p$ is the position of the agent, $p_{g_i}$ is the center coordinate of goal region $g_i$, and $r_g$ is the region radius. $s_1$ and $s_2$ define the safety constraints as a channel of fixed width that the agent needs to navigate within.

We define an additional reward as the minimum distance between the agent and the moving obstacles. This reward is part of the objective in Equation (10).

VI-D Implementation Details

For the RL component, we use proximal policy optimization (PPO) [28] as the learning algorithm. The policy is represented by a feed-forward neural network with 3 hidden layers of 300, 200, and 100 ReLU units respectively. The value function has the same architecture. Each episode has a fixed horizon, and the positions of the goal regions are randomized between episodes (goals may initiate outside the safe channel). We collect a batch of 5 trajectories for each update iteration. During learning, an episode terminates only when the horizon is reached or the task is completed. The agent is allowed to travel outside the safety channel and collide with the moving obstacles during learning (so that it receives a penalty). However, this is not allowed during evaluation.

VI-E Results and Discussion

In this section, we study the effect of each component of the system in Figure 1 on the learning progress. Specifically, we train the system in four configurations - RL only, RL with CBF, RL with CLF, and RL with CBF and CLF.

Figure 4 shows the learning curve in terms of the discounted return. We can see that all configurations are able to achieve similar end performance. The configurations with CLF exhibit a faster rise at the beginning of the learning process. This implies that the RL agent is able to find a useful policy faster with the help of the CLF. However, it can also be observed that the learning curves with CLFs show higher variance and volatility. This is mainly because the CLF is not aware of moving obstacles or safety constraints; its only job is to guide the agent to the nearest next goal that promotes progress on the FSA according to Equation (12). Therefore, at moments of near collision and safety violation, the RL agent needs to fight the actions provided by the CLF, which adds difficulty to the learning problem. This is most apparent in the RL+CLF+CBF configuration, where the RL agent needs to learn to work in harmony with the CLF and CBF controls.


Fig. 4 : Discounted return distribution for different experiment configurations. The figure reports the mean and 1 standard deviation calculated from a batch of 5 trajectories at each update iteration.

Figure 5 shows the maximum violation of the safety constraints $s_1$ and $s_2$ (a value smaller than zero indicates violation). We can see that the CBF is able to prevent the agent from violating the safety constraints at all times during learning. For configurations without CBF, because a violation penalty is imposed on the agent (Equation (11)), it also gradually learns to minimize violation. Figure 5(a) shows that the RL+CLF configuration exhibits a sudden decrease in performance (at the same time as in Figure 4). This occurs when the nearest next goal happens to be outside the safety region and the RL agent has not learned enough to counter the guiding actions of the CLF.


Fig. 5 : Minimum value of the safety constraints, shown in panels (a) and (b), calculated from a batch of 5 trajectories at each update iteration. A value smaller than 0 indicates violation of the constraints.

Figure 6 shows the minimum distance between the agent and the moving obstacles as a function of policy updates. As learning progresses, the agent learns to stay away from the obstacles. The agent keeps a greater distance from the obstacles in configurations without CBF, as these configurations allow the agent to travel outside the safe zone.


Fig. 6 : Minimum agent distance to moving obstacles.

Figure 7 shows sample trajectories of the agent in various configurations. The upper sub-figures (a) and (b) illustrate trajectories for pure RL and pure CLF. We can see that both agents behave similarly, first visiting the nearer of the two disjunctive goal regions and then the final goal. The difference is that the RL agent chooses an obstacle free path and tries to trade off between accomplishing the task, avoiding obstacles, and minimizing safety violation (controlled by the weights introduced in the previous section). Figure 7(c) shows the agent trajectory with CLF and CBF. The CLF still commands the agent toward the nearest goal; however, the CBF stops the agent at the border of the safety zone and the agent gets stuck. Collision with moving obstacles occurs in this case because neither the CLF nor the CBF knows the obstacle dynamics, which is required for avoidance. Figure 7(d) shows the agent trajectory with RL, CLF and CBF. The agent starts close to one of the disjunctive goals and tries to move towards it. However, it has learned that if it keeps trying to reach that goal it will get stuck at the border and receive a low return. Therefore, near the border it chooses instead to move towards the other goal and eventually finishes the task. An agent trained with RL and CBF (not shown here) can potentially obtain similar behavior. As discussed previously, adding the CLF promotes exploration at the beginning of learning. A video can be accessed at https://youtu.be/lUsE3hGpLAk.


Fig. 7 : Sample trajectories for agents with configuration (a) RL (b) CLF (c) CLF with CBF (d) RL, CLF and CBF. Agent trajectory is in purple, moving obstacle trajectories are in green. The stars represent starting positions.

Figure 8 shows the evaluation success rate over 20 trials. Different from the training process, during evaluation an episode terminates in three circumstances - the horizon is reached, the task is accomplished, or the agent collides with the moving obstacles (defined by a minimum threshold on relative distance). To ensure safety, the CBF is always turned on during evaluation. The results show that the agents trained with CBF exhibit higher success rates. This is because agents trained without CBF sometimes rely on traveling outside the safe zone to avoid obstacles and reach goals. During evaluation, this is not an option, and therefore these agents are more prone to getting stuck at the safe zone boundaries.


Fig. 8 : Success rates over 20 evaluation trials.

Lastly, we want to discuss some practical issues of our approach and a possible area of future work. The system presented in Figure 1 integrates multiple components, and central to controlling how the components interact with each other is a set of hyperparameters. These include the weighting coefficient in Equation (11), the weighting parameter in Equation (10), the QP coefficients, and the control bounds in Equation (16). We find that the learning process and the final performance of the policy are noticeably influenced by these parameters. Taking the control bounds as an example, a good practice is to set a wide bound for the CBF actions, as these actions need to dominate at the boundary of the safety constraints. The bounds for the CLF actions should be set narrower than those of the RL actions. This ensures that the RL agent is able to overcome the CLF actions when it needs to. A better strategy is to shrink the CLF action bounds as learning progresses, so that the CLF guides exploration at the beginning of training and its effect diminishes as the agent learns. This type of guidance scheduling is also adopted in [29] with demonstrations. An important direction for future work is therefore to alleviate the burden of parameter tuning on the user.
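For instance, such a schedule could be as simple as a linear decay; this exact form is our illustration of the guidance-scheduling idea, not something the paper prescribes:

```python
def clf_bound(iteration, total_iters, b0=0.5, b_final=0.0):
    """Hypothetical linear decay schedule for the CLF action bound: start
    at b0 so the CLF can guide early exploration, and shrink toward
    b_final so its influence diminishes as the policy improves."""
    frac = min(max(iteration / total_iters, 0.0), 1.0)
    return b0 + (b_final - b0) * frac

print(clf_bound(0, 100))    # 0.5 at the start of training
print(clf_bound(100, 100))  # 0.0 once training finishes
```

The returned value would be plugged into the QP as the box bound on $a_{CLF}$ at each update iteration.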

VII Conclusions

In this paper, we explore the combination of temporal logic, reinforcement learning, control Lyapunov functions, and control barrier functions to form an expressive, safe, and learnable control system. We propose techniques that allow an FSA to simultaneously provide rewards, objectives, and safety constraints to each component in the system. We analyze the effect of each component on the learning process and evaluate the resultant policy. In a simulation experiment with known agent dynamics and unknown environmental dynamics, we show that our system is able to learn a policy that accomplishes the given task, expressed as a logic formula, while ensuring safety during learning.