Extending Deep Model Predictive Control with Safety Augmented Value Estimation from Demonstrations

05/31/2019 · by Brijen Thananjeyan, et al. · UC Berkeley

Reinforcement learning (RL) for robotics is challenging due to the difficulty of hand-engineering a dense cost function, which can lead to unintended behavior, and due to dynamical uncertainty, which makes it hard to enforce constraints during learning. We address these issues with a new model-based reinforcement learning algorithm, safety augmented value estimation from demonstrations (SAVED), which uses supervision that only identifies task completion and a modest set of suboptimal demonstrations to constrain exploration and learn efficiently while handling complex constraints. We derive iterative improvement guarantees for SAVED under known stochastic nonlinear systems. We then compare SAVED with 3 state-of-the-art model-based and model-free RL algorithms on 6 standard simulation benchmarks involving navigation and manipulation and on 2 real-world tasks on the da Vinci surgical robot. Results suggest that SAVED outperforms prior methods in terms of success rate, constraint satisfaction, and sample efficiency, making it feasible to safely learn complex maneuvers directly on a real robot in less than an hour. For tasks on the robot, baselines succeed less than 5% of the time, while SAVED achieves a success rate of over 75%.


1 Introduction

To use RL in the real world, algorithms need to be efficient, easy to use, and safe, motivating methods which are reliable even under significant uncertainty. Deep model-based reinforcement learning (deep MBRL) is an area of current interest because of its sample efficiency advantages over model-free methods in a variety of tasks, such as assembly, locomotion, and manipulation NNDynamics; handful-of-trials; MPCRacing; OneShotMBRL; PILCO; DeepVisuomotor; Lenz2015DeepMPCLD. However, past work in deep MBRL typically requires dense hand-engineered cost functions, which are hard to design and can lead to unintended behavior amodei2016concrete. It would be easier to simply specify task completion in the cost function, but this setting is challenging due to the lack of expressive supervision. This motivates using demonstrations, which allow the user to roughly specify desired behavior without extensive engineering effort. Furthermore, in many robotic tasks, particularly in domains such as surgery, safe exploration is critical to ensure that the robot does not damage itself or cause harm to its surroundings. To enable this, deep MBRL algorithms also need the ability to satisfy complex constraints.

We develop a method to efficiently use deep MBRL in dynamically uncertain environments with both sparse costs and complex constraints. We address the difficulty of hand-engineering cost functions by using a small number of suboptimal demonstrations to provide a signal about delayed costs in sparse cost environments, which is updated based on agent experience. Then, to enable stable policy improvement and constraint satisfaction, we impose two probabilistic constraints: (1) we constrain exploration by ensuring that the agent can plan back to regions in which it is confident in task completion with high probability, and (2) we leverage uncertainty estimates in the learned dynamics to implement chance constraints nemirovski2012safe during learning. The probabilistic implementation of constraints makes this approach broadly applicable, since it can handle settings with significant dynamical uncertainty, where enforcing constraints exactly is difficult.

We introduce a new algorithm motivated by deep model predictive control (MPC) and robust control: safety augmented value estimation from demonstrations (SAVED), which enables efficient learning for sparse cost tasks given a small number of suboptimal demonstrations while satisfying provided constraints. We specifically consider tasks with a tight start state distribution and a fixed, known goal set, and only use supervision that indicates task completion. We then show that under mild assumptions and given known stochastic nonlinear dynamics, SAVED has guaranteed iterative improvement in expected performance, extending prior analysis of similar methods for linear dynamics with additive noise StochasticMPC; SampleBasedLMPC.

The contributions of this work are (1) a novel method for constrained exploration driven by confidence in task completion, (2) a technique for leveraging model uncertainty to probabilistically enforce complex constraints, enabling obstacle avoidance or optimizing demonstration trajectories while maintaining desired properties, (3) analysis of SAVED providing iterative improvement guarantees in expected performance for known stochastic nonlinear systems, and (4) experimental evaluation against state-of-the-art model-free and model-based RL baselines on a variety of environments, including simulated benchmarks and challenging physical maneuvers on the da Vinci surgical robot. Results suggest that SAVED achieves superior sample efficiency, success rate, and constraint satisfaction rate across all domains considered and can be applied efficiently and safely for learning directly on a real robot.

Figure 1: SAVED: (a) Task completion driven exploration: a density model represents the region in state space where the agent has high confidence in task completion; trajectory samples over the learned dynamics that do not have sufficient density at the end of the planning horizon are discarded. (b) Chance constraint enforcement: implemented by simulating imagined rollouts over the learned dynamics for the same sequence of actions multiple times and estimating the probability of constraint violation by the fraction of rollouts that violate a constraint. In the illustrated example, a large fraction of the rollouts are constraint violating, making the corresponding action sequence a poor control choice. (c) Setup for physical experiments: SAVED safely learns complex surgical maneuvers on the da Vinci surgical robot, which is difficult to control precisely due to its cable-driven actuation cable-driven.

2 Related work

Model-based reinforcement learning: There is significant interest in deep MBRL PILCO; GPS; SOLAR; Lenz2015DeepMPCLD; OneShotMBRL; Info-Theory-MPC; Plan-Online-Learn-Offline; handful-of-trials; NNDynamics due to the improvements in sample efficiency when planning over learned dynamics compared to model-free methods for continuous control SAC; TD3; DDPG; TRPO; PPO. However, most prior deep MBRL algorithms use hand-engineered dense cost functions to guide exploration, which we avoid by using demonstrations to provide signal about delayed costs. Additionally, in contrast to prior work, we enable probabilistic enforcement of complex constraints during learning, allowing constrained exploration from successful demonstrations while avoiding unsafe regions.

Reinforcement learning from demonstrations: Demonstrations have been leveraged to accelerate learning for a variety of model-free RL algorithms, such as Deep Q Learning apexdqfd; LFD-DQN; DQN and DDPG DDPG; LFD-DDPG; overcoming-exp. However, these techniques are applied to model-free RL algorithms and may be inefficient compared to model-based methods. Furthermore, they cannot anticipate constraint violations since they use a reactive policy SampleComplexityLQR; GapMBRLMFRL. OneShotMBRL use a neural network prior from previous tasks and online adaptation to a new task using iLQR and a dense cost, distinct from the task completion based costs we consider. Finally, TREX use inverse reinforcement learning to significantly outperform suboptimal demonstrations, but do not explicitly optimize for constraint satisfaction or consistent task completion during learning.

Iterative learning control: In iterative learning control (ILC), the controller tracks a predefined reference trajectory, and data from each iteration is used to improve closed-loop performance bristow2006survey. ILC has seen significant success in tasks such as quadrotor and vehicle control purwin2009performing; kapania2015design, and can have provable robustness to external disturbances and model mismatch lee2007iterative; tomizuka2008dealing; lin2015robust; chin2004two. StochasticMPC; SampleBasedLMPC; LearningMPC provide a reference-free algorithm to iteratively improve the performance of an initial trajectory by using a safe set and terminal cost to ensure recursive feasibility, stability, and local optimality given a known, deterministic nonlinear system or stochastic linear dynamics with additive disturbances under mild assumptions. We extend this analysis and show that, given task completion based costs, similar guarantees hold for stochastic nonlinear systems with bounded disturbances satisfying similar assumptions.

Safe reinforcement learning: There has been significant interest in safe RL SafeRLSurvey, typically focusing on exploration while satisfying a set of explicit constraints SafeRLWithSupervision; SafeExpInMDPs; CPO, satisfying specific stability criteria StabilitySafeMBRL, or formulating planning via a risk-sensitive Markov decision process RiskAversion; RobustRiskSensitiveMDP. Distinct from prior work in safe RL and control, SAVED can be successfully applied in settings with both highly uncertain dynamics and sparse costs while probabilistically enforcing constraint satisfaction and task completion during learning.

3 Assumptions and preliminaries

In this work, we consider stochastic, unknown dynamical systems with a cost function that only identifies task completion. We assume that (1) tasks are iterative in nature, and thus have a fixed, low-variance start state distribution and a fixed, known goal set $\mathcal{G}$. This is common in a variety of repetitive tasks, such as assembly, surgical knot tying, and suturing. Additionally, we assume that (2) a modest set of suboptimal but successful demonstration trajectories is available, for example from imprecise human teleoperation or from a hand-tuned PID controller. This enables rough specification of desired behavior without having to design a dense cost function.

Here we outline the framework for MBRL using a standard Markov decision process formulation. A finite-horizon Markov decision process (MDP) is a tuple $(\mathcal{X}, \mathcal{U}, P, T, C)$, where $\mathcal{X}$ is the feasible state space and $\mathcal{U}$ is the action space. The stochastic dynamics model $P$ maps a state and action to a probability distribution over states, $T$ is the task horizon, and $C$ is the cost function. A stochastic control policy $\pi$ maps an input state to a distribution over $\mathcal{U}$. We assume that the cost function only identifies task completion: $C(x_t, u_t) = \mathbb{1}_{\mathcal{G}^{C}}(x_t)$, where $\mathcal{G} \subset \mathcal{X}$ defines the goal set in the state space. We define task success as convergence to $\mathcal{G}$ at the end of the task horizon without violating constraints.
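As a concrete illustration of this cost structure, the following is a minimal Python sketch of a task-completion cost; the Euclidean-ball goal set and the parameter names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def task_completion_cost(state, goal_center, goal_radius):
    """Sparse cost described in Section 3: 1 outside the goal set G, 0 inside.

    The goal set is modeled as a Euclidean ball here purely for illustration,
    matching the kind of goal regions used in the experiments.
    """
    outside_goal = np.linalg.norm(np.asarray(state) - np.asarray(goal_center)) > goal_radius
    return 1.0 if outside_goal else 0.0
```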

4 Safety augmented value estimation from demonstrations (SAVED)

This section describes how SAVED uses a set of suboptimal demonstrations to constrain exploration while satisfying user-specified state space constraints. First, we discuss how SAVED learns system dynamics and a value function to guide learning in sparse cost environments. Then, we motivate and discuss the method used to enforce constraints under uncertainty, both to ensure task completion during learning and to satisfy user-specified state space constraints.

SAVED optimizes agent trajectories by using MPC to optimize costs over a sequence of actions at each state. However, since MPC computes the current control by solving a finite-horizon approximation to the infinite-horizon control problem, the agent may take shortsighted control actions that make it impossible to complete the task, such as planning the trajectory of a race car over a short horizon without considering an upcoming curve borrelli2017predictive. Thus, to guide exploration in temporally-extended tasks, we solve the problem in equation 4.0.1a subject to 4.0.1b. This corresponds to the standard objective in MPC with an appended value function $V^{\pi}$, which provides a terminal cost estimate for the current policy at the end of the planning horizon. While prior work in deep MBRL NNDynamics; handful-of-trials has primarily focused only on planning over learned dynamics, we introduce a learned value function, initialized from demonstrations to provide initial signal, to guide exploration even in sparse cost settings. The learned dynamics model and value function are each represented with a probabilistic ensemble of neural networks, as is used to represent system dynamics in handful-of-trials. These functions are initialized from demonstrations and updated on each training iteration, and collectively define the current policy $\pi$. See the supplementary material for further details on how these networks are trained.

The core novelties of SAVED are the additional probabilistic constraints in 4.0.1c, which encourage task completion driven exploration and enforce user-specified chance constraints. First, a non-parametric density model $\rho_{\alpha}$ enforces constrained exploration by requiring the terminal state of the planning horizon, $x_{t+H}$, to fall in a region with high probability of task completion. This enforces cost-driven constrained exploration, which enables reliable performance even in sparse cost domains. Second, we require all elements of the planned trajectory $x_{t:t+H}$ to fall in the feasible region $\mathcal{X}$ with probability at least $\beta$, which enables probabilistic enforcement of state space constraints. In Section 5.1, we discuss the method used for task completion driven exploration, and in Section 5.2, we discuss how probabilistic constraints are enforced during learning.

$$u_{t:t+H-1}^{*} = \operatorname*{arg\,min}_{u_{t:t+H-1} \in \mathcal{U}^{H}} \ \mathbb{E}\left[\sum_{i=t}^{t+H-1} C(x_i, u_i) + V^{\pi}(x_{t+H})\right] \tag{4.0.1a}$$
$$\text{s.t.} \quad x_{i+1} \sim f_{\theta}(x_i, u_i), \quad i \in \{t, \dots, t+H-1\} \tag{4.0.1b}$$
$$\rho_{\alpha}(x_{t+H}) > \delta, \qquad \mathbb{P}\Big(\textstyle\bigwedge_{i=t}^{t+H} x_i \in \mathcal{X}\Big) \geq \beta \tag{4.0.1c}$$

We summarize SAVED in Algorithm 1. At each iteration, we sample a start state and then controls are generated by solving equation 4.0.1 using the cross-entropy method (CEM) CEM at each timestep. Transitions are collected in a replay buffer to update the dynamics, value function, and density model.

Require: Replay buffer $\mathcal{R}$; value function $V$, dynamics model $f_{\theta}$, and density model $\rho_{\alpha}$, all seeded with demos; kernel width $\alpha$ and chance constraint parameter $\beta$.
for $j \in \{1, \dots, N\}$ do
     Sample $x_0$ from the start state distribution
     for $t \in \{0, \dots, T-1\}$ do
          Pick $u_t$ by solving equation 4.0.1 using CEM, execute $u_t$, and observe $x_{t+1}$
          $\mathcal{R} \leftarrow \mathcal{R} \cup \{(x_t, u_t, x_{t+1})\}$
     end for
     if $x_T \in \mathcal{G}$ then
          Update density model $\rho_{\alpha}$ with the new successful trajectory
     end if
     Optimize $f_{\theta}$ and $V$ with $\mathcal{R}$
end for
Algorithm 1 Safety augmented value estimation from demonstrations (SAVED)
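The following Python sketch mirrors the structure of Algorithm 1 under the notation above; the helper objects (`env`, `cem_plan`, `goal_set`, and the model classes) and their interfaces are hypothetical stand-ins, not the authors' implementation.

```python
def train_saved(env, dynamics, value_fn, density_model, replay_buffer,
                cem_plan, goal_set, num_iterations, task_horizon):
    """Sketch of Algorithm 1: each iteration runs constrained MPC with CEM,
    then updates the models on all data collected so far.

    All components are hypothetical stand-ins: cem_plan solves the constrained
    MPC problem (4.0.1), goal_set.contains checks membership in G, and the
    models expose simple fit/update methods.
    """
    for _ in range(num_iterations):
        state = env.reset()                   # sample from the start state distribution
        trajectory = [state]
        for _ in range(task_horizon):
            action = cem_plan(state, dynamics, value_fn, density_model)
            next_state = env.step(action)     # hypothetical env returning only the next state
            replay_buffer.add(state, action, next_state)
            state = next_state
            trajectory.append(state)
        if goal_set.contains(trajectory[-1]): # task completed: grow the safe set
            density_model.update(trajectory)
        dynamics.fit(replay_buffer)           # refit the ensembles
        value_fn.fit(replay_buffer)
```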

5 Constrained exploration

5.1 Task completion driven exploration

Recent MPC literature LearningMPC motivates constraining exploration to regions in which the agent is confident in task completion, which gives rise to desirable theoretical properties. For a trajectory at iteration $j$, given by $x^{j} = (x_0^{j}, x_1^{j}, \dots)$, we define the sampled safe set as $\mathcal{SS}^{j} = \bigcup_{i \in M^{j}} \bigcup_{t} \{x_t^{i}\}$, where $M^{j}$ is the set of indices of all successful trajectories before iteration $j$, as in LearningMPC. Thus, $\mathcal{SS}^{j}$ contains the states from all iterations before $j$ from which the agent controlled the system to $\mathcal{G}$, and it is initialized from demonstrations. Under mild assumptions, if states at the end of the MPC planning horizon are constrained to fall in $\mathcal{SS}^{j}$, iterative improvement, controller feasibility, and convergence are guaranteed given known linear dynamics subject to additive disturbances StochasticMPC; SampleBasedLMPC. In Section 6, we extend these results to show that, under the same assumptions, we can obtain similar guarantees in expectation for stochastic nonlinear systems if task completion based costs are used. The way we constrain exploration in SAVED builds off of this prior work, but we note that unlike LearningMPC; StochasticMPC; SampleBasedLMPC, SAVED is designed for settings in which the dynamics are completely unknown.

We develop a method to approximately implement the above constraint with a continuous approximation to $\mathcal{SS}^{j}$ using non-parametric density estimation, allowing SAVED to scale to more complex settings than prior work using similar cost-driven exploration techniques LearningMPC; StochasticMPC; SampleBasedLMPC. Since $\mathcal{SS}^{j}$ is a discrete set, we introduce a continuous approximation by fitting a density model $\rho_{\alpha}$ to $\mathcal{SS}^{j}$ and constraining $\rho_{\alpha}(x_{t+H}) > \delta$, where $\alpha$ is a kernel width parameter (constraint 4.0.1c). Since the tasks considered in this work have sufficiently low (< 17) state space dimension, we find that kernel density estimation provides a reasonable approximation. We implement a tophat kernel density model using a nearest neighbors classifier with a tuned kernel width $\alpha$, and set the density threshold $\delta$ so that any state with positive density is considered safe (see the supplementary material). Thus, all states within Euclidean distance $\alpha$ from the closest state in $\mathcal{SS}^{j}$ are considered safe under $\rho_{\alpha}$ and represent states in which the agent has high confidence in task completion. As the policy improves, it may forget how to complete the task from very old states in $\mathcal{SS}^{j}$, so very old states are evicted from $\mathcal{SS}^{j}$ to reflect the current policy when fitting $\rho_{\alpha}$. We discuss how these constraints are implemented in Section 5.2, with further details in the supplementary material. In future work, we will investigate implicit density estimation techniques such as EX2; CountBased-Explore; CountBasedIntrinsic to scale to high-dimensional settings.
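Below is a minimal sketch of this tophat density model using scikit-learn's NearestNeighbors; the class and method names are illustrative, but the check itself (positive density if and only if the nearest safe state is within the kernel width $\alpha$) follows the description above.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class TophatSafeSetDensity:
    """Approximates the sampled safe set with a tophat kernel: a query state
    has positive density iff it lies within Euclidean distance alpha of some
    previously-safe state (illustrative sketch, not the authors' code)."""

    def __init__(self, alpha):
        self.alpha = alpha
        self.nn = NearestNeighbors(n_neighbors=1)
        self.safe_states = None

    def update(self, safe_states):
        # safe_states: array of shape (num_states, state_dim) drawn from successful trajectories
        self.safe_states = np.asarray(safe_states)
        self.nn.fit(self.safe_states)

    def is_safe(self, state):
        # Constraint (4.0.1c): the queried state must have a nearest safe state
        # within the kernel width to be considered high-confidence.
        dist, _ = self.nn.kneighbors(np.asarray(state).reshape(1, -1))
        return bool(dist[0, 0] <= self.alpha)
```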

5.2 Probabilistic constraint enforcement

SAVED leverages uncertainty estimates in the learned dynamics to enforce probabilistic constraints on its trajectories. This allows SAVED to handle complex, user-specified state space constraints to avoid obstacles or maintain certain properties of the demonstrations without a user-shaped or time-varying cost function. We do this by sampling sequences of actions from a truncated Gaussian distribution that is iteratively updated using the cross-entropy method (CEM) handful-of-trials. Each action sequence is simulated multiple times over the stochastic dynamics model as in handful-of-trials, and the average return of the simulations is used to score the sequence. However, unlike handful-of-trials, we implement chance constraints by discarding action sequences for which the fraction of constraint-violating simulations exceeds $1 - \beta$ (constraint 4.0.1c), where $\beta$ is a user-specified tolerance on the probability of constraint satisfaction. This is illustrated in Figure 1(b). The task completion constraint (Section 5.1) is implemented similarly, with action sequences discarded if any of the simulated rollouts do not terminate in a state with sufficient density under $\rho_{\alpha}$.
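A minimal sketch of this scoring rule is given below, assuming hypothetical helpers for sampling imagined rollouts and checking the user-specified constraints; here `beta` is treated as the required probability of constraint satisfaction, so a candidate is discarded if more than a $1-\beta$ fraction of its simulated rollouts violate constraints, or if any rollout fails the safe set density check.

```python
import numpy as np

def score_action_sequence(x0, action_seq, sample_rollout, cost_fn, value_fn,
                          is_feasible, density_model, beta, num_particles=20):
    """Score one CEM candidate as described in Section 5.2 (illustrative sketch only).

    sample_rollout(x0, action_seq) draws one imagined trajectory from the learned
    stochastic dynamics; is_feasible(x) checks the user-specified state constraints.
    Returns +inf for candidates that should be discarded, so CEM never selects them.
    """
    returns, num_violating = [], 0
    for _ in range(num_particles):
        traj = sample_rollout(x0, action_seq)
        if any(not is_feasible(x) for x in traj):
            num_violating += 1
        if not density_model.is_safe(traj[-1]):
            return np.inf  # a rollout ends outside the high-confidence (safe set) region
        returns.append(sum(cost_fn(x) for x in traj[:-1]) + value_fn(traj[-1]))
    if num_violating / num_particles > 1.0 - beta:
        return np.inf      # chance constraint violated in too many rollouts
    return float(np.mean(returns))
```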

6 Theoretical analysis of SAVED

In prior work, it has been shown that under mild assumptions, the sampled safe set and the associated value function can be used to design a controller which guarantees constraint satisfaction, convergence to the goal set, and iterative improvement SampleBasedLMPC. This analysis specifically assumes known linear dynamics with bounded disturbances, that the limit of infinite data is used for policy evaluation at each iteration, and that the MPC objective can be solved exactly StochasticMPC; SampleBasedLMPC. We extend this analysis by showing that under the same assumptions, if task completion based costs (as defined in Section 3) are used, then the same guarantees hold in expectation for SAVED for stochastic nonlinear systems. Define the closed-loop system under the policy $\pi^{j}$ defined by SAVED at iteration $j$ as $x_{t+1}^{j} = f(x_t^{j}, \pi^{j}(x_t^{j}), w_t^{j})$ for bounded disturbances $w_t^{j}$, and define the expected cost-to-go of $\pi^{j}$ as $J^{j}(x_0^{j})$. In the supplementary material, we formally prove the following:

  1. Recursive Feasibility: The trajectory generated by the closed-loop system at iteration $j$ satisfies the problem constraints. Equivalently stated, $x_t^{j} \in \mathcal{X}$ for all $t \geq 0$ and $j \geq 0$ (see Lemma 1).

  2. Convergence in Probability: If the closed-loop system converges in probability to $\mathcal{G}$ at the initial iteration, then it converges in probability at all subsequent iterations: $\lim_{t \to \infty} \mathbb{P}(x_t^{j} \notin \mathcal{G}) = 0$ (see Lemma 2).

  3. Iterative Improvement: The expected cost-to-go for SAVED is non-increasing in iterations: $J^{j+1}(x_0) \leq J^{j}(x_0)$ (see Theorem 1).

7 Experiments

We evaluate SAVED on simulated continuous control benchmarks and on real robotic tasks with the da Vinci Research Kit (dVRK) kazanzides-chen-etal-icra-2014 against state-of-the-art deep RL algorithms, and demonstrate that SAVED outperforms all baselines in terms of sample efficiency, success rate, and constraint satisfaction during learning. All tasks use the sparse task completion cost defined in Section 3, which is equivalent to the time spent outside the goal set. All algorithms are given the same demonstrations, are evaluated on iteration cost, success rate, and constraint satisfaction rate (if applicable), and are run 3 times to control for stochasticity in training. Tasks are only considered successfully completed if the agent reaches and stays in the goal set until the end of the episode without ever violating constraints. For all simulated tasks, we give model-free methods 10,000 iterations since they take much longer to converge but sometimes have better asymptotic performance. See the supplementary material for videos and for ablations with respect to the choice of kernel width, chance constraint parameter, and demonstration quantity. We also include further details on baselines, network architectures, hyperparameters, and training procedures.

7.1 Baselines

We consider the following set of model-free and model-based baseline algorithms. To enforce constraints for model-based baselines, we augment the algorithms with the simulation based method described in Section 5.2. Because model-free baselines have no such mechanism to readily enforce constraints, we instead apply a very negative reward when constraints are violated. See supplementary material for an ablation of the reward function used for model-free baselines.

  1. Behavior cloning (Clone): Supervised learning on demonstrator trajectories.

  2. PETS from demonstrations (PETSfD): Probabilistic ensemble trajectory sampling (PETS) from Chua et al. handful-of-trials, with the dynamics model initialized with demo trajectories and a planning horizon long enough to plan to the goal (judged by best performance of SAVED).

  3. PETSfD Dense: PETSfD with access to hand-engineered dense cost.

  4. Soft actor critic from demonstrations (SACfD): Model-free algorithm, Soft Actor Critic (SAC) SAC, where only demo transitions are used for training on the first iteration.

  5. Overcoming exploration in reinforcement learning from demonstrations (OEFD): Model-free algorithm from overcoming-exp which uses DDPG DDPG with Hindsight Experience Replay (HER) HER and a behavior cloning loss to accelerate RL with demonstrations.

  6. SAVED (No SS): SAVED without the sampled safe set constraint described in Section 5.1.

7.2 Simulated navigation

To evaluate whether SAVED can efficiently and safely learn temporally extended tasks with complex constraints, we consider a set of tasks in which a point mass navigates to a unit ball centered at the origin. The agent can exert force in cardinal directions and experiences drag and Gaussian process noise in the dynamics. For each task, we supply 50-100 suboptimal demonstrations generated by running LQR along a hand-tuned safe trajectory. SAVED has a higher success rate than all other RL baselines using sparse costs, even including model-free baselines over the first 10,000 iterations, while never violating constraints across all navigation tasks. Only Clone and PETSfD Dense ever achieve a higher success rate, but Clone does not improve upon demonstration performance (Figure 2) and PETSfD Dense has additional information about the task. Furthermore, SAVED learns significantly more efficiently than all RL baselines on all navigation tasks except for tasks 1 and 3, in which PETSfD Dense with a Euclidean norm cost function finds a better solution. While SAVED (No SS) can complete the tasks, it has a much lower success rate than SAVED, especially in environments with obstacles, as expected, demonstrating the importance of the sampled safe set constraint. Note that SACfD, OEFD, and PETSfD make essentially no progress in the first 100 iterations and never complete any of the tasks in this time, although they mostly satisfy constraints.

Figure 2: Navigation Domains: SAVED is evaluated on 4 navigation tasks. Tasks 2-4 contain obstacles, and task 3 contains a channel near the x-axis for passage. SAVED learns significantly faster than all RL baselines on tasks 2 and 4. In tasks 1 and 3, SAVED has a lower iteration cost than baselines using sparse costs, but does worse than PETSfD Dense, which is given dense Euclidean norm costs to find the shortest path to the goal. For each task and algorithm, we report success and constraint satisfaction rates over the first 100 training iterations, and also over the first 10,000 iterations for SACfD and OEFD. We observe that SAVED has higher success and constraint satisfaction rates than all other RL algorithms using sparse costs across all tasks, and even achieves higher rates in the first 100 training iterations than the model-free algorithms achieve over the first 10,000 iterations.

7.3 Simulated robot experiments

To evaluate whether SAVED outperforms baselines even in standard unconstrained environments, we consider sparse versions of two common simulated robot tasks: the PR2 Reacher environment used in handful-of-trials with a fixed goal, and a pick and place task with a simulated, position-controlled Fetch robot wise2016fetch; gymrobotics. The reacher task involves controlling the end-effector of a simulated PR2 robot to a small ball in $\mathbb{R}^3$. The pick and place task involves picking up a block from a fixed location on a table and guiding it to a small ball in $\mathbb{R}^3$. The task is simplified by automating the gripper motion, which is difficult for SAVED to learn due to the bimodality of the gripper controls, which is hard to capture with the unimodal truncated Gaussian distribution used during CEM sampling. SAVED still learns faster than all baselines on both tasks (Figure 3) and exhibits significantly more stable learning in the first 100 and 250 iterations for the reacher and pick and place tasks, respectively.

Figure 3: Simulated Robot Experiments Performance: SAVED achieves better performance than all baselines on both tasks. We use 20 demonstrations with average iteration cost of 94.6 for the reacher task and 100 demonstrations with average iteration cost of 34.4 for the pick and place task. For the reacher task, the safe set constraint does not improve performance, likely because the task is very simple, but for pick and place, we see that the safe set constraint adds significant training stability.

8 Physical robot experiments

We evaluate the ability of SAVED to learn temporally-extended trajectory optimization tasks with nonconvex state space constraints on the da Vinci Research Kit (dVRK) kazanzides-chen-etal-icra-2014. The dVRK is cable-driven and has relatively imprecise controls, motivating model learning cable-driven. Furthermore, safety is paramount due to the cost and delicate structure of the arms. The goal here is to speed up demo trajectories by constraining learned trajectories to fall within a tight, 1 cm tube of the demos, making this difficult for many RL algorithms. Additionally, robot experiments are very time consuming, so training RL algorithms on limited physical hardware is difficult without sample efficient algorithms.

Figure-8:

Here, the agent tracks a figure-8 in state space. However, because there is an intersection in the middle of the desired trajectory, SAVED finds a shortcut to the goal state. Thus, the trajectory is divided into non-intersecting segments and SAVED optimizes each one separately. At execution-time, the segments are stitched together, and we find that SAVED is robust enough to handle the uncertainty at the transition point. We hypothesize that this is because the dynamics and value function exhibit good generalization. SAVED quickly learns to smooth out demo trajectories while satisfying constraints, achieving a high success rate during training, while baselines violate constraints on nearly every iteration and never complete the task, as shown in Figure 4. Note that PETSfD almost always violates constraints, even though it enforces constraints in the same way as SAVED. We hypothesize that because PETSfD must be given a long planning horizon to make task completion possible (since it has no value function), it becomes unlikely that a constraint-satisfying trajectory is sampled with CEM. See the supplementary material for the other segment and the full combined trajectory.

Figure 4: Figure-8: Training Performance: After just a few training iterations, SAVED consistently succeeds and converges to an iteration cost lower than that of the demos. Neither baseline ever completes the task during training; Trajectories: Demo trajectories satisfy constraints, but are noisy and inefficient. SAVED learns to speed up with only occasional constraint violations and stabilizes in the goal set.
Surgical knot tying:

SAVED is used to optimize demonstrations of a surgical knot tying task on the dVRK, using the same multilateral motion as in knot-tying-surgery. Demonstrations are hand-engineered for the task, and then policies are optimized for one arm (arm 1), while a hand-engineered policy is used for the other arm (arm 2). We do this because while arm 1 wraps the thread around arm 2, arm 2 simply moves down, grasps the other end of the thread, and pulls it out of the phantom, as shown in Figure 5. Thus, we only expect significant performance gains from optimizing the policy for the portion of the arm 1 trajectory which involves wrapping the thread around arm 2. We only model the motion of the end-effectors in 3D space. SAVED quickly learns to smooth out demo trajectories, achieving a high success rate during training (Figure 5), while baselines are unable to make sufficient progress in this time. PETSfD rarely violates constraints, but also almost never succeeds, while SACfD almost always violates constraints and never completes the task. Training SAVED directly on the real robot takes only about an hour, making it practical to train on a real robot for tasks where data collection is expensive. At execution-time, we find that SAVED is very consistent, successfully tying a knot in repeated trials significantly more efficiently than the demos. See the supplementary material for trajectory plots of the full knot tying trajectory.

Figure 5: Surgical Knot Tying: Knot tying motion: Arm 1 wraps the thread around arm 2, which grasps the other end of the thread and tightens the knot; Training Performance: After a modest number of training iterations, the agent completes the task relatively consistently with only a few failures and converges to an iteration cost lower than that of the demos. Both baselines mostly fail during training and are less efficient than the demos when they do succeed; Trajectories: SAVED quickly learns to speed up with only occasional constraint violations and stabilizes in the goal set.

9 Discussion and future work

This work presents SAVED, a model-based RL algorithm that can efficiently learn a variety of robotic control tasks in the presence of dynamical uncertainties, sparse cost feedback, and complex constraints. SAVED uses a small set of suboptimal demonstrations and a learned state-value function to guide learning with a novel method to constrain exploration to regions in which the agent is confident in task completion. We present iterative improvement guarantees in expectation for SAVED for stochastic nonlinear systems, extending prior work providing similar guarantees for stochastic linear systems. We then demonstrate that SAVED can handle complex state space constraints under uncertainty. We empirically evaluate SAVED on 6 simulated benchmarks and 2 complex maneuvers on a real surgical robot. Results suggest that SAVED is more sample efficient and has higher success and constraint satisfaction rates than all RL baselines and can be efficiently and safely trained on a real robot. We believe this work opens up opportunities to further study probabilistically safe RL, and we are particularly interested in exploring how these ideas can be extended to image space planning and multi-goal settings in future work.

10 Theoretical Results

10.1 Definitions

Consider the system

$$x_{t+1} = f(x_t, u_t, w_t) \tag{10.1.1}$$

where the state $x_t \in \mathcal{X}$, the input $u_t \in \mathcal{U}$, and the disturbance $w_t \in \mathcal{W}$. We define the sampled safe set as:

$$\mathcal{SS}^{j} = \bigcup_{i \in M^{j}} \bigcup_{t \geq 0} \{x_t^{i}\} \tag{10.1.2}$$

where $M^{j}$ is the set of indices of all successful trajectories up to iteration $j$, as in LearningMPC. Recursively define the value function of $\pi^{j}$ (SAVED at iteration $j$) in closed-loop with (10.1.1) via a set of modified Bellman equations that additionally penalize states which leave the sampled safe set.

The solution to these modified Bellman equations computes the state-value function for the infinite task horizon controller. In practice, we train the value function using the standard TD-1 error corresponding to the standard Bellman equations; the penalty for states leaving the sampled safe set is approximated by the sampled safe set constraint described in the main paper. In future work, we will investigate using this information recursively to train the value function to capture this penalty as well. We assume that we can solve the system of equations defining $V^{j}$. Finally, we define:

(10.1.3)
(10.1.4)
(10.1.5)

Equation 10.1.4 is the value of equation 10.1.3, which is the MPC cost-to-go at time $t$ during iteration $j$. Equation 10.1.5 is the expected performance of the policy at iteration $j$.
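The following LaTeX block gives a hedged reconstruction of the forms that (10.1.3)-(10.1.5) plausibly take, in standard LMPC-style notation (planning horizon $H$, terminal value function $V^{j-1}$); it is intended only to convey the structure described in the surrounding text, not the paper's exact expressions.

```latex
% Hedged reconstruction of (10.1.3)-(10.1.5), not a verbatim reproduction.
\begin{align}
  & \min_{\pi_{t:t+H-1}} \; \mathbb{E}\left[\sum_{i=t}^{t+H-1}
      C\big(x_i^j, \pi_i(x_i^j)\big) + V^{j-1}\big(x_{t+H}^j\big)\right]
      \tag{10.1.3, MPC problem at time $t$ of iteration $j$} \\
  & J_t^{j}(x_t^j) = \text{value of (10.1.3)}
      \tag{10.1.4, MPC cost-to-go} \\
  & J^{j}(x_0^j) = \mathbb{E}\left[\sum_{t=0}^{\infty}
      C\big(x_t^j, \pi^{j}(x_t^j)\big)\right]
      \tag{10.1.5, expected performance of $\pi^j$}
\end{align}
```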

10.2 Assumptions

  1. Known dynamics with disturbances: The dynamics in (10.1.1) are known, with disturbance realizations drawn from a distribution whose support is bounded.

  2. Optimization over the set of causal feedback policies: We assume that at each timestep, we optimize over the set of causal feedback policies and not over individual controls. In SAVED, we optimize over the set of constant feedback policies. We set $\beta = 1$ (robust constraints) and exactly constrain terminal states to robustly fall within the sampled safe set; a hedged sketch of this robust problem is given after this list. Specifically, the optimization problem at time $t$ of iteration $j$ is

    (10.2.6)

    Its minimizer defines the policy $\pi^{j}$ (SAVED) at iteration $j$, where

    (10.2.7)

    is the control applied at state $x_t^{j}$. The MPC cost-to-go (10.1.4) is defined as the value of (10.2.6). We assume we can solve this problem at each timestep. We also assume that we can exactly compute $V^{j}$.

  3. Robust Control Invariant Goal Set: $\mathcal{G}$ is a robust control invariant set with respect to the dynamics and policy class defined above. This means that from any state in $\mathcal{G}$, there exists a policy in the class that keeps the system in $\mathcal{G}$ for all possible disturbance realizations.

  4. Robust Control Invariant Sampled Safe Set: We assume that $\mathcal{SS}^{j}$ is a robust control invariant set with respect to the dynamics and policy class for all $j$. This is a strong assumption, but it can be shown that in the limit of infinite samples from the control policy at each iteration, the sampled safe set is robust control invariant SampleBasedLMPC. In particular, the base case requires that the demonstrations provide enough information to construct $\mathcal{SS}^{0}$ such that it is robust control invariant. The amount of data needed to approximately meet this assumption is related to the stochasticity of the environment.

  5. Constant Start State: The start state is constant across iterations. This assumption is reasonable in the setting considered in this paper, since in all experiments, the start state distribution has low variance.

  6. Completion Cost Specification: $C(x_t, u_t) = \mathbb{1}_{\mathcal{G}^{C}}(x_t)$, i.e., a cost of 1 is incurred at every timestep at which the state is outside the goal set and 0 otherwise. This is true in the experiments we consider in this paper, where all costs are specified as above.
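The following is a hedged sketch of the robust problem described in assumption 2 ($\beta = 1$, terminal states robustly constrained to the sampled safe set), written in the same assumed notation; it is illustrative rather than a verbatim statement of (10.2.6) and (10.2.7).

```latex
% Hedged sketch of the robust MPC problem in assumption 2 (cf. (10.2.6)-(10.2.7)).
\begin{align}
  \pi_{t:t+H-1}^{*} \in \operatorname*{arg\,min}_{\pi_{t:t+H-1}} \;
    & \mathbb{E}\left[\sum_{i=t}^{t+H-1} C\big(x_i^j, \pi_i(x_i^j)\big)
      + V^{j-1}\big(x_{t+H}^j\big)\right] \\
  \text{s.t. } \;
    & x_{i+1}^j = f\big(x_i^j, \pi_i(x_i^j), w_i^j\big), \quad
      x_i^j \in \mathcal{X} \quad \forall\, w_i^j \in \mathcal{W}, \\
    & x_{t+H}^j \in \mathcal{SS}^{j-1} \quad \forall\, w_{t:t+H-1}^j \in \mathcal{W}^{H},
\end{align}
with the applied control given by $\pi^{j}(x_t^j) = \pi_t^{*}(x_t^j)$ at each timestep.
```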

10.3 Proofs of theoretical analysis

We will show that under the set of strong assumptions above, which are similar to those used in SampleBasedLMPC, the control strategy presented in this paper guarantees iterative improvement of expected performance. We proceed similarly to SampleBasedLMPC.

Lemma 1.

Recursive Feasibility: Consider system (10.1.1) in closed-loop with (10.2.7). Let the sampled safe set be defined as in (10.1.2) and let assumptions (1)-(6) hold. Then the controller given by (10.2.6) and (10.2.7) is feasible for all $t \geq 0$ and $j \geq 0$. Equivalently stated, $x_t^{j} \in \mathcal{X}$ for all $t \geq 0$ and $j \geq 0$.

Proof of Lemma 1:
We proceed by induction. By assumption 4, the controller is feasible at time $t = 0$ of iteration $j$. Now suppose it is feasible at some time $t \geq 0$ of iteration $j$. Conditioning on the random variable $x_{t+1}^{j}$:

(10.3.8)
(10.3.9)
(10.3.10)
(10.3.11)
(10.3.12)

Equation 10.3.8 follows from the definition in 10.1.4, and equation 10.3.10 follows from the definition of the policy. The inner expectation in equation (10.3.11) conditions on the random variable $x_{t+1}^{j}$, and the outer expectation integrates it out. The inequality in (10.3.11) follows from the fact that the solution from the previous timestep, appropriately shifted, is a feasible candidate in (10.3.11). Equation 10.3.12 follows from the definition in equation 10.1.4. By induction, the claim holds for all $t \geq 0$. Therefore, the controller is feasible at iteration $j$. ∎

Lemma 2.

Convergence in Probability: Consider the closed-loop system (10.1.1) and (10.2.7). Let the sampled safe set be defined as in (10.1.2) and let assumptions (1)-(6) hold. If the closed-loop system converges in probability to $\mathcal{G}$ at the initial iteration, then it converges in probability at all subsequent iterations. Precisely, at iteration $j$, $\lim_{t \to \infty} \mathbb{P}(x_t^{j} \notin \mathcal{G}) = 0$.

Proof of Lemma 2:
By Lemma 1, and assuming a cost satisfying assumption 6, we have:

(10.3.13)
(10.3.14)

Line 10.3.14 follows from rearranging 10.3.13 and applying assumption 6. Because $\mathcal{G}$ is robust control invariant by assumption 3, $\mathbb{P}(x_t^{j} \notin \mathcal{G})$ is a non-increasing sequence in $t$. Suppose its limit is some $p > 0$ (the limit must exist by the Monotone Convergence Theorem). Then $\mathbb{P}(x_t^{j} \notin \mathcal{G}) \geq p$ for all $t$, so by the Archimedean principle the RHS of 10.3.14 can be driven arbitrarily negative, which is impossible. By contradiction, $\lim_{t \to \infty} \mathbb{P}(x_t^{j} \notin \mathcal{G}) = 0$. ∎

Theorem 1.

Iterative Improvement: Consider system (10.1.1) in closed-loop with (10.2.7). Let the sampled safe set be defined as in (10.1.2) and let assumptions (1)-(6) hold. Then the expected cost-to-go (10.1.5) associated with the control policy (10.2.7) is non-increasing in iterations. More formally, $J^{j+1}(x_0) \leq J^{j}(x_0)$ for all $j \geq 0$.

Furthermore, $\{J^{j}(x_0)\}_{j \geq 0}$ is a convergent sequence.

Proof of Theorem 1:
Let

(10.3.15)
(10.3.16)
(10.3.17)
(10.3.18)
(10.3.19)
(10.3.20)

Equations 10.3.15 and 10.3.16 follow from repeated application of Lemma 1 (10.3.12). Equation 10.3.17 follows from iterated expectation, and equation 10.3.18 follows from the cost function assumption (6). Equation 10.3.19 follows again from the cost function assumption (a cost of at least 1 is incurred for not being at the goal at time $t$). Then, equation 10.3.20 follows from Lemma 2. Using the above inequality with the definition of $J^{j}(x_0)$,

(10.3.21)
(10.3.22)
(10.3.23)

Equation 10.3.21 follows from equation 10.3.20, and equation 10.3.22 follows from taking the minimum over all possible $H$-length sequences of policies in the policy class. Equation 10.3.23 follows from equation 10.3.20. By induction, this proves the theorem. If the limit is not dropped in 10.3.16, then we can roughly quantify a rate of improvement.

By the Monotone Convergence Theorem, this also implies convergence of $\{J^{j}(x_0)\}_{j \geq 0}$. ∎

11 Experimental details for SAVED and baselines

For all experiments, we run each algorithm 3 times to control for stochasticity in training and plot the mean iteration cost vs. time with error bars indicating the standard deviation over the 3 runs. Additionally, when reporting task success rate and constraint satisfaction rate, we show bar plots indicating the median value over the 3 runs with error bars between the lowest and highest value over the 3 runs. Experiments are run on an Nvidia DGX-1 and on a desktop running Ubuntu 16.04 with a 3.60 GHz Intel Core i7-6850K 12-core CPU and an NVIDIA GeForce GTX 1080. When reporting the iteration cost of SAVED and all baselines, any constraint-violating trajectory is assigned the maximum possible iteration cost $T$, where $T$ is the task horizon. Thus, any constraint violation is treated as a catastrophic failure. We plan to explore soft constraints as well in future work.
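A small sketch of this reporting convention is given below; the helper is illustrative and simply encodes the rule just described.

```python
def reported_iteration_cost(states, in_goal, violated_constraint, task_horizon):
    """Assign the reported iteration cost for one trajectory.

    Constraint-violating trajectories are treated as catastrophic failures and
    receive the maximum possible cost T (the task horizon); otherwise the cost
    is the number of timesteps spent outside the goal set.
    """
    if violated_constraint:
        return task_horizon
    return sum(0 if in_goal(x) else 1 for x in states)
```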

11.1 SAVED

11.1.1 Dynamics and value function

For all environments, dynamics models and value functions are each represented with a probabilistic ensemble of 5 neural networks, each with 3 layers of 500 hidden units and swish activations, as used in handful-of-trials. To plan over the dynamics, the TS trajectory sampling method from handful-of-trials is used. We use 5 and 30 training epochs for dynamics and value function training when initializing from demonstrations. When updating the models after each training iteration, 5 and 15 epochs are used for the dynamics and value functions, respectively. All models are trained using the Adam optimizer with learning rates of 0.00075 and 0.001 for the dynamics and value functions, respectively. Value function initialization is done by training the value function on the true cost-to-go estimates from demonstrations. However, when updated on-policy, the value function is trained using the temporal difference error (TD-1) on a buffer containing all prior states. Since we use a probabilistic ensemble of neural networks to represent dynamics models and value functions, we built off of the provided implementation PETS_github of PETS in handful-of-trials.
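For concreteness, the following PyTorch sketch shows one member of such a probabilistic ensemble (a 3-layer, 500-unit swish network with a Gaussian output head); the original code builds on the PETS implementation, so this block is only an illustrative approximation of the architecture described above.

```python
import torch
import torch.nn as nn

class ProbabilisticMember(nn.Module):
    """One member of a probabilistic ensemble: a 3-layer, 500-hidden-unit swish
    MLP that outputs the mean and log-variance of a Gaussian over the target.
    Training five such members on bootstrapped data and aggregating their
    predictions yields the ensemble; this is a sketch, not the paper's code."""

    def __init__(self, in_dim, out_dim, hidden=500):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.SiLU(),   # SiLU == swish activation
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
        )
        self.mean = nn.Linear(hidden, out_dim)
        self.logvar = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.logvar(h)

def gaussian_nll(mean, logvar, target):
    # Negative log-likelihood loss commonly used to train probabilistic ensemble members.
    inv_var = torch.exp(-logvar)
    return (((target - mean) ** 2) * inv_var + logvar).mean()
```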

11.1.2 Constrained exploration

Define states from which the system was successfully stabilized to the goal in the past as safe states. We train the density model $\rho_{\alpha}$ on a fixed history of safe states, where this history is tuned based on the experiment. We have found that simply training on all prior safe states works well in practice for all experiments in this work. We represent the density model using kernel density estimation with a tophat kernel. Instead of modifying the density threshold $\delta$ for each environment, we fix it so that any point with positive density is considered safe, and instead modify $\alpha$ (the kernel parameter/width). We find that this works well in practice and allows us to speed up execution by using a nearest neighbors algorithm implementation from scikit-learn. We are experimenting with locality sensitive hashing and implicit density estimation as in EX2, and have had some success with Gaussian kernels as well (at significant additional computational cost).

11.2 Behavior cloning

We represent the behavior cloning policy with a neural network with 3 layers of 200 hidden units each for navigation tasks and pick and place, and 2 layers of 20 hidden units each for the PR2 Reacher task. We train on the same demonstrations provided to SAVED and other baselines for 50 epochs.

11.3 PETSfD and PETSfD Dense

PETSfD and PETSfD Dense use the same network architectures and training procedure as SAVED and the same parameters for each task unless otherwise noted, but just omit the value function and density model for enforcing constrained exploration. PETSfD uses a planning horizon that is long enough to complete the task, while PETSfD Dense uses the same planning horizon as SAVED.

11.4 SACfD

We use the rlkit implementation SAC_github of soft actor critic with a batch size of 128 and tuned settings for the discount factor, soft target update coefficient, policy, Q function, and value function learning rates, and replay buffer size. All networks are two-layer multi-layer perceptrons (MLPs) with 300 hidden units. On the first training iteration, only transitions from demonstrations are used to train the critic. After this, SACfD is trained via rollouts from the actor network as usual. We use a similar reward function to that of SAVED, with a reward of -1 if the agent is not in the goal set and 0 if it is. Additionally, for environments with constraints, we impose a reward of -100 when constraints are violated to encourage constraint satisfaction. The choice of collision reward is ablated in Section 16.2. This reward is set to prioritize constraint satisfaction over task success, which is consistent with the selection of the chance constraint parameter $\beta$ in the model-based algorithms considered.
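A minimal sketch of this reward is given below; whether the constraint penalty replaces or is added to the per-step cost is an implementation detail assumed here, and the function signature is illustrative.

```python
def model_free_reward(state, in_goal, violated_constraint, collision_reward=-100.0):
    """Reward used for the model-free baselines as described above: 0 inside the
    goal set, -1 outside, and a large negative collision reward on constraint
    violation so that constraint satisfaction is prioritized over task success.
    (The penalty is assumed to replace the per-step cost in this sketch.)"""
    if violated_constraint:
        return collision_reward
    return 0.0 if in_goal(state) else -1.0
```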

11.5 OEFD

We use the implementation of OEFD provided by OEFD_github with tuned settings for the learning rate, polyak averaging coefficient, and L2 regularization coefficient. During training, a fixed fraction of actions is selected at random, and exploration noise is added to policy actions. All networks are three-layer MLPs with 256 hidden units. Hindsight experience replay uses the "future" goal replay and selection strategy HER, with a ratio parameter controlling how much HER data is used relative to data coming from normal experience replay in the replay buffer. We use a similar reward function to that of SAVED, with a reward of -1 if the agent is not in the goal set and 0 if it is. Additionally, for environments with constraints, we impose a reward of -100 when constraints are violated to encourage constraint satisfaction. The choice of collision reward is ablated in Section 16.2. This reward is set to prioritize constraint satisfaction over task success, which is consistent with the selection of the chance constraint parameter $\beta$ in the model-based algorithms considered.

12 Simulated experimental details

12.1 Navigation

We consider a 4-dimensional ($x$, $y$, $v_x$, $v_y$) navigation task in which a point mass navigates to a goal set, which is a unit ball centered at the origin. The agent can exert force in cardinal directions and experiences drag and Gaussian process noise in the dynamics; the same drag coefficient and noise level are used in all experiments in this domain (a minimal sketch of these dynamics follows the task list below). Demonstration trajectories are generated by guiding the robot along a suboptimal hand-tuned trajectory for the first half of the trajectory before running LQR on a quadratic approximation of the true cost. Gaussian noise is added to the demonstrator policy. We train the state density estimator on all prior successful trajectories for the navigation tasks. Additionally, we use a planning horizon of 15 for SAVED and 25, 30, 30, and 35 for PETSfD for tasks 1-4, respectively. The 4 experiments run on this environment are:

  1. Long navigation task to the origin. For experiments, 50 suboptimal demonstrations were used for training, with a tuned kernel width. The best iteration costs of SACfD and OEFD are measured over 10,000 iterations of training, averaged over the 3 runs.

  2. Navigation with a large obstacle blocking the direct path to the origin. This environment is difficult for approaches that use a Euclidean norm cost function due to local minima. For experiments, 50 suboptimal demonstrations were used for training, with a tuned kernel width and chance constraint parameter. The best iteration costs of SACfD and OEFD are measured over 10,000 iterations of training, averaged over the 3 runs.

  3. Navigation with a large obstacle near the direct path to the origin, with a small channel near the x-axis for passage. This environment is difficult for the algorithm to solve optimally since the iterative improvement of the paths taken by the agent is constrained. For experiments, 50 suboptimal demonstrations were used for training, with a tuned kernel width and chance constraint parameter. The best iteration costs of SACfD and OEFD are measured over 10,000 iterations of training, averaged over the 3 runs.

  4. Navigation where a large obstacle surrounds the goal set, with a small channel for entry. This environment is extremely difficult to solve without demonstrations. Suboptimal demonstrations, a tuned kernel width, and a tuned chance constraint parameter are used. The best iteration costs of SACfD and OEFD are measured over 10,000 iterations of training, averaged over the 3 runs.
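The sketch below illustrates the kind of point-mass dynamics described above (force inputs, linear drag, Gaussian process noise); the drag and noise magnitudes are placeholders rather than the values used in the paper.

```python
import numpy as np

def point_mass_step(state, action, drag=0.2, noise_std=0.05, dt=1.0):
    """One step of illustrative navigation dynamics: the state is (x, y, v_x, v_y),
    the action is a 2D force, and the velocity experiences linear drag plus
    Gaussian process noise."""
    pos = np.asarray(state[:2], dtype=float)
    vel = np.asarray(state[2:], dtype=float)
    vel = (1.0 - drag) * vel + dt * np.asarray(action, dtype=float) \
          + np.random.normal(0.0, noise_std, size=2)
    pos = pos + dt * vel
    return np.concatenate([pos, vel])
```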

12.2 PR2 reacher

We use 20 suboptimal demonstrations for training, with an average iteration cost of 94.6. The same kernel width is used for all experiments on this task. No other constraints are imposed, so the chance constraint parameter is not used. The state representation consists of joint positions, joint velocities, and the goal position. The goal set is specified by a small Euclidean ball in state space. The best iteration costs of SACfD and OEFD are measured over 10,000 iterations of training, averaged over the 3 runs. We train the state density estimator on all prior successful trajectories for the PR2 reacher task. Additionally, we use the same planning horizon for both SAVED and PETSfD.

12.3 Fetch pick and place

We use 100 demonstrations generated by a hand-tuned PID controller, with an average iteration cost of 34.4. For SAVED, the kernel width is tuned for this task. No other constraints are imposed, so the chance constraint parameter is not used. The state representation for the task consists of (end effector position relative to the object, object position relative to the goal, gripper jaw positions). We find the gripper closing motion difficult to learn with SAVED, so we automate this motion by closing the gripper automatically when the end effector is close enough to the object. We hypothesize that this difficulty is due to a combination of instability in the value function in this region and the difficulty of sampling bimodal (open and close) behavior using CEM. The best iteration costs of SACfD and OEFD are measured over 10,000 iterations of training, averaged over the 3 runs. We train the state density estimator on the safe states from the last 100 successful trajectories for the Fetch pick and place task. Additionally, we use a shorter planning horizon for SAVED than for PETSfD.

13 Physical experimental details

For all physical experiments, demonstrations are generated from a set of hand-coded trajectories with a small amount of Gaussian noise added to the controls. We use a long planning horizon for PETSfD, since we found this gave the best performance when no signal from the value function was provided. In all tasks, the goal set is represented by a 1 cm ball in $\mathbb{R}^3$. The dVRK is controlled via delta-position control, with a maximum action magnitude set to 1 cm during learning for safety. We train the state density estimator on all prior successful trajectories for the physical robot experiments.

13.1 Figure-8

The agent is constrained to remain within a 1 cm tube around a reference trajectory, with the chance constraint parameter set separately for SAVED and PETSfD. We use inefficient but successful and constraint-satisfying demonstrations for both segments. Additionally, we use a shorter planning horizon for SAVED than for PETSfD.

13.2 Knot tying

The agent is again constrained to remain within a 1 cm tube around a reference trajectory as in prior experiments, with the chance constraint parameter set separately for SAVED and PETSfD. The provided demonstrations are feasible but noisy and inefficient due to hand-engineering. Additionally, we use a shorter planning horizon for SAVED than for PETSfD.

14 Simulated experiments additional results

In Figure 6, we show the task success rate for the PR2 reacher and Fetch pick and place tasks for SAVED and baselines. We note that SAVED outperforms RL baselines (except SAVED (No SS) for the reacher task, most likely because the task is relatively simple so the sampled safe set constraint has little effect) in the first 100 and 250 iterations for the reacher and pick and place tasks respectively. Note that although behavior cloning has a higher success rate, it does not improve upon demonstration performance. However, although SAVED’s success rate is not as different from the baselines in these environments as those with constraints, this result shows that SAVED can be used effectively in a general purpose way, and still learns more efficiently than baselines in unconstrained environments as seen in the main paper.

Figure 6: SAVED has comparable success rate to Clone, PETSfD, and SAVED (No SS) on the reacher task in the first 100 iterations. For the pick and place task, SAVED outperforms all baselines in the first 250 iterations except for Clone, which does not improve upon demonstration performance.

15 Physical experiments additional results

15.1 Figure-8

For the other segment of the Figure-8, SAVED again quickly learns to smooth out demo trajectories while satisfying constraints, achieving a high success rate, while baselines violate constraints on nearly every iteration and never complete the task, as shown in Figure 7.

Figure 7: Figure-8: Training Performance: After a modest number of training iterations, the agent consistently completes the task and converges to an iteration cost well below that of the demos. Neither baseline ever completed the task during training; Trajectories: Demo trajectories are constraint-satisfying, but noisy and inefficient. SAVED quickly learns to speed up the demos with only occasional constraint violations and successfully stabilizes in the goal set. Note that due to the difficulty of the tube constraint, it is hard to avoid occasional constraint violations during learning, which are reflected by spikes in the iteration cost.

In Figure 8, we show the full trajectory for the figure-8 task when both segments are combined at execution-time. This is done by rolling out the policy for the first segment, and then starting the policy for the second segment as soon as the policy for the first segment reaches the goal set. We see that even given uncertainty in the dynamics and the end state of the first policy (it could end anywhere in a 1 cm ball around the goal position), SAVED is able to smoothly navigate these issues and interpolate between the two segments at execution-time to successfully stabilize at the goal at the end of the second segment. Each segment of the trajectory is shown in a different color for clarity. We suspect that SAVED's ability to handle this transition reflects good generalization of the learned dynamics and value functions.

Figure 8: Full figure-8 trajectory: We show the full figure-8 trajectory, obtained by evaluating learned policies for the first and second figure-8 segments in succession. Even when segmenting the task, the agent can smoothly interpolate between the segments, successfully navigating the uncertainty in the transition at execution-time and stabilizing in the goal set.

15.2 Knot tying

In Figure 9, we show the full trajectory for both arms for the surgical knot tying task. We see that the learned policy for arm 1 smoothly navigates around arm 2, after which arm 1 is manually moved down along with arm 2, which grasps the thread and pulls it up through the resulting loop in the thread, completing the knot.

Figure 9: Knot-Tying full trajectories: (a) Arm 1 trajectory: We see that the learned part of the arm 1 trajectory is significantly smoothed compared to the demonstrations at execution-time as well, consistent with the training results. Then, in the hand-coded portion of the trajectory, arm 1 is simply moved down towards the phantom along with arm 2, which grasps the thread and pulls it up; (b) Arm 2 trajectory: This trajectory is hand-coded to move down towards the phantom after arm 1 has fully wrapped the thread around it, grasp the thread, and pull it up.

16 Ablations

16.1 SAVED

Figure 10: SAVED Ablations on Navigation Task 2: Number of Demonstrations: SAVED is able to complete the task with just 20 demonstrations, but more demonstrations result in increased stability during learning; Kernel width $\alpha$: $\alpha$ must be chosen high enough that SAVED is able to explore enough to find the goal set, but not so high that SAVED starts to explore unsafe regions of the state space; Chance constraint parameter $\beta$: Decreasing $\beta$ results in many more collisions with the obstacle. Ignoring the obstacle entirely results in the majority of trials ending in collision or failure.

We investigate the impact of the kernel width $\alpha$, the chance constraint parameter $\beta$, and the number of demonstrator trajectories used on navigation task 2. Results are shown in Figure 10. We see that SAVED is able to complete the task well even with just 20 demonstrations, but is more consistent with more demonstrations. We also notice that SAVED is relatively sensitive to the setting of the kernel width $\alpha$. When $\alpha$ is set too low, SAVED is overly conservative and can barely explore at all. This makes it difficult to discover regions near the goal set early on and leads to significant model mismatch, resulting in poor task performance. Setting $\alpha$ too low can also make it difficult for SAVED to plan to regions with high density further along the task, resulting in SAVED failing to make progress. On the other extreme, making $\alpha$ too large causes significant initial instability as the agent explores unsafe regions of the state space. Thus, $\alpha$ must be chosen such that SAVED is able to sufficiently explore, but does not explore so aggressively that it starts visiting states from which it has low confidence in being able to reach the goal set. Reducing $\beta$ allows the agent to take more risks, but this results in many more collisions. With $\beta = 0$, most rollouts end in collision or failure, as expected. In the physical experiments, we find that allowing the agent to take some risk during exploration is useful due to the difficult tube constraints on the state space.

16.2 Model-Free

Figure 11: A high cost for constraint violations results in conservative behavior that learns to avoid the obstacle, but also makes it take longer to learn to perform the task. Setting the cost low results in riskier behavior that more often achieves task success.

To convey information about constraints to model-free methods, we provide an additional cost for constraint violations. We ablate this parameter for navigation task 2 in Figure 11. We find that a high cost for constraint violations results in conservative behavior that learns to avoid the obstacle, but also takes much longer to achieve task success. Setting the cost low results in riskier behavior that succeeds more often. This trade-off is also present for model-based methods, as seen in the prior ablations.