To use RL in the real world, algorithms need to be efficient, easy to use, and safe, motivating methods that are reliable even under significant uncertainty. Deep model-based reinforcement learning (deep MBRL) is an area of current interest because of its sample efficiency advantages over model-free methods in a variety of tasks, such as assembly, locomotion, and manipulation NNDynamics; handful-of-trials; MPCRacing; OneShotMBRL; PILCO; DeepVisuomotor; Lenz2015DeepMPCLD. However, past work in deep MBRL typically requires dense hand-engineered cost functions, which are hard to design and can lead to unintended behavior amodei2016concrete. It would be easier to simply specify task completion in the cost function, but this setting is challenging due to the lack of expressive supervision. This motivates using demonstrations, which allow the user to roughly specify desired behavior without extensive engineering effort. Furthermore, in many robotic tasks, particularly in domains such as surgery, safe exploration is critical to ensure that the robot does not damage itself or cause harm to its surroundings. To enable this, deep MBRL algorithms also need the ability to satisfy complex constraints. We develop a method to efficiently use deep MBRL in dynamically uncertain environments with both sparse costs and complex constraints. We address the difficulty of hand-engineering cost functions by using a small number of suboptimal demonstrations to provide a signal about delayed costs in sparse cost environments, which is updated based on agent experience. Then, to enable stable policy improvement and constraint satisfaction, we impose two probabilistic constraints: (1) we constrain exploration by ensuring that the agent can plan back to regions in which it is confident in task completion with high probability, and (2) we leverage uncertainty estimates in the learned dynamics to implement chance constraints nemirovski2012safe during learning. The probabilistic implementation of constraints makes this approach broadly applicable, since it can handle settings with significant dynamical uncertainty, where enforcing constraints exactly is difficult. We introduce a new algorithm motivated by deep model predictive control (MPC) and robust control, safety augmented value estimation from demonstrations (SAVED), which enables efficient learning for sparse cost tasks given a small number of suboptimal demonstrations while satisfying provided constraints. We specifically consider tasks with a tight start state distribution and a fixed, known goal set, and only use supervision that indicates task completion.
We then show that under mild assumptions and given known stochastic nonlinear dynamics, SAVED has guaranteed iterative improvement in expected performance, extending prior analysis of similar methods for linear dynamics with additive noise StochasticMPC; SampleBasedLMPC. The contributions of this work are (1) a novel method for constrained exploration driven by confidence in task completion, (2) a technique for leveraging model uncertainty to probabilistically enforce complex constraints, enabling obstacle avoidance or optimizing demonstration trajectories while maintaining desired properties, (3) analysis of SAVED which provides iterative improvement guarantees in expected performance for known stochastic nonlinear systems, and (4) experimental evaluation against state-of-the-art model-free and model-based RL baselines on different environments, including simulated experiments and challenging physical maneuvers on the da Vinci surgical robot. Results suggest that SAVED achieves superior sample efficiency, success rate, and constraint satisfaction rate across all domains considered and can be applied efficiently and safely for learning directly on a real robot.
2 Related work
Model-based reinforcement learning: There is significant interest in deep MBRL PILCO; GPS; SOLAR; Lenz2015DeepMPCLD; OneShotMBRL; Info-Theory-MPC; Plan-Online-Learn-Offline; handful-of-trials; NNDynamics due to the improvements in sample efficiency when planning over learned dynamics compared to model-free methods for continuous control SAC; TD3; DDPG; TRPO; PPO. However, most prior deep MBRL algorithms use hand-engineered dense cost functions to guide exploration, which we avoid by using demonstrations to provide signal about delayed costs. Additionally, in contrast to prior work, we enable probabilistic enforcement of complex constraints during learning, allowing constrained exploration from successful demonstrations while avoiding unsafe regions.
Reinforcement learning from demonstrations: Demonstrations have been leveraged to accelerate learning for a variety of model-free RL algorithms, such as Deep Q-Learning apexdqfd; LFD-DQN; DQN and DDPG DDPG; LFD-DDPG; overcoming-exp. However, these techniques are applied to model-free RL algorithms and may be inefficient compared to model-based methods. Furthermore, they cannot anticipate constraint violations since they use a reactive policy SampleComplexityLQR; GapMBRLMFRL. OneShotMBRL use a neural network prior from previous tasks and online adaptation to a new task using iLQR and a dense cost, distinct from the task completion based costs we consider. Finally, TREX use inverse reinforcement learning to significantly outperform suboptimal demonstrations, but do not explicitly optimize for constraint satisfaction or consistent task completion during learning.
Iterative learning control: In iterative learning control (ILC), the controller tracks a predefined reference trajectory and data from each iteration is used to improve closed-loop performance bristow2006survey. ILC has seen significant success in tasks such as quadrotor and vehicle control purwin2009performing; kapania2015design, and can have provable robustness to external disturbances and model mismatch lee2007iterative; tomizuka2008dealing; lin2015robust; chin2004two. StochasticMPC; SampleBasedLMPC; LearningMPC provide a reference-free algorithm to iteratively improve the performance of an initial trajectory by using a safe set and terminal cost to ensure recursive feasibility, stability, and local optimality given a known, deterministic nonlinear system or stochastic linear dynamics with additive disturbances under mild assumptions. We extend this analysis and show that, given task completion based costs, similar guarantees hold for stochastic nonlinear systems with bounded disturbances satisfying similar assumptions.
Safe reinforcement learning: There has been significant interest in safe RL SafeRLSurvey, typically focusing on exploration while satisfying a set of explicit constraints SafeRLWithSupervision; SafeExpInMDPs; CPO, satisfying specific stability criteria StabilitySafeMBRL, or formulating planning via a risk sensitive Markov decision process RiskAversion; RobustRiskSensitiveMDP. Distinct from prior work in safe RL and control, SAVED can be successfully applied in settings with both highly uncertain dynamics and sparse costs while probabilistically enforcing constraint satisfaction and task completion during learning.
3 Assumptions and preliminaries
In this work, we consider stochastic, unknown dynamical systems with a cost function that only identifies task completion. We assume that (1) tasks are iterative in nature, and thus have a fixed, low-variance start state distribution and a fixed, known goal set. This is common in a variety of repetitive tasks, such as assembly, surgical knot tying, and suturing. Additionally, we assume that (2) a modest set of suboptimal but successful demonstration trajectories is available, for example from imprecise human teleoperation or from a hand-tuned PID controller. This enables rough specification of desired behavior without having to design a dense cost function. Here we outline the framework for MBRL using a standard Markov decision process formulation. A finite-horizon Markov decision process (MDP) is a tuple $(\mathcal{X}, \mathcal{U}, P, T, C)$, where $\mathcal{X}$ is the feasible state space and $\mathcal{U}$ is the action space. The stochastic dynamics model $P$ maps a state and action to a probability distribution over states, $T$ is the task horizon, and $C$ is the cost function. A stochastic control policy $\pi$ maps an input state to a distribution over $\mathcal{U}$. We assume that the cost function only identifies task completion: $C(x, u) = \mathbb{1}_{\mathcal{G}^C}(x)$, where $\mathcal{G} \subset \mathcal{X}$ defines a goal set in the state space. We define task success by convergence to $\mathcal{G}$ at the end of the task horizon without violating constraints.
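As a concrete illustration, a task-completion cost of this form can be sketched in a few lines; the 2D unit-ball goal set and its radius are hypothetical stand-ins for the paper's fixed, known goal set:

```python
import math

GOAL_CENTER = (0.0, 0.0)  # hypothetical goal set: a unit ball at the origin
GOAL_RADIUS = 1.0

def in_goal_set(state):
    """Membership test for the goal set G."""
    return math.dist(state, GOAL_CENTER) <= GOAL_RADIUS

def cost(state, action):
    """Task-completion cost: 1 outside G, 0 inside, independent of the action."""
    return 0.0 if in_goal_set(state) else 1.0
```

Under this cost, the total cost of a trajectory is exactly the number of timesteps spent outside the goal set, which matches the notion of task success used throughout the paper.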
4 Safety augmented value estimation from demonstrations (SAVED)
This section describes how SAVED uses a set of suboptimal demonstrations to constrain exploration while satisfying user-specified state space constraints. First, we discuss how SAVED learns system dynamics and a value function to guide learning in sparse cost environments. Then, we motivate and discuss the method used to enforce constraints under uncertainty, both to ensure task completion during learning and to satisfy user-specified state space constraints. SAVED optimizes agent trajectories by using MPC to optimize costs over a sequence of actions at each state. However, since MPC computes the current control by solving a finite-horizon approximation to the infinite-horizon control problem, the agent may take shortsighted control actions which make it impossible to complete the task, such as planning the trajectory of a race car over a short horizon without considering an upcoming curve borrelli2017predictive. Thus, to guide exploration in temporally-extended tasks, we solve the problem in equation 4.0.1a subject to 4.0.1b. This corresponds to the standard objective in MPC with an appended value function $V^\pi$, which provides a terminal cost estimate for the current policy at the end of the planning horizon. While prior work in deep MBRL NNDynamics; handful-of-trials has primarily focused only on planning over learned dynamics, we introduce a learned value function, initialized from demonstrations to provide initial signal, to guide exploration even in sparse cost settings. The learned dynamics model and value function are each represented with a probabilistic ensemble of neural networks, as is used to represent system dynamics in handful-of-trials. These functions are initialized from demonstrations and updated on each training iteration, and collectively define the current policy $\pi$. See the supplementary material for further details on how these networks are trained.
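This planning objective can be sketched minimally as follows, with hypothetical `step`, `cost`, and `value_fn` callables standing in for the learned dynamics, the task-completion cost, and the learned value function:

```python
def score_action_sequence(x0, actions, step, cost, value_fn):
    """Score one sampled rollout of a candidate action sequence:
    sum of stage costs over the planning horizon plus a terminal
    value estimate V(x_H) at the end of the horizon."""
    x, total = x0, 0.0
    for u in actions:
        total += cost(x, u)
        x = step(x, u)  # one sample from the (stochastic) learned dynamics
    return total + value_fn(x), x  # terminal state kept for constraint checks
```

In SAVED this score would be averaged over several stochastic rollouts of the learned dynamics ensemble before ranking candidate action sequences.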
The core novelties of SAVED are the additional probabilistic constraints in 4.0.1c, which encourage task completion driven exploration and enforce user-specified chance constraints. First, a non-parametric density model $\rho$ enforces constrained exploration by requiring the state at the end of the planning horizon to fall in a region with high probability of task completion. This enforces cost-driven constrained exploration, which enables reliable performance even in sparse cost domains. Second, we require all elements of the planned trajectory to fall in the feasible region $\mathcal{X}$ with high probability, which enables probabilistic enforcement of state space constraints. In Section 5.1, we discuss the method used for task completion driven exploration, and in Section 5.2, we discuss how probabilistic constraints are enforced during learning.
We summarize SAVED in Algorithm 1. At each iteration, we sample a start state and then controls are generated by solving equation 4.0.1 using the cross-entropy method (CEM) CEM at each timestep. Transitions are collected in a replay buffer to update the dynamics, value function, and density model.
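The CEM planning step can be sketched as follows; this simplified version fits an axis-aligned Gaussian over flattened action sequences and omits the truncation and constraint filtering that SAVED layers on top:

```python
import random
import statistics

def cem_plan(score, dim, horizon, iters=5, pop=200, elite_frac=0.1):
    """Cross-entropy method over flattened action sequences: sample a
    population from a Gaussian, keep the lowest-cost elites, refit, repeat."""
    mu = [0.0] * (horizon * dim)
    sigma = [1.0] * (horizon * dim)
    n_elite = max(2, int(pop * elite_frac))
    for _ in range(iters):
        samples = [[random.gauss(m, s) for m, s in zip(mu, sigma)]
                   for _ in range(pop)]
        samples.sort(key=score)            # lower score = better
        elite = samples[:n_elite]
        mu = [statistics.mean(col) for col in zip(*elite)]
        sigma = [statistics.stdev(col) + 1e-6 for col in zip(*elite)]
    return mu  # mean of the final distribution; the first action is executed
```

In MPC fashion, only the first action of the returned sequence is executed before replanning from the next observed state.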
5 Constrained exploration
5.1 Task completion driven exploration
Recent MPC literature LearningMPC motivates constraining exploration to regions in which the agent is confident in task completion, which gives rise to desirable theoretical properties. For each iteration $j$, we define the sampled safe set $\mathcal{SS}^j$ as the set of states from all successful trajectories before iteration $j$, as in LearningMPC. Thus, $\mathcal{SS}^j$ contains the states from all iterations before $j$ from which the agent controlled the system to $\mathcal{G}$, and it is initialized from demonstrations. Under mild assumptions, if states at the end of the MPC planning horizon are constrained to fall in $\mathcal{SS}^j$, iterative improvement, controller feasibility, and convergence are guaranteed given known linear dynamics subject to additive disturbances StochasticMPC; SampleBasedLMPC. In Section 6, we extend these results to show that, under the same assumptions, we can obtain similar guarantees in expectation for stochastic nonlinear systems if task completion based costs are used. The way we constrain exploration in SAVED builds on this prior work, but we note that unlike LearningMPC; StochasticMPC; SampleBasedLMPC, SAVED is designed for settings in which dynamics are completely unknown. We develop a method to approximately implement the above constraint with a continuous approximation to $\mathcal{SS}^j$ using non-parametric density estimation, allowing SAVED to scale to more complex settings than prior work using similar cost-driven exploration techniques LearningMPC; StochasticMPC; SampleBasedLMPC. Since $\mathcal{SS}^j$ is a discrete set, we introduce a continuous approximation by fitting a density model $\rho$ to $\mathcal{SS}^j$ and constraining the terminal state of the planned trajectory to have sufficient density under $\rho$ (constraint 4.0.1c). Since the tasks considered in this work have sufficiently low (< 17) state space dimension, we find kernel density estimation provides a reasonable approximation. We implement a tophat kernel density model using a nearest neighbors classifier with a tuned kernel width, which is fixed across all experiments. Thus, all states within this Euclidean distance from the closest state in $\mathcal{SS}^j$ are considered safe under $\rho$ and represent states in which the agent has high confidence in task completion. As the policy improves, it may forget how to complete the task from very old states in $\mathcal{SS}^j$, so very old states are evicted from $\mathcal{SS}^j$ to reflect the current policy when fitting $\rho$. We discuss how these constraints are implemented in Section 5.2, with further details in the supplementary material. In future work, we will investigate implicit density estimation techniques such as EX2; CountBased-Explore; CountBasedIntrinsic to scale to high-dimensional settings.
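The tophat density surrogate described above can be sketched as a nearest-neighbor check; the stored safe-set states and kernel width are hypothetical inputs:

```python
import math

class TophatDensity:
    """Tophat kernel density surrogate for the sampled safe set: a query
    state has density 1 iff it lies within `width` (Euclidean distance)
    of its nearest neighbor among the stored safe states, else 0."""
    def __init__(self, safe_states, width):
        self.safe_states = list(safe_states)
        self.width = width

    def density(self, x):
        return 1.0 if any(math.dist(x, s) <= self.width
                          for s in self.safe_states) else 0.0

    def is_safe(self, x):
        """Constraint check applied to terminal states of planned rollouts."""
        return self.density(x) > 0.0
```

Evicting stale states then amounts to rebuilding the model from a window of recent successful trajectories before each planning iteration.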
5.2 Probabilistic constraint enforcement
SAVED leverages uncertainty estimates in the learned dynamics to enforce probabilistic constraints on its trajectories. This allows SAVED to handle complex, user-specified state space constraints to avoid obstacles or maintain certain properties of demonstrations without a user-shaped or time-varying cost function. We do this by sampling sequences of actions from a truncated Gaussian distribution that is iteratively updated using the cross-entropy method (CEM) handful-of-trials. Each action sequence is simulated multiple times over the stochastic dynamics model as in handful-of-trials, and the average return of the simulations is used to score the sequence. However, unlike handful-of-trials, we implement chance constraints by discarding action sequences if more than a user-specified fraction of the simulations violate constraints (constraint 4.0.1c). This is illustrated in Figure 0(b). The task completion constraint (Section 5.1) is implemented similarly, with action sequences discarded if any of the simulated rollouts do not terminate in a state with sufficient density under $\rho$.
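The rejection rule can be sketched as a simple filter over the sampled rollouts of one candidate action sequence; the `state_ok` predicate and the tolerance are hypothetical stand-ins for the user-specified constraints:

```python
def satisfies_chance_constraint(rollouts, state_ok, tol):
    """Accept an action sequence only if at most a `tol` fraction of its
    sampled rollouts contain any state violating the constraints."""
    violating = sum(1 for traj in rollouts
                    if not all(state_ok(x) for x in traj))
    return violating / len(rollouts) <= tol
```

Setting the tolerance to zero recovers a robust (worst-case over sampled rollouts) version of the constraint, which is the setting used in the theoretical analysis below.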
6 Theoretical analysis of SAVED
In prior work, it has been shown that under mild assumptions, the sampled safe set and the associated value function can be used to design a controller which guarantees constraint satisfaction, convergence to $\mathcal{G}$, and iterative improvement SampleBasedLMPC. This analysis specifically assumes known linear dynamics with bounded disturbances, that the limit of infinite data is used for policy evaluation at each iteration, and that the MPC objective can be solved exactly StochasticMPC; SampleBasedLMPC. We extend this analysis by showing that, under the same assumptions, if task completion based costs (as defined in Section 3) are used, then the same guarantees hold in expectation for SAVED for stochastic nonlinear systems. Define the closed-loop system with the policy defined by SAVED at iteration $j$ as $x^j_{t+1} = f(x^j_t, \pi^j(x^j_t), w^j_t)$ for bounded disturbances $w^j_t$, and the expected cost-to-go as $J^j(x) = \mathbb{E}\left[\sum_{t=0}^{\infty} C(x^j_t, u^j_t) \mid x^j_0 = x\right]$. In the supplementary material, we formally prove the following:
Recursive Feasibility: The trajectory generated by the closed-loop system at iteration $j$ satisfies the problem constraints (see Lemma 1).
Convergence in Probability: If the closed-loop system converges in probability to $\mathcal{G}$ at the initial iteration, then it converges in probability to $\mathcal{G}$ at all subsequent iterations (see Lemma 2).
Iterative Improvement: The expected cost-to-go for SAVED is non-increasing across iterations: $J^{j+1}(x_0) \le J^j(x_0)$ (see Theorem 1).
7 Experiments
We evaluate SAVED on simulated continuous control benchmarks and on real robotic tasks with the da Vinci Research Kit (dVRK) kazanzides-chen-etal-icra-2014 against state-of-the-art deep RL algorithms, and demonstrate that SAVED outperforms all baselines in terms of sample efficiency, success rate, and constraint satisfaction during learning. All tasks use the task completion cost defined in Section 3, under which the total cost of a trajectory is the time spent outside the goal set. All algorithms are given the same demonstrations, are evaluated on iteration cost, success rate, and constraint satisfaction rate (if applicable), and are run 3 times to control for stochasticity in training. Tasks are only considered successfully completed if the agent reaches $\mathcal{G}$ and stays in $\mathcal{G}$ until the end of the episode without ever violating constraints. For all simulated tasks, we give model-free methods 10,000 iterations since they take much longer to converge but sometimes have better asymptotic performance. See the supplementary material for videos and for ablations with respect to hyperparameter choices and demonstration quantity. We also include further details on baselines, network architectures, hyperparameters, and training procedures.
7.1 Baselines
We consider the following set of model-free and model-based baseline algorithms. To enforce constraints for model-based baselines, we augment the algorithms with the simulation-based method described in Section 5.2. Because model-free baselines have no such mechanism to readily enforce constraints, we instead apply a large negative reward when constraints are violated. See the supplementary material for an ablation of the reward function used for model-free baselines.
Behavior cloning (Clone): Supervised learning on demonstrator trajectories.
PETS from demonstrations (PETSfD): Probabilistic ensemble trajectory sampling (PETS) from Chua et al. handful-of-trials with the dynamics model initialized with demo trajectories and a planning horizon long enough to plan to the goal (judged by the best performance of SAVED).
PETSfD Dense: PETSfD with access to a hand-engineered dense cost.
Soft actor critic from demonstrations (SACfD): Model-free algorithm, Soft Actor-Critic (SAC) SAC, where only demo transitions are used for training on the first iteration.
Overcoming exploration in reinforcement learning from demonstrations (OEFD): Model-free algorithm from overcoming-exp which uses DDPG DDPG with Hindsight Experience Replay (HER) HER and a behavior cloning loss to accelerate RL with demonstrations.
SAVED (No SS): SAVED without the sampled safe set constraint described in Section 5.1.
7.2 Simulated navigation
To evaluate whether SAVED can efficiently and safely learn temporally extended tasks with complex constraints, we consider a set of tasks in which a point mass navigates to a unit ball centered at the origin. The agent can exert force in cardinal directions and experiences drag and Gaussian process noise in the dynamics. For each task, we supply 50-100 suboptimal demonstrations generated by running LQR along a hand-tuned safe trajectory. SAVED has a higher success rate than all other RL baselines using sparse costs, even including model-free baselines over the first 10,000 iterations, while never violating constraints across all navigation tasks. Only Clone and PETSfD Dense ever achieve a higher success rate, but Clone does not improve upon demonstration performance (Figure 2) and PETSfD Dense has additional information about the task. Furthermore, SAVED learns significantly more efficiently than all RL baselines on all navigation tasks except for tasks 1 and 3, in which PETSfD Dense with a Euclidean norm cost function finds a better solution. While SAVED (No SS) can complete the tasks, it has a much lower success rate than SAVED, especially in environments with obstacles, as expected, demonstrating the importance of the sampled safe set constraint. Note that SACfD, OEFD, and PETSfD make essentially no progress in the first 100 iterations and never complete any of the tasks in this time, although they mostly satisfy constraints.
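A rough sketch of such point-mass dynamics follows; the drag coefficient, noise scale, and timestep are hypothetical illustration values, not the paper's exact task parameters:

```python
import random

def point_mass_step(pos, vel, force, drag=0.2, noise=0.05, dt=0.1):
    """One step of 2D point-mass dynamics: velocity is damped by linear
    drag, driven by the applied force, and perturbed by Gaussian noise;
    position integrates the new velocity."""
    new_vel = tuple((1.0 - drag) * v + dt * f + random.gauss(0.0, noise)
                    for v, f in zip(vel, force))
    new_pos = tuple(p + dt * v for p, v in zip(pos, new_vel))
    return new_pos, new_vel
```

Rolling this step forward under an LQR or hand-tuned controller is one way to generate the kind of suboptimal safe demonstrations described above.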
7.3 Simulated robot experiments
To evaluate whether SAVED outperforms baselines even on standard unconstrained environments, we consider sparse versions of two common simulated robot tasks: the PR2 Reacher environment used in handful-of-trials with a fixed goal, and a pick and place task with a simulated, position-controlled Fetch robot wise2016fetch; gymrobotics. The reacher task involves guiding the end-effector of a simulated PR2 robot to a small ball in the workspace. The pick and place task involves picking up a block from a fixed location on a table and also guiding it to a small ball in the workspace. The task is simplified by automating the gripper motion, which is difficult for SAVED to learn due to the bimodality of gripper controls, which is hard to capture with the unimodal truncated Gaussian distribution used during CEM sampling. SAVED still learns faster than all baselines on both tasks (Figure 3) and exhibits significantly more stable learning in the first 100 and 250 iterations for the reacher and pick and place tasks respectively.
8 Physical robot experiments
We evaluate the ability of SAVED to learn temporally-extended trajectory optimization tasks with nonconvex state space constraints on the da Vinci Research Kit (dVRK) kazanzides-chen-etal-icra-2014. The dVRK is cable-driven and has relatively imprecise controls, motivating model learning cable-driven. Furthermore, safety is paramount due to the cost and delicate structure of the arms. The goal here is to speed up demo trajectories by constraining learned trajectories to fall within a tight, 1 cm tube of the demos, making this difficult for many RL algorithms. Additionally, robot experiments are very time consuming, so training RL algorithms on limited physical hardware is difficult without sample efficient algorithms.
Here, the agent tracks a figure-8 in state space. However, because there is an intersection in the middle of the desired trajectory, SAVED can find a shortcut to the goal state. Thus, the trajectory is divided into non-intersecting segments and SAVED separately optimizes each one. At execution-time, the segments are stitched together, and we find that SAVED is robust enough to handle the uncertainty at the transition point. We hypothesize that this is because the dynamics and value function exhibit good generalization. SAVED quickly learns to smooth out demo trajectories while satisfying constraints with a high success rate, while baselines violate constraints on nearly every iteration and never complete the task, as shown in Figure 4. Note that PETSfD almost always violates constraints, even though it enforces constraints in the same way as SAVED. We hypothesize that since we need to give PETSfD a long planning horizon to make it possible to complete the task (since it has no value function), it is unlikely that a constraint-satisfying trajectory is sampled with CEM. See the supplementary material for the other segment and the full combined trajectory.
Surgical knot tying:
SAVED is used to optimize demonstrations of a surgical knot tying task on the dVRK, using the same multilateral motion as in knot-tying-surgery. Demonstrations are hand-engineered for the task, and then policies are optimized for one arm (arm 1), while a hand-engineered policy is used for the other arm (arm 2). We do this because while arm 1 wraps the thread around arm 2, arm 2 simply moves down, grasps the other end of the thread, and pulls it out of the phantom, as shown in Figure 5. Thus, we only expect significant performance gain by optimizing the policy for the portion of the arm 1 trajectory which involves wrapping the thread around arm 2. We only model the motion of the end-effectors in 3D space. SAVED quickly learns to smooth out demo trajectories with a high success rate during training (Figure 5), while baselines are unable to make sufficient progress in this time. PETSfD rarely violates constraints, but also almost never succeeds, while SACfD almost always violates constraints and never completes the task. Training SAVED directly on the real robot takes only about an hour, making it practical to train on a real robot for tasks where data collection is expensive. At execution-time, we find that SAVED is very consistent, successfully tying a knot in repeated trials with average and maximum iteration costs significantly lower than those of the demonstrations for the arm 1 learned policy. See the supplementary material for trajectory plots of the full knot tying trajectory.
9 Discussion and future work
This work presents SAVED, a model-based RL algorithm that can efficiently learn a variety of robotic control tasks in the presence of dynamical uncertainties, sparse cost feedback, and complex constraints. SAVED uses a small set of suboptimal demonstrations and a learned state-value function to guide learning with a novel method to constrain exploration to regions in which the agent is confident in task completion. We present iterative improvement guarantees in expectation for SAVED for stochastic nonlinear systems, extending prior work providing similar guarantees for stochastic linear systems. We then demonstrate that SAVED can handle complex state space constraints under uncertainty. We empirically evaluate SAVED on 6 simulated benchmarks and 2 complex maneuvers on a real surgical robot. Results suggest that SAVED is more sample efficient and has higher success and constraint satisfaction rates than all RL baselines and can be efficiently and safely trained on a real robot. We believe this work opens up opportunities to further study probabilistically safe RL, and we are particularly interested in exploring how these ideas can be extended to image space planning and multi-goal settings in future work.
10 Theoretical Results
Consider the system
$x_{t+1} = f(x_t, u_t, w_t)$ (10.1.1)
where the state $x_t \in \mathcal{X}$, the input $u_t \in \mathcal{U}$, and the disturbance $w_t \in \mathcal{W}$. We define the sampled safe set as:
$\mathcal{SS}^j = \bigcup_{i \in M^j} \bigcup_{t=0}^{T} \{x^i_t\}$ (10.1.2)
where $M^j$ is the set of indices of all successful trajectories up to iteration $j$, as in LearningMPC. Recursively define the value function of $\pi^j$ (SAVED at iteration $j$) in closed-loop with (10.1.1) as:
$V^j(x) = \begin{cases} 0 & x \in \mathcal{G} \\ \mathbb{E}_{w}\left[C(x, \pi^j(x)) + V^j(f(x, \pi^j(x), w))\right] & x \in \mathcal{SS}^j \setminus \mathcal{G} \\ \infty & x \notin \mathcal{SS}^j \end{cases}$ (10.1.4)
The solution to these modified Bellman equations computes the state-value function for the infinite task horizon controller. In practice, we train the value function using the standard TD-1 error corresponding to the standard Bellman equations. The penalty for states leaving the sampled safe set is approximated by the sampled safe set constraint described in the main paper. In future work, we will investigate using this information recursively to train the value function to capture this as well. We assume that we can solve the system of equations defining $V^j$. Finally, we define the expected cost-to-go associated with the policy at iteration $j$:
$J^j(x_0) = \mathbb{E}\left[\sum_{t=0}^{\infty} C(x^j_t, u^j_t)\right]$ (10.1.5)
Known dynamics with disturbances: The dynamics in (10.1.1) are known, with disturbance realizations $w_t$ drawn from a distribution whose support is bounded.
Optimization over the set of causal feedback policies: We assume that at each timestep, we optimize over the set of causal feedback policies and not over individual controls. In SAVED, we optimize over the set of constant feedback policies. We set the chance constraint tolerance to zero (robust constraints) and exactly constrain terminal states to robustly fall within the sampled safe set. The resulting optimization problem at time $t$ of iteration $j$ is denoted (10.2.6); $\pi^j$ is the policy (SAVED) at iteration $j$, $\pi^j(x^j_t)$ is the control applied at state $x^j_t$, and the policy (10.2.7) is defined by the value of (10.2.6). We assume we can solve this problem at each timestep. We also assume that we can exactly compute $V^j$.
Robust Control Invariant Goal Set: $\mathcal{G}$ is a robust control invariant set with respect to the dynamics and policy class as defined above. This means that from any state in $\mathcal{G}$, there exists a policy that keeps the system in $\mathcal{G}$ for all possible disturbance realizations.
Robust Control Invariant Sampled Safe Set: We assume that $\mathcal{SS}^j$ is a robust control invariant set with respect to the dynamics and policy class for all $j$. This is a strong assumption, but it can be shown that in the limit of infinite samples from the control policy at each iteration, the sampled safe set is robust control invariant SampleBasedLMPC. Finally, since $\mathcal{G}$ is a robust control invariant set and $\mathcal{G} \subseteq \mathcal{SS}^j$, $\mathcal{SS}^j$ is nonempty for all $j$. This also implies that the demonstrations provide enough information to construct $\mathcal{SS}^0$ such that it is robust control invariant. The amount of data needed to approximately meet this assumption is related to the stochasticity of the environment.
Constant Start State: The start state is constant across iterations. This assumption is reasonable in the setting considered in this paper, since in all experiments, the start state distribution has low variance.
Completion Cost Specification: $C(x, u) = \mathbb{1}_{\mathcal{G}^C}(x)$, i.e., a cost of 1 is incurred at each timestep for which the system is outside of $\mathcal{G}$, and no cost is incurred inside $\mathcal{G}$. This is true in the experiments we consider in this paper, where all costs are specified as above.
10.3 Proofs of theoretical analysis
We will show that under the set of strong assumptions above, which are similar to those used in SampleBasedLMPC, the control strategy presented in this paper guarantees iterative improvement of expected performance. We proceed similarly to SampleBasedLMPC.
Proof of Lemma 1:
We proceed by induction. By assumption 4, the claim holds at iteration 0. Now suppose the controller is feasible at iteration $j$ for some $j \ge 0$. Conditioning on the realized trajectory at iteration $j$:
Equation 10.3.8 follows from the definition in 10.1.4, and equation 10.3.10 follows from the definition of $V^j$. The inner expectation in equation (P1.4) conditions on the trajectory at iteration $j$, and the outer expectation integrates it out. The inequality in (P1.4) follows from the fact that the policy at iteration $j$ is a feasible solution to (P1.4). Equation 10.3.12 follows from the definition in equation 10.1.4. By induction, the controller is feasible at all iterations $j$. ∎
Convergence in Probability: Consider the closed-loop system (10.1.1) and (10.2.7). Let the sampled safe set be defined as in (10.1.2). Let assumptions (1)-(6) hold. If the closed-loop system converges in probability to $\mathcal{G}$ at the initial iteration, then it converges in probability at all subsequent iterations. Precisely, at iteration $j$: $\lim_{t \to \infty} P(x^j_t \notin \mathcal{G}) = 0$.
Proof of Lemma 2:
By Lemma 1, for a cost satisfying assumption 6, we obtain the bound in 10.3.13.
Line 10.3.14 follows from rearranging 10.3.13 and applying assumption 6. Because $\mathcal{G}$ is robust control invariant by assumption 3, $P(x^j_t \notin \mathcal{G})$ is a non-increasing sequence in $t$. Suppose it converges to some limit $L > 0$ (the limit must exist by the Monotone Convergence Theorem). Then there exists $t_0$ such that $P(x^j_t \notin \mathcal{G}) \ge L$ for all $t \ge t_0$. By the Archimedean principle, the RHS of 10.3.14 can then be driven arbitrarily negative, which is impossible. By contradiction, $L = 0$. ∎
Iterative Improvement: Consider system (10.1.1) in closed-loop with (10.2.7). Let the sampled safe set be defined as in (10.1.2). Let assumptions (1)-(6) hold. Then the expected cost-to-go (10.1.5) associated with the control policy (10.2.7) is non-increasing in the iteration index. More formally:
$J^{j+1}(x_0) \le J^j(x_0) \quad \forall j \ge 0$
Furthermore, $\{J^j(x_0)\}_{j \ge 0}$ is a convergent sequence.
Proof of Theorem 1:
Equations 10.3.15 and 10.3.16 follow from repeated application of Lemma 1 (10.3.12). Equation 10.3.17 follows from iterated expectation, and equation 10.3.18 follows from the cost function assumption (6). Equation 10.3.19 follows again from the cost function assumption (a cost of at least 1 is incurred at each timestep for not being at the goal). Then, equation 10.3.20 follows from Lemma 2. Using the above inequality with the definition of $J^j$:
Equation 10.3.21 follows from equation 10.3.20, and equation 10.3.22 follows from taking the minimum over all possible sequences of policies in the policy class. Equation 10.3.23 follows from equation 10.3.20. By induction, this proves the theorem. If the limit is not dropped in 10.3.16, then we can roughly quantify a rate of improvement.
By the Monotone Convergence Theorem, this also implies convergence of $\{J^j(x_0)\}_{j \ge 0}$. ∎
11 Experimental details for SAVED and baselines
For all experiments, we run each algorithm 3 times to control for stochasticity in training and plot the mean iteration cost vs. time, with error bars indicating the standard deviation over the 3 runs. When reporting task success rate and constraint satisfaction rate, we show bar plots indicating the median value over the 3 runs, with error bars spanning the lowest and highest values over the 3 runs. Experiments are run on an NVIDIA DGX-1 and on a desktop running Ubuntu 16.04 with a 3.60 GHz, 12-core Intel Core i7-6850K CPU and an NVIDIA GeForce GTX 1080. When reporting the iteration cost of SAVED and all baselines, any constraint-violating trajectory is assigned the maximum possible iteration cost, where is the task horizon. Thus, any constraint violation is treated as a catastrophic failure. We plan to explore soft constraints in future work.
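The reporting convention above can be sketched as follows; `report_iteration_cost` is a hypothetical helper for illustration, not code from the SAVED implementation:

```python
def report_iteration_cost(step_costs, violated_constraint, task_horizon):
    """Report a trajectory's iteration cost, assigning the maximum possible
    cost (the task horizon, i.e. one unit of cost per step) to any trajectory
    that violates constraints, treating it as a catastrophic failure."""
    if violated_constraint:
        return task_horizon
    return sum(step_costs)
```

For example, a trajectory that reaches the goal early keeps its low accumulated cost, while any collision is scored as if the agent never made progress at all.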
11.1 SAVED
11.1.1 Dynamics and value function
For all environments, dynamics models and value functions are each represented with a probabilistic ensemble of 5 three-layer neural networks with 500 hidden units per layer and swish activations, as used in handful-of-trials. To plan over the dynamics, the TS- trajectory sampling method from handful-of-trials is used. We use 5 and 30 training epochs for dynamics and value function training, respectively, when initializing from demonstrations. When updating the models after each training iteration, 5 and 15 epochs are used for the dynamics and value functions, respectively. All models are trained using the Adam optimizer with learning rates of 0.00075 and 0.001 for the dynamics and value functions, respectively. The value function is initialized by training on the true cost-to-go estimates from demonstrations. When updated on-policy, however, the value function is trained using the temporal difference error (TD-1) on a buffer containing all prior states. Since we use a probabilistic ensemble of neural networks to represent dynamics models and value functions, we built off of the provided implementation PETS_github of PETS in handful-of-trials.
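As a rough structural sketch of this model class, the minimal NumPy ensemble below mirrors the architecture described (5 members, 3 layers of 500 swish units, Gaussian mean/variance outputs). It is a simplification for illustration only, not the PETS implementation, and training code is omitted:

```python
import numpy as np

def swish(x):
    # Swish activation as used in PETS: x * sigmoid(x).
    return x / (1.0 + np.exp(-x))

class ProbabilisticEnsemble:
    """Minimal sketch of a probabilistic ensemble: each of 5 members is a
    3-layer MLP with 500 hidden units predicting a Gaussian mean and
    log-variance over the output (hypothetical simplification)."""

    def __init__(self, in_dim, out_dim, n_members=5, hidden=500, seed=0):
        rng = np.random.default_rng(seed)
        self.out_dim = out_dim
        self.members = []
        for _ in range(n_members):
            sizes = [in_dim, hidden, hidden, hidden, 2 * out_dim]  # mean + log-var
            ws = [rng.normal(0, 0.05, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
            bs = [np.zeros(b) for b in sizes[1:]]
            self.members.append((ws, bs))

    def predict(self, x):
        # Return per-member (mean, variance) predictions for a single input x.
        preds = []
        for ws, bs in self.members:
            h = x
            for w, b in zip(ws[:-1], bs[:-1]):
                h = swish(h @ w + b)
            out = h @ ws[-1] + bs[-1]
            mean, log_var = out[: self.out_dim], out[self.out_dim :]
            preds.append((mean, np.exp(log_var)))
        return preds
```

Disagreement among the members' predictions is what provides the uncertainty estimates used for chance constraints and TS- trajectory sampling.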
11.1.2 Constrained exploration
Define safe states as states from which the system was successfully stabilized to the goal in the past. We train the density model on a fixed history of safe states, where this history length is tuned per experiment; we have found that simply training on all prior safe states works well in practice across all experiments in this work. We represent the density model using kernel density estimation with a tophat kernel. Instead of tuning for each environment, we set (keeping points with positive density) and tune (the kernel parameter/width). We find that this works well in practice and allows us to speed up execution by using a nearest-neighbors implementation from scikit-learn. We are also experimenting with locality-sensitive hashing and implicit density estimation as in EX2, and have had some success with Gaussian kernels as well (at significant additional computational cost).
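Because a tophat kernel assigns positive density to a query exactly when some training point lies within the kernel width, keeping points with positive density reduces to a nearest-neighbor distance check, which is what makes the scikit-learn nearest-neighbors speedup possible. A minimal NumPy sketch, with hypothetical class and method names:

```python
import numpy as np

class TophatSafeSetDensity:
    """Sketch of the safe-set membership check: under a tophat kernel of
    width alpha, a state has positive estimated density iff some stored
    safe state lies within distance alpha of it."""

    def __init__(self, alpha):
        self.alpha = alpha
        self.safe_states = np.empty((0, 0))

    def fit(self, safe_states):
        # Store the history of safe states (here: all prior safe states).
        self.safe_states = np.asarray(safe_states, dtype=float)

    def is_safe(self, state):
        # Positive tophat density <=> nearest safe state within alpha.
        if self.safe_states.size == 0:
            return False
        dists = np.linalg.norm(
            self.safe_states - np.asarray(state, dtype=float), axis=1
        )
        return bool(dists.min() <= self.alpha)
```

In practice a KD-tree or ball-tree (as in scikit-learn's nearest-neighbors module) replaces the brute-force distance computation above.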
11.2 Behavior cloning
We represent the behavior cloning policy with a neural network with 3 layers of 200 hidden units each for the navigation and pick and place tasks, and 2 layers of 20 hidden units each for the PR2 reacher task. We train on the same demonstrations provided to SAVED and the other baselines for 50 epochs.
11.3 PETSfD and PETSfD Dense
PETSfD and PETSfD Dense use the same network architectures and training procedure as SAVED and the same parameters for each task unless otherwise noted, but just omit the value function and density model for enforcing constrained exploration. PETSfD uses a planning horizon that is long enough to complete the task, while PETSfD Dense uses the same planning horizon as SAVED.
11.4 SACfD
We use the rlkit implementation SAC_github of soft actor critic with the following parameters: batch size = 128, discount factor = , soft target update coefficient = , policy learning rate = , Q function learning rate = , value function learning rate = , and replay buffer size = . All networks are two-layer multi-layer perceptrons (MLPs) with 300 hidden units. On the first training iteration, only transitions from demonstrations are used to train the critic. After this, SACfD is trained via rollouts from the actor network as usual. We use a reward function similar to that of SAVED, with a reward of -1 if the agent is not in the goal set and 0 if it is. Additionally, for environments with constraints, we impose a reward of -100 when constraints are violated in order to encourage constraint satisfaction. The choice of collision reward is ablated in section 16.2. This reward is set to prioritize constraint satisfaction over task success, which is consistent with the selection of in the model-based algorithms considered.
11.5 OEFD
We use the implementation of OEFD provided by OEFD_github with the following parameters: learning rate = , polyak averaging coefficient = , and L2 regularization coefficient = . During training, the random action selection rate is , and the noise added to policy actions is distributed as . All networks are three-layer MLPs with 256 hidden units. Hindsight experience replay uses the “future” goal replay and selection strategy with HER. Here, controls the ratio of HER data to data coming from normal experience replay in the replay buffer. We use a reward function similar to that of SAVED, with a reward of -1 if the agent is not in the goal set and 0 if it is. Additionally, for environments with constraints, we impose a reward of -100 when constraints are violated in order to encourage constraint satisfaction. The choice of collision reward is ablated in section 16.2. This reward is set to prioritize constraint satisfaction over task success, which is consistent with the selection of in the model-based algorithms considered.
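The sparse reward with collision penalty used by both model-free baselines can be sketched as follows (the function name is hypothetical; -100 is the default collision reward, the parameter ablated in section 16.2):

```python
def sparse_constrained_reward(in_goal_set, violated_constraint,
                              collision_reward=-100.0):
    """Per-step reward for the model-free baselines: -1 outside the goal
    set, 0 inside it, and a large penalty on constraint violation so that
    constraint satisfaction is prioritized over task success."""
    if violated_constraint:
        return collision_reward
    return 0.0 if in_goal_set else -1.0
```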
12 Simulated experimental details
12.1 Navigation
We consider a 4-dimensional (, , , ) navigation task in which a point mass navigates to a goal set, which is a unit ball centered at the origin. The agent can exert force in the cardinal directions and experiences a drag coefficient and Gaussian process noise in the dynamics. We use and in all experiments in this domain. Demonstration trajectories are generated by guiding the robot along a suboptimal hand-tuned trajectory for the first half of the trajectory and then running LQR on a quadratic approximation of the true cost. Gaussian noise is added to the demonstrator policy. We train the state density estimator on all prior successful trajectories for the navigation tasks. Additionally, we use a planning horizon of 15 for SAVED and 25, 30, 30, and 35 for PETSfD for tasks 1-4, respectively. The 4 experiments run on this environment are:
Long navigation task to the origin. For experiments, 50 demonstrations with an average return of were used for training. We use kernel width . SACfD and OEFD on average achieve a best iteration cost of over 10,000 iterations of training, averaged over the 3 runs.
and a large obstacle blocking the axis. This environment is difficult for approaches that use a Euclidean norm cost function due to local minima. For experiments, 50 demonstrations with an average return of were used for training. We use kernel width and chance constraint parameter . SACfD and OEFD achieve best iteration costs of and , respectively, over 10,000 iterations of training, averaged over the 3 runs.
and a large obstacle near the path directly to the origin, with a small channel near the axis for passage. This environment is difficult to solve optimally since the iterative improvement of the paths taken by the agent is constrained. For experiments, 50 demonstrations with an average return of were used for training. We use kernel width and chance constraint parameter . SACfD and OEFD achieve best iteration costs of and , respectively, over 10,000 iterations of training, averaged over the 3 runs.
and a large obstacle surrounding the goal set, with a small channel for entry. This environment is extremely difficult to solve without demonstrations. We use demonstrations with an average return of , kernel width , and chance constraint parameter . SACfD and OEFD achieve best iteration costs of and , respectively, over 10,000 iterations of training, averaged over the 3 runs.
12.2 PR2 reacher
We use demonstrations for training, with no demonstration achieving a total iteration cost of less than , and with an average iteration cost of . We use for all experiments. No other constraints are imposed, so the chance constraint parameter is not used. The state representation consists of joint positions, joint velocities, and the goal position. The goal set is specified by a radius Euclidean ball in state space. SACfD and OEFD achieve best iteration costs of and , respectively, over 10,000 iterations of training, averaged over the 3 runs. We train the state density estimator on all prior successful trajectories for the PR2 reacher task. Additionally, we use a planning horizon of for both SAVED and PETSfD.
12.3 Fetch pick and place
We use demonstrations generated by a hand-tuned PID controller with an average iteration cost of . For SAVED, we set . No other constraints are imposed, so the chance constraint parameter is not used. The state representation for the task consists of (end effector position relative to the object, object position relative to the goal, gripper jaw positions). We find the gripper-closing motion difficult to learn with SAVED, so we automate it by closing the gripper when the end effector is close enough to the object. We hypothesize that this difficulty is due to a combination of instability in the value function in this region and the difficulty of sampling bimodal behavior (open and close) using CEM. SACfD and OEFD achieve a best iteration cost of over 10,000 iterations of training, averaged over the 3 runs. We train the state density estimator on the last safe states (100 trajectories) for the Fetch pick and place task. Additionally, we use a planning horizon of for SAVED and for PETSfD.
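The automated gripper-closing heuristic amounts to a simple distance check; the helper name and threshold below are placeholders, not values from the paper:

```python
import numpy as np

def should_close_gripper(ee_pos, obj_pos, close_thresh=0.01):
    """Close the gripper automatically once the end effector is within a
    distance threshold of the object, sidestepping the bimodal open/close
    behavior that is hard to sample with CEM (hypothetical sketch)."""
    dist = np.linalg.norm(np.asarray(ee_pos) - np.asarray(obj_pos))
    return bool(dist <= close_thresh)
```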
13 Physical experimental details
For all experiments, and a set of hand-coded trajectories with a small amount of Gaussian noise added to the controls is generated. For all physical experiments, we use for PETSfD, since we found this gave the best performance when no signal from the value function was provided. In all tasks, the goal set is represented with a 1 cm ball in . The dVRK is controlled via delta-position control, with a maximum action magnitude of 1 cm during learning for safety. We train the state density estimator on all prior successful trajectories for the physical robot experiments.
13.1 Figure-8
The agent is constrained to remain within a 1 cm tube around a reference trajectory, with chance constraint parameter for SAVED and for PETSfD. We use inefficient but successful and constraint-satisfying demonstrations with an average iteration cost of steps for both segments. Additionally, we use a planning horizon of for SAVED and for PETSfD.
13.2 Knot tying
The agent is again constrained to remain within a 1 cm tube around a reference trajectory, as in the prior experiments, with chance constraint parameter for SAVED and for PETSfD. The provided demonstrations are feasible but noisy and inefficient due to hand-engineering, with an average iteration cost of steps. Additionally, we use a planning horizon of for SAVED and for PETSfD.
14 Simulated experiments additional results
In Figure 6, we show the task success rates for the PR2 reacher and Fetch pick and place tasks for SAVED and the baselines. SAVED outperforms the RL baselines (except SAVED (No SS) on the reacher task, most likely because the task is relatively simple, so the sampled safe set constraint has little effect) in the first 100 and 250 iterations for the reacher and pick and place tasks, respectively. Note that although behavior cloning has a higher success rate, it does not improve upon demonstration performance. Although the gap in success rate between SAVED and the baselines is smaller in these environments than in those with constraints, this result shows that SAVED can be used effectively in a general-purpose way and still learns more efficiently than the baselines in unconstrained environments, as seen in the main paper.
15 Physical experiments additional results
For the other segment of the Figure-8, SAVED again quickly learns to smooth out the demonstration trajectories while satisfying constraints, with a success rate of over , while the baselines violate constraints on nearly every iteration and never complete the task, as shown in Figure 7.
In Figure 8, we show the full trajectory for the Figure-8 task when both segments are combined at execution time. This is done by rolling out the policy for the first segment and then starting the policy for the second segment as soon as the first reaches its goal set. Even given uncertainty in the dynamics and in the end state of the first policy (it could end anywhere in a 1 cm ball around the goal position), SAVED smoothly interpolates between the two segments at execution time and successfully stabilizes at the goal at the end of the second segment. Each segment of the trajectory is shown in a different color for clarity. We suspect that SAVED's ability to handle this transition reflects good generalization of the learned dynamics and value functions.
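The execution-time chaining described above can be sketched generically as follows; the helper and its arguments are hypothetical, and `step_fn` stands in for the real closed-loop system:

```python
def run_chained_segments(policies, in_goal_fns, initial_state, step_fn,
                         max_steps=200):
    """Roll out each segment policy in turn, switching to the next segment's
    policy as soon as the current segment's goal set is reached (sketch of
    the execution-time chaining used for the combined Figure-8)."""
    state = initial_state
    trajectory = [initial_state]
    for policy, in_goal in zip(policies, in_goal_fns):
        for _ in range(max_steps):
            if in_goal(state):
                break  # hand off to the next segment's policy
            state = step_fn(state, policy(state))
            trajectory.append(state)
    return trajectory
```

The second policy must tolerate any start state in the first segment's goal set, which is why the transition tests generalization of the learned models.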
15.2 Knot tying
In Figure 9, we show the full trajectory for both arms for the surgical knot tying task. We see that the learned policy for arm 1 smoothly navigates around arm 2, after which arm 1 is manually moved down along with arm 2, which grasps the thread and pulls it up through the resulting loop, completing the knot.
16 Ablations
16.1 SAVED
We investigate the impact of the kernel width , the chance constraint parameter , and the number of demonstrator trajectories used on navigation task 2. Results are shown in Figure 10. We see that SAVED is able to complete the task well even with just 20 demonstrations, but is more consistent with more demonstrations. We also notice that SAVED is relatively sensitive to the setting of the kernel width . When is set too low, SAVED is overly conservative and can barely explore at all. This makes it difficult to discover regions near the goal set early on and leads to significant model mismatch, resulting in poor task performance. Setting too low can also make it difficult for SAVED to plan to high-density regions further along the task, causing SAVED to fail to make progress. At the other extreme, making too large causes significant initial instability as the agent explores unsafe regions of the state space. Thus, must be chosen such that SAVED is able to explore sufficiently, but does not explore so aggressively that it starts visiting states from which it has low confidence in being able to reach the goal set. Reducing allows the agent to take more risks, but this results in many more collisions. With , most rollouts end in collision or failure, as expected. In the physical experiments, we find that allowing the agent to take some risk during exploration is useful due to the difficult tube constraints on the state space.
16.2 Collision reward
To convey information about constraints to the model-free methods, we provide an additional cost for constraint violations. We ablate this parameter for navigation task 2 in Figure 11. We find that a high cost for constraint violations results in conservative behavior that learns to avoid the obstacle but takes much longer to achieve task success, while a low cost results in riskier behavior that succeeds more often. This trade-off is also present for model-based methods, as seen in the prior ablations.