Learning from demonstration is a powerful paradigm for enabling robots to perform complex tasks. Inverse optimal control and inverse reinforcement learning (IOC/IRL) methods [25, 1, 5, 21] have been used to learn a cost function that replicates the behavior of an expert demonstrator. However, planning problems generally also require knowledge of constraints, which define the states or trajectories that are safe. For example, to have a robot arm efficiently transport a cup of coffee without spilling it, one can optimize a cost function describing the length of the path, subject to constraints on the pose of the end effector. Constraints can represent safety requirements more strictly than cost functions, especially in safety-critical settings: enforcing a hard constraint can enable the robot to guarantee safe behavior, as opposed to using a “softened” cost penalty term. Furthermore, learning a global constraint shared across many tasks can help the robot generalize: the arm must avoid spilling the coffee regardless of where the cup starts or needs to go.
While constraints are important, it can be impractical to exhaustively program all the possible constraints a robot should obey across all tasks. Thus, we consider the problem of extracting the latent constraints within expert demonstrations that are shared across tasks. We adopt the insight of prior work that each safe, optimal demonstration induces a set of lower-cost trajectories that must be unsafe due to violation of an unknown constraint. As in that work, we sample these unsafe trajectories, ensuring that they are also consistent with the system dynamics, control constraints, and start/goal constraints. The unsafe trajectories are used together with the safe demonstrations in an “inverse” integer program that recovers an unsafe set consistent with the safe and unsafe trajectories. We make the following additional contributions in this paper. First, by using a (potentially known) parameterization of the constraints, our method enables the inference of safe and unsafe sets in high-dimensional constraint spaces. Second, we relax the known-parameterization assumption and propose a means to incrementally grow a parameterization with the data. Third, we introduce a method for extracting volumes of states which are guaranteed safe or guaranteed unsafe according to the data and parameterization. Fourth, we provide theoretical analysis showing that our method is guaranteed to output conservative estimates of the unsafe and safe sets under mild assumptions. Finally, we evaluate our method on high-dimensional constraints for high-dimensional systems by learning constraints in 7-DOF arm, quadrotor, and planar pushing examples, showing that our method outperforms baseline approaches.
2 Related Work
Inverse optimal control (IOC) [13, 14] and inverse reinforcement learning (IRL) aim to recover an objective function that replicates provided expert demonstrations when optimized. Our method is complementary to these approaches: if the demonstrator solves a constrained optimization problem, we find the constraints, given the cost, whereas IOC/IRL finds the cost, given the constraints. Risk-sensitive IRL is also complementary to our work, which instead learns hard constraints. Similarly, related work learns a state-space constraint shared across tasks as a penalty term in the reward function of an MDP. However, when representing a constraint as a penalty, it is unclear whether a demonstrated action was taken to avoid the penalty or to improve the trajectory cost under the true cost function (or both); learning a penalty that generalizes across cost functions thus becomes difficult. To avoid this, we assume a known cost function and reason explicitly about the constraint. Also relevant is safe reinforcement learning, which aims to perform exploration while minimizing visitation of unsafe states. Several methods [27, 31, 2] use Gaussian process models to incrementally explore safe regions in the state space. We take a complementary approach to safe learning, using demonstrations in place of exploration to guide the learning of safe behaviors. Methods exist for learning geometric state-space constraints [6, 23], task-space equality constraints [18, 17], and convex constraints; our algorithm generalizes these by learning arbitrary nonconvex parametric inequality constraints defined in some constraint space (not limited to the state space). Other methods learn local trajectory-based constraints [16, 19, 22, 33, 9, 8] by reasoning over the constraints within a single trajectory or task. In contrast, our method aims to learn a global constraint shared across tasks.
The method closest to ours learns a global shared constraint on a gridded constraint space; hence, its constraint recovery scales exponentially with the constraint space dimension and cannot exploit any side information on the structure of the constraint. This often leads to very conservative estimates of the unsafe set, and only grid cells visited by demonstrations can be learned guaranteed safe. We overcome these shortcomings with a novel algorithm that exploits constraint parameterizations for scalability and for integrating prior knowledge, and that also learns volumes of guaranteed safe/unsafe states in the original non-discretized constraint space, yielding less conservative estimates of the safe/unsafe sets under weaker assumptions than prior work.
3 Problem Setup
Consider a system with discrete-time dynamics $x_{t+1} = f(x_t, u_t, t)$ or continuous-time dynamics $\dot{x} = f(x, u, t)$, where $x \in \mathcal{X}$ and $u \in \mathcal{U}$. The system performs tasks $\Pi$, represented as constrained optimization problems over state/control trajectories $\xi_x$/$\xi_u$ in state/control trajectory space $\mathcal{T}_x$/$\mathcal{T}_u$:
Problem 1 (Forward problem / “task”).
where $c_\Pi$ is a cost function for task $\Pi$, and $\phi$ is a known mapping from state-control trajectories to a constraint space $\mathcal{C}$, elements of which are referred to as “constraint states”. Mappings $\bar\phi$ and $\phi_\Pi$ are known and map to constraint spaces $\bar{\mathcal{C}}$ and $\mathcal{C}_\Pi$, containing a known shared safe set $\bar{\mathcal{S}}$ and a known task-dependent safe set $\mathcal{S}_\Pi$, respectively. In this paper, we take $\mathcal{S}_\Pi$ to be the set of trajectories satisfying start/goal state constraints and $\bar{\mathcal{S}}$ to be the set of dynamically-feasible trajectories obeying control constraints, though the dynamics may not be known in closed form. $\mathcal{S} \subseteq \mathcal{C}$ is an unknown safe set defined by an unknown parameter $\theta$ and a possibly unknown parameterization $g(\cdot, \theta)$. A demonstration is a state-control trajectory which approximately solves Problem 1, i.e. it satisfies all constraints and its cost is at most a factor of $(1+\delta)$ above the cost of a globally optimal solution. For convenience, we summarize our frequently used notation in Appendix F. In this paper, our goal is to recover the safe set $\mathcal{S}$ and its complement, the unsafe set $\mathcal{A}$, given demonstrations, inferred unsafe trajectories, the cost function, task-dependent constraints, and a simulator generating dynamically-feasible trajectories satisfying control constraints.
4 Method

In this section, we describe our method (a full algorithm block is presented in Appendix A). In Section 4.1, we describe how to sample unsafe trajectories. In Sections 4.2 and 4.3, we present mixed integer programs which recover a consistent constraint for a fixed parameterization and extract volumes of guaranteed safe/unsafe states. In Section 4.4, we describe how our method can be extended to the case of unknown parameterizations.
4.1 Sampling lower-cost trajectories
In this section, we describe the general sampling framework presented in prior work while relaxing its assumption of known closed-form dynamics. We define the set of unsafe state-control trajectories induced by an optimal, safe demonstration as the set of lower-cost state-control trajectories that obey the known constraints. We sample from this set to obtain lower-cost trajectories using hit-and-run sampling, a method guaranteeing convergence to a uniform distribution of samples over the set in the limit; an illustration is shown in Fig. 4.1. Hit-and-run starts from an initial point within the set, chooses a direction uniformly at random, moves a random amount in that direction such that the new point remains within the set, and repeats. We sample indirectly, by sampling control sequences and rolling them out through the dynamics to generate dynamically-feasible trajectories. We emphasize that the dynamics do not need to be known in closed form: given a control sequence sampled by hit-and-run, a simulator can instead be used to output the resulting dynamically-feasible trajectory, which can then be checked for membership in the set exactly as if the dynamics were known in closed form. Also, $\delta$-suboptimality of the demonstration can be handled in this framework by suitably adjusting the cost threshold used when sampling. Optimal substructure in the cost function can be exploited to sample unsafe sub-trajectories over shorter time windows on the demonstrations; shorter unsafe trajectories provide less ambiguous information and can better reduce the ill-posedness of the constraint recovery problem.
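To make the sampling step concrete, the following is a minimal sketch of hit-and-run over a bounded set given only by a membership oracle (a unit disk stands in here for the set of lower-cost, constraint-satisfying trajectories; the function names, bisection tolerance, and step bound are our own illustrative choices, not part of the method as specified):

```python
import numpy as np

def hit_and_run(inside, x0, n_samples, rng=None, t_max=10.0, tol=1e-6):
    """Hit-and-run over a bounded set given by a membership oracle `inside`.
    Chord endpoints are found by bisection, which is exact for convex sets."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)              # uniformly random direction
        # Bisect along +d and -d for the farthest step that stays inside.
        extents = []
        for sign in (1.0, -1.0):
            lo, hi = 0.0, t_max
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if inside(x + sign * mid * d):
                    lo = mid
                else:
                    hi = mid
            extents.append(sign * lo)
        t_plus, t_minus = extents
        x = x + rng.uniform(t_minus, t_plus) * d  # uniform point on the chord
        samples.append(x.copy())
    return np.array(samples)
```

In the paper's setting the oracle would check a sampled control sequence's rollout for lower cost and satisfaction of the known constraints, rather than disk membership.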
4.2 Recovering the constraint
Recall that the unsafe set can be described by some parameterization $g(\cdot, \theta)$, where we assume for now that the parameterization $g$ is known and the parameter $\theta$ is to be learned. Intuitively, $g(\cdot, \theta)$ tells us whether a constraint state (any element of the constraint space) is safe according to parameter $\theta$. Then, a feasibility problem can be written to find a $\theta$ consistent with the data:
Problem 2 (Parametric constraint recovery problem).
Constraint (2a) enforces that each safe constraint state lies outside the unsafe set, and constraint (2b) enforces that at least one constraint state on each unsafe trajectory lies inside it. Denote the feasible set of Problem 2 by $\mathcal{F}$, and denote by $\mathcal{G}_{\neg s}$ and $\mathcal{G}_s$ the sets of constraint states which are learned guaranteed unsafe and guaranteed safe, respectively; that is, a constraint state belongs to $\mathcal{G}_{\neg s}$ or $\mathcal{G}_s$ if it is classified unsafe or safe, respectively, for all parameters in $\mathcal{F}$:
In Problem 2, it is possible to learn that a constraint state is guaranteed safe/unsafe even if it does not lie directly on a demonstration or unsafe trajectory. This is due to the parameterization: for the given set of safe and unsafe trajectories, there may be no feasible parameter under which that constraint state is classified unsafe/safe. It is precisely this extrapolation which will enable us to learn constraints in high-dimensional spaces. We now identify classes of parameterizations for which Problem 2 can be efficiently solved:
Problem 3 (Polytopic constraint recovery problem).
Linear case: the unsafe set is defined by a Boolean conjunction of linear inequalities, i.e. it can be written as a union and intersection of half-spaces. For this case, mixed-integer programming can be employed. If the unsafe set is a single polytope whose defining inequalities are affine in the parameter, we can solve Problem 3, a mixed integer feasibility problem, to find a feasible parameter. In Problem 3, $M$ is a large positive number and $\mathbf{1}$ is a vector of ones whose length equals the number of rows of the polytope's constraint matrix. Constraints (5a) and (5b) use the big-M formulation to enforce that each safe constraint state lies outside the unsafe set and that at least one constraint state on each unsafe trajectory lies inside it. Similar problems can be solved when the safe/unsafe set can be described by unions of polytopes. As an alternative to integer programming, satisfiability modulo theories (SMT) solvers can also be used to solve Problem 2 when the unsafe set is defined by a Boolean conjunction of linear inequalities.
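To illustrate the feasibility logic (though not the big-M mixed-integer encoding itself), the following brute-force sketch enumerates candidate axis-aligned boxes and keeps those consistent with the data; the function name and candidate-corner grid are our own illustrative choices:

```python
import itertools
import numpy as np

def consistent_boxes(safe_pts, unsafe_trajs, corner_candidates):
    """Return every axis-aligned box (lo, hi) built from candidate corners that
    (a) contains no safe constraint state (cf. constraint (2a)) and
    (b) contains at least one state of every unsafe trajectory (cf. (2b))."""
    feasible = []
    for a, b in itertools.product(corner_candidates, repeat=2):
        lo, hi = np.minimum(a, b), np.maximum(a, b)
        inside = lambda p: bool(np.all(lo <= p) and np.all(p <= hi))
        if any(inside(p) for p in safe_pts):
            continue  # a safe state falls inside the box: inconsistent
        if all(any(inside(p) for p in traj) for traj in unsafe_trajs):
            feasible.append((lo, hi))
    return feasible
```

A real implementation would replace the enumeration with the mixed-integer feasibility program, whose solver searches the continuous parameter space directly.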
Convex case: the unsafe set is defined by a Boolean conjunction of convex inequalities, i.e. it can be described as a union and intersection of convex sets. For this case, satisfiability modulo convex optimization (SMC) can be employed to find a feasible parameter.
For suboptimal demonstrations or imperfect lower-cost trajectory sampling, Problem 3 can become infeasible. To address this, slack variables can be introduced: relax the unsafe-trajectory constraints with nonnegative slacks and change the feasibility problem to minimization of the total slack; this finds a parameter that is consistent with as many unsafe trajectories as possible.
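In the same brute-force spirit as before, the slack relaxation amounts to maximizing the number of covered unsafe trajectories rather than requiring all of them; the names below are illustrative, not the paper's implementation:

```python
import numpy as np

def most_consistent_box(safe_pts, unsafe_trajs, boxes):
    """Among candidate boxes (lo, hi) that exclude all safe states, return the
    one covering the most unsafe trajectories (mimicking slack minimization)."""
    best, best_cov = None, -1
    for lo, hi in boxes:
        inside = lambda p: bool(np.all(lo <= p) and np.all(p <= hi))
        if any(inside(p) for p in safe_pts):
            continue  # hard constraint: safe states are never relaxed
        cov = sum(any(inside(p) for p in traj) for traj in unsafe_trajs)
        if cov > best_cov:
            best, best_cov = (lo, hi), cov
    return best, best_cov
```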
4.3 Extracting guaranteed safe and unsafe states
One can check whether a constraint state is guaranteed safe or guaranteed unsafe by adding to Problem 2 a constraint forcing that state to be unsafe (or safe) and checking feasibility of the resulting program; if the program is infeasible, the state is guaranteed safe (or guaranteed unsafe). In other words, solving this modified integer program can be seen as querying an oracle about the safety of a constraint state. The oracle can return that the state is guaranteed safe (the program is infeasible after forcing it to be unsafe), guaranteed unsafe (infeasible after forcing it to be safe), or unsure (the program remains feasible either way).
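The oracle's three outcomes can be mimicked directly when the set of data-consistent parameters is available explicitly (here a finite, nonempty list of boxes; in the paper the check is instead performed by re-solving the integer program):

```python
import numpy as np

def classify(x, feasible_boxes):
    """Label constraint state x against every data-consistent parameter.
    Guaranteed unsafe: inside every feasible box; guaranteed safe: inside none.
    Assumes `feasible_boxes` is a nonempty list of (lo, hi) pairs."""
    x = np.asarray(x, dtype=float)
    inside = [bool(np.all(lo <= x) and np.all(x <= hi)) for lo, hi in feasible_boxes]
    if all(inside):
        return "guaranteed unsafe"
    if not any(inside):
        return "guaranteed safe"
    return "unsure"
```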
Unlike the gridded formulation of prior work, Problem 2 operates in the continuous constraint space. Thus, it is not possible to exhaustively check whether each constraint state is guaranteed safe or unsafe. To address this, the neighborhood of a given constraint state can be checked for membership in the guaranteed safe or unsafe set by solving the following problem:
Problem 4 (Volume extraction).
In words, Problem 4 finds the smallest hypercube centered at the query constraint state that contains a counterexample state; thus, any strictly smaller hypercube centered there is contained within the corresponding guaranteed set. An analogous problem checks the neighborhood of a constraint state for membership in the other guaranteed set. For some common parameterizations (axis-aligned hyper-rectangles, convex sets), there are even more efficient methods for recovering subsets of the guaranteed safe and unsafe sets, which are described in Appendix B. Volumes of safe/unsafe space can thus be produced by repeatedly solving Problem 4 for different query states, and these volumes can be passed to a planner to generate new trajectories that are guaranteed safe.
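A bisection-based stand-in for Problem 4 can be sketched as follows: grow the half-width of a hypercube around the query state until some corner fails a pointwise guaranteed-membership oracle (checking only corners is exact when the guaranteed set is an intersection of axis-aligned boxes, and is otherwise only a heuristic); all names are illustrative:

```python
import itertools
import numpy as np

def largest_guaranteed_cube(x, point_oracle, eps_max=1.0, tol=1e-3):
    """Bisect on the half-width of an axis-aligned hypercube centered at x,
    accepting a half-width when all cube corners pass the pointwise oracle."""
    x = np.asarray(x, dtype=float)
    signs = list(itertools.product((-1.0, 1.0), repeat=x.size))
    if not point_oracle(x):
        return 0.0  # the center itself is not guaranteed
    lo, hi = 0.0, eps_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if all(point_oracle(x + mid * np.array(s)) for s in signs):
            lo = mid
        else:
            hi = mid
    return lo
```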
4.4 Unknown parameterizations
For many realistic applications, we do not have access to a known parameterization which can represent the unsafe set. Despite this, complex unsafe/safe sets can often be approximated as the union of many simple unsafe/safe sets. Along this line of thought, we present a method for incrementally growing a parameterization based on the complexity of the demonstrations and unsafe trajectories.
Suppose that the true parameterization of the unsafe set is unknown but can be exactly or approximately expressed as the union of simple sets, where each simple set has a known parameterization and the minimum number of simple sets needed to reconstruct the unsafe set is unknown.
A lower bound on this minimum number can be estimated by incrementally adding simple sets until Problem 2 becomes feasible. However, if the chosen number of simple sets is below the minimum, the extracted guaranteed unsafe and safe sets are not guaranteed to be conservative estimates of the true unsafe and safe sets (Theorem 5.3); they are guaranteed conservative only if the chosen number is at least the minimum (see Theorem 5.2). Unfortunately, inferring a guaranteed overestimate of the minimum number from data alone is not possible, as there can always be subsets of the constraint which are not activated by the given demonstrations. Two facts mitigate this:
First, an upper bound on the number of simple sets needed to describe the unsafe set may be known (and this bound can be trivially loose). By using that many simple sets when solving Problem 2, the extracted guaranteed unsafe and safe sets are guaranteed conservative (see Theorem 5.2), at the cost of those sets being potentially small.
Second, as the demonstrations begin to cover the space, the estimated lower bound approaches the minimum number of simple sets. Hence, by using the estimated number of simple sets, the extracted guaranteed sets are asymptotically conservative.
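The incremental scheme reduces to a simple loop over the number of simple sets, given a feasibility check for Problem 2 (abstracted here as a callback; in practice it would be the integer program of Section 4.2). The names and cap are our own illustrative choices:

```python
def grow_parameterization(is_feasible, n_max=50):
    """Add simple sets one at a time until the recovery problem is feasible;
    the returned count is the estimated lower bound on the number of simple
    sets needed (Section 4.4)."""
    for n in range(1, n_max + 1):
        if is_feasible(n):
            return n
    raise RuntimeError("no consistent parameterization with up to n_max simple sets")
```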
In our experiments, we choose our simple sets to be axis-aligned hyper-rectangles, motivated by two facts: 1) any open set in $\mathbb{R}^n$ can be approximated as a countable/finite union of open axis-aligned hyper-rectangles; 2) unions of hyper-rectangles are easily representable in Problem 3.
5 Theoretical Analysis
In this section, we present theoretical analysis of our parametric constraint learning algorithm. In particular, we analyze the conditions under which our algorithm is guaranteed to learn a conservative estimate of the safe and unsafe sets. For space, the proofs and additional results on conservativeness (Section C.2) and the learnability of a constraint (Section C.1) are presented in the appendix. We develop the theory for the single-simple-set case for legibility, but the results can be easily extended to general unions of simple sets.
Theorem 5.1 (Conservativeness: Known parameterization).
Suppose the parameterization is known exactly. Then, for a discrete-time system, extracting the guaranteed unsafe and safe sets (as defined in (3) and (4), respectively) from the feasible set of Problem 2 returns sets that are contained in the true unsafe and safe sets. Further, if the known parameterization is polytopic and the big-M constant in Problem 3 is chosen sufficiently large, then extracting the guaranteed unsafe and safe sets from the feasible set of Problem 3 yields the same conservativeness guarantee.
We also present conservativeness results for continuous-time dynamics in Corollary C.2.
Now consider the case where the true parameterization is not known and we use the incremental method described in Section 4.4 with a simple parameterization. We consider the over-parameterized case (Theorem 5.2) and the under-parameterized case (Theorem 5.3), analyzing the setting where the true, under-, and over-parameterizations are defined respectively as:
Theorem 5.2 (Conservativeness: Over-parameterization).
Theorem 5.3 (Conservativeness: Under-parameterization).
Suppose the true parameterization and under-parameterization are defined as in (6) and (7). Furthermore, assume that we incrementally grow the parameterization as described in Section 4.4. Then, the following are true:
The extracted guaranteed unsafe and safe sets are not guaranteed to be contained in the true unsafe set and true safe set, respectively.
Each recovered simple unsafe set touches the true unsafe set (i.e., there are no spurious simple unsafe sets; here the number of simple sets is as defined in Section 4.4).
6 Results

We evaluate our method, showing that it can be applied to constraints with unknown parameterizations (Section 6.1), high-dimensional constraints defined for high-dimensional systems (Section 6.2), and settings where the dynamics are not known in closed form (Section 6.3). We also compare our performance with a neural network (NN) baseline¹. We further compare with the grid-based method on the 2D examples. For space, experimental details are provided in Appendix E.

¹In all experiments, 1) the NN is trained with the safe/unsafe trajectories and predicts at test time whether a queried constraint state is safe or unsafe; 2) error bars are generated by initializing the NN with 10 different random seeds and evaluating accuracy after training. The architectures/training details are presented in Appendix E.
6.1 Unknown parameterization
U-shape: We first present a kinematic 2D example where a U-shaped unsafe set is to be learned, but the number of simple unsafe sets needed to represent it (three) is unknown. In Row 1, Column 1 of Fig. 6.1, we outline the true unsafe set in black and overlay the recovered guaranteed sets and the six provided demonstrations, synthetically generated via trajectory optimization. We note that due to the chosen control constraints and the U-shape, there are parts of the unsafe set (a subset of the white region in Fig. 6.1, Row 1, Column 1) which cannot be implied unsafe by sampled unsafe trajectories and the parameterization (see Theorem C.1). As a result, the guaranteed unsafe set may not fully cover the true unsafe set, even with more demonstrations (Fig. 6.1, Row 1, Column 3). Note that the decrease in coverage² at the third demonstration is due to an increase from a two-box parameterization to a three-box parameterization. Likewise, the accuracy³ decreases at the second demonstration due to over-approximation of the unsafe set with two boxes (Fig. 6.1, Row 1, Column 4), but this over-approximation vanishes when switching to the three-box parameterization (which is exact; hence the extracted guaranteed sets are conservative, c.f. Theorem 5.1). The grid-based method always has perfect accuracy, since it does not extrapolate beyond the observed trajectories; however, as a result, it also yields low coverage (Fig. 6.1, Row 1, Column 2). The NN baseline achieves lower accuracy for the unsafe set, as it misclassifies some corners of the U. Recovering a feasible parameter using a multi-box variant of Problem 3 recovers the unsafe set exactly (Fig. 6.1, Row 1, Column 5). Finally, we note that in this and later examples, demonstrations were specifically chosen to be informative about the constraint.

²Coverage is measured as the intersection over union (IoU) of the relevant sets (see legends for the exact formula).

³In all experiments, accuracies are computed as the fraction of query states, sampled from the relevant safe or unsafe set, that are correctly classified; NN accuracy is computed only on the corresponding region.
We present a version of this example in Appendix D with random demonstrations and show that the constraint is still learned (albeit needing more demonstrations).
Infinite boxes: To show that our method can still learn a constraint that cannot be exactly expressed using a chosen parameterization, we limit our parameterization to an unknown number of axis-aligned boxes and attempt to learn a diagonal “I”-shaped unsafe set (see Fig. 6.1, Row 2). This is a particularly difficult example, since an infinite number of axis-aligned boxes would be needed to recover the unsafe set exactly. However, for finite data, only a finite number of boxes is needed; in particular, for 1, 2, 3, and 4 demonstrations (synthetically generated assuming kinematic system constraints), 3, 5, 6, and 6 boxes, respectively, are required to generate a parameterization consistent with the data (see Fig. 6.1, Row 2, Column 1). Also overlaid in Fig. 6.1, Row 2, Column 1 are the guaranteed safe and unsafe sets, which are approximated by solving Problem 4 for randomly sampled query states. Compared to the gridded formulation (see Fig. 6.1, Row 2, Column 3), the guaranteed sets cover the true safe and unsafe sets far better, since the parameterization enables the integer program to extrapolate more from the demonstrations. Furthermore, we note that while the gridded case has perfect accuracy for the safe set, it does not for the unsafe set, due to grid alignment. Overall, the multi-box variant of Problem 3 recovers the unsafe set well (Fig. 6.1, Row 2, Column 5), and the remaining gap can be reduced with more data. Last, we note that the NN baseline reaches comparable accuracies here (Fig. 6.1, Row 2, Column 4), since our method suffers from a few disadvantages in this particular example. First, attempting to represent the “I” with a finite number of boxes introduces a modeling bias that the NN does not have. Second, since the system is kinematic and the constraint is low-dimensional, many unsafe trajectories can be sampled, providing good coverage of the unsafe set. We show later that for higher-dimensional constraints and systems with highly constrained dynamics, it becomes difficult to gather enough data for the NN to perform well.
6.2 High-dimensional examples
6D pose constraint for a 7-DOF robot arm: In this example, we learn a 6D hyper-rectangular pose constraint for the end effector of a 7-DOF Kuka iiwa arm. One such setting is when the robot is to bring a cup to a human while ensuring its contents do not spill (angle constraint) and proxemics constraints (i.e. the end effector never gets too close to the human) are satisfied (position constraint). We examine this problem for the cases of optimal and suboptimal demonstrations.
Demonstration setup: The end effector orientation (parametrized in Euler angles) and position are each constrained to lie within given bounds (see Fig. 6.2, Column 1). For the optimal case, we synthetically generate seven demonstrations minimizing joint-space trajectory length. For the suboptimal case, five suboptimal continuous-time demonstrations approximately optimizing joint-space trajectory length are recorded in a virtual reality environment, where a human demonstrator moves the arm from desired start to goal end effector configurations using an HTC Vive (see Fig. E.1). The demonstrations are time-discretized for lower-cost trajectory sampling. In both cases, the constraint is recovered with Problem 3. For the suboptimal case, slack variables are added to ensure feasibility of Problem 3, and for each suboptimal demonstration we only use trajectories of sufficiently lower cost as unsafe trajectories.
Results: The coverage plots (Fig. 6.2, Rows 1 and 3, Col. 2) show that as the number of demonstrations increases, the learned guaranteed safe/unsafe sets approach the true safe/unsafe sets⁴. For the suboptimal case, the low IoU values at lower numbers of demonstrations are due to overapproximation of the unsafe set in one component (arising from continuous-time discretization and imperfect knowledge of the suboptimality bound); the fifth demonstration, which takes values near the constraint boundary in that component, greatly reduces this overapproximation. The accuracy plots (Fig. 6.2, Rows 2 and 4, Col. 2) present results consistent with the theory: for the optimal case, all constraint states learned guaranteed safe and guaranteed unsafe are truly safe and unsafe (Theorem 5.1), and the small over-approximation for the suboptimal case is consistent with the continuous-time conservativeness result (Theorem C.2). Note that the NN accuracy is lower and can oscillate with the number of demonstrations, since the NN finds just a single constraint which is approximately consistent with the data, while our method classifies safety by consulting all possible constraints which are exactly consistent with the data, and thus performs more consistently. The NN performs better on the suboptimal case than on the optimal case, as more unsafe trajectories are sampled due to the suboptimality, improving coverage of the unsafe set. The projections of the guaranteed safe set (Fig. 6.2, Cols. 3-4, in red), obtained using the method in Appendix B, are compared to the true safe set (blue outline), showing that the two match nearly exactly (though the gap for the suboptimal case is larger, and can likely be reduced with more demonstrations). The projections of the recovered set (Fig. 6.2, Col. 5) match exactly with the true safe set for the optimal case (outlined in blue) and match closely for the suboptimal case.

⁴For the unsafe sets, the IoUs are computed between the complements of the estimated and true unsafe sets, since in high dimensions the IoU changes more smoothly for the complements; we plot the complement IoU for visual clarity.
Note that this is the case for all axis-aligned box parameterizations.
3D constraint for 12D quadrotor model: We learn a 3D box angular velocity constraint for a quadrotor with discrete-time 12D dynamics (see Appendix E for details). In this scenario, the quadrotor must avoid an a priori known unsafe set in position space while also ensuring that its angular velocities remain below a threshold. The safe set is to be inferred from two demonstrations (see Fig. 6.3), and the constraint is recovered with Problem 3. Fig. 6.4 shows that with more demonstrations, the learned guaranteed safe and unsafe sets approach the true safe and unsafe sets, respectively. Consistent with Theorem 5.1, our method has perfect accuracy on the guaranteed sets. Here, the NN struggles more than in the arm examples since, due to the more constrained dynamics, fewer unsafe trajectories can be sampled, and a parameterization needs to be leveraged in order to say more about the unsafe set. The remaining columns of Fig. 6.4 show that we recover the safe and unsafe sets exactly (the true safe set is outlined in blue).
6.3 Planar pushing example
In this section, using the FetchPush-v1 environment in OpenAI Gym, we aim to learn a 2D box unsafe set on the center of mass (CoM) of a block pushed by the Fetch arm (see Fig. 6.5), using two demonstrations. Here, the dynamics of the block CoM are not known in closed form, but rollouts can still be sampled using the simulator. Since the block CoM is highly underactuated, it is not possible to sample short sub-trajectories; thus, without leveraging a parameterization, the constraint recovery problem is very ill-posed. Furthermore, while our method can explicitly account for the unsafeness of longer unsafe trajectories (at least one state on each is unsafe), the NN struggles with this example as it fails to accurately model that fact. Overall, Fig. 6.5 shows that the learned guaranteed safe/unsafe sets match the true safe/unsafe sets well, and our classification accuracy for safeness/unsafeness is perfect across demonstrations.
7 Discussion and Conclusion
In this paper, we present a method capable of learning parametric constraints in high-dimensional spaces with and without known parameterizations. We also present a method for extracting volumes of guaranteed safe and guaranteed unsafe states, information which can be directly used in a planner to enforce safety constraints. We analyze our algorithm, showing that these recovered guaranteed safe/unsafe states are truly safe/unsafe under mild assumptions. We evaluate the method by learning a variety of constraints defined in high-dimensional spaces for systems with high-dimensional dynamics. One shortcoming of our work is scalability with the amount of data, due to the number of integer variables growing linearly with the number of safe/unsafe trajectories. As a result, learning constraints without extensive sampling of unsafe trajectories is a direction of future work.
This work was supported in part by a National Defense Science and Engineering Graduate (NDSEG) Fellowship, Office of Naval Research (ONR) grants N00014-18-1-2501 and N00014-17-1-2050, and National Science Foundation (NSF) grants ECCS-1553873 and IIS-1750489.
- (2004) Apprenticeship learning via inverse reinforcement learning. In International Conference on Machine Learning.
- (2014) Reachability-based safe learning with Gaussian processes. In Conference on Decision and Control.
- (2016) Thickness control in structural optimization via a level set method. Structural and Multidisciplinary Optimization.
- (2017) Repeated inverse reinforcement learning. In Neural Information Processing Systems, pp. 1813–1822.
- (2009) A survey of robot learning from demonstration. Robotics and Autonomous Systems 57 (5), pp. 469–483.
- (2017) Efficient learning of constraints and generic null space policies. In International Conference on Robotics and Automation, pp. 1520–1526.
- (1997) Introduction to Linear Optimization. 1st edition, Athena Scientific.
- (2007) Incremental learning of gestures by imitation in a humanoid robot. In International Conference on Human-Robot Interaction, pp. 255–262.
- (2008) A probabilistic programming by demonstration framework handling constraints in joint space and task space. In International Conference on Intelligent Robots and Systems.
- (2018) Learning constraints from demonstrations. In Workshop on the Algorithmic Foundations of Robotics (WAFR).
- (2008) Z3: an efficient SMT solver. In Tools and Algorithms for the Construction and Analysis of Systems, pp. 337–340.
- (2017) Inverse KKT: learning cost functions of manipulation tasks from demonstrations. International Journal of Robotics Research 36 (13–14), pp. 1474–1488.
- (1964) When is a linear control system optimal? Journal of Basic Engineering 86 (1), pp. 51–60.
- (2011) Imputing a convex objective function. In International Symposium on Intelligent Control, pp. 613–619.
- (2011) An analysis of a variation of hit-and-run for uniform sampling from general regions. Transactions on Modeling and Computer Simulation.
- (2016) Learning object orientation constraints and guiding constraints for narrow passages from one demonstration. In International Symposium on Experimental Robotics.
- (2015) Learning null space projections. In International Conference on Robotics and Automation, pp. 2613–2619.
- (2017) Learning task constraints in operational space formulation. In International Conference on Robotics and Automation, pp. 309–315.
- (2016) Inferring and assisting with constraints in shared autonomy. In Conference on Decision and Control, pp. 6689–6696.
- (2018) Predictive modeling by infinite-horizon constrained inverse optimal control with application to a human manipulation task. CoRR abs/1812.11600.
- (2000) Algorithms for inverse reinforcement learning. In International Conference on Machine Learning, pp. 663–670.
- (2013) Learning robot skills through motion segmentation and constraints extraction. In International Conference on Human-Robot Interaction.
- (2017) C-LEARN: learning geometric constraints from demonstrations for multi-step manipulation in shared autonomy. In International Conference on Robotics and Automation.
- (2018) Multi-goal reinforcement learning: challenging robotics environments and request for research. CoRR abs/1802.09464.
- (2006) Maximum margin planning. In International Conference on Machine Learning, pp. 729–736.
- (2015) Quadrotor control: modeling, nonlinear control design, and simulation.
- Safe exploration for active learning with Gaussian processes. In European Conference on Machine Learning.
- (2018) SMC: satisfiability modulo convex programming. Proceedings of the IEEE 106 (9), pp. 1655–1679.
- (2018) Risk-sensitive inverse reinforcement learning via semi- and non-parametric methods. International Journal of Robotics Research 37 (13–14).
- (2016) Analysis II. Springer Texts and Readings in Mathematics, Springer Singapore.
- Safe exploration in finite Markov decision processes with Gaussian processes. In Neural Information Processing Systems, pp. 4305–4313.
- (2006) On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming 106 (1), pp. 25–57.
- (2011) Demonstration-guided motion planning. In International Symposium on Robotics Research.
Appendix A Detailed algorithm block
Appendix B Extraction of $\mathcal{G}_s$ and $\mathcal{G}_{\neg s}$
In this section, we discuss specific ways of extracting sets of guaranteed safe/unsafe states (the guaranteed safe set $\mathcal{G}_s$ and the guaranteed unsafe set $\mathcal{G}_{\neg s}$) for axis-aligned hyper-rectangle parameterizations (this method is used for all numerical examples in Section 6.2 and Section 6.3) and for convex parameterizations.
B.1 Axis-aligned hyper-rectangle parameterization
In this parameterization, $\theta = [\underline{\theta}^\top, \overline{\theta}^\top]^\top$ and $\mathcal{A}(\theta) = \{k \mid \underline{\theta} \le k \le \overline{\theta}\}$, where $\underline{\theta} \in \mathbb{R}^d$ and $\overline{\theta} \in \mathbb{R}^d$. Here, $\underline{\theta}_i$ and $\overline{\theta}_i$ are the lower and upper bounds of the hyper-rectangle for coordinate $i$.
As the set of axis-aligned hyper-rectangles is closed under intersection, $\mathcal{G}_{\neg s} = \bigcap_{\theta \in \mathcal{F}} \mathcal{A}(\theta)$ is also an axis-aligned hyper-rectangle; hence, the axis-aligned bounding box of any two constraint states in $\mathcal{G}_{\neg s}$ is also contained in $\mathcal{G}_{\neg s}$. This also implies that $\mathcal{G}_{\neg s}$ can be fully described by its top and bottom corners $\overline{k}$ and $\underline{k}$. Suppose we start with a known state $k \in \mathcal{G}_{\neg s}$. Then, finding $\overline{k}$ amounts to performing a binary search along each of the $d$ coordinates, and the same holds for finding $\underline{k}$.
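As an illustration, the corner-finding procedure can be sketched as follows. This is a minimal sketch, not the paper's implementation: `implied_unsafe` is a hypothetical oracle reporting whether a constraint state is marked unsafe by every feasible parameter, emulated here by a toy feasible set of two box parameters, and the search horizon `hi` is an assumed bound.

```python
import numpy as np

def expand_corner(k0, implied_unsafe, direction, hi=10.0, tol=1e-6):
    """Binary search from a known guaranteed-unsafe point k0 along `direction`
    for the boundary of the guaranteed-unsafe box. Assumes the boundary lies
    within distance `hi` of k0 along that direction."""
    lo_t, hi_t = 0.0, hi
    # Invariant: k0 + lo_t*direction is implied unsafe; k0 + hi_t*direction is not.
    while hi_t - lo_t > tol:
        mid = 0.5 * (lo_t + hi_t)
        if implied_unsafe(k0 + mid * direction):
            lo_t = mid
        else:
            hi_t = mid
    return k0 + lo_t * direction

def unsafe_box(k0, implied_unsafe, d):
    """Recover the bottom/top corners of the guaranteed-unsafe hyper-rectangle
    by searching each coordinate in both directions."""
    k0 = np.asarray(k0, dtype=float)
    top, bot = k0.copy(), k0.copy()
    for i in range(d):
        e = np.zeros(d); e[i] = 1.0
        top[i] = expand_corner(k0, implied_unsafe, e)[i]
        bot[i] = expand_corner(k0, implied_unsafe, -e)[i]
    return bot, top

# Toy feasible set of two box parameters; the implied-unsafe oracle is their
# intersection, here the box [0, 1] x [0, 2].
boxes = [((-0.5, 0.0), (1.0, 2.5)), ((0.0, -0.2), (1.5, 2.0))]
oracle = lambda k: all((np.array(lo) <= k).all() and (k <= np.array(hi)).all()
                       for lo, hi in boxes)
bot, top = unsafe_box([0.5, 1.0], oracle, 2)
```

Each coordinate requires only $O(\log(1/\mathrm{tol}))$ oracle queries, so the full box costs $2d$ binary searches.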
Recovering $\mathcal{G}_s$ is not as straightforward, as the complement of an axis-aligned box is not closed under intersection. While we can still solve Problem 4 to recover $\mathcal{G}_s$, an inner approximation $\tilde{\mathcal{G}}_s$ can be obtained more efficiently: starting at a constraint state $k \in \mathcal{G}_{\neg s}$, line searches can be performed to find the two points of transition to $\mathcal{G}_s$ along each constraint coordinate. Denote $\tilde{\mathcal{G}}_s$ as the complement of the axis-aligned bounding box of these $2d$ transition points; $\tilde{\mathcal{G}}_s$ is an inner approximation of $\mathcal{G}_s$, as $\mathcal{A}(\theta) \subseteq \mathrm{bb}(\{k^{i,\pm}\}_{i=1}^d)$ for every $\theta \in \mathcal{F}$, where $\mathrm{bb}(\cdot)$ denotes the axis-aligned bounding box of a set of points. For example, consider the scenario in Fig. B.1 where there are only two feasible parameters, $\theta_1$ and $\theta_2$. Here, $\mathcal{G}_s$ is $(\mathcal{A}(\theta_1) \cup \mathcal{A}(\theta_2))^c$ and $\tilde{\mathcal{G}}_s$ under-approximates the safe set ($\mathcal{G}_s$ is in general not representable as the complement of an axis-aligned box).
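The line-search construction of an inner approximation of the guaranteed safe set can be sketched in the same spirit. Again this is illustrative only: `guaranteed_safe` is a hypothetical oracle, emulated by a toy feasible set of two unsafe boxes, and `hi` is an assumed search horizon.

```python
import numpy as np

def safe_inner_approx(k0, guaranteed_safe, d, hi=10.0, tol=1e-6):
    """From a known guaranteed-unsafe point k0, binary-search each coordinate in
    both directions for the transition into the guaranteed-safe set. The
    complement of the axis-aligned bounding box of the 2d transition points is
    an inner approximation of the guaranteed safe set."""
    k0 = np.asarray(k0, dtype=float)
    lo_corner, hi_corner = k0.copy(), k0.copy()
    for i in range(d):
        e = np.zeros(d); e[i] = 1.0
        for sign, corner in ((1.0, hi_corner), (-1.0, lo_corner)):
            # Invariant: k0 + a*sign*e is not guaranteed safe; k0 + b*sign*e is.
            a, b = 0.0, hi
            while b - a > tol:
                m = 0.5 * (a + b)
                if guaranteed_safe(k0 + m * sign * e):
                    b = m
                else:
                    a = m
            corner[i] = (k0 + b * sign * e)[i]
    # Everything outside the box [lo_corner, hi_corner] is guaranteed safe.
    return lo_corner, hi_corner

# Toy feasible set of two unsafe boxes; guaranteed safe = outside their union.
boxes = [((0.0, 0.0), (1.0, 2.0)), ((-0.5, 0.5), (1.5, 1.5))]
safe = lambda k: all(not ((np.array(lo) <= k).all() and (k <= np.array(hi)).all())
                     for lo, hi in boxes)
lo_c, hi_c = safe_inner_approx([0.5, 1.0], safe, 2)
```

In this toy setup the returned box is the bounding box of the union of the two feasible unsafe boxes, so its complement is indeed safe everywhere.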
B.2 Convex parameterization
In this parameterization, $\mathcal{A}(\theta)$ is convex for each fixed $\theta$.
While it is hard to recover $\mathcal{G}_{\neg s}$ exactly apart from solving Problem 4, an inner approximation of $\mathcal{G}_{\neg s}$ can be extracted more efficiently by taking the convex hull of any states known to lie in $\mathcal{G}_{\neg s}$: since each $\mathcal{A}(\theta)$, $\theta \in \mathcal{F}$, is convex and contains these states, it also contains their convex hull, the minimal convex set containing them.
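The hull-based inner approximation reduces to testing hull membership, which can be done with a small feasibility LP. The helper `in_hull` and the sample points below are our own illustrative choices, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def in_hull(x, points):
    """Check whether x lies in the convex hull of `points` (one point per row)
    via LP feasibility: exists lam >= 0 with points.T @ lam = x, sum(lam) = 1."""
    n, d = points.shape
    A_eq = np.vstack([points.T, np.ones((1, n))])
    b_eq = np.append(np.asarray(x, dtype=float), 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.success

# States known to be guaranteed unsafe; their hull is also guaranteed unsafe,
# since every feasible A(theta) is convex and contains each of them.
unsafe_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
inside = in_hull([0.5, 0.5], unsafe_pts)   # interior point of the hull
outside = in_hull([1.5, 0.5], unsafe_pts)  # point outside the hull
```

The LP has one variable per hull point, so this scales to moderately large sets of certified-unsafe states without constructing the hull explicitly.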
The same approaches apply for recovering $\mathcal{G}_s$ when it is instead the safe set that is an axis-aligned hyper-rectangle or a convex set.
Appendix C Theoretical Analysis (Expanded)
In this section, we present theoretical analysis of our parametric constraint learning algorithm. In particular, we analyze the limits of which constraint states can be learned to be guaranteed unsafe/safe (Section C.1), as well as the conditions under which our algorithm is guaranteed to learn a conservative estimate of the safe and unsafe sets (Section C.2). For ease of reading, we repeat the theorem statements from the main body (the corresponding theorem numbers from the main body are listed in the theorem statements). For legibility, we develop the theory for the case where the constraint space coincides with the state space, but the results can be easily extended to the general case.
C.1 Learnability
In this section, we develop results for learnability of the unsafe set in the parametric case. We begin with the following notation:
Definition C.1 (Signed distance).
The signed distance from point $p \in \mathbb{R}^n$ to set $\mathcal{S} \subseteq \mathbb{R}^n$ is $\mathrm{sd}(p, \mathcal{S}) \doteq -\inf_{y \in \partial\mathcal{S}} \|p - y\|$ if $p \in \mathcal{S}$; $\mathrm{sd}(p, \mathcal{S}) \doteq \inf_{y \in \mathcal{S}} \|p - y\|$ if $p \notin \mathcal{S}$.
Definition C.2 ($\Delta x$-shell).
For a discrete time system satisfying $\|x_{t+1} - x_t\| \le \Delta x$ for all $t$, denote the $\Delta x$-shell of the unsafe set $\mathcal{A}$ as: $\mathrm{sh}(\mathcal{A}) \doteq \{k \in \mathcal{A} \mid \mathrm{sd}(k, \mathcal{A}) \ge -\Delta x\}$.
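For intuition, both quantities can be checked numerically in one dimension. The helpers below are illustrative only and assume the unsafe set is a closed interval $[lo, hi]$.

```python
def sd_interval(k, lo, hi):
    """Signed distance to the 1-D unsafe set A = [lo, hi]: negative (minus the
    distance to the boundary) inside A, positive (distance to A) outside."""
    if lo <= k <= hi:
        return -min(k - lo, hi - k)
    return lo - k if k < lo else k - hi

def in_shell(k, lo, hi, dx):
    """Delta-x shell: unsafe states whose signed distance is at least -dx,
    i.e. unsafe states within dx of the boundary of A."""
    return lo <= k <= hi and sd_interval(k, lo, hi) >= -dx

near_boundary = in_shell(0.05, 0.0, 1.0, 0.1)  # just inside A: in the shell
deep_inside = in_shell(0.5, 0.0, 1.0, 0.1)     # far from the boundary: not in the shell
```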
Definition C.3 (Implied unsafe set).
For some set of parameters $\hat{\Theta} \subseteq \Theta$, denote $\mathcal{I}(\hat{\Theta}) \doteq \bigcap_{\theta \in \hat{\Theta}} \mathcal{A}(\theta)$ as the set of states that are implied unsafe by restricting the parameter set to $\hat{\Theta}$. In words, $\mathcal{I}(\hat{\Theta})$ is the set of states which all $\theta \in \hat{\Theta}$ mark as unsafe.
Definition C.4 (Feasible set $\mathcal{F}$).
$\mathcal{F}$ is the set of parameters consistent with the data: $\theta \in \mathcal{F}$ if and only if every safe demonstration lies entirely in $\mathcal{A}(\theta)^c$ and every sampled unsafe trajectory contains at least one state in $\mathcal{A}(\theta)$.
Definition C.5 (Learnability and learnable set $\mathcal{G}_{\neg s}^L$).
A state $k$ is learnable if there exists any set of demonstrations and unsafe trajectories sampled using the hit-and-run method presented in Section 4.1, where the numbers of demonstrations and sampled trajectories may be infinite, such that $k \in \mathcal{G}_{\neg s}(\mathcal{F})$. Accordingly, we define the learnable set of unsafe states $\mathcal{G}_{\neg s}^L$ as the union of all learnable states. Note that by this definition, a safe state $k \in \mathcal{S}$ is always learnable to be safe, since there always exists some safe demonstration passing through $k$.
Suppose $\hat{\Theta}_1 \subseteq \hat{\Theta}_2$, for some other set $\hat{\Theta}_2 \subseteq \Theta$. Then, $\mathcal{I}(\hat{\Theta}_2) \subseteq \mathcal{I}(\hat{\Theta}_1)$.
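This monotonicity (adding parameters can only shrink the implied unsafe set, since $\mathcal{I}$ is an intersection) is easy to verify numerically on a grid; the boxes below are an arbitrary toy choice.

```python
import numpy as np

# I(Theta') = intersection over theta in Theta' of A(theta), evaluated on a grid.
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 81),
                            np.linspace(-2, 2, 81)), -1).reshape(-1, 2)
in_box = lambda k, lo, hi: np.all((lo <= k) & (k <= hi), axis=-1)

theta1 = [((-1.0, -1.0), (1.0, 1.0))]                  # one box parameter
theta2 = theta1 + [((-0.5, -2.0), (1.5, 2.0))]         # a superset of parameters

I1 = np.all([in_box(grid, np.array(lo), np.array(hi)) for lo, hi in theta1], axis=0)
I2 = np.all([in_box(grid, np.array(lo), np.array(hi)) for lo, hi in theta2], axis=0)
contained = bool(np.all(I2 <= I1))  # I(Theta2) is a subset of I(Theta1)
```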
Each unsafe trajectory $\xi$ with start and goal states in the safe set contains at least one state in the $\Delta x$-shell $\mathrm{sh}(\mathcal{A})$: $\exists t$ such that $\xi(t) \in \mathrm{sh}(\mathcal{A})$.
For each unsafe trajectory $\xi$ with start and goal states in the safe set, there exists a time $t$ such that $\xi(t) \in \mathcal{A}$. Further, if there exists $t$ such that $\xi(t) \in \mathcal{A}$, then there also exists $t'$ such that $\xi(t') \in \mathrm{sh}(\mathcal{A})$. For contradiction, suppose there exists a time $\tau$ for which $\xi(\tau) \in \mathcal{A} \setminus \mathrm{sh}(\mathcal{A})$ and for which $\xi(t) \notin \mathrm{sh}(\mathcal{A})$ for all $t < \tau$. But this implies $\xi(\tau - 1) \notin \mathcal{A}$ while $\mathrm{sd}(\xi(\tau), \mathcal{A}) < -\Delta x$, i.e. $\|\xi(\tau) - \xi(\tau - 1)\| > \Delta x$: to skip deeper than $\Delta x$ into the unsafe set without first entering the shell, the state must have changed by more than $\Delta x$ in a single time-step. Contradiction. An analogous argument holds for the continuous-time case. ∎
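The lemma can also be sanity-checked empirically in one dimension with random bounded-step trajectories (a toy setup of our own, not from the paper): every trajectory that starts in the safe region and enters the unsafe interval must visit the shell.

```python
import numpy as np

# Empirical check of the shell lemma in 1-D: any discrete trajectory with step
# size at most dx that starts outside A = [0.3, 0.7] and enters it must visit
# the shell {k in A : distance to the boundary of A is at most dx}.
rng = np.random.default_rng(0)
lo, hi, dx = 0.3, 0.7, 0.05

def hits_shell(traj):
    inside = (traj >= lo) & (traj <= hi)
    depth = np.minimum(traj - lo, hi - traj)  # distance to boundary, inside A
    return np.any(inside & (depth <= dx))

violations = 0
for _ in range(200):
    # Random walk starting from the safe state 0, steps bounded by dx.
    steps = rng.uniform(-dx, dx, size=400)
    traj = np.cumsum(steps)
    entered = np.any((traj >= lo) & (traj <= hi))
    if entered and not hits_shell(traj):
        violations += 1
print(violations)  # 0: every trajectory that enters A passes through the shell
```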
The following result states that in discrete time, the learnable set of unsafe states is contained in the set of states which must be implied unsafe by setting $\mathrm{sh}(\mathcal{A})$ as unsafe. Furthermore, in continuous time, the same holds, except that $\mathrm{sh}(\mathcal{A})$ is replaced by the boundary of the unsafe set, $\partial\mathcal{A}$.
Theorem C.1 (Discrete time learnability for parametric constraints).
For trajectories generated by discrete time systems, $\mathcal{G}_{\neg s}^L \subseteq \mathcal{I}(\hat{\Theta}_{\mathrm{sh}})$, where $\hat{\Theta}_{\mathrm{sh}} \doteq \{\theta \in \Theta \mid \mathrm{sh}(\mathcal{A}) \subseteq \mathcal{A}(\theta)\}$.
Corollary C.1 (Continuous-time learnability for parametric constraints).
For trajectories generated by continuous time systems, $\mathcal{G}_{\neg s}^L \subseteq \mathcal{I}(\hat{\Theta}_{\partial})$, where $\hat{\Theta}_{\partial} \doteq \{\theta \in \Theta \mid \partial\mathcal{A} \subseteq \mathcal{A}(\theta)\}$.
Since going from discrete time to continuous time implies $\Delta x \to 0$, $\mathrm{sh}(\mathcal{A}) \to \mathcal{A} \cap \partial\mathcal{A}$. Then, the logic from the proof of Theorem C.1 can be similarly applied to show the result. ∎