1 Introduction
Learning from demonstration is a powerful paradigm for enabling robots to perform complex tasks. Inverse optimal control and inverse reinforcement learning (IOC/IRL) methods [25, 1, 5, 21] have been used to learn a cost function that replicates the behavior of an expert demonstrator. However, planning problems generally also require knowledge of constraints, which define the states or trajectories that are safe. For example, to get a robot arm to efficiently transport a cup of coffee without spilling it, one can optimize a cost function describing the length of the path, subject to constraints on the pose of the end effector. Constraints can represent safety requirements more strictly than cost functions, especially in safety-critical situations: enforcing a hard constraint can enable the robot to guarantee safe behavior, as opposed to using a “softened” cost penalty term. Furthermore, learning a global constraint shared across many tasks can help the robot generalize. Consider the arm, which must avoid spilling the coffee regardless of where the cup starts or needs to go.

While constraints are important, it can be impractical to exhaustively program all the possible constraints a robot should obey across all tasks. Thus, we consider the problem of extracting the latent constraints within expert demonstrations that are shared across tasks. We adopt the insight of [10] that each safe, optimal demonstration induces a set of lower-cost trajectories that must be unsafe due to violation of an unknown constraint. As in [10], we sample these unsafe trajectories, ensuring that they are also consistent with the system dynamics, control constraints, and start/goal constraints. The unsafe trajectories are used together with the safe demonstrations in an “inverse” integer program that recovers an unsafe set consistent with the safe and unsafe trajectories.

We make the following additional contributions in this paper. First, by using a (potentially known) parameterization of the constraints, our method enables the inference of safe and unsafe sets in high-dimensional constraint spaces. Second, we relax the known-parameterization assumption and propose a means to incrementally grow a parameterization with the data. Third, we introduce a method for extracting volumes of states which are guaranteed safe or guaranteed unsafe according to the data and parameterization. Fourth, we provide theoretical analysis showing that our method is guaranteed to output conservative estimates of the unsafe and safe sets under mild assumptions. Finally, we evaluate our method on high-dimensional constraints for high-dimensional systems by learning constraints for 7-DOF arm, quadrotor, and planar pushing examples, showing that our method outperforms baseline approaches.
2 Related Work
Inverse optimal control (IOC) [13, 14] and inverse reinforcement learning (IRL) [21] aim to recover an objective function that replicates provided expert demonstrations when optimized. Our method is complementary to these approaches: if the demonstrator solves a constrained optimization problem, we find its constraints, given the cost; IOC/IRL finds the cost, given the constraints [12]. Risk-sensitive IRL [29] is also complementary to our work, which learns hard constraints. Similarly, [4] learns a state-space constraint shared across tasks as a penalty term in the reward function of an MDP. However, when representing a constraint as a penalty, it is unclear whether a demonstrated action was taken to avoid a penalty or to improve the trajectory cost in terms of the true cost function (or both). Thus, learning a penalty that generalizes across cost functions becomes difficult. To avoid this, we assume a known cost function and explicitly reason about the constraint. Also relevant is safe reinforcement learning, which aims to perform exploration while minimizing visitation of unsafe states. Several methods [27, 31, 2] use Gaussian process models to incrementally explore safe regions in the state space. We take a complementary approach to safe learning by using demonstrations in place of exploration to guide the learning of safe behaviors. Methods exist for learning geometric state space constraints [6, 23], task space equality constraints [18, 17], and convex constraints [20]; our algorithm generalizes these by learning arbitrary non-convex parametric inequality constraints defined in some constraint space (not limited to the state space). Other methods aim to learn local trajectory-based constraints [16, 19, 22, 33, 9, 8] by reasoning over the constraints within a single trajectory or task. In contrast, our method aims to learn a global constraint shared across tasks.
The method closest to our work is [10], which learns a global shared constraint on a gridded constraint space; hence, the resulting constraint recovery method scales exponentially with the constraint space dimension and cannot exploit any side information on the structure of the constraint. This often leads to very conservative estimates of the unsafe set, and only grid cells visited by demonstrations can be learned guaranteed safe. We overcome these shortcomings with a novel algorithm that exploits constraint parameterizations for scalability and integration of prior knowledge, and that also enables learning volumes of guaranteed safe/unsafe states in the original non-discretized constraint space, yielding less conservative estimates of the safe/unsafe sets under weaker assumptions than [10].
3 Problem Setup
Consider a system with discrete-time or continuous-time dynamics. The system performs tasks, represented as constrained optimization problems over state and control trajectories in their respective trajectory spaces:
Problem 1 (Forward problem / “task”).
(1) 
where the cost function for each task is given, and a known mapping takes state-control trajectories to a constraint space, elements of which are referred to as “constraint states”. Known mappings also take trajectories to constraint spaces containing a known shared safe set and a known task-dependent safe set, respectively. In this paper, we take these known safe sets to encode start/goal state constraints and dynamical feasibility under control constraints, though the dynamics may not be known in closed form. The unknown safe set is defined by an unknown parameter and a possibly unknown parameterization. A demonstration is a state-control trajectory which approximately solves Problem 1, i.e. it satisfies all constraints and its cost is at most a known factor above the cost of a globally optimal solution. For convenience, we summarize our frequently used notation in Appendix F. In this paper, our goal is to recover the unknown safe set and its complement, the unsafe set, given demonstrations, inferred unsafe trajectories, the cost function, task-dependent constraints, and a simulator generating dynamically feasible trajectories satisfying control constraints.
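For intuition, a toy instance of the forward problem can be sketched: a kinematic 2-D point minimizing path length on a grid while avoiding a single axis-aligned box standing in for the unknown unsafe set. The shortest-path search below is an illustrative stand-in for the demonstrator's trajectory optimizer, not the paper's setup; `plan` and its grid discretization are our own constructs.

```python
import heapq

def plan(start, goal, box_lo, box_hi, n=20):
    """Minimum-length 8-connected grid path from start to goal that
    avoids the axis-aligned box [box_lo, box_hi] (Dijkstra search)."""
    def safe(p):
        return not (box_lo[0] <= p[0] <= box_hi[0]
                    and box_lo[1] <= p[1] <= box_hi[1])
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, p, path = heapq.heappop(frontier)
        if p == goal:
            return path
        if p in seen:
            continue
        seen.add(p)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                q = (p[0] + dx, p[1] + dy)
                if 0 <= q[0] < n and 0 <= q[1] < n and safe(q) and q not in seen:
                    step = (dx * dx + dy * dy) ** 0.5
                    heapq.heappush(frontier, (cost + step, q, path + [q]))
    return None

# A "demonstration": an optimal path skirting the box, which the learner
# only observes as a safe, minimum-cost trajectory.
demo = plan((0, 5), (10, 5), (4, 3), (6, 7))
```

Any lower-cost trajectory between the same endpoints would have to cut through the box, which is exactly the insight the sampling step below exploits.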
4 Method
In this section, we describe our method (a full algorithm block is presented in Appendix A). In Section 4.1, we describe how to sample unsafe trajectories. In Sections 4.2 and 4.3, we present mixed integer programs which recover a consistent constraint for a fixed parameterization and extract volumes of guaranteed safe/unsafe states. In Section 4.4, we present how our method can be extended to the case of unknown parameterizations.
4.1 Sampling lowercost trajectories
In this section, we describe the general sampling framework presented in [10], while also relaxing that work's assumption of known closed-form dynamics. The set of unsafe state-control trajectories induced by an optimal, safe demonstration is the set of lower-cost state-control trajectories that obey the known constraints. We sample from this set to obtain lower-cost trajectories obeying the known constraints using hit-and-run sampling [15], a method guaranteeing convergence to a uniform distribution of samples over the set in the limit; an illustration is shown in Fig. 4.1. Hit-and-run starts from an initial point within the set, chooses a direction uniformly at random, moves a random amount in that direction such that the new point remains within the set, and repeats [10]. We sample from this set indirectly by sampling control sequences and rolling them out through the dynamics to generate dynamically feasible trajectories. We emphasize that the dynamics do not need to be known in closed form: given a control sequence sampled by hit-and-run, a simulator can instead be used to output the resulting dynamically feasible trajectory, which can then be checked for membership in the unsafe set exactly as if the dynamics were known in closed form. Suboptimality of the demonstration can also be handled in this framework by enlarging the cost threshold below which sampled trajectories are deemed unsafe. Optimal substructure in the cost function can be exploited to sample unsafe sub-trajectories over shorter time windows on the demonstrations; shorter unsafe trajectories provide less ambiguous information and can better reduce the ill-posedness of the constraint recovery problem [10].

4.2 Recovering the constraint
Recall that the unsafe set can be described by some parameterization, where we assume for now that the parameterization is known and its parameters are to be learned. Intuitively, the parameterization tells us whether a constraint state (any element of the constraint space) is safe according to a given parameter. A feasibility problem can then be written to find a parameter consistent with the data:
Problem 2 (Parametric constraint recovery problem).
find a parameter of the unsafe set
s.t. (2a) each safe constraint state lies outside the unsafe set,
(2b) at least one constraint state on each unsafe trajectory lies inside the unsafe set.
Constraint (2a) enforces that each safe constraint state lies outside the unsafe set, and constraint (2b) enforces that at least one constraint state on each unsafe trajectory lies inside it. Denote the feasible set of Problem 2, and the sets of constraint states which are learned guaranteed unsafe and guaranteed safe, respectively; that is, a constraint state belongs to one of these sets if it is classified unsafe or safe for every feasible parameter. In Problem 2, it is possible to learn that a constraint state is guaranteed safe/unsafe even if it does not lie directly on a demonstration/unsafe trajectory. This is due to the parameterization: for the given set of safe and unsafe trajectories, there may be no feasible parameter under which that state is classified unsafe/safe. It is precisely this extrapolation which enables us to learn constraints in high-dimensional spaces. We now identify classes of parameterizations for which Problem 2 can be efficiently solved:
Problem 3 (Polytopic constraint recovery problem).
find a polytope parameter and Boolean assignments
s.t. (5a) big-M constraints enforcing that each safe constraint state lies outside the polytope,
(5b) big-M constraints enforcing that at least one constraint state on each unsafe trajectory lies inside the polytope.
Linear case: the unsafe set is defined by a Boolean conjunction of linear inequalities, i.e. it can be defined as the union and intersection of half-spaces. For this case, mixed-integer programming can be employed. If the unsafe set is a single polytope whose defining matrices are affine in the parameter, we can solve Problem 3, a mixed-integer feasibility problem, to find a feasible parameter. In Problem 3, the big-M constant is a large positive number, and the vector of ones has length equal to the number of rows of the polytope's constraint matrix. Constraints (5a) and (5b) use the big-M formulation [7] to enforce that each safe constraint state lies outside the unsafe set and that at least one constraint state on each unsafe trajectory lies inside it. Similar problems can be solved when the safe/unsafe set can be described by unions of polytopes. As an alternative to integer programming, satisfiability modulo theories (SMT) solvers [11] can also be used to solve Problem 2 if the unsafe set is defined by a Boolean conjunction of linear inequalities.

Convex case: the unsafe set is defined by a Boolean conjunction of convex inequalities, i.e. it can be described as the union and intersection of convex sets. For this case, satisfiability modulo convex optimization (SMC) [28] can be employed to find a feasible parameter.

- For suboptimal demonstrations or imperfect lower-cost trajectory sampling, Problem 3 can become infeasible. To address this, slack variables can be introduced: replace each unsafe-trajectory constraint with a slacked version and change the feasibility problem to minimization of the total slack; this finds a parameter that is consistent with as many unsafe trajectories as possible.
- In addition to recovering sets of guaranteed learned unsafe and safe constraint states, a probability distribution over possibly unsafe constraint states can be estimated by sampling unsafe sets from the feasible set of Problem 2 using hit-and-run sampling, starting from a feasible parameter.
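As a concrete illustration of Problem 2's consistency conditions for an axis-aligned box parameterization, the sketch below enumerates candidate boxes by brute force in place of the mixed-integer solver; `feasible_boxes`, `consistent`, and the corner grid are our own illustrative constructs, not the paper's implementation.

```python
import itertools
import numpy as np

def consistent(lo, hi, safe_pts, unsafe_trajs):
    """A box [lo, hi] is consistent with the data iff no safe constraint
    state lies inside it and every unsafe trajectory has at least one
    constraint state inside it (the conditions of Problem 2)."""
    inside = lambda p: bool(np.all(p >= lo) and np.all(p <= hi))
    return (not any(inside(p) for p in safe_pts)
            and all(any(inside(p) for p in traj) for traj in unsafe_trajs))

def feasible_boxes(safe_pts, unsafe_trajs, grid):
    """Enumerate consistent axis-aligned boxes whose corners lie on a
    shared 1-D grid of candidate coordinates (brute force in place of
    the mixed-integer big-M formulation)."""
    dim = len(safe_pts[0])
    boxes = []
    for corners in itertools.product(grid, repeat=2 * dim):
        lo, hi = np.array(corners[:dim]), np.array(corners[dim:])
        if np.all(lo <= hi) and consistent(lo, hi, safe_pts, unsafe_trajs):
            boxes.append((lo, hi))
    return boxes

# 1-D toy data: safe states at 0 and 1, one unsafe trajectory through 0.5.
safe_pts = [np.array([0.0]), np.array([1.0])]
unsafe_trajs = [[np.array([0.5])]]
boxes = feasible_boxes(safe_pts, unsafe_trajs, [0.0, 0.25, 0.5, 0.75, 1.0])
```

Every returned box must contain 0.5 and exclude 0 and 1; the spread of the returned boxes is precisely the ambiguity that the guaranteed safe/unsafe sets of the next subsection quantify.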
4.3 Extracting guaranteed safe and unsafe states
One can check whether a constraint state is guaranteed safe or guaranteed unsafe by adding a constraint forcing it to be unsafe (or safe) to Problem 2 and checking feasibility of the resulting program; if the program is infeasible, the state is guaranteed safe (or unsafe). In other words, solving this modified integer program can be seen as querying an oracle about the safety of a constraint state. The oracle can then return that the state is guaranteed safe (program infeasible after forcing it to be unsafe), guaranteed unsafe (program infeasible after forcing it to be safe), or unsure (program remains feasible despite forcing it to be safe or unsafe).

Unlike the gridded formulation in [10], Problem 2 works in the continuous constraint space. Thus, it is not possible to exhaustively check each constraint state. To address this, the neighborhood of a constraint state can be checked for membership in the guaranteed sets by solving the following problem:
Problem 4 (Volume extraction).
In words, Problem 4 finds the smallest hypercube centered at the queried constraint state that contains a counterexample; thus, any smaller hypercube is contained within the guaranteed safe set. We can write a similar problem to check the neighborhood of a state for membership in the guaranteed unsafe set. For some common parameterizations (axis-aligned hyperrectangles, convex sets), there are even more efficient methods for recovering subsets of the guaranteed safe and unsafe sets, which are described in Appendix B. Volumes of safe/unsafe space can thus be produced by repeatedly solving Problem 4 for different query states, and these volumes can be passed to a planner to generate new trajectories that are guaranteed safe.
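The oracle view of this subsection can be mimicked for the box case by classifying a query against every parameter consistent with the data. The minimal sketch below assumes the feasible boxes have already been enumerated (e.g. by a solver); `query` and the example boxes are illustrative, not the paper's code.

```python
def query(x, boxes):
    """Classify constraint state x against every parameter consistent
    with the data: guaranteed safe if no consistent box contains it,
    guaranteed unsafe if every consistent box contains it."""
    inside = [all(l <= xi <= h for xi, l, h in zip(x, lo, hi))
              for lo, hi in boxes]
    if not any(inside):
        return "guaranteed safe"
    if all(inside):
        return "guaranteed unsafe"
    return "unknown"

# Hypothetical feasible set: two consistent 1-D boxes.
boxes = [([0.2], [0.8]), ([0.3], [0.7])]
```

For instance, a state inside both boxes is guaranteed unsafe, one outside both is guaranteed safe, and one inside only the wider box is ambiguous; Problem 4 then grows hypercubes around such queries to extract whole volumes rather than single points.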
4.4 Unknown parameterizations
For many realistic applications, we do not have access to a known parameterization which can represent the unsafe set. Despite this, complex unsafe/safe sets can often be approximated as the union of many simple unsafe/safe sets. Along this line of thought, we present a method for incrementally growing a parameterization based on the complexity of the demonstrations and unsafe trajectories.
Suppose that the true parameterization of the unsafe set is unknown but can be exactly or approximately expressed as the union of simple sets, where each simple set has a known parameterization and the minimum number of simple sets needed to reconstruct the unsafe set is unknown.

A lower bound on this minimum number can be estimated by incrementally adding simple sets until Problem 2 becomes feasible. However, with fewer simple sets than the true minimum, the extracted guaranteed unsafe and safe sets are not guaranteed to be conservative estimates of the true sets (Theorem 5.3); they are only guaranteed conservative if the chosen number of simple sets is at least the true minimum (see Theorem 5.2). Unfortunately, inferring a guaranteed overestimate of the minimum number only from data is not possible, as there can always be subsets of the constraint which are not activated by the given demonstrations. Two facts mitigate this:

- If an upper bound on the number of simple sets needed to describe the unsafe set is known (where this bound can be trivially loose), solving Problem 2 with that many simple sets makes the extracted guaranteed safe and unsafe sets conservative (see Theorem 5.2), at the cost of those sets being potentially small.
- As the demonstrations begin to cover the space, the estimated lower bound approaches the true minimum number of simple sets. Hence, by using the estimated number of simple sets, the extracted guaranteed sets are asymptotically conservative.
In our experiments, we choose our simple sets as axis-aligned hyperrectangles, motivated by two facts: 1) any open set in Euclidean space can be approximated as a countable/finite union of open axis-aligned hyperrectangles [30]; 2) unions of hyperrectangles are easily representable in Problem 3.
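As a toy illustration of estimating the lower bound on the number of simple sets, consider the 1-D special case where unsafe trajectories are single points and simple sets are intervals; the greedy sweep below (an illustrative construction under these assumptions, not the paper's mixed-integer procedure) gives the minimum number of intervals consistent with the data.

```python
def min_intervals(safe_pts, unsafe_pts):
    """1-D illustration: minimum number of intervals that cover every
    unsafe point while excluding every safe point. Sweep the sorted
    unsafe points; start a new interval whenever extending the current
    one would swallow a safe point."""
    count, lo = 0, None
    for p in sorted(unsafe_pts):
        if lo is None or any(lo <= s <= p for s in safe_pts):
            count, lo = count + 1, p
    return count
```

For example, with safe points at 0, 0.5, and 1 and unsafe points at 0.2, 0.3, and 0.7, two intervals suffice: one covering 0.2-0.3 and one covering 0.7, mirroring how the parameterization grows only when the data demands it.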
5 Theoretical Analysis
In this section, we present theoretical analysis of our parametric constraint learning algorithm. In particular, we analyze the conditions under which the algorithm is guaranteed to learn a conservative estimate of the safe and unsafe sets. For space, the proofs and additional results on conservativeness (Section C.2) and the learnability of a constraint (Section C.1) are presented in the appendix. We develop the theory for a representative case for legibility, but the results can be easily extended to the general setting.
Theorem 5.1 (Conservativeness: Known parameterization).
Suppose the parameterization is known exactly. Then, for a discrete-time system, extracting the guaranteed unsafe and safe sets (as defined in (3) and (4), respectively) from the feasible set of Problem 2 returns conservative estimates of the true unsafe and safe sets. Further, if the known parameterization is polytopic and the big-M constant in Problem 3 is chosen sufficiently large, then extracting the guaranteed unsafe and safe sets from the feasible set of Problem 3 recovers the same guarantees.
We also present conservativeness results for continuous-time dynamics in Corollary C.2.
Now, let us consider the case where the true parameterization is not known and we use the incremental method described in Section 4.4 with a simple parameterization. We consider the overparameterized case (Theorem 5.2) and the underparameterized case (Theorem 5.3). We analyze the case where the true, under-, and over-parameterizations are defined respectively as:
(6) 
(7) 
Theorem 5.2 (Conservativeness: Overparameterization).
Theorem 5.3 (Conservativeness: Underparameterization).
Suppose the true parameterization and underparameterization are defined as in (6) and (7). Furthermore, assume that we incrementally grow the parameterization as described in Section 4.4. Then, the following are true:

- The extracted guaranteed unsafe and safe sets are not guaranteed to be contained in the true unsafe set and safe set, respectively.
- Each recovered simple unsafe set touches the true unsafe set; that is, there are no spurious simple unsafe sets (with the number of simple sets as defined in Section 4.4).
6 Results
We evaluate our method, showing that it can be applied to constraints with unknown parameterizations (Section 6.1), high-dimensional constraints defined for high-dimensional systems (Section 6.2), and settings where the dynamics are not known in closed form (Section 6.3). We also compare our performance with a neural network (NN) baseline.¹ We further compare with the grid-based method [10] on the 2D examples. For space, experimental details are provided in Appendix E.

¹In all experiments, 1) the NN is trained with the safe/unsafe trajectories and predicts at test time if a queried constraint state is safe/unsafe; 2) error bars are generated by initializing the NN with 10 different random seeds and evaluating accuracy after training. The architectures/training details are presented in Appendix E.

6.1 Unknown parameterization
U-shape: We first present a kinematic 2D example where a U-shape is to be learned, but the number of simple unsafe sets needed to represent it (three) is unknown. In Row 1, Column 1 of Fig. 6.1, we outline the true unsafe set in black and overlay the learned sets and the six provided demonstrations, synthetically generated via trajectory optimization. We note that due to the chosen control constraints and U-shape, there are parts of the unsafe set (a subset of the white region in Fig. 6.1, Row 1, Column 1) which cannot be implied unsafe by sampled unsafe trajectories and the parameterization (see Theorem C.1). As a result, the learned unsafe set may not fully cover the true one, even with more demonstrations (Fig. 6.1, Row 1, Column 3). Note that the decrease in coverage² at the third demonstration is due to an increase from a two-box parameterization to a three-box parameterization. Likewise, the accuracy³ decreases at the second demonstration due to over-approximation of the unsafe set with two boxes (Fig. 6.1, Row 1, Column 4), but this over-approximation vanishes when switching to the three-box parameterization (which is exact; hence the guaranteed sets are conservative, c.f. Theorem 5.1). The grid-based method in [10] always has perfect accuracy, since it does not extrapolate beyond the observed trajectories; however, as a result, it also yields low coverage (Fig. 6.1, Row 1, Column 2). The NN baseline achieves lower accuracy for the unsafe set, as it misclassifies some corners of the U. Recovering a feasible parameter using a multi-box variant of Problem 3 recovers the unsafe set exactly (Fig. 6.1, Row 1, Column 5). Finally, we note that in this (and future) examples, demonstrations were specifically chosen to be informative about the constraint. We present a version of this example in Appendix D with random demonstrations and show that the constraint is still learned (albeit needing more demonstrations).

²Coverage is measured as the intersection over union (IoU) of the relevant sets (see legends for exact formula).
³In all experiments, accuracies for IP (safe/unsafe) and NN (safe/unsafe) are computed over sampled query states using an indicator of correct classification. Note that NN accuracy is computed only on the corresponding guaranteed sets.
Infinite boxes: To show that our method can still learn a constraint that cannot be exactly expressed using a chosen parameterization, we limit our parameterization to an unknown number of axis-aligned boxes and attempt to learn a diagonal “I” unsafe set (see Fig. 6.1, Row 2). This is a particularly difficult example, since an infinite number of axis-aligned boxes would be needed to recover the constraint exactly. However, for finite data, only a finite number of boxes is needed; in particular, for 1, 2, 3, and 4 demonstrations (which are synthetically generated assuming kinematic system constraints), 3, 5, 6, and 6 boxes are required to generate a parameterization consistent with the data (see Fig. 6.1, Row 2, Column 1). Also overlaid in Fig. 6.1, Row 2, Column 1 are the guaranteed safe and unsafe sets, which are approximated by solving Problem 4 for randomly sampled query states. Compared to the gridded formulation in [10] (see Fig. 6.1, Row 2, Column 3), the guaranteed sets cover the true safe and unsafe sets far better, due to the parameterization enabling the IP to extrapolate more from the demonstrations. Furthermore, we note that while the gridded case has perfect accuracy for the safe set, it does not for the unsafe set, due to grid alignment [10]. Overall, the multi-box variant of Problem 3 recovers the constraint well (Fig. 6.1, Row 2, Column 5), and the remaining gap can be improved with more data. Last, we note that the NN baseline reaches comparable accuracies here (Fig. 6.1, Row 2, Column 4), since our method suffers from a few disadvantages in this particular example. First, attempting to represent the “I” with a finite number of boxes introduces a modeling bias that the NN does not have. Second, since the system is kinematic and the constraint is low-dimensional, many unsafe trajectories can be sampled, providing good coverage of the unsafe set. We show later that for higher-dimensional constraints and systems with highly constrained dynamics, it becomes difficult to gather enough data for the NN to perform well.
6.2 Highdimensional examples
6D pose constraint for a 7-DOF robot arm: In this example, we learn a 6D hyperrectangular pose constraint for the end effector of a 7-DOF Kuka iiwa arm. One such setting is when the robot is to bring a cup to a human while ensuring that its contents do not spill (angle constraint) and that proxemics constraints are satisfied, i.e. the end effector never gets too close to the human (position constraint). We examine this problem for the cases of optimal and suboptimal demonstrations.
Demonstration setup: The end effector orientation (parameterized in Euler angles) and position are constrained to lie within the safe set (see Fig. 6.2, Column 1). For the optimal case, we synthetically generate seven demonstrations minimizing joint-space trajectory length. For the suboptimal case, five suboptimal continuous-time demonstrations approximately optimizing joint-space trajectory length are recorded in a virtual reality environment, where a human demonstrator moves the arm from desired start to goal end effector configurations using an HTC Vive (see Fig. E.1). The demonstrations are time-discretized for lower-cost trajectory sampling [10]. In both cases, the constraint is recovered with Problem 3. For the suboptimal case, slack variables are added to ensure feasibility of Problem 3, and for each suboptimal demonstration, we only use trajectories whose cost is sufficiently below the demonstration's cost as unsafe trajectories.
Results: The coverage plots (Fig. 6.2, Rows 1 and 3, Col. 2) show that as the number of demonstrations increases, the learned safe/unsafe sets approach the true safe/unsafe sets.⁴ For the suboptimal case, the low IoU values at lower numbers of demonstrations are due to over-approximation of the unsafe set (arising from continuous-time discretization and imperfect knowledge of the suboptimality bound); the fifth demonstration greatly reduces this over-approximation. The accuracy plots (Fig. 6.2, Rows 2 and 4, Col. 2) present results consistent with the theory: for the optimal case, all constraint states in the guaranteed sets are truly safe and unsafe (Theorem 5.1), and the small over-approximation for the suboptimal case is consistent with the continuous-time conservativeness result (Theorem C.2). Note that the NN accuracy is lower and can oscillate with demonstrations, since it finds just a single constraint which is approximately consistent with the data, while our method classifies safety by consulting all possible constraints which are exactly consistent with the data, thus performing more consistently. The NN performs better on the suboptimal case than on the optimal case, as more unsafe trajectories are sampled due to the suboptimality, improving coverage of the unsafe set. The projections of the guaranteed safe set (Fig. 6.2, Cols. 3-4, in red), obtained using the method in Appendix B, are compared to the true safe set (blue outline), showing that the two match nearly exactly (though the gap for the suboptimal case is larger, it can likely be reduced with more demonstrations). The projections of the recovered set (Fig. 6.2, Col. 5) match exactly for the optimal case (true safe set outlined in blue) and match closely for the suboptimal case. Note that this is the case for all axis-aligned box parameterizations.

⁴For the unsafe sets, the IoUs are computed between the complements of the learned and true unsafe sets, as in high dimensions the IoU changes more smoothly for the complements; we plot the former for visual clarity.
3D constraint for a 12D quadrotor model: We learn a 3D box angular velocity constraint for a quadrotor with discrete-time 12D dynamics (see Appendix E for details). In this scenario, the quadrotor must avoid an a priori known unsafe set in position space while also ensuring that its angular velocities remain below a threshold. The safe set is to be inferred from two demonstrations (see Fig. 6.3). The constraint is recovered with Problem 3. Fig. 6.4 shows that with more demonstrations, the learned safe and unsafe sets approach the true safe and unsafe sets, respectively. Consistent with Theorem 5.1, our method has perfect accuracy on the guaranteed sets. Here, the NN struggles more than in the arm examples since, due to the more constrained dynamics, fewer unsafe trajectories can be sampled, and a parameterization needs to be leveraged in order to say more about the unsafe set. The remaining columns of Fig. 6.4 show that we recover the safe and unsafe sets exactly (the true safe set is outlined in blue).
6.3 Planar pushing example
In this section, using the FetchPush-v1 environment in OpenAI Gym [24], we aim to learn a 2D box unsafe set on the center of mass (CoM) of a block pushed by the Fetch arm (see Fig. 6.5), using two demonstrations. Here, the dynamics of the block CoM are not known in closed form, but rollouts can still be sampled using the simulator. Since the block CoM is highly underactuated, it is not possible to sample short sub-trajectories; thus, without leveraging a parameterization, the constraint recovery problem is very ill-posed. Furthermore, while our method can explicitly reason about the unsafeness of longer unsafe trajectories (at least one state is unsafe), the NN struggles with this example, as it fails to accurately model that fact. Overall, Fig. 6.5 shows that the learned sets match up well with the true safe and unsafe sets, and our classification accuracy for safeness/unsafeness is perfect across demonstrations.
7 Discussion and Conclusion
In this paper, we present a method capable of learning parametric constraints in high-dimensional spaces with and without known parameterizations. We also present a method for extracting volumes of guaranteed safe and guaranteed unsafe states, information which can be directly used in a planner to enforce safety constraints. We analyze our algorithm, showing that these recovered guaranteed safe/unsafe states are truly safe/unsafe under mild assumptions. We evaluate the method by learning a variety of constraints defined in high-dimensional spaces for systems with high-dimensional dynamics. One shortcoming of our work is scalability with the amount of data, due to the number of integer variables growing linearly with the number of safe/unsafe trajectories. As a result, learning constraints without extensive sampling of unsafe trajectories is a direction of future work.
This work was supported in part by a National Defense Science and Engineering Graduate (NDSEG) Fellowship, Office of Naval Research (ONR) grants N00014-18-1-2501 and N00014-17-1-2050, and National Science Foundation (NSF) grants ECCS-1553873 and IIS-1750489.
References
 [1] (2004) Apprenticeship learning via inverse reinforcement learning. See DBLP:conf/icml/2004, External Links: Document Cited by: §1.
 [2] (2014-12) Reachability-based safe learning with gaussian processes. In Conference on Decision and Control. External Links: Document, ISSN 0191-2216 Cited by: §2.
 [3] (2016) Thickness control in structural optimization via a level set method. Struct. and Multidisciplinary Optimization. External Links: Link Cited by: Definition C.6.
 [4] (2017) Repeated inverse reinforcement learning. In Neural Information Processing Systems, pp. 1813–1822. Cited by: §2.
 [5] (2009) A survey of robot learning from demonstration. Robotics and Autonomous Systems 57 (5), pp. 469–483. Cited by: §1.
 [6] (2017) Efficient learning of constraints and generic null space policies. In International Conference on Robotics and Automation , pp. 1520–1526. Cited by: §2.
 [7] (1997) Introduction to linear optimization. 1st edition, Athena Scientific. External Links: ISBN 1886529191 Cited by: §4.2.
 [8] (2007) Incremental learning of gestures by imitation in a humanoid robot. See DBLP:conf/hri/2007, pp. 255–262. External Links: Document Cited by: §2.
 [9] (2008) A probabilistic programming by demonstration framework handling constraints in joint space and task space. See DBLP:conf/iros/2008, External Links: Document Cited by: §2.
 [10] (2018) Learning constraints from demonstrations. Workshop on the Algorithmic Foundations of Robotics (WAFR). External Links: Link, 1812.07084 Cited by: §C.2, §1, §2, §4.1, §4.3, Figure 6.1, §6.1, §6.1, §6.2, §6.
 [11] (2008) Z3: an efficient SMT solver. See DBLP:conf/tacas/2008, pp. 337–340. External Links: Link, Document Cited by: §4.2.
 [12] (2017) Inverse KKT: learning cost functions of manipulation tasks from demonstrations. International Journal of Robotics Research 36 (13-14), pp. 1474–1488. External Links: Document Cited by: §2.
 [13] (1964-03-01) When is a linear control system optimal?. Journal of Basic Engineering 86 (1), pp. 51–60. External Links: ISSN 0098-2202, Document Cited by: §2.
 [14] (2011) Imputing a convex objective function. In International Symposium on Intelligent Control, pp. 613–619. Cited by: §2.
 [15] (2011) An analysis of a variation of hit-and-run for uniform sampling from general regions. Transactions on Modeling and Computer Simulation. Cited by: §4.1.
 [16] (2016) Learning object orientation constraints and guiding constraints for narrow passages from one demonstration. In International Symposium on Experimental Robotics, Cited by: §2.
 [17] (2015) Learning null space projections. In IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 2630 May, 2015, pp. 2613–2619. Cited by: §2.
 [18] (2017) Learning task constraints in operational space formulation. In 2017 IEEE International Conference on Robotics and Automation, ICRA 2017, Singapore, Singapore, May 29  June 3, 2017, pp. 309–315. Cited by: §2.
 [19] (201612) Inferring and assisting with constraints in shared autonomy. In (Conference on Decision and Control), Vol. , pp. 6689–6696. External Links: Document, ISSN Cited by: §2.
 [20] (2018) Predictive modeling by infinitehorizon constrained inverse optimal control with application to a human manipulation task. CoRR abs/1812.11600. External Links: Link, 1812.11600 Cited by: §2.

 [21] (2000) Algorithms for inverse reinforcement learning. In International Conference on Machine Learning (ICML), San Francisco, CA, USA, pp. 663–670.
 [22] (2013) Learning robot skills through motion segmentation and constraints extraction. International Conference on Human-Robot Interaction (HRI).
 [23] (2017) C-LEARN: learning geometric constraints from demonstrations for multi-step manipulation in shared autonomy. In International Conference on Robotics and Automation (ICRA).
 [24] (2018) Multi-goal reinforcement learning: challenging robotics environments and request for research. CoRR abs/1802.09464.
 [25] (2006) Maximum margin planning. In International Conference on Machine Learning (ICML), pp. 729–736.
 [26] (2015) Quadrotor control: modeling, nonlinear control design, and simulation.

 [27] (2015) Safe exploration for active learning with Gaussian processes. In European Conference on Machine Learning (ECML).
 [28] (2018) SMC: satisfiability modulo convex programming. Proceedings of the IEEE 106 (9), pp. 1655–1679.
 [29] (2018) Risk-sensitive inverse reinforcement learning via semi- and non-parametric methods. International Journal of Robotics Research 37 (13–14).
 [30] (2016) Analysis II. Springer Texts and Readings in Mathematics, Springer Singapore.

 [31] (2016) Safe exploration in finite Markov decision processes with Gaussian processes. In Neural Information Processing Systems (NIPS), pp. 4305–4313.
 [32] (2006) On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming 106 (1), pp. 25–57.
 [33] (2011) Demonstration-guided motion planning. In International Symposium on Robotics Research (ISRR).
Appendix A Detailed algorithm block
Appendix B Extraction of $\mathcal{G}_s$ and $\mathcal{G}_{\neg s}$
In this section, we discuss specific ways of extracting sets of guaranteed safe/unsafe states for axis-aligned hyperrectangles (this method is used for all numerical examples in Sections 6.2 and 6.3) and for convex parameterizations.
B.1 Axis-aligned hyperrectangle parameterization
In this parameterization, the unsafe set is an axis-aligned hyperrectangle $\mathcal{A}(\theta) = \{c \mid \underline{c} \le c \le \bar{c}\}$, where $\theta = (\underline{c}, \bar{c})$ and $\underline{c} \le \bar{c}$ elementwise. Here, $\underline{c}_i$ and $\bar{c}_i$ are the lower and upper bounds of the hyperrectangle for coordinate $i$.
As the set of axis-aligned hyperrectangles is closed under intersection, the guaranteed-unsafe set $\mathcal{G}_{\neg s}$ is also an axis-aligned hyperrectangle, and the axis-aligned bounding box of any two guaranteed-unsafe constraint states is also contained in $\mathcal{G}_{\neg s}$. This also implies that $\mathcal{G}_{\neg s}$ can be fully described by finding its top and bottom corners. Suppose we start with a known state in $\mathcal{G}_{\neg s}$. Then, finding the top corner amounts to performing a binary search in each of the coordinate dimensions, and the same holds for finding the bottom corner.
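The per-coordinate binary search can be sketched as follows. This is a minimal illustration, not the paper's implementation: the membership oracle `is_unsafe` and the toy box it encodes are assumptions standing in for the guaranteed-unsafe checks described in the text.

```python
# A minimal sketch of the corner-finding binary search. The oracle
# `is_unsafe` (deciding whether a constraint state is guaranteed unsafe)
# and the toy box below are illustrative assumptions.

def expand_corner(c, is_unsafe, bound, tol=1e-6):
    """From a known guaranteed-unsafe state c, bisect each coordinate
    toward `bound` to locate the corresponding corner of the box."""
    corner = list(c)
    for i in range(len(c)):
        inner, outer = c[i], bound[i]
        while abs(outer - inner) > tol:
            mid = 0.5 * (inner + outer)
            probe = list(c)
            probe[i] = mid
            if is_unsafe(tuple(probe)):
                inner = mid   # mid still inside the box: push outward
            else:
                outer = mid   # mid outside the box: pull back
        corner[i] = inner
    return corner

# Toy guaranteed-unsafe box [0.2, 0.7] x [0.1, 0.5] inside the unit square.
is_unsafe = lambda c: 0.2 <= c[0] <= 0.7 and 0.1 <= c[1] <= 0.5
c0 = (0.4, 0.3)                                   # known unsafe state
top = expand_corner(c0, is_unsafe, (1.0, 1.0))    # upper corner, ~(0.7, 0.5)
bot = expand_corner(c0, is_unsafe, (0.0, 0.0))    # lower corner, ~(0.2, 0.1)
```

Each coordinate costs $O(\log(1/\mathrm{tol}))$ oracle calls, so the whole box is recovered with a number of queries linear in the dimension.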
Recovering the guaranteed-safe set $\mathcal{G}_s$ is not as straightforward, as the complement of axis-aligned boxes is not closed under intersection. While we can still solve Problem 4 to recover $\mathcal{G}_s$, an inner approximation of $\mathcal{G}_s$ can be more efficiently obtained: starting at a constraint state in $\mathcal{G}_{\neg s}$, line searches can be performed to find the two points of transition to $\mathcal{G}_s$ in each constraint coordinate. The complement of the axis-aligned bounding box of these transition points is an inner approximation of $\mathcal{G}_s$, since that bounding box contains every unsafe set consistent with the data. For example, consider the scenario in Fig. B.1 where there are only two feasible parameters, $\theta_1$ and $\theta_2$. Here, the recovered set is the complement of the axis-aligned bounding box of $\mathcal{A}(\theta_1) \cup \mathcal{A}(\theta_2)$ and underapproximates the safe set ($\mathcal{G}_s$ is in general not representable as the complement of an axis-aligned box).
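A sketch of this line-search construction, under the same illustrative assumptions as above (a toy membership oracle for a single unsafe box, and search limits known to be safe):

```python
# Inner approximation of the safe set: from a guaranteed-unsafe state,
# bisect each coordinate toward a safe value to find the transition
# points, then take the complement of their bounding box. The oracle and
# box are illustrative assumptions, not the paper's implementation.

def transition_point(c, i, limit, is_unsafe, tol=1e-6):
    """Bisect coordinate i between the unsafe state c and the safe value
    `limit` for the crossing out of the unsafe set."""
    unsafe_end, safe_end = c[i], limit
    while abs(safe_end - unsafe_end) > tol:
        mid = 0.5 * (safe_end + unsafe_end)
        probe = list(c)
        probe[i] = mid
        if is_unsafe(tuple(probe)):
            unsafe_end = mid
        else:
            safe_end = mid
    probe = list(c)
    probe[i] = safe_end
    return tuple(probe)

# Toy unsafe box [0.2, 0.7] x [0.1, 0.5]; c0 is a guaranteed-unsafe state.
is_unsafe = lambda c: 0.2 <= c[0] <= 0.7 and 0.1 <= c[1] <= 0.5
c0 = (0.4, 0.3)
pts = [transition_point(c0, i, lim, is_unsafe)
       for i in range(2) for lim in (0.0, 1.0)]
lo = [min(p[i] for p in pts) for i in range(2)]
hi = [max(p[i] for p in pts) for i in range(2)]

def guaranteed_safe(c):
    """Inner approximation: safe if outside the transition bounding box."""
    return any(ci < l or ci > h for ci, l, h in zip(c, lo, hi))
```

Because the bisection stops just outside the unsafe set, the recovered bounding box is slightly enlarged, so its complement remains a (conservative) inner approximation of the safe set.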
B.2 Convex parameterization
In this parameterization, for fixed $\theta$, the unsafe set $\mathcal{A}(\theta)$ is convex.
While it is hard to recover $\mathcal{G}_{\neg s}$ exactly other than by solving Problem 4, an inner approximation of $\mathcal{G}_{\neg s}$ can be extracted more efficiently by taking the convex hull of any states known to lie in $\mathcal{G}_{\neg s}$, as the convex hull is the minimal convex set containing those states.
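In 2D, this convex-hull inner approximation can be sketched with Andrew's monotone-chain algorithm; the guaranteed-unsafe sample points below are illustrative, not data from the paper.

```python
# Sketch: the convex hull of known guaranteed-unsafe states is an inner
# approximation of the guaranteed-unsafe set under a convex
# parameterization. 2D monotone-chain hull; sample points are toy data.

def convex_hull(points):
    """Return the hull vertices of 2D points in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# States verified unsafe (e.g. at sample points); the interior point
# (0.5, 0.5) contributes nothing to the hull.
unsafe_states = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.5)]
hull = convex_hull(unsafe_states)
print(hull)   # four hull vertices; the interior point is dropped
```

For higher-dimensional constraint spaces, an off-the-shelf hull routine (e.g. Qhull-based libraries) would replace this 2D sketch.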
The same approaches apply for recovering $\mathcal{G}_s$ when it is instead the safe set that is an axis-aligned hyperrectangle or a convex set.
Appendix C Theoretical Analysis (Expanded)
In this section, we present theoretical analysis of our parametric constraint learning algorithm. In particular, we analyze the limits of what constraint states can be learned to be guaranteed unsafe or safe (Section C.1), as well as the conditions under which our algorithm is guaranteed to learn a conservative estimate of the safe and unsafe sets (Section C.2). For ease of reading, we repeat the theorem statements from the main body (the corresponding theorem numbers from the main body are listed in the theorem statements). For legibility, we develop the theory for constraints defined directly on the states, but the results can be easily extended to general constraint spaces.
c.1 Learnability
In this section, we develop results for learnability of the unsafe set in the parametric case. We begin with the following notation:
Definition C.1 (Signed distance).
The signed distance from a point $c$ to a set $\mathcal{S}$ is $\text{sd}(c, \mathcal{S}) \doteq -\text{dist}(c, \partial\mathcal{S})$ if $c \in \mathcal{S}$, and $\text{sd}(c, \mathcal{S}) \doteq \text{dist}(c, \partial\mathcal{S})$ if $c \notin \mathcal{S}$, where $\text{dist}(c, \partial\mathcal{S})$ is the Euclidean distance from $c$ to the boundary of $\mathcal{S}$.
Definition C.2 ($\Delta x$-shell).
For a discrete-time system satisfying $\|x_{t+1} - x_t\| \le \Delta x$ for all $t$, denote the $\Delta x$-shell of the unsafe set $\mathcal{A}$ as $\partial_{\Delta x}\mathcal{A} \doteq \{x \in \mathcal{A} \mid \text{sd}(x, \mathcal{A}) \ge -\Delta x\}$: the unsafe states within distance $\Delta x$ of the boundary.
Definition C.3 (Implied unsafe set).
For some set of parameters $\hat\Theta \subseteq \Theta$, denote $\mathcal{I}(\hat\Theta) \doteq \bigcap_{\theta \in \hat\Theta} \mathcal{A}(\theta)$ as the set of states that are implied unsafe by restricting the parameter set to $\hat\Theta$. In words, $\mathcal{I}(\hat\Theta)$ is the set of states which all $\theta \in \hat\Theta$ mark as unsafe.
Definition C.4 (Feasible set).
The feasible set is the set of parameters consistent with the data: all $\theta \in \Theta$ for which every safe demonstration avoids $\mathcal{A}(\theta)$ and every sampled unsafe trajectory intersects $\mathcal{A}(\theta)$.
Definition C.5 (Learnability and learnable set).
A state is learnable if there exists any set of demonstrations and unsafe trajectories sampled using the hit-and-run method presented in Section 4.1 (where the number of each may be infinite) such that the state is implied unsafe by the resulting feasible set. Accordingly, we define the learnable set of unsafe states as the union of all learnable states. Note that by this definition, a safe state is always learnable to be safe, since there always exists some safe demonstration passing through it.
Lemma C.1.
Suppose $\hat\Theta \subseteq \hat\Theta'$ for some other set $\hat\Theta' \subseteq \Theta$. Then, $\mathcal{I}(\hat\Theta') \subseteq \mathcal{I}(\hat\Theta)$.
Proof.
By definition, $\mathcal{I}(\hat\Theta') = \bigcap_{\theta \in \hat\Theta'} \mathcal{A}(\theta) \subseteq \bigcap_{\theta \in \hat\Theta} \mathcal{A}(\theta) = \mathcal{I}(\hat\Theta)$: intersecting over the larger parameter set can only shrink the result.
∎
Lemma C.2.
Each unsafe trajectory $\xi$ with start and goal states in the safe set contains at least one state in the $\Delta x$-shell: there exists a time $t$ such that $\xi(t) \in \partial_{\Delta x}\mathcal{A}$.
Proof.
For each unsafe trajectory $\xi$ with start and goal states in the safe set, there exists a time $t$ with $\xi(t) \in \mathcal{A}$. Further, if such a time exists, then there also exists a first time $t_1$ of entry into $\mathcal{A}$. For contradiction, suppose $\xi(t_1) \in \mathcal{A}$ but $\xi(t_1) \notin \partial_{\Delta x}\mathcal{A}$, i.e. $\text{sd}(\xi(t_1), \mathcal{A}) < -\Delta x$, while $\xi(t_1 - 1) \notin \mathcal{A}$. But this implies $\|\xi(t_1) - \xi(t_1 - 1)\| > \Delta x$, i.e. to skip deeper than $\Delta x$ into the unsafe set without first entering the shell, the state must have changed by more than $\Delta x$ in a single timestep. Contradiction. An analogous argument holds for the continuous-time case. ∎
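The lemma can be checked numerically in 1D; the unsafe interval, step bound, and trajectory below are toy choices for illustration only:

```python
# Numeric check of the lemma in 1D: a trajectory whose steps are bounded
# by dx and whose start and goal are safe must contain a state in the
# dx-shell of the unsafe set. All numbers are illustrative.

def sd_interval(x, lo, hi):
    """Signed distance to [lo, hi]: negative inside, positive outside."""
    if lo <= x <= hi:
        return -min(x - lo, hi - x)
    return lo - x if x < lo else x - hi

lo, hi, dx = 0.3, 0.7, 0.05
# Trajectory stepping by exactly dx from a safe start (0.0) to a safe
# goal (1.0), passing through the unsafe interval [lo, hi].
traj = [k * dx for k in range(21)]

shell_states = [x for x in traj
                if lo <= x <= hi and sd_interval(x, lo, hi) >= -dx]
print(bool(shell_states))   # True: the trajectory cannot skip the shell
```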
The following result states that in discrete time, the learnable set of unsafe states is contained in the set of states which are implied unsafe by marking the $\Delta x$-shell $\partial_{\Delta x}\mathcal{A}$ as unsafe. Furthermore, in continuous time, the same holds, except that the $\Delta x$-shell is replaced by the boundary of the unsafe set, $\partial\mathcal{A}$.
Theorem C.1 (Discrete-time learnability for parametric constraints).
For trajectories generated by discrete-time systems, $\mathcal{L} \subseteq \mathcal{I}(\Theta_{\Delta x})$, where $\mathcal{L}$ denotes the learnable set of unsafe states and $\Theta_{\Delta x}$ denotes the set of feasible parameters that mark the $\Delta x$-shell $\partial_{\Delta x}\mathcal{A}$ as unsafe.
Proof.
By Lemma C.2, every sampled unsafe trajectory contains at least one state in the $\Delta x$-shell. Hence, any feasible parameter marking the entire $\Delta x$-shell as unsafe explains all of the sampled unsafe trajectories and can never be eliminated, no matter how many demonstrations and unsafe trajectories are provided. The feasible set therefore always contains these parameters, and by Lemma C.1, the set of states implied unsafe by the feasible set is contained in the set of states that these parameters jointly mark as unsafe, which yields the claim. ∎
Corollary C.1 (Continuous-time learnability for parametric constraints).
For trajectories generated by continuous-time systems, the same containment holds with the $\Delta x$-shell replaced by the boundary of the unsafe set: the learnable set of unsafe states is contained in the set of states implied unsafe by the feasible parameters that mark $\partial\mathcal{A}$ as unsafe.
Proof.
Since going from discrete time to continuous time implies $\Delta x \to 0$, the $\Delta x$-shell $\partial_{\Delta x}\mathcal{A}$ shrinks to the boundary $\partial\mathcal{A}$. Then, the logic from the proof of Theorem C.1 can be similarly applied to show the result. ∎
c.2 Conservativeness: Parametric
We state conditions for conservative recovery of the unsafe set and the safe set when solving Problems 2 and 3, for discrete-time and continuous-time systems.