I Introduction
Teleoperation is enabling robotic systems to become pervasive in settings where full autonomy is currently out of reach [18, 47, 15]. Compelling applications include minimally invasive surgery [44, 45], space exploration [46], remote vehicle operations [16] and disaster relief scenarios [34]. A human teleoperator can control a robot through tasks that have complex semantics and are currently difficult to explicitly program or to learn to solve efficiently without supervision.
A downside of teleoperation is that it requires continuous, error-free operator attention [7] even for highly repetitive tasks. This problem can be addressed through Learning from Demonstrations (LfD), or Imitation Learning, techniques [41, 9], where a control law is inferred from a small number of demonstrations. Such a law can then bootstrap data-efficient reinforcement learning for challenging tasks [47].
The demonstrator attempts to ensure that the robot's motions capture the relevant semantics of the task, rather than requiring the robot to understand the semantics. The learnt control law should take over from the teleoperator and enable the robot to repeatedly execute the desired task even in dynamically changing conditions. For example, the origin of a picking task and the goal of a placing task may dynamically shift to configurations unseen during training, and moving obstacles may be encountered during execution. The latter is particularly relevant in collaborative human-robot workspaces where safety guarantees are paramount. In such situations, when faced with an obstacle, the robot can no longer follow the demonstration path and needs to recompute a new motion trajectory in real time to avoid collision while still attempting to accomplish the desired task.

Such real-time adaptivity can be elegantly achieved by associating demonstrations with a dynamical system [40, 43, 25, 24, 27]: a vector field defines a closed-loop velocity control law. From any state that the robot finds itself in, the vector field can then steer the robot back towards the desired imitation behavior, without the path replanning required by classical approaches. Furthermore, the learnt vector field can be modulated in real time [26, 19, 28] in order to avoid collisions with obstacles.
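As a concrete illustration of a vector field acting as a closed-loop velocity control law, the following minimal sketch (a toy example, not the method of this paper; the linear field and goal point are invented for illustration) repeatedly evaluates a field at the current state and integrates it, steering any initial state toward a goal:

```python
import numpy as np

def rollout(field, x0, dt=0.01, steps=1000):
    """Closed-loop velocity control: at every step, evaluate the vector
    field at the current state and take an explicit Euler step."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * field(x)
    return x

# Hypothetical field steering the state toward a goal g from any start.
g = np.array([0.5, -0.2])
field = lambda x: -(x - g)

x_final = rollout(field, x0=[2.0, 2.0])
print(np.linalg.norm(x_final - g))  # small: the rollout reaches the goal
```

Because the control loop only queries the field at the current state, the same law applies unchanged when the initial condition is perturbed, which is the adaptivity property exploited throughout the paper.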
At first glance, the problem of imitation learning of a smooth dynamical system from samples appears to be a straightforward regression problem: simply minimize imitation loss over a suitable family of vector-valued maps. However, a naive supervised regression approach may be woefully inadequate, as illustrated in the middle panel of Figure 2, where the goal is to have a KUKA arm imitate a circular trajectory. As can be seen, estimating vector fields from a small number of trajectories potentially leads to instability: the estimated field easily diverges when the initial conditions are even slightly different from those encountered during training. Therefore, unsurprisingly, learning with stability constraints has been the technical core of existing dynamical-systems-based LfD approaches, e.g., see [24, 27, 40, 43]. However, these methods have one or more of the following limitations: (1) they involve non-convex optimization for dynamics fitting and for constructing Lyapunov stability certificates and, hence, have no end-to-end optimality guarantees, (2) the notion of stability is not trajectory-centric, but rather focused on reaching a single desired equilibrium point, and (3) they are computationally infeasible when formulated in continuous time. With this context, our contributions in this paper include the following:

We formulate a novel continuous-time optimization problem over vector fields involving an imitation loss subject to a generalization-enforcing constraint that turns the neighborhood of demonstrations into contracting regions [32]. Within such a region, all trajectories are guaranteed to coalesce towards the demonstration exponentially fast.

We show that our formulation leads to an instance of time-varying semidefinite programming [3] for which a sum-of-squares relaxation [30, 37, 6] turns out to be exact. Hence, we can find the globally optimal polynomial vector field, i.e., the one with the lowest imitation loss among all polynomial vector fields of a given degree that are contracting on a region around the demonstrations in continuous time.

On benchmark handwriting imitation tasks [1], our method outperforms competing approaches in terms of a variety of imitation quality metrics.

We demonstrate our methods on a 7-DOF KUKA pick-and-place LfD task where the task is completed despite dynamic obstacles in the environment, changing initial poses, and moving destinations. By contrast, without contraction constraints, the vector field tends to move far from the demonstrated trajectory, activating the emergency brakes on the arm and failing to complete the task.
Our “dirty laundry” includes: (1) we cannot handle high-degree polynomials due to the scalability limitations of current SDP solvers, and (2) our notion of incremental stability is local, even though our method generalizes well in the sense that a wide contraction tube is set up around the demonstrations.
II Problem Statement
We are interested in estimating an unknown continuous-time autonomous dynamical system

$$\dot{x}(t) = f(x(t)), \quad (1)$$

where $f : \mathbb{R}^n \to \mathbb{R}^n$ is an unknown continuously differentiable function.
We assume that we have access to one or several sample trajectories $x_i : [0, T] \to \mathbb{R}^n$ that satisfy $\dot{x}_i(t) = f(x_i(t))$, where $t \in [0, T]$ and $i = 1, \ldots, m$. These trajectories ($x_i$, for $i = 1, \ldots, m$) constitute our training data, and our goal is to search for an approximation of the vector field $f$ in a class of functions $\mathcal{F}$ of interest that reproduces trajectories as close as possible to the ones observed during training. In other words, we seek to solve the following continuous-time least squares optimization problem (LSP):

$$\min_{g \in \mathcal{F}} \; \sum_{i=1}^{m} \int_{0}^{T} \left\| g(x_i(t)) - \dot{x}_i(t) \right\|^2 \, dt. \quad \text{(LSP)}$$
In addition to consistency with $f$, we want our learned vector field to generalize to conditions that were not seen in the training data. Indeed, the LSP problem generally admits multiple solutions, as it only dictates how the vector field should behave on the sample trajectories. This underspecification can easily lead to overfitting, especially if the class of functions $\mathcal{F}$ is expressive enough. The example of Figure 2 illustrates this phenomenon even for a simple circular motion. Note that standard data-independent regularization (e.g., a norm penalty on the coefficients) is insufficient to resolve the divergence illustrated here: a stronger stabilizer ensuring convergence, not just smoothness, of trajectories is needed. The notion of stability of interest to us in this paper is contraction, which we now briefly review.
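To make the (LSP) objective concrete, here is a minimal sketch of the naive regression baseline discussed above: fit a polynomial vector field to sampled state/velocity pairs by unconstrained least squares. The toy data, drawn from a known linear field, is invented for illustration; this is precisely the kind of fit that, without contraction constraints, can diverge away from the demonstrations.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """Monomial features up to `degree` for each row of X (n x d)."""
    n, d = X.shape
    cols = [np.ones(n)]
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            col = np.ones(n)
            for j in idx:
                col = col * X[:, j]
            cols.append(col)
    return np.stack(cols, axis=1)

# Toy demonstration data: samples from the (here, known) field f(x) = -x.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Xdot = -X

# Unconstrained least squares over polynomial vector fields of degree 3.
Phi = poly_features(X, degree=3)
W, *_ = np.linalg.lstsq(Phi, Xdot, rcond=None)
f_hat = lambda x: poly_features(np.atleast_2d(np.asarray(x, float)), 3) @ W

print(f_hat([1.0, -2.0]))  # ≈ [[-1., 2.]] since the true field is -x
```

On exact data the fit recovers the field, but nothing in this objective controls behavior off the sampled trajectories, which motivates the contraction constraint introduced next.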
II-A Incremental Stability and Contraction Analysis
Notions of stability called incremental stability and the associated contraction analysis tools [21, 32] are concerned with the convergence of system trajectories with respect to each other, as opposed to classical Lyapunov stability, which is with respect to a single equilibrium. Contraction analysis derives sufficient and necessary conditions under which the displacement between any two trajectories will go to zero. We give in this section a brief presentation of this notion based on [8].
Contraction analysis of a system is best explained by considering the dynamics of $\delta x$, the infinitesimal displacement between two trajectories:

$$\dot{\delta x} = \frac{\partial f}{\partial x}(x) \, \delta x.$$

From this equation we can easily derive the rate of change of the infinitesimal squared distance $\delta x^\top \delta x$ between two trajectories as follows:

$$\frac{d}{dt} \left( \delta x^\top \delta x \right) = 2 \, \delta x^\top \dot{\delta x} = 2 \, \delta x^\top \frac{\partial f}{\partial x}(x) \, \delta x. \quad (2)$$
More generally, we can consider the infinitesimal squared distance with respect to a metric that is different from the Euclidean metric. A metric is given by a smooth, matrix-valued function $M$ that is uniformly positive definite, i.e., there exists $\varepsilon > 0$ such that

$$M(x) \succeq \varepsilon I_n \quad \forall x \in \mathbb{R}^n, \quad (3)$$

where $I_n$ is the identity matrix and the relation $A \succeq B$ between two symmetric matrices $A$ and $B$ is used to denote that the smallest eigenvalue of their difference $A - B$ is nonnegative. For clarity of presentation, we only consider metric functions that do not depend on time $t$. The squared norm of an infinitesimal displacement between two trajectories with respect to this metric is given by $\delta x^\top M(x) \, \delta x$. The Euclidean metric corresponds to the case where $M$ is constant and equal to the identity matrix.
Similarly to (2), the rate of change of the squared norm of an infinitesimal displacement with respect to a metric $M$ follows the dynamics:

$$\frac{d}{dt} \left( \delta x^\top M(x) \, \delta x \right) = \delta x^\top \left( J(x)^\top M(x) + M(x) J(x) + \dot{M}(x) \right) \delta x, \quad (4)$$

where $J(x)$ denotes the Jacobian $\frac{\partial f}{\partial x}(x)$ and $\dot{M}(x)$ is the matrix whose $(i,j)$ entry is $\nabla M_{ij}(x)^\top f(x)$. This motivates the following definition of contraction.
Definition 1 (Contraction)
For a positive constant $\tau$ and a subset $\Omega$ of $\mathbb{R}^n$, the system is said to be contracting on the region $\Omega$ with respect to a metric $M$ if

$$J(x)^\top M(x) + M(x) J(x) + \dot{M}(x) \preceq -2\tau M(x) \quad \forall x \in \Omega. \quad (5)$$
Remark 1
When the vector field is a linear function $f(x) = Ax$ and the metric is a constant matrix $M$, it is easy to see that the contraction condition (5) is in fact equivalent to the global stability condition

$$A^\top M + M A \preceq -2\tau M. \quad (6)$$
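For the linear case of Remark 1, the condition can be verified numerically with a single eigenvalue computation. The sketch below assumes the condition takes the standard form $A^\top M + M A + 2\tau M \preceq 0$; the particular matrix $A$, metric $M$, and rate $\tau$ are invented for illustration.

```python
import numpy as np

# Hypothetical linear system xdot = A x with constant metric M = I.
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
M = np.eye(2)
tau = 1.0

# Linear contraction condition: A^T M + M A + 2*tau*M must be negative
# semidefinite, which we check via its largest eigenvalue.
S = A.T @ M + M @ A + 2 * tau * M
lam_max = np.max(np.linalg.eigvalsh(S))
print(lam_max <= 1e-9)  # True: contracting with rate tau
```

Here the eigenvalues of $A$ are $-1$ and $-3$, so the condition holds with rate $\tau = 1$ exactly at the slower mode.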
Given a vector field that is contracting with rate $\tau$ with respect to a metric $M$, we can conclude from the dynamics in (4) that

$$\frac{d}{dt} \left( \delta x^\top M(x) \, \delta x \right) \le -2\tau \, \delta x^\top M(x) \, \delta x.$$

Integrating both sides yields

$$\delta x(t)^\top M(x(t)) \, \delta x(t) \le \delta x(0)^\top M(x(0)) \, \delta x(0) \, e^{-2\tau t}.$$

Hence, any infinitesimal squared length $\delta x^\top M \delta x$ (and, by assumption (3), $\|\delta x\|^2$) converges exponentially to zero as time goes to infinity. This implies that in a contraction region, trajectories will tend to converge together towards a nominal path. If the entire state space is contracting and a finite equilibrium exists, then this equilibrium is unique and all trajectories converge to it.
In the next section, we explain how to globally solve the following continuous-time vector field optimization problem, which fits a contracting vector field to the training data given some fixed metric $M$. We refer to this as the least squares problem with contraction (LSPC):

$$\min_{g \in \mathcal{F}} \; \sum_{i=1}^{m} \int_{0}^{T} \left\| g(x_i(t)) - \dot{x}_i(t) \right\|^2 \, dt \quad \text{(LSPC)}$$

s.t. $g$ is contracting on a region containing the demonstrations with respect to the metric $M$.
The search for a contraction metric itself may be interpreted as the search for a Lyapunov function of the specific form $V(x, \delta x) = \delta x^\top M(x) \, \delta x$. As is the case with Lyapunov analysis in general, finding such an incremental stability certificate for a given dynamical system is a non-trivial problem; see [8] and references therein. If one wishes to find the vector field and a corresponding contraction metric at the same time, then the problem becomes non-convex. A common approach to handling this kind of problem is to optimize over one set of parameters at a time while fixing the other to its latest value, and then alternate (i.e., fix a contraction metric and fit the vector field, then fix the vector field and improve on the contraction metric).
III Learning Contracting Vector Fields as a Time-Varying Convex Problem
In this section we explain how to formulate and solve the problem of learning a contracting vector field from demonstrations described in (LSPC). We will first see that it can be formulated as a time-varying semidefinite program. We will then describe how to use tools from sum-of-squares programming to solve it.
III-A Time-Varying Semidefinite Programs
We call time-varying semidefinite programs (TVSDP) optimization programs of the form

$$\min_{u \in \mathcal{U}} \; L(u) \quad \text{s.t.} \quad \mathcal{A}_i(u)(t) \succeq 0 \quad \forall t \in [0, T], \; i = 1, \ldots, N, \quad \text{(TVSDP)}$$

where the variable $t$ stands for time, the loss function $L$ in the objective is assumed to be convex, and the $\mathcal{A}_i$ ($i = 1, \ldots, N$) are linear functionals that map an element $u \in \mathcal{U}$ to a matrix-valued function $t \mapsto \mathcal{A}_i(u)(t)$. We will restrict the space of functions $\mathcal{U}$ to be the space of functions whose components are polynomials of degree at most $d$:

$$\mathcal{U} = \{ u \mid u_j \in \mathbb{R}_d[t] \}, \quad (7)$$

and we make the assumption that $\mathcal{A}_i(u)(t)$ is a matrix with polynomial entries. Our interest in this setting stems from the fact that polynomial functions can approximate most functions reasonably well. Moreover, polynomials are suitable for algorithmic operations, as we will see in the next section. See [3] for a more in-depth treatment of time-varying semidefinite programs with polynomial data.
Let us now show how to reformulate the problem in (LSPC) of fitting a vector field to sample trajectories as a (TVSDP). For this problem to fit within our framework, we start by approximating each trajectory $x_i$ with a polynomial function of time $t$. Our decision variable is the polynomial vector field $g$, and we seek to optimize the objective function

$$\sum_{i=1}^{m} \int_{0}^{T} \left\| g(x_i(t)) - \dot{x}_i(t) \right\|^2 \, dt, \quad (8)$$

which is already convex (in fact, convex quadratic). In order to impose contraction of the vector field over some region around the trajectories in the demonstration, we use a smoothness argument to claim that it is sufficient to impose contraction only on the trajectories themselves. See Proposition 1 later for a more quantitative statement of this claim. To be concrete, we impose

$$J_g(x_i(t))^\top M(x_i(t)) + M(x_i(t)) J_g(x_i(t)) + \dot{M}(x_i(t)) \preceq -2\tau M(x_i(t)) \quad \forall t \in [0, T], \; i = 1, \ldots, m, \quad (9)$$

where $M$ is some known contraction metric and $J_g$ is the Jacobian of $g$.
III-B Sum-of-Squares Programming
In this section we review the notion of sum-of-squares (SOS) programming, its applications to polynomial optimization, and how we apply it to learning a contracting polynomial vector field. SOS techniques have found several applications in robotics: constructing Lyapunov functions [2], locomotion planning [39], design and verification of provably safe controllers [33], grasping and manipulation [13], inverse optimal control [38], and modeling 3D geometry [4].
Let $\mathbb{R}_d[x]$ denote the set of polynomials in $n$ real variables with real coefficients of degree at most $d$. A polynomial $p$ is nonnegative if $p(x) \ge 0$ for every $x \in \mathbb{R}^n$. In many applications, including the one we cover in this paper, we seek to find the coefficients of one (or several) polynomials without violating some nonnegativity constraints. While the notion of nonnegativity is conceptually easy to understand, even testing whether a given polynomial is nonnegative is known to be NP-hard as soon as the degree is at least 4.
A polynomial $p$ of even degree $2d$ is a sum of squares (SOS) if there exist polynomials $q_1, \ldots, q_r$ such that

$$p = \sum_{i=1}^{r} q_i^2. \quad (10)$$

An attractive feature of the set of SOS polynomials is that optimizing over it can be cast as a semidefinite program of tractable size, for which many solvers already exist. Indeed, it is known [30, 37] that a polynomial $p$ of degree $2d$ can be decomposed as in (10) if and only if there exists a positive semidefinite matrix $Q$ such that

$$p(x) = z(x)^\top Q \, z(x),$$

where $z(x)$ is the vector of monomials of degree up to $d$, and the equality between the two sides of the equation is equivalent to a set of linear equalities in the coefficients of the polynomial $p$ and the entries of the matrix $Q$.
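As a small worked example of this Gram-matrix characterization (the polynomial $p(x) = x^4 + 2x^2 + 1 = (x^2+1)^2$ is invented for illustration), the sketch below exhibits a PSD matrix $Q$ with $p(x) = z(x)^\top Q \, z(x)$ for $z = [1, x, x^2]$:

```python
import numpy as np

# p(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2, with monomial vector z = [1, x, x^2].
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Q is positive semidefinite, certifying that p is SOS (hence nonnegative).
assert np.min(np.linalg.eigvalsh(Q)) >= -1e-12

# Check p(x) = z(x)^T Q z(x) at a sweep of sample points.
p = lambda x: x**4 + 2 * x**2 + 1
for x in np.linspace(-3.0, 3.0, 13):
    z = np.array([1.0, x, x**2])
    assert abs(z @ Q @ z - p(x)) < 1e-9
print("Q certifies that p is a sum of squares")
```

In an SOS program the entries of $Q$ are decision variables, and the linear equalities matching coefficients of $z^\top Q z$ to those of $p$ together with $Q \succeq 0$ form the semidefinite constraint.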
Sum-of-Squares Matrices: If a polynomial is SOS, then it is obviously nonnegative, and the matrix $Q$ acts as a certificate of this fact, making it easy to check that the polynomial at hand is nonnegative for every value of the vector $x$. In order to use similar techniques to impose contraction of a vector field, we need a slight generalization of this concept to ensure that a matrix-valued polynomial $P(x)$ (i.e., a matrix whose entries are polynomial functions) is positive semidefinite (PSD) for all values of $x$. We can equivalently consider the scalar-valued polynomial $p(x, y) = y^\top P(x) \, y$, where $y$ is a vector of new indeterminates, as positive semidefiniteness of $P(x)$ is equivalent to the nonnegativity of $p(x, y)$. If $p(x, y)$ is SOS, then we say that $P(x)$ is a sum-of-squares matrix (SOSM) [29, 17, 42]. Consequently, optimizing over SOSM matrices is a tractable problem.
Exact Relaxation: A natural question here is how much we lose by restricting ourselves to the set of SOSM matrices as opposed to the set of PSD matrices. In general, these two sets are quite different [10]. In our case, however, all the matrices considered are univariate, as they depend only on the time variable $t$. It turns out that, in this special case, the two notions are equivalent!
Theorem 1 (See e.g. [11])
A matrix-valued polynomial $P(t)$ is PSD everywhere (i.e., $P(t) \succeq 0$ for all $t \in \mathbb{R}$) if and only if the associated polynomial $y^\top P(t) \, y$ is SOS.
The next theorem generalizes this result to the case where we need to impose PSDness only on the interval $[0, T]$ (as opposed to all of $\mathbb{R}$).
Theorem 2 (See Theorem 2.5 of [14])
A matrix-valued polynomial $P(t)$ of degree $d$ is PSD on the interval $[0, T]$ (i.e., $P(t) \succeq 0$ for all $t \in [0, T]$) if and only if $P$ can be written as

$$P(t) = S(t) + t(T - t) \, V(t) \;\; \text{if } d \text{ is even}, \qquad P(t) = t \, S(t) + (T - t) \, V(t) \;\; \text{if } d \text{ is odd},$$

where $S$ and $V$ are SOSM. In the first case, $S$ and $V$ have degree at most $d$ and $d - 2$ respectively, and in the second case $S$ (resp. $V$) has degree at most $d - 1$ (resp. $d - 1$). When that is the case, we say that $P$ is SOSM on $[0, T]$.
III-C Main Result and CVFP
The main result of this section is summarized in the following theorem that states that the problem of fitting a contracting polynomial vector field to polynomial data can be cast as a semidefinite program.
Theorem 3
To reiterate, the above sum-of-squares relaxation leads to no loss of optimality: the SDP returns the globally optimal solution to the problem stated in (LSPC). Our numerical implementation uses the Splitting Conic Solver (SCS) [36] for solving large-scale convex cone problems.
Remark 2
Note that the time complexity of solving the SDP defined in (LSPC-SOS) is bounded above by a polynomial function of the number of trajectories, the dimension of the space where they live, and the degree of the candidate polynomial vector field. In practice, however, only small to moderate values of the dimension and the degree can be solved for, as the exponents appearing in this polynomial are prohibitively large. Significant progress has been made in recent years in inventing more scalable alternatives to SDPs based on linear and second-order cone programming that can be readily applied to our framework [5].
For the rest of this paper, our approach will be abbreviated as CVFP, standing for Contracting Vector Fields of Polynomial form.
III-D Generalization Properties
The contraction property of CVFP generalizes to a wider region of the state space. The next proposition shows that any sufficiently smooth vector field that is feasible for the problem stated in (LSPC-SOS) is contracting on a “tube” around the demonstrated trajectories.
Proposition 1 (A lower bound on the contraction tube)
If $f$ is a twice continuously differentiable vector field that satisfies the contraction condition (9) along a demonstrated path, where $\Omega$ is a compact region of $\mathbb{R}^n$, $\tau$ is a positive constant, and $M$ is a positive definite metric, then $f$ is contracting with respect to the metric $M$ on the tube of states within distance $\delta$ of the path, where $\delta$ is a positive scalar depending only on $\tau$ and on the smoothness parameters of $f$ and $M$, and is defined explicitly in Eqn. 11.
For the proof we will need the following simple fact about symmetric matrices.
Lemma 1
For any symmetric matrices $A$ and $B$,

$$\left| \lambda_{\min}(A) - \lambda_{\min}(B) \right| \le \| A - B \|,$$

where $\lambda_{\min}$ denotes the smallest eigenvalue function and $\|\cdot\|$ the spectral norm.
[Proof of Proposition 1] Let $f$, $M$, $\tau$, and the path be as in the statement of Proposition 1. Notice that since the metric $M$ is uniformly positive definite, its smallest eigenvalue is bounded below by some $\varepsilon > 0$. Let us now define

(11)

where $L$ is the scalar equal to the Lipschitz constant, over $\Omega$, of the matrix-valued map whose positive semidefiniteness expresses the contraction condition (5). Fix a time $t$, and let $x$ be a vector in $\Omega$ such that $\|x - x(t)\| \le \delta$. Our aim is to prove that this contraction matrix, evaluated at $x$, is positive semidefinite. Notice that our smoothness assumptions on $f$ and $M$ guarantee that the entries of this map are continuously differentiable, and hence Lipschitz on the compact set $\Omega$. Using Lemma 1, we conclude that the smallest eigenvalues of the contraction matrices at $x$ and at $x(t)$ are within a distance of $L\delta$ of each other. Since we assumed that the contraction condition holds on the path with rate $\tau$, the smallest eigenvalue at $x(t)$ is at least $\tau \varepsilon$, and therefore the smallest eigenvalue at $x$ is at least $\tau\varepsilon - L\delta$. We conclude that our choice of $\delta$ in (11) guarantees that the contraction matrix at $x$ is positive semidefinite.
We note that the estimate obtained in this proposition is quite conservative. In practice the contraction tube is much larger than what is predicted here.
IV Empirical Comparisons: Handwriting Imitation
We evaluate our methods on the LASA library of two-dimensional human handwriting motions, commonly used for benchmarking dynamical-systems-based movement generation techniques in imitation learning settings [27, 31, 40]. This dataset contains handwriting motions recorded with a pen input on a Tablet PC. For each motion, the user was asked to draw 7 demonstrations of a desired pattern, starting from different initial positions and ending at the same final point. Each demonstration trajectory comprises position and velocity measurements. We use 4 demonstrations for training and 3 demonstrations for testing, as shown in Figure 3.
Metric                       | DMP  | SEDS | CLFDM | CVFP
Reproduction Accuracy        |      |      |       |
 TrainingTrajectoryError     | 4.1  | 7.2  | 4.9   | 6.5
 TrainingVelocityError       | 7.4  | 14.6 | 11.0  | 13.9
 TestTrajectoryError         | 5.5  | 4.6  | 12.2  | 3.8
 TestVelocityError           | 8.7  | 11.3 | 15.5  | 11.4
Stability                    |      |      |       |
 DistanceToGoal              | 3.6  | 3.2  | 6.7   | 2.5
 DurationToGoal              | —    | 3.9  | 4.3   | 3.3
 NumberReachedGoal           | 0/7  | 7/7  | 7/7   | 7/7
 GridDuration (sec)          | 5.9  | 3.7  | 9.7   | 1.9
 GridFractionReachedGoal     | 6%   | 100% | 100%  | 100%
 GridDistanceToGoal          | 3.3  | 1.0  | 1.0   | 1.0
 GridDTWD ()                 | 2.4  | 1.4  | 1.4   | 2.0
Training and Integration Speed (in seconds)
 TrainingTime                | 0.05 | 2.1  | 2.8   | 0.2
 IntegrationSpeed            | 0.21 | 0.06 | 0.15  | 0.01
We report in Table I comparisons on the Angle shape against state-of-the-art methods for estimating stable dynamical systems: the Stable Estimator of Dynamical Systems (SEDS) [24], Control Lyapunov Function-based Dynamic Movements (CLFDM) [25], and Dynamic Movement Primitives (DMP) [20]. The training process in these methods involves non-convex optimization with no global optimality guarantees. Additionally, DMPs can only be trained from one demonstration, one degree of freedom at a time. For all experiments, we learn degree-5 CVFPs with a fixed contraction rate and metric. We report the following imitation quality metrics.

Reproduction Accuracy: How well does the vector field reproduce positions and velocities in training and test demonstrations, when started from the same initial conditions and integrated for the same amount of time as the human movement duration? Specifically, we measure reproduction error with respect to the demonstration trajectories.
The metrics TrainingTrajectoryError, TestTrajectoryError, TrainingVelocityError, and TestVelocityError report these measures with respect to training and test demonstrations. At the end of the integration duration, we also report DistanceToGoal: how far the final state is from the goal (origin). Finally, to account for the situation where the learnt dynamics is somewhat slower than the human demonstration, we also generate trajectories for a much longer time horizon and report DurationToGoal: the time it took for the state to enter a small ball around the goal, and how often this happened across the demonstrations (NumberReachedGoal).
Stability: To measure stability properties, we evolve the dynamical system from random positions on a grid enclosing the demonstrations for a long integration time horizon (30T). We report the fraction of trajectories that reach the goal (GridFractionReachedGoal); the mean duration to reach the goal when that happens (GridDuration); the mean distance to the goal (GridDistanceToGoal); and the closest proximity of the generated trajectories to a human demonstration, as measured using the Dynamic Time Warping Distance (GridDTWD) [23] (since in this case the generated trajectories are likely of different lengths from the demonstrations).
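A minimal sketch of the dynamic time warping distance used by the GridDTWD metric (the standard textbook DTW recursion; the toy trajectories are invented for illustration) shows how trajectories of different lengths are compared by optimally aligning them:

```python
import numpy as np

def dtw_distance(P, Q):
    """Dynamic time warping distance between trajectories P (n x d) and
    Q (m x d): the sum of pointwise distances along the best alignment."""
    n, m = len(P), len(Q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(P[i - 1] - Q[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0.0, 1.0, 50)
demo = np.stack([t, t**2], axis=1)      # a demonstrated path
shifted = demo + np.array([0.5, 0.0])   # a path far from the demonstration

print(dtw_distance(demo, demo))                  # 0.0 for identical paths
print(dtw_distance(demo, shifted) > 0.1)         # True for the offset path
```

Because the alignment is free to stretch time, a generated trajectory that traces the demonstrated path more slowly still scores well, which is exactly the property needed when comparing long grid rollouts to the demonstrations.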
Training and Integration Speed: We measure both the training time and the time to evaluate the dynamical system, which translates to integration speed.
It can be seen that our approach is highly competitive on most metrics: reproduction quality, stability, and training and inference speed. In particular, it returns the best mean dynamic time warping distance to the demonstrations when initialized from points on a grid. A comparison of GridDTWD on a few other shapes is shown in Figure 4.
V Pick-and-Place with Obstacles
We consider a kitting task, shown in Figure 5, where objects are picked from a known location and placed into a box. A teleoperator quickly demonstrates a few trajectories guiding a 7-DOF KUKA IIWA arm to grasp objects and place them in a box. After learning from demonstrations, the robot is expected to continually fill boxes to be verified and moved by a human working in close proximity, freely moving obstacles in and out of the workspace. The arm is velocity-controlled in joint space at a fixed control rate.
V-A Demonstration Trajectory
Figure 5(a) shows the demonstration pick-and-place trajectory collected from the user. This trajectory was collected using an HTC Vive controller operated by a user standing in front of the robot and watching it move through the demonstration as it is produced. Different buttons on the controller were used to open/close the gripper, send the arm to the home position, and indicate the start of a new trajectory. The pick-and-place task was collected as two separate trajectories, one for the pick motion and another for the place motion.
V-B Learning a Composition of Pick and Place CVFPs
Using the demonstration trajectory, two different polynomial contracting vector fields (CVFPs) were fit to the data, one for the pick motion and one for the place motion. These trajectories were fit to a degree-2 polynomial with a fixed contraction rate and metric, using the SCS solver run for 2500 iterations. For ease of visualization, we show the trajectories in Cartesian space in Figure 6; the CVFP itself was fit to the trajectory in the 7-dimensional joint space. The arm was then run using the vector field eight times, starting from the home position. Each trajectory was allowed to run until the norm of the arm's joint velocities dropped below a threshold, at which point the arm would begin to move using the second vector field. The trajectories taken by the arm are shown in Figure 5(b). The eight runs have very little deviation from each other.
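The chaining logic just described — run the pick field until the commanded velocity norm drops below a threshold, then hand over to the place field — can be sketched as follows (the two stand-in fields, the threshold, and the step sizes are invented for illustration):

```python
import numpy as np

def run_composed(fields, x0, v_eps=1e-3, dt=0.01, max_steps=5000):
    """Execute a sequence of vector fields, switching to the next field
    once the commanded velocity norm falls below v_eps."""
    x = np.array(x0, dtype=float)
    for field in fields:
        for _ in range(max_steps):
            v = field(x)
            if np.linalg.norm(v) < v_eps:
                break  # this field has settled; hand over to the next
            x = x + dt * v
    return x

# Hypothetical stand-ins for the learnt pick/place fields: each one
# contracts toward its own target configuration.
pick = lambda x: -(x - np.array([1.0, 0.0]))
place = lambda x: -(x - np.array([1.0, 1.0]))

x_final = run_composed([pick, place], x0=[0.0, 0.0])
print(np.linalg.norm(x_final - np.array([1.0, 1.0])) < 1e-3)  # True
```

The velocity-norm criterion is a natural hand-off signal for contracting fields, since the commanded velocity shrinks as the state settles near the end of each motion.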
V-C Generalization to Different Initial Poses
Next, noise is added to the home position of the arm, and again the vector field is used to move the arm through the task. In Figure 5(c), noise drawn uniformly from the range [−0.05, 0.05] radians is added to each joint of the arm's starting home position. Figure 5(d) shows these same trajectories overlaid on the KUKA arm. In Figure 5(e), uniform noise of larger magnitude is added in the same manner. Due to contraction, trajectories are seen to converge from random initial conditions.
V-D What happens without contraction constraints?
In Figure 5(g), the arm is run eight times using a vector field learnt without contraction constraints. While the arm is consistent in the trajectory that it takes, it moves far from the demonstrated trajectory and eventually causes the emergency brake to activate at the joint limits, failing to finish the task.
In Figure 5(h), the arm is again run eight times without contraction, with noise drawn uniformly from the range [−0.05, 0.05] radians added to each joint of the arm's starting home position. The trajectory of the arm varies widely and had to be cut short, as it was continually causing the emergency brake to engage.
V-E Whole-Body Obstacle Avoidance
Here we enable the KUKA arm to follow demonstrated trajectories while avoiding obstacles unseen during training. In the system we describe below, collisions are avoided against any part of the robot body. At every timestep, a commodity depth sensor such as the Intel RealSense, or a PhaseSpace motion capture system, acquires a point cloud representation of the obstacle. Our setup is along the lines of [22], although we do not model occluded regions as occupied. At this point, our demonstrations and trajectories exist in joint space, while our obstacle point clouds exist in Cartesian space with an origin at the base of the robot.
V-E1 Cartesian to Joint Space Map
We precompute a set-valued inverse kinematic map that maps a workspace position to the subset of joint configurations that would cause any part of the arm to occupy that position.
More formally, the obstacle positions are known in Cartesian space, which is different from the control space of the robot (i.e., we control the joint angles rather than the end-effector pose). The KUKA arm simulator allows us to query the forward kinematics map. To compute the inverse of this map, the joint space of the robot was discretized into 658,945 discrete positions, created by regularly sampling each joint from its minimum to its maximum angle using a step size of 0.1 radians. As shown in Figure 7, the robot was positioned at each of the 658,945 discrete joint space points within pybullet [12], and the robot was voxelized using binvox [35]. This produced a map from joint configurations to occupied voxels, from which we compute the set-valued inverse.
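The precomputation above — discretize joint space, run forward kinematics, voxelize the occupied workspace, and invert — can be sketched on a toy 2-link planar arm (the arm model, voxel resolution, and grid step are invented stand-ins for the 7-DOF setup):

```python
import numpy as np
from collections import defaultdict

def forward_points(q, link=1.0):
    """Points occupied by a toy 2-link arm (elbow, end-effector) at q."""
    x1 = link * np.array([np.cos(q[0]), np.sin(q[0])])
    x2 = x1 + link * np.array([np.cos(q[0] + q[1]), np.sin(q[0] + q[1])])
    return [x1, x2]

def voxel(p, res=0.25):
    """Index of the workspace voxel containing point p."""
    return tuple(np.floor(np.asarray(p) / res).astype(int))

# Discretize joint space and record, for each workspace voxel, every
# joint configuration that places some part of the arm inside it.
inv_map = defaultdict(set)
angles = np.arange(-np.pi, np.pi, 0.2)
for q0 in angles:
    for q1 in angles:
        for p in forward_points((q0, q1)):
            inv_map[voxel(p)].add((round(q0, 3), round(q1, 3)))

# Query: which configurations would collide with an obstacle occupying
# the voxel around the workspace point (1.5, 0.0)?
colliding = inv_map[voxel((1.5, 0.0))]
print(len(colliding) > 0)  # True: some configurations reach that voxel
```

At runtime, looking up the voxels touched by an obstacle point cloud immediately yields the joint configurations to repel, with no online inverse kinematics.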
V-E2 Modulation of Contracting Vector Fields
The obstacle positions $p_i$ are then incorporated into a repulsive vector field that pushes the arm away from collision as it moves,

$$f_{\text{obs}}(x) = \sum_{i} \frac{x - p_i}{\| x - p_i \|^{\rho}}, \quad (12)$$

where the integer $\rho$ controls how fast the effect of this vector field decays as a function of distance (a high value of $\rho$ makes the effect of $f_{\text{obs}}$ local, while a small value makes its effect more uniform). This vector field is added to our learnt vector field $g$ to obtain a modulated vector field (depicted in Figure 8)

$$g(x) + \lambda f_{\text{obs}}(x),$$

where $\lambda$ is a positive constant responsible for controlling the strength of the modulation, which is then fed to the KUKA arm. If the modulation is local and the obstacle is well within the joint-space contraction tube, we expect the motion to re-converge to the demonstrated behavior.
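A sketch of this additive modulation (the exact normalization of Eqn. 12 may differ; the stand-in field, obstacle point, and gains here are invented for illustration) shows how a nearby obstacle point reverses the commanded velocity away from it:

```python
import numpy as np

def repulsive_field(x, obstacles, rho=4):
    """Additive repulsion: each obstacle point pushes the state away,
    with influence decaying with distance (higher rho = more local)."""
    v = np.zeros_like(x, dtype=float)
    for p in obstacles:
        diff = x - p
        v += diff / (np.linalg.norm(diff) ** rho + 1e-9)
    return v

def modulated(g, x, obstacles, lam=0.1, rho=4):
    """Learnt field plus scaled repulsion; lam controls its strength."""
    return g(x) + lam * repulsive_field(x, obstacles, rho)

g = lambda x: -x                      # stand-in learnt field (goal: origin)
obs = [np.array([0.5, 0.0])]          # one obstacle point
x = np.array([0.6, 0.0])              # state just behind the obstacle

v_plain = g(x)
v_mod = modulated(g, x, obs)
print(v_plain[0] < 0 and v_mod[0] > 0)  # True: modulation pushes away
```

Because each obstacle contributes an independent additive term, the sum can be evaluated in a distributed fashion over arbitrarily many obstacle points, matching the scalability argument below.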
We point out that it is possible to use alternative modulation methods that come with different guarantees and drawbacks. In [26, 19], for instance, the authors use a multiplicative modulation function that preserves equilibrium points in the case of convex or concave obstacles.

While our approach does not enjoy the same guarantees, its additive nature allows us to handle a large number of obstacles, as every term in Eqn. 12 can be computed in a distributed fashion; furthermore, we do not need to impose any restrictions on the shape of the obstacles (convex/concave). This is particularly important as our control space is different from the space where the obstacles are observed, and the map that links the two spaces can significantly alter the shape of an obstacle in general (e.g., a sphere in Cartesian space can be mapped to a disconnected set in joint space).
V-E3 Real-Time Obstacle Avoidance
Here, using a real-time motion capture system, an obstacle is introduced into the robot's workspace as shown in Figure 0(b). Eight trajectories were executed from the home position with the obstacle in the workspace, and the resultant trajectories are shown in Figure 5(f). At each timestep, the object's position was returned by the motion capture system, and this point in Cartesian space was used to modulate the joint-space vector field as described in Section V-E. The tasks are accomplished: the arm avoids the obstacles but remains within the joint-space contraction tube, re-converging to the demonstrated behavior.
VI Conclusion
This work presents a novel approach to teleoperator imitation using contracting vector fields that are globally optimal with respect to loss minimization and that provide continuous-time guarantees on the behaviour of the system when started from within a contraction tube around the demonstration. Our approach compares favorably with other movement generation techniques. Additionally, we build a workspace Cartesian-to-joint-space map for the robot and utilize it to update our CVF on the fly to avoid dynamic obstacles. We demonstrate how this approach enables the transfer of knowledge from humans to robots for accomplishing a real-world robotic pick-and-place task. Future work includes greater scalability of our solution, composition of CVFs for more complex tasks, integration with a perception module, and helping bootstrap data-hungry reinforcement learning approaches.
References
 [1] https://cs.stanford.edu/people/khansari/download.html.
 [2] Amir Ali Ahmadi. Algebraic relaxations and hardness results in polynomial optimization and lyapunov analysis. arXiv preprint arXiv:1201.2892, 2012.
 [3] Amir Ali Ahmadi and Bachir El Khadir. Timevarying semidefinite programs. arXiv preprint arXiv:1808.03994, 2018.
 [4] Amir Ali Ahmadi, Georgina Hall, Ameesh Makadia, and Vikas Sindhwani. Geometry of 3d environments and sum of squares polynomials. arXiv preprint arXiv:1611.07369, 2016.
 [5] Amir Ali Ahmadi and Anirudha Majumdar. DSOS and SDSOS optimization: LP and SOCP-based alternatives to sum of squares optimization. In 2014 48th Annual Conference on Information Sciences and Systems (CISS), pages 1–5. IEEE, 2014.
 [6] Amir Ali Ahmadi and Pablo A Parrilo. Towards scalable algorithms with formal guarantees for lyapunov analysis of control systems via algebraic optimization. In 2014 IEEE 53rd Annual Conference on Decision and Control (CDC), pages 2272–2281. IEEE, 2014.
 [7] Christopher G Atkeson, BPW Babu, N Banerjee, D Berenson, CP Bove, X Cui, M DeDonato, R Du, S Feng, P Franklin, et al. What happened at the darpa robotics challenge, and why.
 [8] Erin M Aylward, Pablo A Parrilo, and JeanJacques E Slotine. Stability and robustness analysis of nonlinear systems via contraction metrics and sos programming. Automatica, 44(8):2163–2170, 2008.
 [9] Aude Billard, Sylvain Calinon, Ruediger Dillmann, and Stefan Schaal. Robot programming by demonstration. In Springer handbook of robotics, pages 1371–1394. Springer, 2008.
 [10] Man-Duen Choi. Positive semidefinite biquadratic forms. Linear Algebra and its Applications, 12(2):95–100, 1975.
 [11] Man-Duen Choi, Tsit-Yuen Lam, and Bruce Reznick. Real zeros of positive semidefinite forms. I. Mathematische Zeitschrift, 171(1):1–26, 1980.
 [12] Erwin Coumans and Yunfei Bai. PyBullet, a Python module for physics simulation, games, robotics and machine learning. http://pybullet.org/, 2016–2017.
 [13] Hongkai Dai, Anirudha Majumdar, and Russ Tedrake. Synthesis and optimization of force closure grasps via sequential semidefinite programming. In Robotics Research, pages 285–305. Springer, 2018.
 [14] Holger Dette and William J. Studden. Matrix measures, moment spaces and Favard’s theorem for the interval [0,1] and [0,∞). Linear Algebra and its Applications, 345(1-3):169–193, April 2002.
 [15] Anca D Dragan and Siddhartha S Srinivasa. Formalizing assistive teleoperation. MIT Press, 2012.
 [16] Terrence Fong and Charles Thorpe. Vehicle teleoperation interfaces. Autonomous robots, 11(1):9–18, 2001.
 [17] Karin Gatermann and Pablo A Parrilo. Symmetry groups, semidefinite programs, and sums of squares. Journal of Pure and Applied Algebra, 192(1-3):95–128, 2004.
 [18] Ken Goldberg, Michael Mascha, Steve Gentner, Nick Rothenberg, Carl Sutter, and Jeff Wiegley. Desktop teleoperation via the world wide web. In Robotics and Automation, 1995. Proceedings., 1995 IEEE International Conference on, volume 1, pages 654–659. IEEE, 1995.
 [19] Lukas Huber, Aude Billard, and Jean-Jacques E. Slotine. Avoidance of convex and concave obstacles with convergence ensured through contraction. IEEE Robotics and Automation Letters, 2019.
 [20] Auke Jan Ijspeert, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, and Stefan Schaal. Dynamical movement primitives: learning attractor models for motor behaviors. Neural computation, 25(2):328–373, 2013.
 [21] Jerome Jouffroy and Thor I Fossen. A tutorial on incremental stability analysis using contraction theory. Modeling, Identification and Control, 31(3):93, 2010.
 [22] Daniel Kappler, Franziska Meier, Jan Issac, Jim Mainprice, Cristina Garcia Cifuentes, Manuel Wüthrich, Vincent Berenz, Stefan Schaal, Nathan Ratliff, and Jeannette Bohg. Real-time perception meets reactive motion generation. IEEE Robotics and Automation Letters, 3(3):1864–1871, 2018.
 [23] Eamonn Keogh and Chotirat Ann Ratanamahatana. Exact indexing of dynamic time warping. Knowledge and information systems, 7(3):358–386, 2005.
 [24] S. Mohammad Khansari-Zadeh and Aude Billard. Learning stable nonlinear dynamical systems with Gaussian mixture models. IEEE Transactions on Robotics, 27(5):943–957, 2011.
 [25] S. Mohammad Khansari-Zadeh and Aude Billard. Learning control Lyapunov function to ensure stability of dynamical system-based robot reaching motions. Robotics and Autonomous Systems, 62(6), 2014.
 [26] Seyed Mohammad Khansari-Zadeh and Aude Billard. A dynamical system approach to real-time obstacle avoidance. Autonomous Robots, 32(4):433–454, 2012.
 [27] Seyed Mohammad Khansari-Zadeh and Oussama Khatib. Learning potential functions from human demonstrations with encapsulated dynamic and compliant behaviors. Autonomous Robots, 41(1):45–69, 2017.
 [28] Oussama Khatib. Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1):90–98, 1986.
 [29] Masakazu Kojima. Sums of squares relaxations of polynomial semidefinite programs. Research report B397, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, 2003.
 [30] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, January 2001.
 [31] Andre Lemme, Yaron Meirovitch, Seyed Mohammad KhansariZadeh, Tamar Flash, Aude Billard, and Jochen J Steil. Opensource benchmarking for learned reaching motion generation in robotics. 2015.
 [32] Winfried Lohmiller and Jean-Jacques E Slotine. On contraction analysis for nonlinear systems. Automatica, 34(6):683–696, 1998.
 [33] Anirudha Majumdar, Amir Ali Ahmadi, and Russ Tedrake. Control design along trajectories with sums of squares programming. In 2013 IEEE International Conference on Robotics and Automation, pages 4054–4061. IEEE, 2013.
 [34] Naresh Marturi, Alireza Rastegarpanah, Chie Takahashi, Maxime Adjigble, Rustam Stolkin, Sebastian Zurek, Marek Kopicki, Mohammed Talha, Jeffrey A Kuo, and Yasemin Bekiroglu. Towards advanced robotic manipulation for nuclear decommissioning: a pilot study on teleoperation and autonomy. In Robotics and Automation for Humanitarian Applications (RAHA), 2016 International Conference on, pages 1–8. IEEE, 2016.
 [35] Patrick Min. Binvox, a 3d mesh voxelizer, 2004.
 [36] B. O’Donoghue, E. Chu, N. Parikh, and S. Boyd. Conic optimization via operator splitting and homogeneous self-dual embedding. Journal of Optimization Theory and Applications, 169(3):1042–1068, June 2016.
 [37] Pablo A. Parrilo. Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming, 96(2):293–320, 2003.
 [38] Edouard Pauwels, Didier Henrion, and Jean-Bernard Lasserre. Inverse optimal control with polynomial optimization. arXiv preprint arXiv:1403.5180, 2014.
 [39] Michael Posa, Twan Koolen, and Russ Tedrake. Balancing and step recovery capturability via sums-of-squares optimization. In Robotics: Science and Systems, pages 12–16, 2017.
 [40] Harish Ravichandar, Iman Salehi, and Ashwin Dani. Learning partially contracting dynamical systems from demonstrations. In Conference on Robot Learning (CoRL), 2017.
 [41] Stefan Schaal. Is imitation learning the route to humanoid robots? Trends in cognitive sciences, 3(6):233–242, 1999.
 [42] Carsten W Scherer and Camile WJ Hol. Matrix sum-of-squares relaxations for robust semidefinite programs. Mathematical Programming, 107(1-2):189–211, 2006.
 [43] Vikas Sindhwani, Stephen Tu, and Mohi Khansari. Learning contracting vector fields for stable imitation learning. arXiv preprint arXiv:1804.04878, 2018.
 [44] Mark Talamini, Kurtis Campbell, and Cathy Stanfield. Robotic gastrointestinal surgery: early experience and system description. Journal of laparoendoscopic & advanced surgical techniques, 12(4):225–232, 2002.
 [45] Russell H Taylor, Arianna Menciassi, Gabor Fichtinger, Paolo Fiorini, and Paolo Dario. Medical robotics and computerintegrated surgery. In Springer handbook of robotics, pages 1657–1684. Springer, 2016.
 [46] Richard Washington, Keith Golden, John Bresina, David E Smith, Corin Anderson, and Trey Smith. Autonomous rovers for mars exploration. In Aerospace Conference, 1999. Proceedings. 1999 IEEE, volume 1, pages 237–251. IEEE, 1999.
 [47] Tianhao Zhang, Zoe McCarthy, Owen Jowl, Dennis Lee, Xi Chen, Ken Goldberg, and Pieter Abbeel. Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 1–8. IEEE, 2018.