Inferring geometric constraints in human demonstrations

09/29/2018 ∙ by Guru Subramani, et al. ∙ University of Wisconsin-Madison

This paper presents an approach for inferring geometric constraints in human demonstrations. In our method, geometric constraint models are built to create representations of kinematic constraints such as fixed point, axial rotation, prismatic motion, planar motion and others across multiple degrees of freedom. Our method infers geometric constraints using both kinematic and force/torque information. The approach first fits all the constraint models using kinematic information and evaluates them individually using position, force and moment criteria. Our approach does not require information about the constraint type or contact geometry; it can determine both simultaneously. We present experimental evaluations using instrumented tongs that show how constraints can be robustly inferred in recordings of human demonstrations.


1 Introduction

Geometric constraints are a key part of physical tasks. These constraints impose restrictions on how an object may physically move in the environment. Understanding the geometry of these constraints makes it easier to interact with them effectively. For example, opening a door is easier if one knows that the door is attached to a hinge and where the axis of the hinge is. This paper introduces a method for inferring geometric constraints from recorded human demonstrations of people performing tasks. Our method automatically identifies the types of constraints and their parameters robustly by incorporating both kinematic and force/moment information. For example, given a recording of a person opening a door, our method can identify that there is a hinge constraint and determine the location and axis of the hinge.

Knowledge of constraints can be useful in robotics applications. It allows robots to be programmed to use hybrid force-position control to interact with the constraint. For example, to erase a whiteboard, one applies pressure against the plane while moving along it. Such constraints are typically specified manually; constraint inference can simplify the programming process. As part of programming by demonstration, explicit inference of constraints offers the potential to allow the parameters of hybrid force-position control to be inferred from demonstrations. However, existing approaches to inferring constraints are limited.

Our constraint inference method can model and infer complicated constrained motions with multiple degrees of freedom (DOF), such as the constrained motion of a pen when it draws on paper, without any knowledge of the pen's geometry. As shown in figure 1, the tip location is estimated along with the parameters of the planar surface. Our method incorporates forces and moments to distinguish between constraints that are ambiguous when only kinematic information is used. For instance, it can distinguish between the arc motion of a hinge constraint and circular motion in free space.

The key insights behind our method are (1) the constrained motion of an object can be modeled as geometric constraints on a rigid body, (2) kinematic information (position and orientation) alone is insufficient to determine a constraint type, and (3) force/moment information is useful to distinguish between constraint types. Therefore, our approach takes as input not only positions and orientations, but also the applied forces and moments of the tool performing the task.

Our method can identify a wide range of standard geometric constraints and their geometric parameters without requiring knowledge of the detailed geometry of the tool. Prior methods cannot model, identify, and distinguish between the variety of constraint models we consider, and they do not exploit physical properties such as reaction forces and moments, which limits the robustness with which they can identify these constraints (e.g., [1], [2], [3], [4]).

Figure 1: (a) Stylus (rigid body): a human demonstrator moves the stylus (using the instrumented tongs shown in figure 3) with its tip against a planar surface to estimate the planar surface geometry and tool tip location. (b) The trajectory of the rigid body is shown in green. After the demonstration, our algorithm estimates both the location of the plane in the global coordinate frame and the tool tip position relative to the rigid body frame. (c) The estimated plane location compared to ground truth is within the tolerance (sub-mm) of the motion capture system.

2 Related Work

Robot manipulation and path planning in the presence of task constraints are an active research area in robotics. Stilman [5] modeled constraints as a motion constraint vector specifying permissible degrees of freedom about task frame axes, indicating which coordinates of motion may change. This is similar to the Plücker representation [6]. This representation is useful for planning but difficult to use for estimation and fitting geometry, as some constraint models couple linear and angular motion and may not have a well-defined task frame attached to the motion of the body. For example, in the point on plane constraint, the rotation axis does not coincide with the point of contact on the rigid body, nor is it even the same along the entire motion. Li et al. [7] encoded task manifolds as a Constrained Object Manipulation (COM) task for efficient path planning, using a human demonstration to learn a compliance controller. Havoutis and Ramamoorthy [8] represented constraints as lower dimensional task manifolds learned through manifold learning. While this can encode complicated constraint regions, it does not provide a compact, semantic, parameterized representation of a constraint (for example, a hinge constraint is defined by its axis of rotation). Ortenzi et al. [9] used the null space of allowable velocities to determine constraints autonomously, but this may not be suitable for a human demonstration setting.

Constraint inference has also been explored in the programming by demonstration (PbD) context. C-LEARN by Pérez-D'Arpino and Shah [1] utilizes different grasp approaches and keyframes to determine permissible directions such as planar motion or motion along a line. CHAMP by Niekum et al. [2] uses a parametric-model-based changepoint detection system to determine geometric objects such as lines and arcs in recorded motions. Inferring constraints by learning the null spaces of motion has been explored by Lin and Howard [3]. All these approaches provide semantic constraint representations but can only identify simple constraints such as lines, planes, and arcs. Using forces for constraint inference has been explored too. In Subramani et al. [4], clustering and filtering based approaches were used to determine plane, line, and arc constraint geometry. Constrained motions during multiple contacts between a polyhedral robotic tool and the environment are explored by Meeussen et al. [10]. All these methods are limited to simple constraints, and they do not incorporate moment and orientation information, instead requiring specification of the tool geometry.

Our method distinguishes itself from the above-mentioned inference methods by explicitly identifying the constraint type and estimating the parameters associated with the constraint model. Constraints such as a point on an object constrained to a plane can be inferred without any knowledge of the location of the point of contact, something no prior work considers, let alone infers. The models used are compact and semantic, yet can represent constraints with multiple degrees of freedom. Our method incorporates both force and moment information to identify constraint geometry that would otherwise be ambiguous when only kinematic information is known.

3 Salient Features of Approach

This paper provides a method to infer geometric constraints in human demonstrations. Our method takes as input recorded positions, orientations, forces and moments and outputs constraints over a demonstration. The salient features of this approach are as follows:

  • Prior to processing, a library of constraint types is constructed. Kinematic constraints are modeled as constraint equations of the rigid body’s configuration.

  • Input to our method is a recorded task demonstration, consisting of measurements of positions, orientations, forces, and moments for an instrumented tool. The measurement can be in an arbitrary frame on the rigid tool; our methods will determine the contact/constraint parameters with respect to this frame.

  • Our approach requires the motions to be segmented into periods with a single constraint type. We use the approach of [11] to perform the segmentation.

  • For each segment, our approach first attempts to fit each constraint model in the library, estimating its corresponding parameters using nonlinear least squares regression.

  • Each sample in the segment is checked for permissible position, force and moment errors with each constraint type. Constraints are often ambiguous from kinematics alone. Our method disambiguates constraint types using force and moment information.

  • Our approach then selects the constraint through a voting process. Each sample votes for a constraint type if it satisfies the position, force and moment conditions for it.

Figure 2: Geometric constraints and their physical counterparts

4 Mathematical Modeling

When an object/robot end effector is constrained, the degrees of freedom of its rigid body motion are limited, and its rigid body motion is restricted to a subset of SE(3). Our approach models the constrained object as a rigid body. Constraint equations are mathematical relationships between defined geometry on the body (e.g., a point on the body) and defined global geometry (e.g., a plane in the environment). For an initial library of constraints, we have analyzed the constraints shown in figure 2. Other constraints can be added if desired. Our derivations use a standard representation for rigid body motion; alternate representations could be used as well.

4.1 Generalized Model of Constraints

This section derives the relationship between the generalized constraint geometry and the permissible linear velocity, angular velocity, forces and moments on the body. First, we determine the permissible linear and angular velocities using virtual displacements. Next, we use the principle of virtual work to determine the permissible reaction forces and moments.

Consider a 6-degree-of-freedom rigid body of negligible inertial properties located in space through 3 translational coordinates $r$ and rotation coordinates represented either with a unit quaternion $q \in SU(2)$ or an orthogonal rotation matrix $A$. The rigid body is constrained by constraint equations $\Phi : SE(3) \to \mathbb{R}^{m}$ such that:

$$\Phi(p) = 0 \tag{1}$$

where $p = (r, q)$. These equations $\Phi(p) = 0$ represent the admissible configurations of the rigid body.

In order to be admissible, virtual displacements (variations of $p$) under the constraint equations (1) must satisfy:

$$\Phi_r\,\delta r + \Phi_q\,\delta q = 0 \tag{2}$$

where $\Phi_r$ and $\Phi_q$ are the partial derivatives of (1) with respect to $r$ and $q$. Equation (2) may also be written using the virtual rotation variable $\delta\pi$ [12]. The virtual rotation variable is related to the angular velocity of a rigid body in the same way that the virtual displacement $\delta r$ is related to the linear velocity:

$$\Phi_r\,\delta r + \Phi_\pi\,\delta\pi = 0 \tag{3}$$

The permissible reaction forces $F^{c}$ and reaction moments $M^{c}$ that the constraint applies to the constrained body must satisfy the virtual work equation [13], because constraint reaction forces and moments do not produce work. The principle of virtual work requires:

$$\delta r^{T} F^{c} + \delta\pi^{T} M^{c} = 0 \tag{4}$$

Combining equations (3) and (4):

$$F^{c} = \Phi_r^{T}\,\lambda \tag{5}$$
$$M^{c} = \Phi_\pi^{T}\,\lambda \tag{6}$$

where $\lambda$ are the Lagrange multipliers. Equations (5) and (6) provide a relationship between the generalized constraint equations and the permissible reaction forces and moments. $\Phi_\pi$ is computed from $\Phi_q$ as shown in the Appendix (Section 9). This formulation is standard in the multi-body dynamics literature; see [12] for a review.
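To make the formulation concrete, the sketch below evaluates equations (5) and (6) numerically. It is a minimal illustration in NumPy, not the paper's implementation: the function name, the shape conventions for the Jacobians, and the toy plane example are our own assumptions.

```python
import numpy as np

# Toy evaluation of equations (5) and (6): the reaction wrench permitted by a
# constraint is linear in the Lagrange multipliers through the constraint
# Jacobians Phi_r (m x 3) and Phi_pi (m x 3). Names are illustrative.

def reaction_wrench(Phi_r, Phi_pi, lam):
    F_c = Phi_r.T @ lam    # eq. (5): F^c = Phi_r^T lambda
    M_c = Phi_pi.T @ lam   # eq. (6): M^c = Phi_pi^T lambda
    return F_c, M_c

# Example: a single planar constraint with normal n has Phi_r = n^T, so the
# permissible reaction force lies along the plane normal, as intuition suggests.
n = np.array([0.0, 0.0, 1.0])
F_c, M_c = reaction_wrench(n.reshape(1, 3), np.zeros((1, 3)),
                           lam=np.array([2.5]))
print(F_c, M_c)  # [0. 0. 2.5] and [0. 0. 0.]
```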

5 Mathematical Models of Geometric Constraints

This section describes the different constraint types evaluated in the paper.

Consider a 6-degree-of-freedom rigid body of negligible inertial properties located in space through 3 translational coordinates $r$ and rotation coordinates represented either with a unit quaternion $q \in SU(2)$ or an orthogonal rotation matrix $A$. Together $(r, q)$ form the body coordinates $p$.

Each constraint type has a set of parameters that parameterize the geometry of the constraint. Let the parameters of the constraint be $k$; then equation (1) becomes $\Phi(p, k) = 0$.

5.1 Fixed point constraint:

The fixed point constraint represents a point on the rigid body constrained to the environment such as a ball and socket joint.

Consider a point $s^{*}$ in the global reference frame defined as a fixed point on the rigid body. $s$ is a vector directed from the origin of the local reference frame of the body to the rigidly fixed point $s^{*}$, expressed in the global reference frame, and its local-reference-frame counterpart is $\bar{s}$:

$$s^{*} = r + s = r + A\bar{s} \tag{7}$$

where $r$ and $A$ are the body coordinates. $s^{*}$ is attached to a point $P$ in the environment to create the fixed point constraint:

$$\Phi(p, k) = r + A\bar{s} - P = 0 \tag{8}$$

where $\Phi : SE(3) \to \mathbb{R}^{3}$. The parameters of this constraint are $P$ and $\bar{s}$ (a total of six variables). This constraint removes three degrees of freedom through three constraint equations, leaving three degrees of freedom of motion.
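As an illustration, a minimal Python sketch of the residual in equation (8) follows, using SciPy's Rotation for the quaternion-to-matrix conversion; the function name and the packing of $k$ into $(P, \bar{s})$ are our own choices, not the paper's code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Residual of the fixed point constraint, eq. (8): Phi(p, k) = r + A(q) s_bar - P.
# k packs the environment point P and body-frame point s_bar (six parameters).

def fixed_point_residual(r, q, k):
    P, s_bar = k[:3], k[3:]
    A = Rotation.from_quat(q).as_matrix()  # q in scalar-last (x, y, z, w) form
    return r + A @ s_bar - P

# A pose that satisfies the constraint yields a zero residual:
P, s_bar = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.1, 0.0])
q = Rotation.from_euler("z", 30, degrees=True).as_quat()
r = P - Rotation.from_quat(q).as_matrix() @ s_bar
print(fixed_point_residual(r, q, np.concatenate([P, s_bar])))  # ~[0. 0. 0.]
```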

5.2 Point on plane constraint:

The point on plane constraint describes a point on the rigid body constrained to a plane in the environment such as a pencil tip (point) moving across paper (plane).

This constraint requires a representation of a plane. A plane may be generated by applying a general displacement (i.e. translation and rotation) transformation of the x-y plane which involves:

  1. translating the x-y coordinate plane along the z-axis

  2. rotating the translated plane about the origin.

We represent the rotation transformation using two exponential coordinates $w = (w_1, w_2, 0)$ corresponding to $e^{\hat{w}}$, the exponential map, which is equivalent to a rotation matrix with an axis of rotation in the x-y plane. Rodrigues' rotation formula [14] is used to compute $e^{\hat{w}}$. The third term of $w$ is zero because rotations about the z-axis (perpendicular to the plane) do not alter the plane's geometry. The translation is represented by $t$. Thus, the normal vector of this plane is $n = e^{\hat{w}}\hat{z}$ and the shifted origin of the x-y plane is $c = e^{\hat{w}}(0, 0, t)^{T}$.

A point $P$ on the plane satisfies the following equation:

$$n^{T}(P - c) = 0 \tag{9}$$

which specifies that the dot product between a vector within the plane and the plane normal is zero. If the constrained point on the rigid body is $s^{*}$, then the constraint equation is:

$$\Phi(p, k) = n^{T}(r + A\bar{s} - c) = 0 \tag{10}$$

The parameters of this constraint are $k = (w_1, w_2, t, \bar{s})$, six variables. The constraint equation removes one degree of freedom, leaving 5 degrees of freedom of movement.
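The sketch below evaluates the point on plane residual of equation (10), using SciPy's Rotation.from_rotvec as the Rodrigues exponential map described above; the function and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Residual of the point on plane constraint, eq. (10). The plane is given by
# exponential coordinates (w1, w2) and offset t; from_rotvec implements
# Rodrigues' rotation formula.

def point_on_plane_residual(r, q, k):
    w1, w2, t, s_bar = k[0], k[1], k[2], k[3:]
    R = Rotation.from_rotvec([w1, w2, 0.0]).as_matrix()   # e^{w-hat}
    n = R @ np.array([0.0, 0.0, 1.0])                     # plane normal
    c = R @ np.array([0.0, 0.0, t])                       # shifted origin
    A = Rotation.from_quat(q).as_matrix()
    return n @ (r + A @ s_bar - c)                        # scalar residual
```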

5.3 Concentric cylinder constraint:

The concentric cylinder constraint is similar to the motion of a collar on a shaft where a rigid body (the collar) is permitted to translate and rotate about a fixed axis (shaft).

This constraint requires a representation of the axis of rotation. Similar to the plane, the axis of rotation can be generated by applying a general displacement (i.e. translation and rotation) transformation of the z coordinate axis which is equivalent to:

  1. translating the z axis in the x-y plane

  2. rotating the translated axis about the origin.

This is represented by two exponential coordinates $w = (w_1, w_2, 0)$ and two translational coordinates $(d_1, d_2)$. The third term in $w$ is zero because rotations about the z-axis produce a line that could also be produced by an alternative translation. This defines the axis in the global reference frame. The tangent to this axis is $t = e^{\hat{w}}\hat{z}$ and the translated origin is $c = e^{\hat{w}}(d_1, d_2, 0)^{T}$. $n_1 = e^{\hat{w}}\hat{x}$ and $n_2 = e^{\hat{w}}\hat{y}$ are vectors perpendicular to this axis. The rigid body must only translate along and rotate about this axis, and this is enforced by:

  1. constraining a point $s^{*}$ on the rigid body to coincide with the axis

  2. constraining the vector $s = A\bar{s}$ to be perpendicular to this axis

  3. constraining a unit vector $l = A\bar{l}$ on the rigid body to be perpendicular to $s$ and to the axis.

The unit vector $l$ prevents the rigid body from rotating about $s$. The constraint equations are:

$$n_1^{T}(r + A\bar{s} - c) = 0 \tag{11}$$
$$n_2^{T}(r + A\bar{s} - c) = 0 \tag{12}$$
$$(A\bar{l})^{T}\,t = 0 \tag{13}$$
$$(A\bar{s})^{T}\,t = 0 \tag{14}$$
$$\bar{l}^{T}\bar{s} = 0 \tag{15}$$

Equations (11) and (12) force the point $s^{*}$ to lie on the axis, because the vector between the origin of the line and the point $s^{*}$ must be perpendicular to both $n_1$ and $n_2$. Equations (13), (14) and (15) enforce perpendicularity between $l$, $s$ and the axis.

A trivial solution to equations (13) and (15) is $\bar{l} = 0$. This solution does not satisfy the intent of these constraint equations, which is to force perpendicularity between vectors. To address this, we constrain $\bar{l}$ to be a unit vector:

$$\bar{l}^{T}\bar{l} = 1 \tag{16}$$

Equations (11) through (16) represent the constraint equations $\Phi(p, k) = 0$. The parameters of this constraint are $k = (w_1, w_2, d_1, d_2, \bar{s}, \bar{l})$. Equations (15) and (16) do not apply a constraint on the body but define geometry on it, and thus do not remove degrees of freedom from the body. Thus the rigid body has 2 degrees of freedom of movement.
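Below is one plausible NumPy rendering of equations (11) through (16) under the axis parameterization described above; the exact stacking, ordering, and names are our assumptions rather than the paper's code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# A plausible stacking of the concentric cylinder equations (11)-(16).

def concentric_cylinder_residuals(r, q, k):
    w1, w2, d1, d2 = k[:4]
    s_bar, l_bar = k[4:7], k[7:10]
    R = Rotation.from_rotvec([w1, w2, 0.0]).as_matrix()
    t_axis = R @ np.array([0.0, 0.0, 1.0])      # axis tangent
    c = R @ np.array([d1, d2, 0.0])             # translated axis origin
    n1, n2 = R[:, 0], R[:, 1]                   # perpendicular to the axis
    A = Rotation.from_quat(q).as_matrix()
    s, l = A @ s_bar, A @ l_bar
    on_axis = r + s - c                         # from axis origin to s*
    return np.array([
        n1 @ on_axis,          # (11) s* lies on the axis ...
        n2 @ on_axis,          # (12) ... in both perpendicular directions
        l @ t_axis,            # (13) l perpendicular to the axis
        s @ t_axis,            # (14) s perpendicular to the axis
        l_bar @ s_bar,         # (15) l perpendicular to s (body frame)
        l_bar @ l_bar - 1.0,   # (16) l is a unit vector
    ])
```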

5.4 Planar constraint:

The planar constraint is exemplified by an eraser moving against a whiteboard. The rigid body can only rotate about a vector perpendicular to the plane, and all points within the rigid body translate parallel to this plane. We may assume the origin of the local coordinate frame on the body is contained within the plane:

$$n^{T}(r - c) = 0 \tag{17}$$

A unit vector $\bar{l}$ on the body (unity enforced by equation (16)) is defined to force the body to remain perpendicular to the plane:

$$n_1^{T}\,A\bar{l} = 0 \tag{18}$$
$$n_2^{T}\,A\bar{l} = 0 \tag{19}$$

where $n_1 = e^{\hat{w}}\hat{x}$ and $n_2 = e^{\hat{w}}\hat{y}$ lie within the plane. The parameters of this constraint are $k = (w_1, w_2, t, \bar{l})$, a total of 6 variables. Thus, the rigid body has 3 degrees of freedom of movement.

5.5 Prismatic constraint:

The prismatic constraint represents translational motion in one direction. It is similar to pulling out a drawer. All points on the rigid body translate identically. We assume the origin of the local coordinate frame of the body is contained within this line/axis:

$$n_1^{T}(r - c) = 0 \tag{20}$$
$$n_2^{T}(r - c) = 0 \tag{21}$$

The orientation of the rigid body is fixed using equations (15), (16) and the following:

(23)
(24)
(25)
(26)

where $\bar{l}$ and $\bar{m}$ are unit vectors fixed on the rigid body. Equations (23) through (26) and (16) prevent the body from rotating. This rigid body has one degree of freedom of motion. The constraint parameters are $k = (w_1, w_2, d_1, d_2, \bar{l}, \bar{m})$.

5.6 Axial rotation constraint:

The axial rotation constraint is similar to a door knob or a hinged door: all points on the rigid body rotate about an axis, and translations are not permitted.

Consider a point $s^{*}$ on the rigid body that is on the axis of rotation and in the plane perpendicular to the axis containing the coordinate frame origin. A rigid point on the axis of rotation, defined as $c$ (the translated origin of the axis), constrains the point $s^{*}$. The vector $s = A\bar{s}$ is perpendicular to this axis. The rigid body must not rotate about the vector $s$, so the unit vector $l = A\bar{l}$ is introduced to enforce this condition. The constraint equations are (15), (16) and the following:

$$r + A\bar{s} - c = 0 \tag{27}$$
$$(A\bar{s})^{T}\,t = 0 \tag{28}$$
$$(A\bar{l})^{T}\,t = 0 \tag{29}$$

The constraint parameters are $k = (w_1, w_2, d_1, d_2, \bar{s}, \bar{l})$. This constraint has 1 degree of freedom of movement.

6 Inference Approach

Our algorithm takes as input samples of the motion containing positions, orientations, linear velocities, angular velocities, forces and moments. As output, it provides a constraint model and its corresponding constraint parameters $k$. Our method requires the demonstration to be segmented such that each period contains a single constraint. We perform this segmentation using the method of [11]. The segmentation allows us to assume that all samples in a segment are part of the constraint. The steps of our approach are as follows:

6.1 Fitting Geometric Models to Kinematic Information

The first step is to fit all of our defined constraint models to the kinematic information. Equations (1) and (2) are used as the regression functions. The virtual displacements $\delta r$ and $\delta\pi$ are replaced with the linear velocity $v$ and angular velocity $\omega$. To simplify notation, equations (1) and (2) are written as $\Phi(p, k) = 0$ and $\dot{\Phi}(p, v, \omega, k) = 0$ respectively. Consider the $N$ samples used for the model fit. These samples must satisfy $\Phi = 0$ and $\dot{\Phi} = 0$. The kinematic fit estimates the model parameters $k$ using the known variables $p$, $v$ and $\omega$ by inserting these values into $\Phi$ and $\dot{\Phi}$ at every sample and performing a least squares regression over all samples:

$$\hat{k} = \arg\min_{k} \sum_{i=1}^{N} \left( \|\Phi(p_i, k)\|^{2} + \|\dot{\Phi}(p_i, v_i, \omega_i, k)\|^{2} \right) \tag{30}$$

The least squares regression was performed using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [15].
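A minimal sketch of the fit in equation (30) using SciPy's BFGS follows. For brevity it stacks only the configuration-level residuals $\Phi$; a full implementation would stack the velocity-level residuals $\dot{\Phi}$ in the same way. The name residual_fn stands in for any constraint model from Section 5 and is our own convention.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the kinematic fit, eq. (30): sum squared residuals over all
# samples and minimize with BFGS (gradients via finite differences).

def fit_constraint(residual_fn, samples, k0):
    """samples: list of (r, q) pairs; k0: initial guess for parameters k."""
    def cost(k):
        res = [np.atleast_1d(residual_fn(r, q, k)) for r, q in samples]
        return float(np.sum(np.concatenate(res) ** 2))
    return minimize(cost, k0, method="BFGS")

# e.g. result = fit_constraint(point_on_plane_residual, samples, k0);
# result.x then holds the estimated plane coordinates and contact point s_bar.
```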

6.2 Evaluating the fit quality with Kinematic Information

Once the parameters for each constraint model are identified, the best model must be selected. A naïve approach would be to pick the model with the least fit error. However, this approach fails because the fit errors of different models are not comparable: the equations differ and the units are not the same. Our approach is instead to eliminate models that do not agree with the data. Each model is evaluated independently. A kinematic error criterion (shown in Table 1) uses kinematic information to evaluate position and orientation errors for each sample in the data.

A threshold is used to identify samples that agree well with the model. This threshold may be set considering the scale of the constraint motion (e.g., opening a large room door vs. turning a small knob) and the instrumentation accuracy. Thresholding generates a Boolean list marking the permissible samples under the kinematic error criterion. The units of the kinematic error criteria are distance.

Prismatic Motion | Distance between the estimated line and the coordinate frame origin
Axial Rotation | Distance between the rotation point on the rigid body and the axis of rotation
Planar Motion | Distance between the rigid body frame origin and the plane
Fixed Point | Distance between the global point P and the rigid body point s*
Concentric Cylinder | Distance between the rotation point on the body and the estimated axis of rotation
Point on Plane | Distance between the point on the rigid body and the estimated plane
Table 1: Kinematic error criteria for each constraint

6.3 Evaluating fit quality using Force and Moment information

In many cases the model may still be ambiguous after applying the kinematic error criterion described above. For example, a true fixed point constrained motion will have very small kinematic errors for both the point on plane and the fixed point constraints, because the point on plane is the more general model. In this case the plane geometry of the point on plane constraint is not well defined, as different planes could satisfy the kinematic information. In other situations it may still be well defined. For instance, the motion of opening a door, an axial rotation constraint, is also a permissible planar constraint; both the axial rotation and planar constraint geometries are well defined for the prescribed kinematic motion. This ambiguity can be addressed by considering the reaction and friction forces (and moments) caused by the constraint. However, along with the reaction and friction forces, the measured forces contain inertial and gravitational forces. Inertial properties of the constrained object are not compensated, but they may be considered negligible when the accelerations in the demonstration are small. Gravitational forces on the constrained object are likewise not compensated, but they may be considered negligible when the constrained motion of the object is perpendicular to the direction of gravity or the object moved is light. This is the case in many real world situations, such as pulling out a drawer, where the motion of the drawer is perpendicular to gravity, or erasing a whiteboard, where the weight of the dry eraser is small.

We incorporate force information by determining whether the measured reaction forces and friction forces are consistent with the identified constraint. If the identified constraint is correct, then ideally the measured forces would equal the sum of the reaction and friction forces (assuming negligible inertial and gravitational effects). Equations (5) and (6) provide the permissible reaction forces and moments. To determine the reaction forces and moments, the Lagrange multipliers must be estimated. Using the measured forces $F$ and moments $M$, the Lagrange multipliers are solved at every sample using least squares optimization:

$$\hat{\lambda} = \arg\min_{\lambda} \left( \|F - \Phi_r^{T}\lambda\|^{2} + \|M - \Phi_\pi^{T}\lambda\|^{2} \right) \tag{31}$$

The estimated reaction forces $\hat{F}^{c}$ and estimated reaction moments $\hat{M}^{c}$ are computed as follows:

$$\hat{F}^{c} = \Phi_r^{T}\hat{\lambda} \tag{32}$$
$$\hat{M}^{c} = \Phi_\pi^{T}\hat{\lambda} \tag{33}$$

The residual forces and moments after removing the reaction forces and moments are:

$$F^{res} = F - \hat{F}^{c} \tag{34}$$
$$M^{res} = M - \hat{M}^{c} \tag{35}$$

The residuals still contain friction. Friction forces and moments are directed along the direction of motion; $\hat{v}$ and $\hat{\omega}$ are the unit vectors of the linear and angular velocity respectively:

$$F^{fric} = (F^{res} \cdot \hat{v})\,\hat{v} \tag{36}$$
$$M^{fric} = (M^{res} \cdot \hat{\omega})\,\hat{\omega} \tag{37}$$

Finally, the force and moment error criteria are:

$$e_F = \|F^{res} - F^{fric}\| \tag{38}$$
$$e_M = \|M^{res} - M^{fric}\| \tag{39}$$
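The force/moment check of equations (31) through (39) can be sketched for a single sample as follows, assuming the constraint Jacobians at that sample are available (for example, from differentiating the fitted model); all names are illustrative.

```python
import numpy as np

# Per-sample force/moment error, eqs. (31)-(39), given the constraint
# Jacobians Phi_r, Phi_pi (m x 3), measured wrench (F, M), and velocities.

def force_moment_errors(Phi_r, Phi_pi, F, M, v, omega, eps=1e-9):
    # (31): least-squares Lagrange multipliers from the measured wrench
    J = np.vstack([Phi_r.T, Phi_pi.T])            # maps lambda -> (F^c, M^c)
    lam = np.linalg.lstsq(J, np.concatenate([F, M]), rcond=None)[0]
    # (32)-(35): reaction wrench and residuals
    F_res = F - Phi_r.T @ lam
    M_res = M - Phi_pi.T @ lam
    # (36)-(37): friction acts along the (angular) velocity direction
    v_hat = v / max(np.linalg.norm(v), eps)
    w_hat = omega / max(np.linalg.norm(omega), eps)
    F_fric = (F_res @ v_hat) * v_hat
    M_fric = (M_res @ w_hat) * w_hat
    # (38)-(39): force and moment unexplained by reaction plus friction
    return np.linalg.norm(F_res - F_fric), np.linalg.norm(M_res - M_fric)
```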

It is important that friction forces and moments are removed only after the reaction forces and moments are removed. It is possible to have reaction forces and moments in the direction of motion, because the forces and moments in the virtual work equation are coupled and the work done by each may cancel the other:

$$\delta r^{T} F^{c} = -\,\delta\pi^{T} M^{c} \tag{40}$$
Constraint Type | Position Threshold (m) | Force Threshold (N) | Moment Threshold (Nm)
Point on plane | 0.0005 | 0.05 | 0.5
Fixed point | 0.0001 | 1.0 | 0.2
Concentric cylinder | 0.002 | 0.02 | 0.01
Planar | 0.001 | 1.0 | 0.02
Prismatic | 0.001 | 0.1 | 0.001
Axial rotation | 0.01 | 0.02 | 0.2
Table 2: Position, force, and moment thresholds for each constraint

Similar to the kinematic error criterion, the force and moment error criteria are thresholded, and Boolean lists of permissible samples are determined for each. These thresholds depend on the accuracy of the measured data and the size of the constraints involved. The threshold for the moments depends on the distance of the applied forces from the local reference frame, as a large distance would result in large moments. In practice, applying the thresholds is fast and can be done interactively; the easiest way to determine them is to perform a few experiments with the instrumentation and the available constraint models. (The numerical values of the thresholds for the experiments in the following sections are provided in Table 2.)

6.4 Selecting the constraint model

Each model has corresponding kinematic, force, and moment Boolean lists describing which samples fit the model. The intersection of these lists provides the eligible samples for each constraint type. The constraint with the most eligible samples is selected.
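A compact sketch of this voting step, with hypothetical per-model error arrays and thresholds mirroring Table 2, might look like this:

```python
import numpy as np

# Voting over constraint models: a sample is eligible for a model when it
# passes all three thresholds; the model with the most eligible samples wins.
# `errors` maps model name -> (e_kin, e_F, e_M) per-sample arrays; `thresholds`
# maps model name -> (t_kin, t_F, t_M). Both containers are hypothetical.

def select_constraint(errors, thresholds):
    votes = {}
    for model, (e_kin, e_F, e_M) in errors.items():
        t_kin, t_F, t_M = thresholds[model]
        eligible = (e_kin < t_kin) & (e_F < t_F) & (e_M < t_M)
        votes[model] = int(eligible.sum())   # intersection of Boolean lists
    return max(votes, key=votes.get)
```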

Figure 3: (a) Instrumented tongs, (b) Constraint Sabre, (c) Inferred constraint motions. The motions are correctly segmented into free space motion (grey) and constrained motion (colored).

7 Experimental Evaluations

The performance of our constraint inference system was evaluated using a custom testbed containing the constraints described in section 5. Two different handheld tools, the Instrumented Tongs and the Constraint Sabre, shown in figure 3, were used to collect measurements from constraint interactions in human demonstrations.

The Instrumented Tongs simulate a robot gripper and are used to manipulate constrained objects. The tongs are tracked using an OptiTrack motion capture system, and the applied forces and moments were measured using two enclosed ATI Mini40 force-torque sensors. The tongs do not provide a rigid attachment to constrained objects, and thus the motion of the tongs cannot be taken as the motion of the constrained object, requiring markers to be placed on the object. The force-torque sensors' measurements are transformed to the object's frame to calculate the applied forces and moments on the object.

In the case of the Constraint Sabre, the tool attaches rigidly to a constrained object using a hex key, enabling the use of the Constraint Sabre's kinematic measurements as a proxy for the motion of the constrained object. Motion capture markers measure the Constraint Sabre's position and orientation in space, and a force-torque sensor measures the applied forces and moments. Different tools may be attached to the Constraint Sabre to interact with constraints in the environment, similar to different robot end effector tool attachments.

In section 6, the method assumes that the samples considered contain only one active constraint. This is not a practical assumption, as a typical demonstration may have multiple constraints separated by free space motion. The force action recognition method of [11] was used to identify constrained motion segments between free space motions. Figure 3 shows an example of a typical demonstration and its visualization. An expert uses similar plots to determine classification accuracy.

7.1 Inferring the point on plane constraint in detail

Here we show in detail how the constraint parameters are determined for the point on plane constraint (shown in figure 1). The point on plane constraint was demonstrated by moving a custom-made steel-tipped stylus against a plane. Motion capture markers attached to the stylus measure its rigid body motion. A demonstrator uses the instrumented tongs to grasp the stylus and move it against the plane. Our method estimates both the plane geometry and the location of the point of contact on the stylus from a single demonstrated motion, within the tolerance of the motion capture system. Any stylus (rigid body) could be used as long as measurements of the rigid body motion are available.

7.2 Classification and fit accuracy

Classification and fit accuracy were evaluated with both the Instrumented Tongs and the Constraint Sabre, reported as two separate experiments.

The instrumented tongs were used to interact with five different constraint types. Two demonstrators, one of whom was not an author of the paper, performed approximately 10 trials of each of the five constraint types (98 trials total; two trials were removed because of corruption in the motion capture data). During each trial the testbed was repositioned to provide variability in the constraint positions.

Constraint | Classification Accuracy | Fit Error Mean (m) | S.D. | Min | Max
Prismatic | 100% | 0.000104 | 1.99e-05 | 6.58e-05 | 0.00014
Axial Rotation | 100% | 0.00106 | 0.000247 | 0.000449 | 0.00148
Fixed Point | 75% | 0.000771 | 0.000177 | 0.000573 | 0.00135
Point on Plane | 100% | 0.000166 | 4.26e-05 | 0.000110 | 0.000248
Planar Motion | 100% | 0.000894 | 0.000445 | 0.000440 | 0.00249
Table 3: Classification and fit statistics for the Instrumented Tongs

The classification accuracy and the corresponding fit accuracy, calculated using the kinematic error criterion (assuming the constraint is inferred correctly), are presented in Table 3. The fixed point constraint was falsely predicted as a rotational constraint (5%) and as a point on plane constraint (20%) because the reaction forces and moments are coupled in all three constraints (see equation (40)). Equation (31) is not reliable when large moments are measured, as large moments bias the estimated Lagrange multipliers. Without force and moment thresholding the overall accuracy was 57%. Thus, force and moment information is useful for distinguishing between constraint types in ambiguous situations.

A similar experiment was conducted with the Constraint Sabre. Using the Constraint Sabre, two demonstrators, one of whom was not an author of the paper, performed 10 and 13 constraint interactions, respectively, over the linear, axial rotation, concentric cylinder, and planar motion constraints.

The complete set of constraint interactions was performed within a single demonstration, as shown in Figure 3. Free space motion occurred between distinct constraint interactions. The overall prediction accuracies for the linear, axial rotation, concentric cylinder, and planar constraints were 87%, 96%, 91%, and 100% respectively.

The linear and rotational constraints are occasionally misclassified as a planar constraint (9% and 4% respectively), likely because the planar constraint is a more general model, which often results in a better fit to the measured data. If small forces and moments are applied during the demonstration, the force and moment criteria are not useful, and thus the planar constraint is mistakenly selected.

Sometimes the concentric cylinder constraint is incorrectly identified as an axial rotation constraint (5%), either when no translation occurred or due to the low tolerance set on the kinematic error criterion for the axial rotation constraint. The low tolerance was required to counteract the mechanical play between the hex key and the physical constraint.

The overall fit accuracy was lower than with the instrumented tongs because of the mechanical play in the hex key mate. If the rigid body motions of the constrained objects were measured directly, the fit and classification accuracies would likely be similar to those of the instrumented tongs.

Figure 4: Left: recorded motion interacting with an axial rotation constraint; both planar and axial rotation constraint geometry fit the recorded motions. Right: while position errors for both the planar (green) and axial rotation (blue) models are similar, the force errors are significant for the planar model and can be used to determine the axial rotation model as the correct constraint model.

7.3 Significance of force and moment information

The same motion may fit different constraint models. Figure 4 shows a recorded motion interacting with an axial rotation constraint. While the position errors for both the planar and axial rotation models are similar, as both fit the motion well, the force errors are significant for the planar model and can be used to determine the axial rotation model as the correct constraint model. Note that position errors are low for the point on plane and axial rotation constraints too, but these are not shown here. Force and moment information is useful to distinguish between constraint types that are ambiguous when only kinematic information is considered. While one may argue for always picking the most constrained model and ignoring force and moment information, this does not work in all situations: for example, a 2-DOF concentric cylinder constraint will fit a 3-DOF point-contact constrained motion better but is a more restrictive model (fewer degrees of freedom). Including force and moment information is not a heuristic, as it exploits the physics of the interaction.

7.4 Significance of quality of information

Constraints that have many degrees of freedom, such as the point on plane constraint, require many distinct samples to estimate reliable parameters. For example, simply moving the stylus in a straight line without changing its orientation in space would not provide enough information to determine the correct parameters, such as the location of the plane. In a human demonstration, samples close in time are similar. Fitting such a constraint with a few samples contiguous in time may not provide as good a fit as sampling randomly from the entire demonstration. Figure 5 shows this in detail.

Figure 5: The point on plane constraint performed with the instrumented tongs. The recorded trajectory is similar to the one shown in figure 1. Left: the fit error (m) of the point on plane constraint as a function of the number of contiguous samples used during fitting. Projected images of the stylus motion indicate the amount of information over the demonstration. In this particular example, only after reaching 1 second of motion does the fit error drop to an acceptable value. Right: samples are selected randomly from the entire demonstration and fit. The fit error drops as the number of samples increases. Randomly selected samples are usually distinct and provide richer information than an equal number of contiguous samples.

8 Conclusion

This paper demonstrates a method to identify geometric constraints with various degrees of freedom, even in the absence of tool geometry. It describes how to model these constraints and fit motions to these models, and it uses force and moment information to distinguish between constraints in ambiguous situations. The estimated models are semantic and may be parsed by both humans and machines.

However, the proposed method does not estimate inertial and gravitational effects, which in certain cases are significant, for example when interacting with constrained exercise equipment. It also does not incorporate force and moment information in the fitting procedure, which may improve performance. Finally, it would be interesting to see this method in a fully integrated teaching-by-demonstration system; we consider this an interesting avenue for future work.

9 Appendix - Kinematic Identities

9.1 Relationship between $\delta\pi$ and $q$

The virtual rotation $\delta\pi$ is related to the variation $\delta q$ of the quaternion by:

$$\delta\pi = 2E\,\delta q \tag{41}$$

Quaternion $q = (e_0, e)$ where $e_0 \in \mathbb{R}$ and $e \in \mathbb{R}^{3}$.

$$E = \begin{bmatrix} -e & \tilde{e} + e_0 I \end{bmatrix} \tag{42}$$

$$\Phi_\pi = \tfrac{1}{2}\,\Phi_q E^{T} \tag{43}$$

where

$$\tilde{e} = \begin{bmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{bmatrix} \tag{44}$$

and $I$ is the $3 \times 3$ identity matrix.

References

  • Pérez-D’Arpino and Shah [2017] C. Pérez-D’Arpino and J. A. Shah. C-learn: Learning geometric constraints from demonstrations for multi-step manipulation in shared autonomy. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 4058–4065. IEEE, 2017.
  • Niekum et al. [2015] S. Niekum, S. Osentoski, C. G. Atkeson, and A. G. Barto. Online Bayesian changepoint detection for articulated motion models. Proceedings - IEEE International Conference on Robotics and Automation, 2015-June(June):1468–1475, 2015. ISSN 10504729. doi: 10.1109/ICRA.2015.7139383.
  • Lin and Howard [2016] H.-C. Lin and M. Howard. Learning Null Space Projections in Operational Space Formulation. 2016.
  • Subramani et al. [2018] G. Subramani, M. Gleicher, and M. Zinn. Recognizing geometric constraints in human demonstrations using force and position signals. IEEE Robotics and Automation Letters, 3(2):1252–1259, April 2018. doi: 10.1109/LRA.2018.2795648.
  • Stilman [2010] M. Stilman. Global Manipulation Planning in Robot Joint Space with Task Constraints. IEEE Transactions on Robotics, 26(3):576–584, 2010. doi: 10.1109/TRO.2010.2044949.
  • Siciliano and Khatib [2008] B. Siciliano and O. Khatib. Springer Handbook of Robotics, chapter 2. Springer Science & Business Media, 2008.
  • Li et al. [2017] M. Li, K. Tahara, and A. Billard. Learning task manifolds for constrained object manipulation. Autonomous Robots, 42(1):1–16, 2017. ISSN 15737527. doi: 10.1007/s10514-017-9643-z.
  • Havoutis and Ramamoorthy [2013] I. Havoutis and S. Ramamoorthy. Motion planning and reactive control on learnt skill manifolds. The International Journal of Robotics Research, 32(9-10):1120–1150, 2013.
  • Ortenzi et al. [2016] V. Ortenzi, H.-c. Lin, M. Azad, J. A. Kuo, and M. Mistry. Kinematics-based estimation of contact constraints using only proprioception. 2016 IEEE-RAS International Conference on Humanoid Robots (Humanoids 2016), pages 1304–1311, 2016. ISSN 21640580. doi: 10.1109/HUMANOIDS.2016.7803438.
  • Meeussen et al. [2008] W. Meeussen, J. Rutgeerts, K. Gadeyne, H. Bruyninckx, and J. De Schutter. Contact state segmentation using particle filters for programming by human demonstration in compliant motion tasks. Springer Tracts in Advanced Robotics, 39:3–12, 2008. ISSN 16107438. doi: 10.1007/978-3-540-77457-0_1.
  • Subramani et al. [2017] G. Subramani, D. Rakita, H. Wang, J. Black, M. Zinn, and M. Gleicher. Recognizing Actions during Tactile Manipulations through Force Sensing. pages 4386–4393, 2017.
  • Haug [1989] E. J. Haug. Computer aided kinematics and dynamics of mechanical systems, volume 1. Allyn and Bacon Boston, 1989.
  • Lanczos [2012] C. Lanczos. The variational principles of mechanics. Courier Corporation, 2012.
  • Murray et al. [1994] R. M. Murray, Z. Li, S. S. Sastry, and S. S. Sastry. A mathematical introduction to robotic manipulation. CRC press, 1994.
  • Fletcher [2013] R. Fletcher. Practical methods of optimization. John Wiley & Sons, 2013.