Feature-Based Transfer Learning for Robotic Push Manipulation

05/09/2019 · Jochen Stüber, et al. · University of Birmingham

This paper presents a data-efficient approach to learning transferable forward models for robotic push manipulation. Our approach extends our previous work on contact-based predictors by leveraging information on the pushed object's local surface features. We test the hypothesis that, by conditioning predictions on local surface features, we can achieve generalisation across objects of different shapes. In doing so, we do not require a CAD model of the object but rather rely on a point cloud object model (PCOM). Our approach involves learning motion models that are specific to contact models. Contact models encode the contacts seen during training time and allow generating similar contacts at prediction time. Predicting on familiar ground reduces the motion models' sample complexity while using local contact information for prediction increases their transferability. In extensive experiments in simulation, our approach is capable of transfer learning for various test objects, outperforming a baseline predictor. We support those results with a proof of concept on a real robot.

I Introduction

Pushing is an essential skill for both humans and robots. While prehensile manipulation is versatile and elegant, it is not always possible or suitable. Objects may be too large or heavy to grasp, or the robot may not be equipped with grippers. Mobile robots in particular are poised to encounter an unpredictable variety of real-world manipulation tasks without the ability to grasp, e.g. adding a gripper to NASA's K-Rex rover would mean extra payload [7]. Rocks on the moon may nevertheless obstruct its path. In a more mundane context, precise pushing allows industrial and personal robots to operate efficiently in cluttered environments by removing obstacles and reducing pre-grasp uncertainty [3]. In both contexts, robots may encounter objects of various sizes, shapes, and materials. Hence, there is a strong case for learning transferable forward models of robotic push manipulation.

However, generalisation across objects is challenging and still an open problem. Analytical approaches to pushing rely on knowledge of intrinsic physical parameters which may not be available or may be expensive to obtain. Data-driven pushing models tend to either generalise poorly, require impractically large amounts of data, or both.

We present a contact-based approach to push prediction that enables generalisation across objects of different shapes. Our aim is not to improve the quality of predictions for a specific object, but to efficiently learn a generative model that enables the robot to make reasonable predictions for novel objects. We have also based our approach on features that can be extracted from a point cloud obtained on the fly by the robot, so as to remove the need for modelling various types of objects. We do so by approximating the global motion of the object by learning a set of local experts. Each expert is a specialised predictor learned from demonstration.

In particular, we learn two types of experts: 1) static contact models and 2) motion models. The former encode the contacts between the pusher and the object, and between the object and the environment. The latter predict how the object will behave under a specific push. In other words, instead of learning motions for a specific object, we break the problem down into learning how a few local points of interest (i.e. robot-object and object-environment contacts) will change under a push. We then use those predictions to estimate the global motion of the object.

Figure 1 shows our test scenario, in which a Pioneer 3DX pushes an object not previously encountered, composed of two piled dice. All our models were trained in a simulated environment, including the one tested in the real scenario. To cope with the discrepancy between the simulated and the real world, we learned the motion models from rollouts with sampled friction coefficients (see Sec. IV-C).

Fig. 1: Our test scenario. A Pioneer 3DX equipped with a 3D printed bumper pushing a novel object. The robot is capable of predicting the effects of the push even though the dice were not present in the training data.

When encountering a novel object, we use the contact model to generate robot-object and object-environment contacts similar to those seen during training, and apply the contact-specific motion models to predict how the object will behave under a push. As the generated contacts are similar to those seen during training time, the motion models can predict on familiar ground.

Our main contributions are thus twofold. First, we propose a novel approach to transfer learning of robotic push manipulation. Second, we test that approach in extensive experiments in a simulated environment, and in a proof of concept on a real robot.

II Related Work

Fig. 2: An object is pushed from an initial pose (dark grey) to a final pose (light grey, dotted line). We aim to predict the global motion $T^B$ by learning local predictors for the motions $T^{F_R}$ and $T^{F_{E_i}}$, for $i = 1, \dots, N_E$. We represent a motion as a rigid body transformation that moves a frame at time $t$ into the same frame at time $t+1$. The rigid body transformations $v^{BF_R}$ and $v^{BF_{E_i}}$ describe the estimated contacts on the object's surface w.r.t. the estimated global frame of the object, $B$. This relation does not change over time; thus, once we estimate the local motions $T^{F_R}$ and $T^{F_{E_i}}$, we use the relations $v^{BF_R}$ and $v^{BF_{E_i}}$ to estimate $T^B$. The transformations $u^R$ and $u^{E_i}$ denote the relative pose of the bodies in contact, respectively the robot link $L$ and the environment $E_i$, w.r.t. the feature contact frames $F_R$ and $F_{E_i}$. This allows us to learn independent contact-specific motion models.

One approach to predicting the result of a push involves building models informed by classical mechanics [12, 13, 11]. In order to be precise, such models require an explicit representation of intrinsic physical parameters such as friction. Most of those approaches focus on planar sliding of simple two-dimensional (2D) objects. Recently, however, there has been a promising stream of work that aims to augment analytical models with data-driven approaches, e.g. for modelling the stochastic nature of friction [16].

In view of avoiding the tuning of physical parameters, a second approach is to learn forward models from data, thereby implicitly encoding physical parameters in the model. A wide range of features and machine learning techniques have been used for pushing prediction, e.g. recently [2]. However, most approaches tend to work well only on the objects that they have been trained on, although notable exceptions exist, e.g. [10].

Recently, artificial neural networks have been applied to robotic pushing. Such deep learning approaches typically use RGB images as input, and may be used to construct end-to-end systems which optimise perception, prediction, and control jointly for task performance. While such work holds great promise, existing approaches require hundreds of hours of video material, corresponding to thousands of actions, for training to be applicable in real-world scenarios [1, 4]. Generalisation across objects further increases sample complexity.

Most related to our approach is our own previous work [10, 17, 9]. In [10, 17], we dealt with the problem of learning forward models to predict how objects behave under push operations, and in [9], we proposed to use local features to transfer learned grasps to novel objects. In this paper, we investigate both approaches further to learn contact-specific motion models for push operations.

III Background

III-A Surface features

A central aspect of our approach is the probabilistic modelling of surface features, extracted from three-dimensional (3D) object point clouds. Features are composed of a 3D position, a 3D orientation, and a 2D local surface descriptor that encodes local curvatures. Let us denote by $SO(3)$ the group of rotations in three dimensions. A feature belongs to the space $SE(3) \times \mathbb{R}^2$, where $SE(3) = \mathbb{R}^3 \times SO(3)$ is the group of 3D poses, and surface descriptors are composed of two real numbers. We thus represent an object as a set of surface features

$$O = \{ x_j \}_{j=1}^{N}, \quad x_j \in SE(3) \times \mathbb{R}^2 \qquad (1)$$

Let us denote the separation of a feature $x$ into $p \in \mathbb{R}^3$ for position, a quaternion $q \in SO(3)$ for orientation, and $r \in \mathbb{R}^2$ for the surface descriptor. For compactness, we also denote the pose of a feature as $v = (p, q)$. As a result, we have $x = (v, r)$, $v \in SE(3)$, and $r \in \mathbb{R}^2$.

We now describe how we compute $r$, the local curvature descriptors. The surface normal at $p$ is computed from the nearest neighbours of $p$ using a PCA-based method, e.g. [6]. Surface descriptors correspond to the local principal curvatures [15]. The curvature at point $p$ is encoded along two directions that both lie in the plane tangential to the object's surface, i.e. perpendicular to the surface normal at $p$. The first direction, $k_1$, is the direction of highest curvature. The second direction, $k_2$, is perpendicular to $k_1$. The curvatures along $k_1$ and $k_2$ are denoted by $r_1$ and $r_2$ respectively, forming a 2-dimensional feature vector $r = (r_1, r_2)$. The surface normals and principal directions allow us to define the 3D orientation $q$ that is associated to a point $p$. Next, we introduce the notation and concepts required to represent motion models.
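To make the feature extraction concrete, the following is a minimal sketch in Python with NumPy, our language of choice for all examples in this paper. The helper name `estimate_frame` and the quadric-fitting step are illustrative choices on our part; the paper only specifies a PCA-based normal estimate [6] and principal curvatures [15].

```python
import numpy as np

def estimate_frame(points, i, k=20):
    """Estimate the surface normal, the direction of highest curvature,
    and the curvature descriptor r at points[i] from k nearest neighbours."""
    p = points[i]
    # k nearest neighbours by Euclidean distance (brute force for clarity)
    dists = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(dists)[1:k + 1]]

    # PCA: the eigenvector of the smallest eigenvalue approximates the normal
    centred = nbrs - nbrs.mean(axis=0)
    evals, evecs = np.linalg.eigh(centred.T @ centred)
    n = evecs[:, 0]                      # normal (smallest eigenvalue)

    # Express neighbours in the tangent frame and fit a quadric
    # h(u, v) ~ a*u^2 + b*u*v + c*v^2 to recover the principal curvatures.
    t1, t2 = evecs[:, 2], evecs[:, 1]    # tangent directions
    u, v = centred @ t1, centred @ t2
    h = centred @ n
    A = np.column_stack([u ** 2, u * v, v ** 2])
    (a, b, c), *_ = np.linalg.lstsq(A, h, rcond=None)

    # Eigen-decompose the shape operator to obtain curvatures r1 >= r2
    S = np.array([[2 * a, b], [b, 2 * c]])
    ks, dirs = np.linalg.eigh(S)
    r = np.array([ks[1], ks[0]])
    k1 = dirs[0, 1] * t1 + dirs[1, 1] * t2   # highest-curvature direction in 3D
    return n, k1, r
```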

III-B Rigid body motions

Let us now consider the problem of a robotic agent with link $L$ pushing an object $B$, observed at discrete time steps $t = 1, \dots, T$; see Fig. 2 for a visual representation of the notation. We assume that both $L$ and $B$ are 3D rigid bodies and that the interaction between them is quasi-static. In this context, quasi-static means that the object only moves while it is being pushed by the robot at a constant, low speed. Let $a_t \in SE(3)$ denote the pose of link $L$ and $b_t \in SE(3)$ the pose of $B$, measured at time $t$ in an inertial reference frame which we will refer to as the world frame. In this paper, we aim to determine the motion of an object resulting from a pushing action $a$ from an action set $\mathcal{A}$. Let $T^B$ denote the rigid body transformation from $b_t$ to $b_{t+1}$. Using the same notation introduced for 3D poses, we represent the separation of $T^B$ into $p_T \in \mathbb{R}^3$ for the translation, and a quaternion $q_T$ for the rotation. Our approach is probabilistic in that we aim to learn not a deterministic mapping $(a_t, b_t, a) \mapsto T^B$ but rather a conditional probability density function (PDF) over rigid body motions, $p(T^B \mid a_t, b_t, a)$.

In order to determine this PDF, we use a product of experts (PoE). The key idea of this approach is that candidate motions are evaluated by a set of models (the experts), each giving a likelihood (an opinion). For a candidate to be considered likely, all experts must agree. This is implemented by taking the product of opinions. In contrast to mixture models, each expert thus has a veto, in the sense that a single zero probability results in an overall zero likelihood.
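As a minimal illustration of this veto property, the sketch below scores a candidate against a list of experts, where each expert is any callable returning a likelihood (the `experts` interface is hypothetical, for illustration only):

```python
import numpy as np

def poe_score(candidate, experts):
    """Product of experts: each expert maps a candidate to a likelihood.
    A single zero opinion vetoes the candidate (overall score 0)."""
    opinions = np.array([e(candidate) for e in experts])
    if np.any(opinions <= 0.0):
        return 0.0
    # Sum of log-likelihoods is the numerically stable form of the product.
    return float(np.exp(np.sum(np.log(opinions))))
```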

In our context of motion prediction, we consider multiple local predictors of the global object motion (see Fig. 2). Each of those models predicts the object's local motion at a point where the object is in contact with the robot or the environment. The PoE is a natural fit for this context, as each contact encodes a local kinematic constraint on the object's motion. To be considered probable, a motion must satisfy all of those constraints simultaneously. To formalise this, let us denote a local surface feature in contact with robot link $L$ as $x^R = (v^R, r^R)$. We represent the contact by the pose $u^R$ of $L$ relative to the local feature frame $F_R$. In addition to the robot-object contact, we consider a set of $N_E$ environment contacts with corresponding features $x^{E_i} = (v^{E_i}, r^{E_i})$. Let us now again consider the robot applying a pushing action $a$ to the object. In addition to $T^B$, we can now additionally observe the motion of each local feature frame, namely $T^{F_R}$ for the robot-object contact, and $T^{F_{E_i}}$ for the object-environment contacts, where $i = 1, \dots, N_E$.

We note that each local contact feature frame $F_R$ and $F_{E_i}$ resides at a relative pose to $B$ which, for a particular frame $F$, is given by $v^{BF} = (v^B)^{-1} \oplus v^F$, where $\oplus$ denotes the pose composition operator, and $v^{-1}$ is the inverse of $v$, with $v \oplus v^{-1} = I$. As we assume that we are dealing with rigid bodies, $v^{BF}$ does not change over time. For any given $T^F$, $v^F$, and $v^{BF}$, we can thus compute the corresponding $T^B$. In other words, given the initial pose of a local contact frame, its relative pose to the object pose, and its local motion, we can compute the corresponding object motion. Coming back to our previous set-up, we learn a model for the motion of the robot-object contact frame, given by $p(T^{F_R} \mid u^R, r^R)$, and a motion model for object-environment contact frames, given by $p(T^{F_{E_i}} \mid u^{E_i}, r^{E_i})$, where $i = 1, \dots, N_E$. Given the relative poses $v^{BF_R}$ and $v^{BF_{E_i}}$, those motion models also define probability densities over $T^B$. Crucially, this formulation is transferable in that the learned motion is given in the local contact frame. When predicting the motion for a novel object, the corresponding relative poses $u^R$ and $u^{E_i}$ are used to generate predictions. In the next section, we discuss how we estimate probability densities from data.
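The following sketch illustrates how a local frame motion maps to the object motion under this rigid-body relation. We assume motions are expressed in their local frames and composed on the right; under that convention the object motion is the conjugation of the local motion by the fixed relative pose. The helper names are ours.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def compose(a, b):
    """Pose composition a ⊕ b; a pose is a (position, Rotation) pair."""
    pa, ra = a
    pb, rb = b
    return (pa + ra.apply(pb), ra * rb)

def invert(a):
    """Inverse pose, so that compose(a, invert(a)) is the identity."""
    p, r = a
    return (-r.inv().apply(p), r.inv())

def object_motion_from_frame_motion(T_F, v_BF):
    """Map a motion T_F, expressed in the local contact frame F, to the
    object motion T_B via the fixed relative pose v_BF of F w.r.t. B:
    T_B = v_BF ⊕ T_F ⊕ v_BF^{-1} (rigid-body assumption)."""
    return compose(v_BF, compose(T_F, invert(v_BF)))
```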

III-C Kernel density estimation

In this paper, we represent PDFs non-parametrically with a set of features, or particles. The underlying PDF is created through kernel density estimation [14], by assigning a kernel function to each particle supporting the density. For contact models, we consider PDFs defined on surface features $x = (v, r)$, i.e. on $SE(3) \times \mathbb{R}^2$. For that purpose, let us denote by $\mu = (\mu_p, \mu_q, \mu_r)$ a surface feature acting as a kernel centre, and by $\sigma = (\sigma_p, \sigma_q, \sigma_r)$ a triplet of real numbers. We thus define our kernel as

$$K(x \mid \mu, \sigma) = \mathcal{N}_3(p \mid \mu_p, \sigma_p)\, \Theta(q \mid \mu_q, \sigma_q)\, \mathcal{N}_2(r \mid \mu_r, \sigma_r) \qquad (2)$$

where $\mu$ is the kernel mean point, $\sigma$ is the kernel bandwidth, $\mathcal{N}_n$ is an $n$-variate isotropic Gaussian kernel, and $\Theta$ corresponds to a pair of antipodal von Mises-Fisher distributions which form a Gaussian-like distribution on $SO(3)$ (for details see [5]). Given a set of surface features $\{x_j\}_{j=1}^{N}$, the probability density in a region of space is then determined by the local density of the particles in that region, as

$$\mathrm{pdf}(x) \approx \sum_{j=1}^{N} w_j\, K(x \mid x_j, \sigma) \qquad (3)$$

where $\sigma$ is the kernel bandwidth and $w_j$ is a weight associated to $x_j$ such that $\sum_j w_j = 1$. For motion models, we consider a kernel function defined over surface features and 3D motions, hence over $SE(3) \times \mathbb{R}^2 \times SE(3)$. That kernel is defined as the product of (2) and the kernel function

$$K_T(T \mid \mu_T, \sigma_T) = \mathcal{N}_3(p_T \mid \mu_{T_p}, \sigma_{T_p})\, \Theta(q_T \mid \mu_{T_q}, \sigma_{T_q}) \qquad (4)$$

where $T = (p_T, q_T)$ is the motion to be evaluated, $\mu_T = (\mu_{T_p}, \mu_{T_q})$ is the kernel mean motion, and $\sigma_T = (\sigma_{T_p}, \sigma_{T_q})$ is the corresponding bandwidth.
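A minimal sketch of these kernels and of the resulting density estimate follows. The exponential-of-dot-product form of `quat_kernel` is our stand-in for the pair of antipodal von Mises-Fisher distributions [5]; the kernels are unnormalised since we only ever compare likelihoods.

```python
import numpy as np

def quat_kernel(q, mu, sigma):
    """Antipodally symmetric, von Mises-Fisher-like kernel on SO(3):
    q and -q represent the same rotation, hence the absolute dot product."""
    dot = np.abs(np.dot(q, mu))          # both unit quaternions
    return np.exp((dot - 1.0) / sigma)

def gauss(x, mu, sigma):
    """Isotropic Gaussian kernel (unnormalised, for relative scoring)."""
    x, mu = np.atleast_1d(x), np.atleast_1d(mu)
    return np.exp(-np.sum((x - mu) ** 2) / (2.0 * sigma ** 2))

def feature_kernel(x, mu, bw):
    """Kernel on SE(3) x R^2, Eq. (2): product of a position kernel, an
    orientation kernel, and a curvature-descriptor kernel.
    x, mu: dicts with keys 'p' (3,), 'q' (4, unit), 'r' (2,)."""
    return (gauss(x['p'], mu['p'], bw['p'])
            * quat_kernel(x['q'], mu['q'], bw['q'])
            * gauss(x['r'], mu['r'], bw['r']))

def kde(x, particles, weights, bw):
    """Eq. (3): weighted sum of kernels centred on the particles."""
    return sum(w * feature_kernel(x, mu, bw)
               for w, mu in zip(weights, particles))
```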

IV Proposed Approach

At training time, we learn a contact model comprising both a robot-object contact model and an object-environment contact model. For each action in our action set, we then learn local motion models specific to that contact model. At prediction time, we can query any novel point cloud to find the closest set of contacts and interpolate a prediction for the object movement.

IV-A Contact model

A contact model encodes the joint probability distribution of surface features and of the 3D pose of the robot's link in contact. At prediction time, we obtain a point cloud of the novel object from a single shot taken from a depth camera. Given a set of surface features $\{x_j = (v_j, r_j)\}$, with $v_j = (p_j, q_j)$ and $r_j \in \mathbb{R}^2$, a robot-object contact model $M_R$ is constructed from features from the object's surface. Surface features close to the link surface are more important than those lying far from the surface. Features are thus weighted, to make their influence on $M_R$ decrease with their squared distance to the link. Let us denote by $u_j$ the pose of $L$ relative to the pose $v_j$ of the $j$-th surface feature. In other words, $u_j$ is defined as

$$u_j = v_j^{-1} \oplus a \qquad (5)$$

where $a$ denotes the pose of $L$ in the world frame, $\oplus$ denotes the pose composition operator, and $v^{-1}$ is the inverse of $v$. The contact model is then estimated as

$$M_R(u, r) \approx \frac{1}{Z} \sum_j w_j\, K(u, r \mid u_j, r_j, \sigma) \qquad (6)$$

where $Z$ is a normalising constant, and $w_j = \exp(-\lambda\, d_j^2)$, with $d_j$ the distance of the $j$-th feature to the link surface.

Additionally, we learn an object-environment contact model $M_E$ from samples obtained from $O$. For each sampled feature $x_j$, we attach a frame $E_j$ to the closest point in the environment and represent the environment contact by the pose of $x_j$ relative to $E_j$. For the object-environment contact model, we opted for a simple binary weighting function

$$w_j = \begin{cases} 1 & \text{if } d_j < \delta_E \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$

where $\delta_E$ is a cut-off distance. We then estimate the object-environment contact model using the same formulation given for the robot-object contact model.
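The sketch below assembles a contact model in the spirit of Eqs. (5) to (7): it stores, for each surface feature, the link pose expressed in the feature frame, together with a distance-based weight. The decay constant `lam` and the data layout are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def build_contact_model(features, link_pose, lam=50.0, cutoff=None):
    """Sketch of Eqs. (5)-(7): for each surface feature, store the link
    pose relative to the feature frame, weighted by distance to the link.
    features: list of (p, q, r, dist_to_link); link_pose: (p_L, R_L)."""
    p_L, R_L = link_pose
    kernels, weights = [], []
    for p, q, r, d in features:
        if cutoff is not None and d >= cutoff:
            continue                       # binary cut-off, Eq. (7)
        f_rot = R.from_quat(q)
        # u_j = v_j^{-1} ⊕ a: pose of the link in the feature frame, Eq. (5)
        u_p = f_rot.inv().apply(p_L - p)
        u_q = (f_rot.inv() * R_L).as_quat()
        kernels.append((u_p, u_q, r))
        # exponential decay in squared distance (soft weighting, Eq. (6))
        weights.append(np.exp(-lam * d ** 2))
    w = np.array(weights)
    return kernels, w / w.sum()            # normalised weights
```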

Iv-B Query density

A query density results from the combination of a learnt contact model with a novel object point cloud $O$. The purpose of a query density is both to generate and to evaluate poses of the corresponding robot's link (or the environment) on the new object. Imagine you have only learned how to push a cube from one of its side faces. When you need to push a triangular prism, you will not place your finger on one of the corners, expecting it to move like the cube. However, if you place your finger on a side face, you may be able to use your experience with the cube to make a prediction. We refer the reader to [9] for further details on how to compute the query density, providing only a brief overview here. In summary, a query density is a PDF defined as

$$Q(w, s, r, u) \propto \mathrm{pdf}_O(s, r)\, M(u, r)\, \delta\big(w - (s \oplus u)\big) \qquad (8)$$

where $s$ denotes the pose of a point on the object's surface, expressed in the world frame, $r$ is the surface curvature of such a point, $u$ denotes the pose of a body relative to a local frame on the object, and $w$ is the absolute pose in the world frame of the body. In our context, the body is either the robot's link or a local environment frame.

At prediction time, we use a robot-object query density to generate poses of the robot's link on the new object, and to attach a robot-object contact frame as an expert for prediction. We achieve the former by marginalising with respect to $s$, $r$, and $u$, obtaining the distribution $Q(w)$, which models the pose in the world frame of the link $L$. We approximate the robot-object query density by kernels centred on the set of weighted robot link poses obtained from the pose sampling algorithm proposed by [9]. To generate a robot-object feature frame $F_R$, we choose it such that it maximises $Q(w)$. For optimisation we use simulated annealing [8].

In addition to the robot-object query density, we create an object-environment query density by combining the object-environment contact model with $O$. While we are free to position the robot in the world, we consider the state of the environment as fixed for a given time $t$. We hence use the object-environment query density only to select the set of query object-environment contact frames $\{F_{E_i}\}_{i=1}^{N_E}$. This is equivalent to marginalising with respect to $w$, $u$, and $r$ to obtain the distribution $Q_E(s)$ over poses in the world frame of local feature frames on the object's surface. We then sample from $Q_E$ a total of $N_E$ times.
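A simplified sampling scheme for the query density is sketched below: pick a surface feature on the novel object, pick a relative pose from the contact model, and compose the two. The full algorithm [9] additionally matches curvatures between object features and contact kernels and weights the resulting samples; we omit that here for brevity.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def sample_query_poses(obj_features, contact_kernels, weights,
                       n_samples=100, rng=np.random.default_rng(0)):
    """Sketch of query-density sampling, Eq. (8): pick a surface feature
    pose s on the novel object and a relative pose u from the contact
    model, then compose w = s ⊕ u to get a candidate link pose."""
    poses = []
    for _ in range(n_samples):
        # sample a feature frame s on the novel object's surface
        p_s, q_s, _ = obj_features[rng.integers(len(obj_features))]
        # sample a relative link pose u from the contact model kernels
        u_p, u_q, _ = contact_kernels[
            rng.choice(len(contact_kernels), p=weights)]
        s_rot = R.from_quat(q_s)
        w_p = p_s + s_rot.apply(u_p)        # w = s ⊕ u (position part)
        w_q = (s_rot * R.from_quat(u_q)).as_quat()
        poses.append((w_p, w_q))
    return poses
```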

Iv-C Motion model

Having learned a contact model, we then proceed to learning both a robot-object motion model and an object-environment motion model for each action in the action set. We learn those models from data by observing the local motion of robot-object and object-environment frames at training time. For each action $a \in \mathcal{A}$, we simulate a set of rollouts which are the kernels of the corresponding motion model. Each rollout has a different friction coefficient for the contacts, uniformly sampled from a fixed range, similar to the approach taken by [16].

In doing so, we learn to predict motion conditional on both the contact pose and the local curvature, in the form of the PDFs $p(T^{F_R} \mid u^R, r^R)$ and $p(T^{F_{E_i}} \mid u^{E_i}, r^{E_i})$. At prediction time, we use the relative poses $v^{BF_R}$ and $v^{BF_{E_i}}$ of the contact frames with respect to the object pose to define PDFs over object motion, $p(T^B \mid u^R, r^R)$ and $p(T^B \mid u^{E_i}, r^{E_i})$, from the local motion PDFs, as discussed before. Each kernel in the PDF of each motion model is defined over $SE(3) \times \mathbb{R}^2 \times SE(3)$. In order to generate predictions, we obtain an opinion on candidate motions from each local expert, defined as the conditional PDF

$$p(T \mid u, r) \propto \sum_j K(u, r \mid u_j, r_j, \sigma_x)\, K_T(T \mid T_j, \sigma_T) \qquad (9)$$

where $T$ is the candidate object motion to evaluate, $(u, r)$ is the relative pose and surface curvature associated with the conditioning contact frame, $T_j$ is the $j$-th motion kernel, $\sigma_T$ is the motion kernel bandwidth parameter, and $\sigma_x$ is the contact feature bandwidth parameter. Finally, we combine all local motion models in a PoE, as introduced in the background section, and select the best prediction as the argument with the maximum likelihood. For that purpose, we again use simulated annealing as a convenient optimisation procedure.
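Putting the pieces together, the sketch below evaluates the expert opinion of Eq. (9) for a candidate motion and selects the best candidate by its PoE score. It reuses `feature_kernel`, `gauss`, `quat_kernel`, and `poe_score` from the earlier sketches, and substitutes a simple argmax over candidates for the simulated annealing loop.

```python
import numpy as np

def expert_likelihood(T, contact, motion_kernels, bw):
    """Sketch of Eq. (9): likelihood of a candidate local motion T given
    the contact's relative pose and curvature, as a kernel-weighted sum.
    motion_kernels: list of ((u_j, r_j), T_j) from the training rollouts;
    bw: dict with bandwidths 'p', 'q', 'r' (contact) and 'Tp', 'Tq' (motion)."""
    total = 0.0
    for (u_j, r_j), T_j in motion_kernels:
        # proximity of the conditioning contact to the training contact
        w = feature_kernel(contact,
                           {'p': u_j[0], 'q': u_j[1], 'r': r_j}, bw)
        # motion kernel K_T centred on the training motion T_j, Eq. (4)
        total += w * (gauss(T[0], T_j[0], bw['Tp'])
                      * quat_kernel(T[1], T_j[1], bw['Tq']))
    return total

def best_prediction(candidates, experts):
    """Select the candidate object motion with the highest product of
    expert opinions (random-restart stand-in for simulated annealing)."""
    scores = [poe_score(c, experts) for c in candidates]
    return candidates[int(np.argmax(scores))]
```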

V Experiments

We evaluated our approach empirically in experiments on a simulated and a real Pioneer 3DX mobile robot equipped with a bumper for pushing objects. Our experiments focus on object transfer, the main aspiration of this paper. We learned contact and motion models in simulation for a single object (a cube), and then evaluated the predictive performance of the learned models on a set of five test objects in simulation, and two test objects in a real-world scenario. We also compared our approach to a baseline predictor which encodes the planar sliding constraints (Sec. V-D).

For our experiments, we varied the trained models along two dimensions, first the contact information used for prediction, and second the number of training samples. We considered three values in each dimension and all combinations between them, leading to a total of nine evaluated models. During both training and testing, all objects were placed in an obstacle-free planar environment. Due to the geometry of the objects considered, and the kinematics of our robot, our experiments are a study of planar sliding behaviour. Investigating performance in the context of rolling and toppling, as well as in cluttered environments, is future work. For all simulation experiments, we used the Gazebo simulator in version 7.8 with the Open Dynamics Engine (ODE).

V-A Training set

As our training object, we chose a cube with side length m and mass kg. Using a Kinect depth camera, we took a single shot of the object and learned contact models from the resulting point cloud. Specifically, we learned two robot-object contact models: due to the symmetry of the bumper, we only distinguish between the front and the side link, without differentiating between left and right.

In the case of the first robot-object contact model, the bumper’s front link is in full contact with one face of the cube. In the case of the second robot-object contact model, the bumper’s side link is in full contact with a face of the cube. In both cases, we used a cut-off distance of cm for the weighting function. Furthermore, we learned an object-environment contact model from surface features sampled from the point cloud. Here, we used a cut-off distance of cm for the weighting function. We note that in our set-up devoid of obstacles and clutter, the object’s environment contacts are exclusively made with the ground.

In the next step, we learned a motion model for each combination of action and robot-object contact model. To that end, we generated training pushes as follows. For each robot-object contact model, we constructed a query density from which we generated feasible training contacts, discarding infeasible instances that placed the robot in mid-air or in collision with either the object or the environment. At each feasible contact, we then repeatedly applied a set of pushes. Specifically, we considered an action set of three pushes: one straight linear push and two angular pushes. In all cases, a velocity command was fed to the robot's controller for a fixed duration of four seconds. Linear velocity amounted to m/s in the x-direction of the robot's base frame for all pushes. For the straight linear push, all components of the angular velocity vector were set to zero, while for the two angular pushes, we considered angular velocities of equal magnitude but opposite sign, directed around the z-axis of the robot's base frame. In the case of the side link's contact model, we excluded the angular push directed away from the contact surface from the action set, as it results in a loss of contact without applying a push.

At each training contact, we executed each action five times (i.e. rollouts, Sec. IV-C), gathering a set of training pushes, for the front link’s contact model, and for the side link’s contact model.

V-B Test set

We evaluated the learned motion models both in simulation and on a real robot. Considering the simulated environment first, we generated a test set of pushes covering five objects, one being the cube seen during training, and the other four being unseen objects. This design reflects our focus on object transfer. The unseen objects comprise a cuboid with side lengths m and m, an equilateral triangular prism with side length m, a cylinder of radius m and height m, and a shape derived from the equilateral triangular prism by rounding off one of its edges. We will refer to the latter object as a rounded triangular prism. All considered objects have the same mass of kg. We chose the objects such that they exhibit variance with respect to the shape, area, and curvature of their surfaces, which in turn determine the nature of the contacts that can be generated.

We generated test pushes for the selected objects following the same process used to gather training data. For each test object, we obtained a single shot from a Kinect depth camera. For each of the two learned contact models, we constructed a query density and sampled query poses. For each query pose, we then applied each action from the corresponding action set four times. For each test push, we sampled the contact friction coefficients from the same distribution used during training. Hence, we obtained a test set of pushes per test object, split between the front link's and the side link's contact models. Considering all objects, our test set thus comprised pushes. For model selection, we additionally generated a separate validation set of pushes. We tuned the kernel bandwidth parameters on the validation set and kept them constant across all experiments.

Regarding the evaluation on the real robot, we considered a test set of two boxes. The first, with length m, width m, and height m, we will refer to as the large box; the second, with length m, width m, and height m, as the small box. Again, we obtained point clouds of the objects from single shots of a Kinect camera. Subsequently, we estimated each object's initial pose from the point cloud. First, we approximated the position of its centre of mass (COM) as the centroid of the point cloud. Then, we computed the object's orientation from the eigenvectors of the point cloud's covariance matrix. After applying the push, we obtained the object's final pose from a new shot of the depth camera using the same approach. As in simulation, we considered both contact models (front and side), and generated test contacts from the query density. However, we pruned the angular pushes from the action set, considering only the linear case. Following the described process, we generated test pushes for this initial proof of concept. We will extend the scope of those experiments in prioritised future work.

Fig. 3: Pushes with predictions: initial object pose (green, in contact with robot), true final object pose (green, displaced), and predictions (blue).

V-C Predictions

We now proceed to describe how we generated predictions. For each generated test contact, we attached a set of frames. First, we attached a global object frame at the object’s estimated COM. Both in simulation and on the real robot, we estimated the location of the COM as the centroid of the point cloud.

Subsequently, we determined the robot-object and the object-environment contact frames using the query density, as described in Sec. IV-B. In our experiments, we also aim to investigate the benefit of learning object-environment contacts to improve predictions. Hence, in our results, we denote the predictor based only on robot-object contact information as RO, and the predictors with additional access to three and five object-environment contacts as RO+3OE and RO+5OE respectively.

Additionally, we varied the number of training pushes available to our models. In simulation, we considered three training set sizes; on the real robot, we considered a single size only. We thus trained nine different models, namely each of RO, RO+3OE, and RO+5OE with the three considered training set sizes. Based on the available input information and number of samples, each predictor generated candidate object motions for each test contact using simulated annealing. Each of those candidates was generated as the likeliest of the samples obtained from the probability density over object motions, and then optimised over iterations of the simulated annealing algorithm. Of the optimised candidates, we kept the predictions with the maximum likelihood scores. Finally, we computed error statistics for all predictions and compared them to a baseline predictor. We describe that methodology next.

Fig. 4: Left: Performance of predictors in simulation, by size of training set. Right: Performance of predictors trained on training pushes, by object.

V-D Baseline predictor

As a performance baseline, we applied a simple predictor to all test pushes. Although simple, this predictor has several advantages over our proposed models. First, it knows the object's true COM. Foremost, though, it encodes the planar sliding constraints of our set-up. Thus, the predictor knows that objects will not topple or roll, and that they cannot penetrate the ground. In contrast, our contact-based predictors need to learn those constraints from the data.

We implemented it such that, for the $i$-th action from the action set, it applies a translation to the initial pose, in world coordinates, of the object's true COM, to which the predictor has access. For the linear push, that translation is computed by transforming the translation given by $v_i \Delta t$ from the robot's base frame into world coordinates. Here, $v_i$ is the linear velocity vector of the velocity command corresponding to the $i$-th action, and $\Delta t$ is the action duration. For angular pushes, we followed the same process, but instead computed the translation from the vector in the x-y plane of the robot's base frame that lies at an angle $\omega_i \Delta t$ to $v_i$, where $\omega_i$ is the angular velocity around the z-axis of the robot's base frame corresponding to the $i$-th action.
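A sketch of this baseline follows. The rotation angle applied to the translation vector for angular pushes is not fully recoverable from the text; we assume it is the integrated angle `omega * dt` and flag this in the code.

```python
import numpy as np

def baseline_prediction(com_pos, v_lin, omega, dt, R_wb):
    """Baseline sketch: translate the true COM position by the commanded
    velocity integrated over the push duration. R_wb is the world-from-base
    rotation matrix (3x3)."""
    t_base = v_lin * dt                    # translation in the base frame
    if omega != 0.0:
        # Assumption: rotate the translation in the base x-y plane by the
        # integrated angle omega * dt (the exact angle is set-up specific).
        ang = omega * dt
        rot_z = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                          [np.sin(ang),  np.cos(ang), 0.0],
                          [0.0,          0.0,         1.0]])
        t_base = rot_z @ t_base
    # transform into world coordinates and apply to the COM position
    return com_pos + R_wb @ t_base
```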

V-E Performance measure

For each prediction, we computed both a linear and an angular distance metric with regard to the final object pose. For the linear distance $\Delta_{\mathrm{lin}}$, we opted for Euclidean distance. For the angular distance $\Delta_{\mathrm{ang}}$, we used a quaternion distance metric defined as

$$\Delta_{\mathrm{ang}}(q_1, q_2) = 1 - |\langle q_1, q_2 \rangle| \qquad (10)$$

where $q_1$ and $q_2$ are unit quaternions representing the final orientation, in world coordinates, of the object in the test sample and in the prediction respectively. To obtain a unified distance measure, we combine $\Delta_{\mathrm{lin}}$ and $\Delta_{\mathrm{ang}}$ by computing the normalised error as

$$e = \frac{\Delta_{\mathrm{lin}} + \Delta_{\mathrm{ang}}}{C} \qquad (11)$$

where $C$ is a normalisation constant. We set $C = \lVert v \rVert \Delta t$, where $\lVert v \rVert$ is the Euclidean norm of the linear velocity vector representing the linear push from our action set, and $\Delta t$ is the duration of the push applied during experiments. We chose that normalisation constant because it constitutes a critical parameter of the problem: all else equal, the longer the push, the more challenging the prediction.
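The measure is compact enough to state directly in code; the quaternion metric and the simple sum in the combination follow the reconstruction of Eqs. (10) and (11) above, and should be read under that assumption:

```python
import numpy as np

def normalised_error(p_true, q_true, p_pred, q_pred, v_lin, dt):
    """Sketch of Eqs. (10)-(11): Euclidean linear error plus a quaternion
    angular error, normalised by the push length C = ||v_lin|| * dt."""
    d_lin = np.linalg.norm(p_true - p_pred)
    # antipodally symmetric quaternion distance, Eq. (10)
    d_ang = 1.0 - abs(float(np.dot(q_true, q_pred)))
    C = np.linalg.norm(v_lin) * dt
    return (d_lin + d_ang) / C
```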

Object             Baseline   RO    RO+3OE   RO+5OE
Simulation
  Cuboid               -       -       -        -
  Cube                 -       -       -        -
  Rounded prism        -       -       -        -
  Triangular prism     -       -       -        -
  Cylinder             -       -       -        -
  Total                -       -       -        -
Real robot
  Large box            -       -       -        -
  Small box            -       -       -        -
  Total                -       -       -        -
TABLE I: Average normalised error, models trained on training pushes

V-F Results

In simulation and in experiments with the real robot, all of the trained models outperformed the baseline predictor for all test objects with regard to the average normalised error. The lowest overall error value across contact information and sample sizes was achieved by RO+3OE trained on the largest training set (see Fig. 4). Breaking this down into its non-normalised components, it is equivalent to an average linear error of cm, and an average angular error of . All trained models furthermore exhibited a lower standard deviation of the normalised error than the baseline predictor. Nevertheless, we see great variability in predictions, with some of the standard deviations similar in magnitude to the average error itself.

Concerning the size of the training set (Fig. 4, left), our data indicates a relationship between model complexity and sample complexity. For the smallest training set, the simplest model RO performs best while RO+5OE fares worst. As the number of training pushes increases, the performance of RO remains approximately flat. Meanwhile, the error values for RO+3OE and RO+5OE decrease. At the largest training set size, RO+3OE surpasses RO as the top performer, while RO+5OE reaches a similar performance level as RO.

Considering differences between objects (Fig. 4, right), we find that, overall, the object-environment contacts do not significantly improve the quality of predictions, except for the triangular prism. In some cases, discordant outputs from the local predictors diminished performance, which can happen with the PoE. Nevertheless, the predictions are still more accurate than those of the baseline predictor.

VI Discussion & Future Work

The experimental results obtained in simulation provide empirical support for the hypothesis that our approach enables object transfer. For objects with sufficiently similar contact geometry, performance is comparable to, or even better than, that for the training object. Objects with very different curvatures (the cylinder), and a different area of the support surface (the triangular prism), challenge our predictors. In the first case, the query density is unable to generate sufficiently similar contacts. In the latter case, although similar contacts can be generated, the dynamics of the test object are too different from those encoded by the motion model. In the case of the rounded triangular prism, predictions are accurate, as the query density is able to generate contacts on the flat faces while the support surface is large enough to result in motions similar to those of the training cube. One possible approach to increasing prediction accuracy further is to try to increase the generalisation capabilities of the learned models. A simpler, yet promising alternative is learning additional contact and motion models, and selecting the most suitable ones at prediction time. We expect that learning one model for flat surfaces and one for curved surfaces alone would greatly increase prediction accuracy.

Regarding the high variability of prediction accuracy, we find a large difference in performance between the two contact models: the average normalised error across all models is lower for the front link's contact model than for the side link's contact model. Further analysis of the data shows that for contacts with the side link, depending on the query pose and contact friction, the object came into contact with the robot's wheels, leading to highly variable results. This is a problem that can be more readily addressed through improved robot design than through modifications of the predictors.

Another reason for variable prediction performance lies in the differing quality of generated query poses. Not all generated query poses are equally similar to the contacts seen during training time. Hence, predictions may require more or less generalisation capability depending on the query pose. Investing more effort into generating optimised query poses may thus be worth the cost depending on the achieved performance yield. Investigating this question is an interesting route for future work.

With regard to performance gains achieved by adding further contact information, our results are mixed. Overall best performance is achieved by RO+3OE, but RO performs better for small training sets. Even with the largest training set, for some objects, adding contact information indeed worsens performance. To illuminate those results, we note that combining object-environment contacts in a PoE entails the risk of numerically small likelihoods. In the extreme case, this may result in numerical instability or in predictions with barely distinguishable likelihoods. When few kernels support the density, likelihoods are more prone to be small than in the case of abundant training data. Our finding that RO+3OE and RO+5OE improve with the size of the training set is consistent with those observations. Whether this explains all of the observed differences, and whether their accuracy will continue to increase at a similar rate with the number of training pushes, are questions which we aim to address in future work by utilising our full training set.

In the light of this discussion, a particularly interesting result of our experiments is that additional contact information worsens performance for the cylinder in simulation, and for the large and small boxes in experiments with the real robot. Notably, in all cases the objects exhibit surface features which are comparatively dissimilar to those seen during training time: the cylinder because of its rounded shape, the test boxes because real point clouds are noisier than the simulated ones seen during training. As motion candidates' likelihoods are evaluated conditional on surface features, this may exacerbate the aforementioned challenge of small numbers. At this point, we emphasise that this paper represents a first investigation into the proposed approach, and rigorously scrutinising the posed questions will require further experiments which we will conduct in future work. In particular, we aim to train models on real point cloud data to evaluate whether they fare better at predicting real pushes from noisy point clouds.

With regard to limitations of our approach, we already noted that our experiments focus on planar sliding, and have primarily been conducted in simulation. Complementing this with further experiments on real robotic platforms, and with objects that are free to topple and roll, are prioritised items for future work. Alleviating the requirement of learning a separate motion model for each action would furthermore greatly increase the range of applications of our approach. Evaluating the predictive performance of alternative surface features, such as surface material, is another interesting route for further investigation, as is using information obtained from force-torque sensors for prediction. Furthermore, we are interested in learning dynamic contact models, i.e. predicting how contacts will change over time. Finally, we aim to use the presented feature-based predictors as transition models for reinforcement learning.

References

  • [1] P. Agrawal, A. V. Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking: Experiential learning of intuitive physics. In Advances in Neural Information Processing Systems, pages 5074–5082, 2016.
  • [2] M. Bauza and A. Rodriguez. A probabilistic data-driven model for planar pushing. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 3008–3015, May 2017.
  • [3] M. R. Dogar and S. S. Srinivasa. A planning framework for non-prehensile manipulation under clutter and uncertainty. Autonomous Robots, 33(3):217–236, 2012.
  • [4] C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems, pages 64–72, 2016.
  • [5] R. Fisher. Dispersion on a sphere. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 217(1130):295–305, 1953.
  • [6] K. Kanatani. Statistical optimization for geometric computation: theory and practice. Courier Corporation, 2005.
  • [7] J. King, M. Cognetti, and S. Srinivasa. Rearrangement planning using object-centric and robot-centric action spaces. In IEEE International Conference on Robotics and Automation, May 2016.
  • [8] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, et al. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
  • [9] M. Kopicki, R. Detry, M. Adjigble, R. Stolkin, A. Leonardis, and J. L. Wyatt. One-shot learning and generation of dexterous grasps for novel objects. The International Journal of Robotics Research, 35(8):959–976, 2016.
  • [10] M. Kopicki, S. Zurek, R. Stolkin, T. Moerwald, and J. L. Wyatt. Learning modular and transferable forward models of the motions of push manipulated objects. Autonomous Robots, 41(5):1061–1082, Jun 2017.
  • [11] K. M. Lynch, H. Maekawa, and K. Tanie. Manipulation and active sensing by pushing using tactile feedback. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 1, pages 416–421, Jul 1992.
  • [12] M. T. Mason. Manipulator grasping and pushing operations. PhD thesis, Massachusetts Institute of Technology, 1982.
  • [13] M. A. Peshkin and A. C. Sanderson. The motion of a pushed, sliding workpiece. IEEE Journal on Robotics and Automation, 4(6):569–598, Dec 1988.
  • [14] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall, London, 1986.
  • [15] M. Spivak. A Comprehensive Introduction to Differential Geometry, volume 1. Publish or Perish, 1999.
  • [16] J. Zhou, J. A. Bagnell, and M. T. Mason. A fast stochastic contact model for planar pushing and grasping: Theory and experimental validation. arXiv preprint arXiv:1705.10664, 2017.
  • [17] C. Zito, R. Stolkin, M. S. Kopicki, and J. L. Wyatt. Two-level RRT planner for robotic push manipulation. In IEEE Proc. Intelligent Robots and Systems (IROS), pages 678–685, 2012.