Robot manipulation tasks such as tidying up a room or sorting piles of objects pose a substantial challenge for robots, especially in scenarios with unknown objects. The objective of object singulation is to separate a set of cluttered objects through manipulation, a capability regarded as relevant for service robots operating in unstructured household environments. Further, the ability to separate unknown objects from surrounding objects greatly benefits object detection, which remains an open problem for overlapping objects. A key challenge of object singulation is the required interplay of multiple capabilities such as perception, manipulation, and motion planning. Perception may fail because of occlusions caused by objects, but also because of occlusion by the manipulator itself. Manipulation actions such as grasping may also fail when attempting to manipulate objects that the perception system fails to correctly segment. Motion planning is particularly prone to errors when applied in unknown, unstructured environments.
Previous approaches for object singulation strongly rely on the concept of object segments or even complete objects. Push action strategies are selected based on the current belief about the configuration of all segments or objects in the scene, an assumption that is not robust, since segments might merge and object detection might fail. We apply a different strategy that relaxes the concept of segments and objects for the task at hand, making task execution less prone to modelling errors from the perception modules. Similar to recent approaches that operate directly on image inputs (levine2016end; mahler2017dex), we aim for an end-to-end action selection approach that takes as input an RGB-D image of a tabletop scene and proposes a set of meaningful push actions using a convolutional neural network, as shown in Fig. 1.
Our primary contributions are: 1) a push proposal network (Push-CNN), trained in an iterative manner to detect push candidates from RGB-D images in order to separate cluttered objects, 2) a method that samples potential push candidates based on depth data, which are ranked with our Push-CNN, and 3) real-world experiments with a PR2 robot, in which we compare the performance of our method against a strong manually-designed baseline.
We quantitatively evaluate our approach on 4 sets of real-robot experiments. These experiments involve singulation tasks of increasing difficulty ranging from 4 unknown objects up to configurations with 8 unknown objects. In total the robot executed over push actions during the evaluation and achieved an object separation success rate of up to , which shows that our Push-CNN is able to generalize to previously unseen object configurations and object shapes.
2 Related Work
Standard model-based approaches for object singulation require knowledge of object properties to perform motion planning in a physics simulator and to choose push actions accordingly (cosgun2011push; dogar2011framework). However, estimating the location of objects and other properties of the physical environment can be subject to errors, especially for unknown objects (yu2016more). Interactive model-free methods have been applied to solve the task by accumulating evidence of singulated items over a history of interactions including push and grasp primitives (chang2012interactive). Difficulties arise when objects have to be tracked after each interaction step, requiring small-motion pushes due to partial observations and occlusions. We follow a model-free approach that encodes evidence of singulated objects in the learned feature space of the network. Hermans et al. (hermans2012guided) perform object singulation using several push primitives similar to ours. Their method uses object edges to detect splitting locations between potential objects and applies pushes in those regions, but does not include learned features and does not take stacked objects into account. Katz et al. (katz2014perceiving) present an interactive segmentation algorithm to singulate cluttered objects using pushing and grasping. Similar to them, we also create object hypotheses by over-segmenting objects into surface facets, but use a different method based on the work of richtsfeld2012segmentation. They perform supervised learning with manual features to detect good manipulation actions such as push, pull, and grasp. We take the idea of learning a step further by learning directly from over-segmented images, thereby removing the need for manual feature design. Boularias et al. (Boularias2015) learn favourable push actions to improve grasping of objects in clutter in a reinforcement learning setting, but use manual features and do not show the applicability of their method to more than 2 objects. Gupta et al. (gupta2015using) present an approach to sort small cluttered objects using a set of motion primitives, but do not show experiments with objects of various sizes and shapes. Laskey et al. (laskey2017comparing) leverage learning from human demonstrations to learn control policies for object singulation, but thus far have only considered singulating a single object from a cluttered scene, while our method can singulate up to 6 objects and achieves a higher success rate.
Our approach is related to robot learning from physical interaction (pinto2016curious), including approaches that learn to predict how rigid objects behave when manipulated (kopicki2011learning; hermans2013learning; zhu2017model) and self-supervised learning methods for grasping (pinto2016supersizing; levine2016learning). Gualtieri et al. (gualtieri2016high) present a grasp pose detection method in which they detect grasp candidates using a convolutional neural network; they follow a similar methodology, but for a different task and goal. Recent methods learn the dynamics of robot-object interaction for the task of pushing objects to a target location (finn2016deep; agrawal2016learning), but do not evaluate on scenes with many touching objects and do not follow an active singulation strategy. Byravan et al. (byravan2016se3) learn to predict rigid body motions of pushed objects using a neural network, but do not demonstrate results for multiple objects in a real-world scenario. Object separation can also serve as a source of perceptual information gained through interaction, following the paradigm of interactive perception (bohg2016interactive). As demonstrated in our experiments, our singulation method can be used to generate informative sensory signals that lower the scene complexity for interactive object segmentation (schiebener2011segmentation; hausman2013tracking; van2014probabilistic; koo2014unsupervised).
3 Learning to Singulate Objects
We consider the problem of learning good push actions to singulate objects in clutter. Let I be an image with height H and width W taken from an RGB-D camera with a known camera intrinsics matrix K. Further, let p = (s, α) be a push proposal with start position s and push angle α, both specified in the image plane. We aim to learn a function f(I, p) with learned parameters θ that takes as input the image together with a push proposal and outputs a probability of singulation success.
The prediction function f is trained in an iterative manner (we use 2 training iterations) on experiences that the robot collects through interaction with the environment. At the first iteration, we gather a dataset D1 of push proposals together with images from randomly pushing objects in simulation. Then, we train a classifier f1 that best mimics the labels from an expert user in terms of successful and unsuccessful singulation actions. In the next iteration, we use f1 to collect more data and add this data to D1, yielding the aggregated dataset D2. The next classifier f2 is trained to mimic the expert labels on the whole dataset D2. We refer to f1 as a vanilla network, trained solely on experience collected from random interaction, and to f2 as an aggregated network. The training examples are labeled by an expert user, where each example carries a corresponding binary label y ∈ {0, 1}. Fig. 2 depicts the labels given by an expert user for one exemplary singulation trial. The function f is trained in a supervised manner using a negative log-likelihood loss and a sigmoid output.
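The iterative procedure above can be sketched as a small data-aggregation loop. All function bodies here are dummy stand-ins (the real system collects pushes in simulation, queries an expert for labels, and trains the Push-CNN); only the control flow mirrors the description in the text.

```python
import random

def collect_interactions(policy, n=10):
    """Stand-in for robot interaction: returns (image, push) pairs.

    The real system pushes randomly in iteration 1 and uses the current
    network's ranking in later iterations; here we just draw random data.
    """
    return [((random.random(),), random.random()) for _ in range(n)]

def expert_label(image, push):
    """Stand-in for the expert user's binary singulation-success label."""
    return 1 if push > 0.5 else 0

def train_classifier(dataset):
    """Stand-in for training the Push-CNN; returns a trivial threshold policy."""
    pos_rate = sum(y for _, _, y in dataset) / len(dataset)
    return lambda image, push: 1 if push > (1.0 - pos_rate) else 0

def train_iteratively(n_iterations=2):
    dataset, classifier = [], None
    for _ in range(n_iterations):
        episodes = collect_interactions(classifier)
        # label the new experience and aggregate it with all previous data
        dataset += [(img, push, expert_label(img, push)) for img, push in episodes]
        classifier = train_classifier(dataset)  # f_i trained on the aggregated set
    return classifier, dataset
```

With two iterations, the first classifier corresponds to the vanilla network and the second, trained on the aggregated dataset, to the aggregated network.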
Our approach is divided into three modules, as shown in Fig. 1: 1) a sampling module that generates a set of push proposals, 2) a neural network module that classifies the set of push proposals and ranks them accordingly, and 3) an action execution module that computes arm motion plans for the ranked pushes and then executes the first push proposal that has both a high probability of singulation success and a successful motion plan.
Push Proposal Sampling
Our push proposal sampling method is designed to generate a set of push proposals. First, we sample 3D push handles represented as point normals pointing parallel to the xy-plane of the robot's odometry frame. Specifically, given the raw depth image, we first apply a surface-based segmentation method richtsfeld2012segmentation (we only use the pre-segmentation and surface-detection steps of the approach) to obtain a set of segments together with a surface normal map and a table plane. Second, we sample a fixed number of push handles for each segment and remove push handles below the table plane. We assume a fixed push length for all handles and compute the convex hull of the table in order to filter out push handles whose push end-points lie outside of the table. Finally, we obtain a set of push proposals by transforming the push handles into the camera frame using the transformation matrix from the odometry frame to the camera frame together with the camera intrinsics K.
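The sampling and filtering steps can be illustrated in 2D as follows. The push length, table rectangle, and segment representation are illustrative values, not the paper's parameters, and the table is simplified to an axis-aligned rectangle instead of a convex hull.

```python
import math
import random

PUSH_LENGTH = 0.15             # fixed push length in meters (assumed value)
TABLE = (0.0, 0.0, 1.0, 0.6)   # table rectangle: x_min, y_min, x_max, y_max

def on_table(x, y):
    x_min, y_min, x_max, y_max = TABLE
    return x_min <= x <= x_max and y_min <= y <= y_max

def sample_push_proposals(segment_centers, handles_per_segment=4, seed=0):
    """Sample push handles per segment; keep only pushes that stay on the table."""
    rng = random.Random(seed)
    proposals = []
    for cx, cy in segment_centers:
        for _ in range(handles_per_segment):
            angle = rng.uniform(0.0, 2.0 * math.pi)
            # start the push slightly before the segment center, pointing through it
            sx = cx - 0.05 * math.cos(angle)
            sy = cy - 0.05 * math.sin(angle)
            # end-point of the fixed-length straight push
            ex = sx + PUSH_LENGTH * math.cos(angle)
            ey = sy + PUSH_LENGTH * math.sin(angle)
            if on_table(sx, sy) and on_table(ex, ey):
                proposals.append((sx, sy, angle))
    return proposals
```

In the real system the handles are sampled on segment surfaces in 3D, filtered against the table plane, and then projected into the camera frame.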
Push Proposal Network and Input
We parametrize the predictor f using a deep convolutional neural network, denoted as push proposal CNN in Fig. 1. The distinction between the image and the push proposal as inputs is somewhat artificial, because both are combined into a new image that constitutes the final input of the network. To predict the probability of success of a push proposal for the given singulation task, the most important feature is the relation between the candidate segment we want to push and its neighbouring segments. We propose to fuse the segmented camera image with a push proposal using rigid image transformations, see Fig. 3. First, we transform the image by a rigid translation according to the start position s of the push proposal. Second, we rotate the image according to the push angle α. The resulting transformed image is depicted in the right image of Fig. 3. At test time we forward the set of transformed input images into the push proposal network to predict a probability of success for each push proposal in the set. The network architecture is depicted in Fig. 4.
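The translate-then-rotate fusion can be sketched with a simple inverse warp that brings every proposal into a canonical frame: the push start lands at the image center and the push direction becomes axis-aligned. Nearest-neighbor sampling and the plain-list image are illustrative simplifications.

```python
import math

def canonicalize(image, start, angle):
    """Warp image so the push start maps to the center and the push angle to zero.

    image: list of rows (H x W); start: (row, col) push start; angle: radians.
    Pixels that map outside the source image are left at 0.0.
    """
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # inverse map: rotate the output offset back, then shift to the push start
            dy, dx = r - cy, c - cx
            src_r = round(start[0] + dy * cos_a - dx * sin_a)
            src_c = round(start[1] + dy * sin_a + dx * cos_a)
            if 0 <= src_r < h and 0 <= src_c < w:
                out[r][c] = image[src_r][src_c]
    return out
```

Because every proposal is presented in the same canonical frame, the network never needs to learn translation or rotation invariance for the push itself, only the segment neighbourhood around it.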
We train the network from scratch using random uniform weight initialization and Adam kingma2014adam. We performed a network architecture search over 10 different configurations and found that, for our task, the depicted architecture yielded the best performance on 2 separate validation sets. We experienced a drop in classification performance when increasing the image size, introducing dropout, or removing pooling operations, and saw no improvement from adding Xavier weight initialization or activation functions other than ReLUs. Finally, we train our model for 25 epochs with a batch size of 64, using the default parameters for Adam (learning rate, β1, β2, ε, decay).
Push Motion Planning
To find a motion plan for each push proposal we use the LBKPIECE motion planning algorithm provided by an out-of-the-box framework sucan2012open. Our arm motion strategy consists of two steps. In the first step, we plan to reach the push start pose. In the second step, given that the gripper has reached the desired push start position, the robot performs a straight-line push of fixed length. We compute motion plans for both steps before execution, to avoid the case in which the robot executes the reaching plan and then fails to find a plan for the straight-line push. We found this two-step procedure to be more stable than executing a combined reach-and-push trajectory at once, due to arm controller errors.
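The plan-both-before-executing guard can be sketched as follows. `plan_reach`, `plan_push`, and `execute` are hypothetical stand-ins for the underlying motion-planning and controller calls, not the actual OMPL API.

```python
def try_push(proposal, plan_reach, plan_push, execute):
    """Execute a push only if both the reach and the straight-line push are plannable.

    plan_reach/plan_push return a plan object or None on planning failure;
    execute runs a plan on the arm. Returns True iff the push was executed.
    """
    reach_plan = plan_reach(proposal)
    if reach_plan is None:
        return False                 # cannot reach the push start pose
    push_plan = plan_push(proposal)
    if push_plan is None:
        return False                 # straight-line push is not feasible
    execute(reach_plan)              # only now does the arm move
    execute(push_plan)
    return True
```

In the full system this guard is applied to the ranked proposal list in order, so the first proposal that is both highly ranked and fully plannable gets executed.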
4 Experiments
We conduct both qualitative and quantitative experiments on a real PR2 robot to benchmark the performance of our overall approach. Hereby, we aim to answer the following questions: 1) Can our push proposal network predict meaningful push actions for object shapes and configurations it has not seen during training? 2) Can our network trained in simulation generalize to a real-world scenario? 3) Can our model reason about scene ambiguities including multiple object clusters and focus its attention on objects that are more cluttered than others? To answer question (1) we perform real-world experiments with objects that the network has not seen during training and compare against a manual model-free baseline that reasons about available free space in the scene and keeps a history of previous actions. To answer question (2) we test a network that we train solely with data from simulation (denoted as vanilla network). Finally, three challenging experiments with 6 and 8 objects aim to answer question (3), where the learned model has to trade off between further separating isolated objects to create more space and splitting isolated object clusters to solve the task. The overall intent of our experiments is to show that a learned model is able to directly map from sensor input to a set of possible actions, without prior knowledge about objects, spatial relations, or physics simulation.
4.1 Experimental Setup
Our PR2 robot is equipped with a Kinect 2 camera mounted on its head that provides RGB-D images. We use both arms of the robot, each with 7 DOF, to perform the pushing actions. The task is designed as follows: a trial consists of a fixed number of objects from a test set of everyday objects with different shapes that the robot has never seen before. In the starting configuration of a trial, all objects are placed or stacked on a table such that they are in collision. The starting configurations are saved and reproduced manually for all methods using point cloud alignment. We conduct different experiments of increasing difficulty ranging from 4 to 8 objects, where within each experiment the number of objects stays fixed. The 25 starting configurations for our quantitative evaluation with 4 objects are depicted in Fig. 5.
We report the number of successful singulation trials. A singulation trial is considered successful if all objects are separated by a minimum distance of 3 cm. Further, we report singulation success after every action, to show which methods can execute the task successfully with fewer actions. For each trial the robot performs a fixed maximum number of pushes, determined by the number of objects in the experiment. The robot is allowed to stop earlier if it reasons that all objects are singulated; this is implicitly encoded in our method: if the robot does not find a motion plan for any of the push proposals that the network ranked as positive, it automatically terminates the trial.
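The 3 cm success criterion amounts to a pairwise minimum-distance check. In the sketch below, objects are approximated by 2D centroid positions for simplicity; the actual evaluation measures distances between the objects themselves.

```python
from itertools import combinations

MIN_SEPARATION = 0.03  # 3 cm, the paper's singulation threshold

def is_singulated(centroids, min_dist=MIN_SEPARATION):
    """True iff every pair of object positions is at least min_dist apart."""
    return all(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 >= min_dist
        for (ax, ay), (bx, by) in combinations(centroids, 2)
    )
```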
To train our push proposal network we use an iterative training procedure. First, the vanilla network is trained on labeled data from a total of 2,486 (243 positive, 2,243 negative) random interactions performed in simulation. Then the aggregated network is trained on an additional 970 interactions (271 positive, 699 negative), which we collected using the vanilla network in both simulation and the real world, resulting in a training dataset of 3,456 push interactions. We use the object dataset by Mees et al. mees17iros for our simulation experiments.
4.2 Baseline Method
We provide a quantitative comparison against a manually designed baseline method to evaluate whether the predictions of the push proposal network lead to improved object singulation results. Since we do not provide knowledge about the objects and their physical properties to our model, our baseline method is likewise model-free and follows a reasonably effective interactive singulation strategy. The method, which we denote as 'free space + tracking', works as follows:
Given a set of segments from our object over-segmentation approach, we construct a graph that includes all segments, where each segment is a node and the edges correspond to distances between segments. We represent each segment by an axis-aligned bounding box (AABB) and compute the edge weight between 2 segments as the Manhattan distance between their AABBs.
We compute 2 features for scoring push proposals with the baseline method. The first feature is a predictive feature that reasons about the free space that would result if a segment were pushed to some target location. To compute it, we predict a straight-line motion of the segment to which the push proposal is assigned, according to the push direction and length. Next, we compute the Manhattan distance between the resulting transformed AABB and all other segments in the scene, which we assume remain static. The free space feature is the minimum distance between the transformed AABB and the other AABBs. If the predicted final position of a segment would lead to a collision with another segment, the free space feature is zero.
The second feature encodes the push history that we store for each segment. It follows the intuition that the same segment should not be pushed too often, regardless of the predicted free space around it. To maintain the push history over all segments, we follow a segment-based tracking approach, which aligns the centroids and principal components of the current set of segments with those from the last interaction. We match segments using a weighted average of the principal component and segment centroid distances. To penalize multiple pushes of the same segment throughout a trial, we map the push history into a normalized feature using an exponential decay function.
Accordingly, each push proposal receives a score that combines the free space feature and the push history feature.
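The two baseline features can be sketched as follows. The decay rate and the product combination of the two features are assumptions for illustration; the paper does not state the exact formula.

```python
import math

def manhattan_gap(a, b):
    """Manhattan distance between two AABBs (x_min, y_min, x_max, y_max); 0 if overlapping."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)   # horizontal gap
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)   # vertical gap
    return dx + dy

def free_space_feature(segment_aabb, push_dx, push_dy, other_aabbs):
    """Minimum gap after predicting a straight-line push; zero means predicted collision."""
    x0, y0, x1, y1 = segment_aabb
    moved = (x0 + push_dx, y0 + push_dy, x1 + push_dx, y1 + push_dy)
    gaps = [manhattan_gap(moved, other) for other in other_aabbs]
    return min(gaps) if gaps else float("inf")

def history_feature(push_count, decay=1.0):
    """Exponential decay over how often this segment was already pushed (decay assumed)."""
    return math.exp(-decay * push_count)

def score(segment_aabb, push_dx, push_dy, other_aabbs, push_count):
    """Combined baseline score (product combination assumed)."""
    return free_space_feature(segment_aabb, push_dx, push_dy, other_aabbs) \
        * history_feature(push_count)
```

A push that would leave a segment in contact with another segment scores zero regardless of its history, while repeated pushes of the same segment are progressively discounted.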
4.3 Quantitative Comparisons
We extensively evaluate our method in real-robot object singulation experiments with 4, 6, and 8 unknown objects. Additionally, we perform an experiment with 6 identical objects to further challenge our approach. Our results with 4 objects (experiment 1), shown in Table 1, indicate that our method is able to improve over the performance of the manual baseline. Our vanilla network achieves a higher success rate than the baseline's 56%, which suggests that the model makes meaningful predictions about which push actions to perform in order to singulate all 4 objects. Fig. 6 provides a more fine-grained evaluation, showing the success rate with respect to the number of pushes. Note that the network requires fewer push actions to complete the task. Interestingly, the baseline method performs on par with the network after the robot has executed five push actions, but does not improve further after 6 executed pushes. We noted that the baseline method sometimes fails to singulate the last 2 objects of the scene and instead chooses an object that may already be singulated, as it lacks a good attention mechanism.
| Experiment No. | Number of objects | Method | Success/total trials | Success rate | Mean number of pushes |
|---|---|---|---|---|---|
| 1 | 4 | Free space + tracking | 14/25 | 56% | 5.4 ± 1.68 |
| 2 | 6 | Free space + tracking | 4/10 | 40% | 7.8 ± 1.61 |
| 3 | 6 (identical objects) | Aggregated network | 10/20 | 50% | 7.6 ± 1.76 |
The task is more complex with 6 or 8 objects, due to additional space constraints and the formation of multiple object clusters. Looking at the scene in Fig. 1, one sees that 4 of the 6 objects are located very close together on the table (green drink bottle, blue box, white round bowl, white coffee bag). Although there are many possible push actions that we would consider reasonable, only pushing the coffee bag or the round bowl to the left side of the table can clear the scene. Accordingly, we find that the performance of the vanilla network drops with respect to the previous experiment with 4 objects. During training it has only seen a small number of scenes where a random baseline would have cleared all but 2 objects, and has even less likely seen examples where the robot by chance chose the right action to clear the 2 remaining objects. Therefore, in this scenario the vanilla network and the baseline method perform on par. When comparing the average number of pushes, the baseline performs slightly better, see Fig. 7. Results show that the aggregated network clearly outperforms the other two methods, succeeding in seven out of ten trials, as shown in Table 1 (experiment 2). To put the numerical results into perspective, we refer the reader to the performance reported by Hermans et al. hermans2012guided, who evaluated a very similar task with 6 objects and report an average of twelve push actions.
We perform an additional experiment with 6 identical objects (sponges) using the aggregated network and reach a success rate of 50%. The most challenging experiment is conducted with 8 objects; even though the network was not trained with such a large number of objects on the table, our approach performs well and is able to generalize. Both experiments are reported in Table 1 (experiments 3 and 4).
4.4 Qualitative Results
Fig. 8 gives insights into the singulation trials from the perspective of the Kinect camera for experiment 1. The first and third columns show the current scene, while the second and fourth columns show the robot performing a push action. The trial evolves from top to bottom of the figure. Fig. 9 shows qualitative end results of singulation runs with 6 identical objects (experiment 3) and 8 objects (experiment 4).
5 Conclusion
We presented a novel neural network-based approach to singulate unknown objects in clutter by means of pushing actions, and showed that it is possible to perform challenging perceptual manipulation tasks by learning rich feature representations from visual input. Unlike traditional methods, we do not manually design features for the task at hand. Instead, we train a high-capacity convolutional neural network that we incorporate into a novel system of motion planning and action selection, leading to good generalization performance while requiring a reasonable amount of labeled training data. We tested our method in extensive real-robot experiments using a PR2 robot and showed its ability to achieve good performance on the challenging task of singulating up to 8 objects. In the future, it would be interesting to train the network in a self-supervised manner, which poses the challenge of automatically generating labels from ambiguous visual data. Furthermore, the network could be extended to a multi-class model that predicts different push lengths, which would require collecting additional training data with varying push lengths.
Acknowledgements. This work was partially funded by the German Research Foundation under the priority program "Autonomous Learning" SPP 1527 and under grant number EXC 108. We thank Seongyong Koo for advice with the baseline method. We further thank Sudhanshu Mittal, Oier Mees and Tim Welschehold for their help and ideas.
- (1) Agrawal, P., Nair, A.V., Abbeel, P., Malik, J., Levine, S.: Learning to poke by poking: Experiential learning of intuitive physics. In: Advances in Neural Information Processing Systems (NIPS) (2016)
- (2) Bohg, J., Hausman, K., Sankaran, B., Brock, O., Kragic, D., Schaal, S., Sukhatme, G.: Interactive perception: Leveraging action in perception and perception in action. arXiv preprint arXiv:1604.03670 (2016)
- (3) Boularias, A., Bagnell, J.A.D., Stentz, A.T.: Learning to manipulate unknown objects in clutter by reinforcement. In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI) (2015)
- (4) Byravan, A., Fox, D.: Se3-nets: Learning rigid body motion using deep neural networks. arXiv preprint arXiv:1606.02378 (2016)
- (5) Chang, L., Smith, J.R., Fox, D.: Interactive singulation of objects from a pile. In: Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA) (2012)
- (6) Cosgun, A., Hermans, T., Emeli, V., Stilman, M.: Push planning for object placement on cluttered table surfaces. In: Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (2011)
- (7) Dogar, M., Srinivasa, S.: A framework for push-grasping in clutter. Robotics: Science and systems VII 1 (2011)
- (8) Finn, C., Levine, S.: Deep visual foresight for planning robot motion. arXiv preprint arXiv:1610.00696 (2016)
- (9) Gualtieri, M., ten Pas, A., Saenko, K., Platt, R.: High precision grasp pose detection in dense clutter. In: Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS). IEEE (2016)
- (10) Gupta, M., Müller, J., Sukhatme, G.S.: Using manipulation primitives for object sorting in cluttered environments. IEEE transactions on Automation Science and Engineering 12(2) (2015)
- (11) Hausman, K., Balint-Benczedi, F., Pangercic, D., Marton, Z.C., Ueda, R., Okada, K., Beetz, M.: Tracking-based interactive segmentation of textureless objects. In: Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA) (2013)
- (12) Hermans, T., Li, F., Rehg, J.M., Bobick, A.F.: Learning contact locations for pushing and orienting unknown objects. In: Proc. of the IEEE-RAS Int. Conf. on Humanoid Robots (Humanoids) (2013)
- (13) Hermans, T., Rehg, J.M., Bobick, A.: Guided pushing for object singulation. In: Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (2012)
- (14) Katz, D., Venkatraman, A., Kazemi, M., Bagnell, J.A., Stentz, A.: Perceiving, learning, and exploiting object affordances for autonomous pile manipulation. Autonomous Robots (2014)
- (15) Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
- (16) Koo, S., Lee, D., Kwon, D.S.: Unsupervised object individuation from rgb-d image sequences. In: Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (2014)
- (17) Kopicki, M., Zurek, S., Stolkin, R., Mörwald, T., Wyatt, J.: Learning to predict how rigid objects behave under simple manipulation. In: Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA) (2011)
- (18) Laskey, M., Chuck, C., Lee, J., Mahler, J., Krishnan, S., Jamieson, K., Dragan, A., Goldberg, K.: Comparing human-centric and robot-centric sampling for robot deep learning from demonstrations. In: Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA) (2017)
- (19) Levine, S., Finn, C., Darrell, T., Abbeel, P.: End-to-end training of deep visuomotor policies. Journal of Machine Learning Research 17(39) (2016)
- (20) Levine, S., Pastor, P., Krizhevsky, A., Ibarz, J., Quillen, D.: Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research (2016)
- (21) Mahler, J., Liang, J., Niyaz, S., Laskey, M., Doan, R., Liu, X., Ojea, J.A., Goldberg, K.: Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312 (2017)
- (22) Mees, O., Abdo, N., Mazuran, M., Burgard, W.: Metric learning for generalizing spatial relations to new objects. In: Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (2017)
- (23) Pinto, L., Gandhi, D., Han, Y., Park, Y.L., Gupta, A.: The curious robot: Learning visual representations via physical interactions. arXiv preprint arXiv:1604.01360 (2016)
- (24) Pinto, L., Gupta, A.: Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In: Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA) (2016)
- (25) Richtsfeld, A., Mörwald, T., Prankl, J., Zillich, M., Vincze, M.: Segmentation of unknown objects in indoor environments. In: Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (2012)
- (26) Schiebener, D., Ude, A., Morimoto, J., Asfour, T., Dillmann, R.: Segmentation and learning of unknown objects through physical interaction. In: Proc. of the IEEE-RAS Int. Conf. on Humanoid Robots (Humanoids) (2011)
- (27) Sucan, I.A., Moll, M., Kavraki, L.E.: The open motion planning library. IEEE Robotics & Automation Magazine 19(4), 72–82 (2012)
- (28) Van Hoof, H., Kroemer, O., Peters, J.: Probabilistic segmentation and targeted exploration of objects in cluttered environments. IEEE Transactions on Robotics 30(5) (2014)
- (29) Yu, K.T., Bauza, M., Fazeli, N., Rodriguez, A.: More than a million ways to be pushed. a high-fidelity experimental dataset of planar pushing. In: Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) (2016)
- (30) Zhu, S., Kimmel, A., Bekris, K.E., Boularias, A.: Model identification via physics engines for improved policy search. arXiv preprint arXiv:1710.08893 (2017)