Data-Efficient Learning for Sim-to-Real Robotic Grasping using Deep Point Cloud Prediction Networks

06/21/2019 ∙ by Xinchen Yan, et al.

Training a deep network policy for robot manipulation is notoriously costly and time-consuming, as it depends on collecting a significant amount of real-world data. To work well in the real world, the policy needs to see many instances of the task, including various object arrangements in the scene as well as variations in object geometry, texture, material, and environmental illumination. In this paper, we propose a method that learns to perform table-top instance grasping of a wide variety of objects while using no real-world grasping data, outperforming a baseline that uses a 2.5D shape representation by 10%. Our key idea is to learn a 3D point cloud of the object and use it to train a domain-invariant grasping policy. We formulate the learning process as a two-step procedure: 1) learning a domain-invariant 3D shape representation of objects from about 76K episodes in simulation and about 530 episodes in the real world, where each episode lasts less than a minute, and 2) learning a critic grasping policy in simulation only, based on the 3D shape representation from step 1. Our real-world data collection in step 1 is both cheaper and faster compared to existing approaches, as it only requires taking multiple snapshots of the scene using an RGBD camera. Finally, the learned 3D representation is not specific to grasping and can potentially be used in other interaction tasks.







1 Introduction

Learning a domain-invariant representation for object manipulation in real-world environments with minimal supervision is a fundamental challenge in vision and robotics. State-of-the-art learning systems rely on collecting large-scale datasets with interaction labels (e.g., success or failure) using multiple robots (e.g., KUKA robotic arms) to update the model parameters in parallel via end-to-end training [32]. The cost of deploying these learning systems to a new setting (e.g., a novel task in a different environment) is very high, and the efficiency is limited by the number of robots that can be deployed in parallel. Therefore, existing learning-based methods also use simulation data to alleviate the data collection problem. However, the existence of the sim-to-real gap imposes additional difficulty in transferring from simulation to the real world. Recent work mainly focuses on domain randomization by generating diverse configurations in simulation [10, 49, 23]. Other approaches use pixel-level or feature-level domain adaptation [3, 14, 24]. At best, these methods still require a significant amount of unlabelled real-world data, which is costly and time-consuming to collect and may not transfer to a new task.

Figure 1: Architecture overview: (a) we use an object detection network to obtain object-centric color, depth and mask images; (b) Our point cloud prediction network allows us to generate a 3D point cloud of the detected object; (c) Finally, we use a grasping critic network to predict a grasp.

In contrast to end-to-end training frameworks, we consider learning visual structures as an intermediate representation for sim-to-real transfer. More specifically, we embrace the recent advances in using deep neural networks to predict 3D structures (e.g., 3D voxel grids, point clouds, and triangle meshes) from single-view observations [13, 22, 51]. Compared to 2D sensory input, 3D structure is known to be very useful for shape-based object manipulation such as grasping [18, 31, 2]. For example, the geometric center of an object is a shape feature useful for its localization and manipulation, which can be inferred directly from 3D structures [57].

Although shape prediction modules can be useful, it remains a non-trivial task to apply the existing work to robotic platforms in the real world. First, inferring a depth image solely from a single-view RGB image based on traditional computer vision techniques introduces ambiguities. Recent learning-based work on image-to-depth prediction has demonstrated good performance from a single RGB camera [11, 60, 17]; however, applications of these methods to robotics tasks are not yet well explored. Second, even with an additional depth channel as input, sim-to-real transfer is not straightforward. For example, depth sensors often inject arbitrary noise in the real world (e.g., when the object is dark or transparent, or when there is misalignment between depth and RGB).

In this work, we design a novel shape prediction model that generates full 3D point clouds of an object from a single-view RGBD image sampled from a sequence containing multiple snapshots of the scene. We further explore cross-view consistency as the self-supervision signal for training, as multiple input views share the same intermediate representation given camera transformations (e.g., rotating the 3D shape from one view to another). Compared to other 3D representations such as voxels [56, 59] and triangle meshes [27, 54], 3D point clouds are lightweight (i.e. low-dimensional) and free from aliasing artifacts under camera transformations.

In summary, our contributions are:

  • we present a self-supervised shape prediction framework that reconstructs full 3D point clouds as representation for robotic applications;

  • we show data-efficient and robust sim-to-real transfer using this self-supervised framework;

  • we demonstrate the application of the predicted point cloud on the robotic task of table-top instance grasping using zero real-world grasping data.

2 Related Work

Learning to interact with objects is a wide and actively studied field of vision and robotics research. Many approaches are based on using visual features obtained from RGB or RGBD images to identify objects and grasping points [47, 38, 30, 19, 29, 42, 44]. While early approaches focus on studying the problem of grasping based on traditional techniques, such as logistic regression or learning probabilities of grasp success [47, 38], more recent approaches often rely on deep neural networks to extract more nuanced features from images. This ranges from detecting objects [30] and their pose [19], to learning grasp types from kinesthetic demonstrations [29], under gripper pose uncertainty [26], or following an unsupervised learning scheme [9]. The effectiveness of deep neural networks is unparalleled; however, training the networks requires large-scale labeled datasets to generalize to unseen objects [44]. Other approaches for robotic grasping focus on identifying the grasp affordance of objects [8, 28] or on categorizing them according to their function [41].

Another line of research focuses on reconstructing objects and scenes as 3D triangle meshes, which in turn can be used to enable more informed robotic behavior [33, 50]. Reconstructing objects from incomplete scans is challenging. Various approaches exist to reconstruct the geometry of objects, while considering the many facets of the problem [39, 40, 16, 7, 34, 48]. Very recently, Henderson and Ferrari [22] introduced a network architecture to generate 3D meshes, while only providing single image supervision. At a higher level, our method is related to recent work on deep 3D representation learning for robot-object interaction [12, 58, 55]. Moreover, to facilitate the learning of 3D shape representations and the grasping of objects, recent efforts concentrate on curating large shape and grasp repositories [4, 37, 36].

It has been recognized that point sets can serve as an effective representation to obtain additional information about objects. Several recent approaches learn representations to generate point clouds of shapes [13, 1, 15]. More closely targeted to robotics, Xu et al. [57] and Wang et al. [53] propose fusion networks that extract pixel-wise dense feature embeddings for estimating 3D bounding boxes and 6D object pose, respectively. These approaches focus on robotic vision; unlike them, we do not rely on the 6D object pose as a label for training.

Reconstructing the geometry of an object allows identifying grasping points more precisely and thereby to control the grasp [18, 31, 2]. To this end, Varley et al. [51] and Yan et al. [58] recently proposed to reconstruct shapes by learning geometry-aware representations based on 3D occupancy grids. Grids serve as an efficient representation but they obscure the natural invariance of 3D shapes under geometric transformations and only support to represent shapes at low resolution. Similar to our method these approaches also aim at reconstructing objects or object parts to enable more robust grasping. However, unlike them we focus on the self-supervised reconstruction of full 3D point clouds of objects. Point clouds enable a more robust sim-to-real transfer and thereby facilitate efficient training.

Methods on sim-to-real transfer aim at training neural networks on simulated data with the goal to operate on real data at inference time. This is also known as domain adaptation, where a model is trained with data points of a source domain to then generalize to a target domain [43, 6]. As simulated data can be generated efficiently with ground-truth labels, a number of methods focus on sim-to-real transfer by enhancing or synthesizing images [47, 52, 20]. More recently, Bousmalis et al. [3] introduced a generative adversarial network pipeline to enhance simulated data to significantly reduce the number of real-world data samples for a grasping task. Fang et al. [14] propose a multi-task domain adaptation framework composed of three grasp prediction towers for instance grasping in cluttered scenes. Finally, James et al. [24] introduce a two-stage generator pipeline to translate simulated and real images into a canonical representation to realize sim-to-real transfer. While existing work mostly focuses on introducing architectures to operate on images, we argue that point clouds of objects serve as an effective representation to facilitate sim-to-real transfer.

Figure 2: Overview of our object detection and point cloud prediction networks: we detect an object and obtain its cropped color and depth images along with a binary mask based on an object detection network. We then use the 5 channels (RGBD-M) to train a point cloud prediction network that allows us to predict a 3D point cloud (PC) of the detected object. Using the depth image of the object and its mask, we compute a ground truth label for object depth. The network is trained against an image-based loss of the projected 3D point cloud and the ground truth mask of the object.

3 Methods

Given a set of RGBD observations, we aim at learning an intermediate representation that is compact, domain-invariant, semantically interpretable, and directly applicable for object manipulation. In particular, we use a point cloud of a target object of interest as our intermediate representation for learning due to the following reasons: (1) Point clouds are more lightweight and flexible compared to other 3D representations such as voxel grids and triangle meshes. (2) Point clouds describe the full 3D shape of a target object, which is inherently invariant to surface textures or environmental conditions. (3) A point cloud representation can directly be used to localize objects in the scene, hence simplifying the tasks when training a policy. Our approach includes two steps: 1) Learning a domain-invariant representation using visual observations from simulation and real world, which will be described in Section 3.1, and 2) Learning an object manipulation policy such as grasping using the representation from step 1, which will be described in Section 3.2.

3.1 Learning domain-invariant representation

Given a set of RGBD observations $\{o_1, \dots, o_K\}$, where each $o_k$ is an individual observation that could come from simulation or the real world, our goal is to learn a domain-invariant point cloud representation reflecting the 3D geometry of the target object. These observations can be easily obtained by a mobile manipulator moving around and taking snapshots of the workspace from different angles. We assume each object may be present in more than one snapshot, as this allows the 3D geometry of the object to be reconstructed better. However, we do not impose any explicit constraint on the number of snapshots in which the object must be present. Please note that the depth values from RGBD observations form only a 2.5D representation of the objects (i.e., the visible part, subject to noise) and thus do not provide the full 3D geometry. Furthermore, there is a reality gap between the depth values in simulation and the real world, making a policy that is trained solely in simulation quite ineffective in the real world.

Self-supervised labeling.

While target point clouds for supervised learning of a deep network can be easily obtained in simulation, this task becomes notoriously costly and time-consuming for real data. Furthermore, the presence of noise and unmodeled nonlinear characteristics in a depth sensor makes the learning harder, especially in the context of transfer learning. To address this challenge, we base our framework on recent work on learning to reconstruct 3D object geometry using view-based supervision with differentiable re-projection operators [25, 35].

We represent the point cloud of an object as a set of points $P = \{p_1, \dots, p_N\}$, where $p_i = (x_i, y_i, z_i)$ are the coordinates of the $i$-th point along the x, y, and z axes, respectively. Without loss of generality, we assume the point cloud coordinates are defined in the camera frame. We assume the ground-truth point cloud annotation is not directly available in the real-world data and thus use multi-view projections as the supervision signal. More specifically, we use the camera intrinsic matrix $K$ to obtain the 2D projection $\{(u_i, v_i)\}$ in the image space from the point cloud (e.g., the homogeneous coordinate $\lambda_i (u_i, v_i, 1)^T$ is projected from $p_i$):

$$\lambda_i (u_i, v_i, 1)^T = K p_i \qquad (1)$$

For localization, the corresponding tight bounding box $b = (c_u, c_v, w, h)$ can be derived from the 2D projection, where $(c_u, c_v)$ and $(w, h)$ represent the bounding box center and size, respectively.
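As a concrete illustration, the pinhole projection of Eq. 1 and the tight bounding box derived from it can be sketched as follows; the intrinsic values below are illustrative placeholders, not the paper's calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy are placeholders).
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_points(points, K):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates (Eq. 1)."""
    homo = points @ K.T               # rows are (u*z, v*z, z)
    return homo[:, :2] / homo[:, 2:3]

def tight_bbox(uv):
    """Tight axis-aligned bounding box (center, size) of the 2D projection."""
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    return (lo + hi) / 2.0, hi - lo
```

For example, a point (0.1, 0, 1) in the camera frame projects to pixel (372.5, 240) under these intrinsics.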

We collect RGBD snapshots from various scenes in simulation and the real world by moving a mobile manipulator around the workspace. For the real-world dataset we use Mask-RCNN [21] to detect object bounding boxes and their associated masks in each frame. For the simulation dataset, bounding boxes can be obtained directly. Note that it is quite common for multiple objects to be present in each snapshot. We denote the data associated with the $j$-th object in the $i$-th frame by $o_i^j$, and the number of objects in the $i$-th frame by $n_i$. Next, we use the mask for each object to extract its associated depth values from the depth channel of each observation, and then use the camera intrinsic matrix to obtain a partial (2.5D) point cloud from those depth values. At the end of this step, we obtain the masks and partial point clouds for all $i$ and $j$.
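The per-object 2.5D extraction described above (select the object's depth pixels via its mask, then back-project them with the intrinsics) can be sketched as:

```python
import numpy as np

def backproject_masked_depth(depth, mask, K):
    """Lift an object's masked depth pixels to a 2.5D camera-frame point cloud.

    depth: HxW metric depth image; mask: HxW boolean object mask;
    K: 3x3 camera intrinsic matrix. Returns an Nx3 array of points.
    """
    v, u = np.nonzero(mask)          # pixel rows/cols inside the object mask
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]  # inverse pinhole projection
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)
```

This is the standard inverse of the pinhole model in Eq. 1; the resulting partial cloud covers only the visible surface, which is exactly the 2.5D limitation the full-shape prediction network is meant to overcome.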

Our deep network provides an estimate of the point cloud $\hat{P}$ as output, which can be used to determine the projection $\{(\hat{u}_i, \hat{v}_i)\}$ and the bounding box $\hat{b}$ using Eq. 1. We then define the loss function for training the domain-invariant point cloud representation as:

$$\mathcal{L} = \lambda_{box} \mathcal{L}_{box} + \lambda_{proj} \mathcal{L}_{proj} \qquad (2)$$

where $\lambda_{box}$ and $\lambda_{proj}$ are weighting coefficients, $\mathcal{L}_{box}$ is the Huber loss between the estimated and labeled bounding box, and $\mathcal{L}_{proj}$ is the projected point-cloud prediction loss based on [25]. We extend their method in the following way: we do not rely on obtaining a full 3D point cloud, as this is more difficult to accomplish in real-world environments. Moreover, while [25] uses synthetic 3D point clouds for training to reconstruct normalized shapes centered at the origin, our goal is to reconstruct shapes at real-world scale.
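A minimal sketch of such a combined objective follows. The symmetric Chamfer distance over projected 2D points is only a stand-in for the projection loss of [25], and the two-term weighting is an assumption for illustration; the exact formulation is in the cited work.

```python
import numpy as np

def huber(x, delta=1.0):
    """Elementwise Huber penalty, summed over components."""
    a = np.abs(x)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta)).sum()

def bbox_loss(pred_box, gt_box):
    """Huber loss between predicted and labeled (center, size) boxes."""
    return huber(np.asarray(pred_box) - np.asarray(gt_box))

def projection_loss(pred_uv, gt_uv):
    """Stand-in for [25]'s projected point loss: symmetric 2D Chamfer distance."""
    d = np.linalg.norm(pred_uv[:, None, :] - gt_uv[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def total_loss(pred_box, gt_box, pred_uv, gt_uv, lam_box=1.0, lam_proj=1.0):
    """Weighted sum of the bounding-box and projection terms (Eq. 2)."""
    return (lam_box * bbox_loss(pred_box, gt_box)
            + lam_proj * projection_loss(pred_uv, gt_uv))
```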

Network architecture. Figure 2 illustrates the network architecture we use for shape prediction. Similarly to [25], we use a network composed of several encoder-decoder modules [13] and a fully-connected layer to predict the point clouds. However, our proposed architecture differs from [25] in three ways in order to make it applicable to real-world robotics settings: (1) we use object masks as an additional input channel to handle situations when multiple objects are present in the scene (a very common situation in robot settings); therefore, the number of input channels to our network is five (RGBD-Mask). (2) We introduce a dynamic image cropping step on the RGBD-M channels, which yields a more focused view of the target instance. (3) To account for the dynamic cropping, we add an additional input right after the encoder that provides the network with the adapted camera intrinsics resulting from the cropping.

3.2 Learning Point cloud-based Grasping Policy

In this section we describe how the learned point cloud representation can be used to perform table-top instance grasping. We use a critic network to predict the probability of success for a sampled table-top grasp, based on the predicted point cloud of the target object and the transformation from the robot base to the camera frame. In our setup, the sample is composed of the 3D gripper position $t_g$ with respect to the robot base and the gripper yaw rotation $\theta_g$.

Network architecture.

Figure 3 shows the architecture of our grasp prediction network. For preprocessing, we first transform the point cloud $P$ to the proposed grasp frame:

$$P' = T_g^{-1} P,$$

where the transformation $T_g$ can be directly calculated from the sampled grasp pose $(t_g, \theta_g)$. We then shuffle the order of points in $P'$ to allow the network to adapt to variations in the order of point clouds.
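The grasp-frame transform and point shuffling can be sketched as follows, assuming $T_g$ is a rigid transform composed of the gripper translation and a yaw rotation about the vertical axis (an assumption consistent with the $(t_g, \theta_g)$ parameterization above):

```python
import numpy as np

def grasp_frame_transform(t, yaw):
    """4x4 homogeneous transform of a grasp pose: translation t, yaw about z."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = t
    return T

def to_grasp_frame(points, t, yaw, rng=None):
    """Express Nx3 points in the grasp frame, then shuffle their order."""
    T_inv = np.linalg.inv(grasp_frame_transform(t, yaw))
    homo = np.hstack([points, np.ones((len(points), 1))])
    local = (homo @ T_inv.T)[:, :3]
    rng = np.random.default_rng() if rng is None else rng
    rng.shuffle(local, axis=0)      # order-invariance augmentation
    return local
```

A point located exactly at the gripper translation maps to the grasp-frame origin regardless of yaw, which is what makes the critic's input pose-relative.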

Our grasping critic network is derived from the PointNet [45] architecture and is composed of 4 fully-connected layers, each followed by a ReLU activation function with BatchNorm. The last layer is linear and reduces the output size to 1; it is followed by a sigmoid activation function that provides the grasp success probability.
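A minimal inference-time sketch of such a critic follows, assuming PointNet-style shared per-point layers followed by symmetric max pooling; the layer widths are placeholders, and BatchNorm is folded into the weights for simplicity (neither detail is specified in the text above).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def critic_forward(points, weights):
    """PointNet-style critic sketch: shared per-point FC+ReLU layers,
    max-pool over points, a final linear head, and a sigmoid output.

    points: Nx3 point cloud in the grasp frame.
    weights: list of (W, b) pairs; all but the last are shared per-point layers.
    """
    h = points
    for W, b in weights[:-1]:
        h = relu(h @ W + b)          # applied identically to every point
    h = h.max(axis=0)                # symmetric pooling -> order invariance
    W, b = weights[-1]
    logit = h @ W + b                # linear head, output size 1
    return 1.0 / (1.0 + np.exp(-logit))
```

The max pooling is what makes the prediction invariant to the point shuffling performed during preprocessing.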

Generating grasping data for training.

We collected data for training the grasp policy only in simulation. We use a heuristic grasping policy for the data collection as follows: (1) compute the center of volume $c$ of the object based on the predicted point cloud; (2) set the translation part of the grasp pose to $c$ plus some random noise $\epsilon$, i.e., $t_g = c + \epsilon$; (3) randomly draw a yaw angle $\theta_g$ from a uniform distribution over the yaw range.


Figure 3: Overview of our point cloud-based grasping network. The point clouds are first transformed to the gripper frame, followed by a classification network adapted from PointNet [45] architecture with 5 fully-connected layers in total.

We then evaluate the grasp success by first moving the arm to a pre-grasp pose $g_{pre}$, where $g_{pre}$ is a pose exactly above the grasp pose $g$ with a constant height offset. This allows the robot end-effector to be properly posed with respect to the object before attempting the grasp. The robot is then moved to pose $g$, and finally we command the robot to close its parallel-jaw gripper. The robot is then commanded to lift the object by moving back to $g_{pre}$. Grasp success is evaluated by checking whether the object has been lifted above the table; this evaluation is straightforward since we have access to the ground-truth object pose in simulation. The training data is collected by running simulated robots in parallel and stored for training the off-policy grasping network.

Runtime evaluation.

One of the main advantages of using point clouds is that we directly have access to the 3D location of an object in the scene. Given this knowledge, the simplest approach to infer a grasp pose is to randomly sample several grasp pose candidates around the object pose (similar to our data collection procedure), and then take the candidate with the highest grasp success probability. However, to get better results, we use the cross-entropy method (CEM) [46], a simple derivative-free optimization technique, to find the most suitable grasp pose. Using CEM, at each iteration we sample a batch of $N$ grasp candidates, evaluate these samples, and pick the $M$ best ones ($M < N$). Next, we fit a Gaussian distribution to the $M$ best samples and sample a new batch of size $N$ from it. We repeat this process until either a grasp candidate above the desired success probability threshold is found or CEM reaches the maximum number of allowed iterations. In our implementation, the batch size $N$ and elite size $M$ are fixed, and the maximum number of iterations is 3.

Note that since we sample the grasp pose directly, we can impose additional constraints in our sampling strategy (e.g., limiting samples to the desired workspace, as well as removing kinematically infeasible samples).
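The CEM loop described above can be sketched as follows; the batch size, elite count, and success threshold below are illustrative placeholders (only the maximum of 3 iterations is stated in the text), and `critic` stands in for the learned grasp success predictor.

```python
import numpy as np

def cem_grasp(critic, mu0, sigma0, batch=64, elites=6, iters=3,
              thresh=0.9, rng=None):
    """Derivative-free CEM search over grasp parameters (x, y, z, yaw).

    critic: maps a (batch, 4) array of candidates to success probabilities.
    Returns the best candidate found and its predicted success probability.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    mu, sigma = np.asarray(mu0, float), np.asarray(sigma0, float)
    best, best_p = mu, -np.inf
    for _ in range(iters):
        cand = rng.normal(mu, sigma, size=(batch, mu.size))
        p = critic(cand)
        order = np.argsort(p)[::-1]              # best candidates first
        if p[order[0]] > best_p:
            best, best_p = cand[order[0]], p[order[0]]
        if best_p >= thresh:
            break                                # good enough: stop early
        elite = cand[order[:elites]]
        mu = elite.mean(axis=0)                  # refit the Gaussian
        sigma = elite.std(axis=0) + 1e-6
    return best, best_p
```

Workspace limits or kinematic feasibility checks could be applied by filtering `cand` before scoring, mirroring the constrained-sampling note above.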

4 Experiments

In the following we provide details on our real and simulated datasets and discuss results on shape prediction and grasping performance with a real robot.

4.1 Datasets.

Dataset (split) # objects # episodes # object-centric seqs
ShapeNet 1,345 38,653 189,424
Kitchenware (train) 69 37,568 142,293
Kitchenware (test) 20 3,092 11,956
Real (train) 110 534 38,301
Real (val) 34 49 2,720
Real (test) 43 291 18,462
Table 1: Statistics of shape prediction datasets.
Dataset (split) # objects # episodes
Kitchenware (train) 69 358,286
Kitchenware (test) 20 2,111
Procedural 1,000 633,069
Table 2: Statistics of simulated grasping datasets.

Shape prediction datasets.

We use both real-world data and data generated from simulation for learning domain-invariant 3D shapes. As shown in Table 1, our real-world dataset contains 187 distinct object instances in total, covering 8 object categories: balls, bottles, cans, bowls, cups, mugs, wine glasses, and plates. To obtain sufficient objects for training our shape prediction model, we use additional CAD models from ShapeNet [4] and Kitchenware [58]. We use the off-the-shelf simulator PyBullet [5] to create virtual scenes in simulation. From ShapeNet, we select objects from 6 categories: bags, bottles, bowls, cans, jars, and mugs, which are similar to the objects of our real-world dataset. Moreover, we use the Procedural dataset [3], containing procedurally generated object shapes, for evaluation.

Simulated grasping datasets.

Assuming our shape representation is domain-invariant, we only generate grasping data in simulation. As shown in Table 2, we use the Kitchenware and Procedural datasets [3] for training. Additionally, we use a subset of the Kitchenware dataset for grasping evaluation in simulation; these are held-out objects which have never been used for learning the domain-invariant 3D shapes.

Figure 4: Overview of the dataset used for learning the domain-invariant shape prediction model. We visualize the object instances used for training the point cloud prediction model in the real world and simulation (e.g., Kitchenware dataset and ShapeNet subset) from top to bottom.
Figure 5: Examples of data collection for shape prediction in the simulated (a) and real environment (b) and for instance grasping in the simulated (c) and real (d) setup.

Scene construction and pre-processing.

As illustrated in Figure 5, in both the real and simulated environments, we put a table in front of the mobile manipulator and randomly set the table height between 30 and 50 centimeters. In simulation, we perform additional randomization over the table location and the textures (e.g., texture pattern of the background, color of the table). We place 4 to 5 objects on the table with arbitrary location and orientation. Given the scene arrangement, we place the mobile manipulator at 5 different angles looking at the table from one side. For each viewpoint, the robot takes one snapshot containing RGB and depth images at a fixed resolution.

For object detection and segmentation we use Mask R-CNN [21]. Given an image of a scene, Mask R-CNN detects and segments objects above a confidence threshold, generating bounding boxes and segmentation masks for each object instance. For our setup we are interested in detecting four object categories: bottles, wine glasses, cups, and bowls. If bounding boxes overlap, we use an IoU threshold of 0.5 to remove duplicate objects as part of the non-maximum suppression step. For each detected object, we generate a cropped image and re-scale it to a fixed size. For multiple snapshots taken of the same scene, we associate objects across snapshots based on the additional depth information. For the simulated environment, we introduce virtual viewpoints to obtain a full 360 degree capture of the scene: the robot looks at the table from different viewpoints, and we then extract object point clouds based on the object detection inference results.

Figure 6: Visualizations of point clouds generated with our point prediction network. From left to right: the input image, the generated 3D point cloud, and two different views with the projected point cloud as an overlay on the detected object. Our method allows us to produce meaningful point clouds from real and simulated test data (Kitchenware, Procedural).

4.2 Experimental Results

Evaluation metrics.

We consider both shape prediction performance and grasping performance for evaluation. For shape prediction, we use bounding boxes from the simulation or from Mask R-CNN as ground truth. We project the predicted 3D point cloud to 2D and evaluate the overlap between the prediction and the ground truth. To measure grasping performance, we run instance grasping trials and compute the grasping success rate. Please note that our method relies on object detection based on Mask R-CNN, which may fail to reliably detect objects. While this can consequently also result in grasping failures, we consider this an orthogonal problem that is outside the scope of this work.

Evaluation on shape prediction.

To decide whether our shape prediction model is able to generate meaningful point clouds from a single RGBD observation, we conducted qualitative and quantitative evaluations on predicted 3D point clouds. In Figure 6 we illustrate that our shape prediction model is able to successfully predict 3D point clouds. The point clouds (second column) are shown from the view of the input image. One advantage of point clouds as representation is that they can be easily transformed from one view to another. The right-most column in Figure 6 shows the projected 2D points still align with the object when looked at from another view. This also addresses the concern that our model does not learn the trivial mapping from input object mask. It is worth noting that our shape prediction model also generalizes to unseen categories from Procedural dataset (see the last two rows of Figure 6).

For a quantitative evaluation, we use more than 10K image patch sequences containing the object instances of the test sets of both Kitchenware and our real data. We compute the average IoU of the 2D projections with the ground-truth masks (from the simulation or Mask R-CNN) and summarize the results in Table 3. We also conducted an ablation study on the number of viewpoints used during training, also reported in Table 3. The model performs poorly when only one view is used for supervision; however, the performance increases when multiple views are provided.
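The IoU metric over the projected prediction and the ground-truth mask can be sketched as:

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0
```

In practice the predicted point cloud would first be projected to 2D (Eq. 1) and rasterized into a binary mask before this comparison.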

Dataset # views 1 2 4 full
Kitchenware 0.188 0.659 0.803 0.803
Real 0.186 0.492 0.625 0.626
Table 3: Shape Prediction IOU on unseen objects.
Dataset 2.5D shape Reconstructed 3D shape
Kitchenware 68% 64%
Real 51% 61%
Table 4: Grasping success rate on unseen objects.
Figure 7: Grasping sequence evaluation: we visualize the real world grasping sequences for the baseline model (left) and our model (right). For both, the first two columns show the depth and RGB image before we run grasping trials. The third column illustrates the pre-grasp state where the robot arm has been moved on top of the target instance. The last column shows the final state after the grasping has been executed.

Evaluation on instance grasping.

Finally, we evaluated whether the learned critic model is able to guide the CEM policy for grasping on both simulated and real data, using only unseen objects. Specifically, we execute 100 grasping trials in the real world and in simulation. We report the average grasping success rate in Table 4. During execution, we follow the same protocol used for generating the simulation dataset for training (see Figure 5). The robot is expected to pick up one specific object from the table and drop it into a bin. If the grasp is not successful, we manually remove the object from the table; this prevents the model from repeatedly grasping the same object.

As reported in Table 4, the CEM policy guided by our critic model achieves a 64% instance grasping success rate in simulation and 61% in real-world execution, with zero real data used for grasp training. This is a significant improvement over previous work that aims at sim-to-real transfer at the image level (e.g., [3] achieved a 23.53% indiscriminate grasping success rate when trained only with simulated data).

To evaluate the benefit of our reconstructed 3D point clouds as a representation for sim-to-real transfer, we train a baseline critic model that only uses the raw 2.5D sensor depth as input. As reported in Table 4, the CEM policy guided by the baseline critic model achieves a slightly higher grasping success rate in simulation (68% vs. 64%), as it is trained on the clean ground-truth point cloud. However, the policy guided by the baseline model suffers severely from domain shift: its success rate drops from 68% to 51% when applied in the real-world environment. As we used the same architecture and training set for both our model and the baseline, we believe the performance gap is the result of domain shift (e.g., the unmodeled noise in the depth input and the noise introduced by our Mask R-CNN model). Due to our domain-invariant 3D representation, the policy based on our critic model achieved a 10% higher success rate than the policy based on raw sensor depth. These results are shown in Table 4 and Figure 7. To summarize, the improvement illustrates clear advantages of our compact, geometry-aware, domain-invariant representation.

5 Conclusions and Future Work

In this work we presented a novel self-supervised approach that learns to perform table-top instance grasping of objects using no real world grasping data. The proposed framework consists of a shape prediction model that learns a domain-invariant 3D point cloud representation of objects by operating on single RGBD images containing the same object from different viewpoints. Moreover, the 3D point cloud is further utilized to perform instance grasps based on a critic model learned with simulated grasping data only.

Experimental results have demonstrated that (1) our shape prediction model is able to learn domain-invariant 3D shapes in real world settings from single RGBD image observations with self-supervision; (2) the policy guided by our representation generalizes significantly better in the real world compared to previous state-of-the-art based on end-to-end training and a policy based on a 2.5D shape representation.

In a broader context, our shape-aware representations provide a better understanding of objects and thereby have the potential to enable more robust robotic behavior towards grasping and other manipulation tasks. As avenues for future work, it would be interesting to explore the potential of such representations with respect to more diverse object categories or end-effector configurations as well as a larger number of tasks.


  • [1] P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. J. Guibas. Learning representations and generative models for 3d point clouds. In ICML, 2018.
  • [2] J. Bohg and D. Kragic. Learning grasping points with shape context. Robot. Autonom. Syst., 58(4):362–377, 2010.
  • [3] K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. P. Sampedro, K. Konolige, S. Levine, and V. Vanhoucke. Using simulation and domain adaptation to improve efficiency of deep robotic grasping. In ICRA, 2018.
  • [4] A. X. Chang, T. A. Funkhouser, L. J. Guibas, P. Hanrahan, Q.-X. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. Shapenet: An information-rich 3d model repository. CoRR, 2015.
  • [5] E. Coumans and Y. Bai. PyBullet, a Python module for physics simulation for games, robotics and machine learning, 2016–2019.
  • [6] G. Csurka. Domain adaptation for visual applications: A comprehensive survey. CoRR, abs/1702.05374, 2017.
  • [7] A. Dai, D. Ritchie, M. Bokeloh, S. Reed, J. Sturm, and M. Nießner. Scancomplete: Large-scale scene completion and semantic segmentation for 3d scans. In CVPR, 2018.
  • [8] H. Dang and P. K. Allen. Semantic grasping: planning task-specific stable robotic grasps. Autonomous Robots, 37(3):301–316, 2014.
  • [9] C. M. Devin, E. Jang, S. Levine, and V. Vanhoucke. Grasp2vec: Learning object representations from self-supervised grasping. In CoRL, 2018.
  • [10] M. Dogar, K. Hsiao, M. Ciocarlie, and S. Srinivasa. Physics-based grasp planning through clutter. In Robotics: Science and Systems VIII, July 2012.
  • [11] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, pages 2650–2658, 2015.
  • [12] S. A. Eslami, D. J. Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka, K. Gregor, et al. Neural scene representation and rendering. Science, 2018.
  • [13] H. Fan, H. Su, and L. J. Guibas. A point set generation network for 3d object reconstruction from a single image. In CVPR, pages 2463–2471, 2017.
  • [14] K. Fang, Y. Bai, S. Hinterstoißer, S. Savarese, and M. Kalakrishnan. Multi-task domain adaptation for deep learning of instance grasping from simulation. In ICRA, pages 3516–3523, 2018.
  • [15] M. Gadelha, R. Wang, and S. Maji. Multiresolution tree networks for 3d point cloud processing. In ECCV, 2018.
  • [16] V. Ganapathi-Subramanian, O. Diamanti, S. Pirk, C. Tang, M. Niessner, and L. Guibas. Parsing geometry using structure-aware shape templates. In 3DV, 2018.
  • [17] R. Garg, V. K. BG, G. Carneiro, and I. Reid. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In ECCV, pages 740–756, 2016.
  • [18] C. Goldfeder, M. Ciocarlie, H. Dang, and P. K. Allen. The columbia grasp database. In ICRA, 2009.
  • [19] M. Gualtieri, A. ten Pas, K. Saenko, and R. Platt. High precision grasp pose detection in dense clutter. In IROS. IEEE, 2016.
  • [20] M. Gualtieri, A. ten Pas, K. Saenko, and R. Platt. High precision grasp pose detection in dense clutter. In IROS, pages 598–605, 2016.
  • [21] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, 2017.
  • [22] P. Henderson and V. Ferrari. Learning to generate and reconstruct 3d meshes with only 2d supervision. In BMVC, 2018.
  • [23] S. James, A. J. Davison, and E. Johns. Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task. In CoRL, 2017.
  • [24] S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell, and K. Bousmalis. Sim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks. CoRR, 2018.
  • [25] L. Jiang, S. Shi, X. Qi, and J. Jia. Gal: Geometric adversarial loss for single-view 3d-object reconstruction. In ECCV, 2018.
  • [26] E. Johns, S. Leutenegger, and A. J. Davison. Deep learning a grasp function for grasping under gripper pose uncertainty. In IROS, 2016.
  • [27] H. Kato, Y. Ushiku, and T. Harada. Neural 3d mesh renderer. In CVPR, 2018.
  • [28] D. Katz, A. Venkatraman, M. Kazemi, J. A. Bagnell, and A. Stentz. Perceiving, learning, and exploiting object affordances for autonomous pile manipulation. Autonomous Robots, 37(4):369–382, 2014.
  • [29] M. Kopicki, R. Detry, M. Adjigble, R. Stolkin, A. Leonardis, and J. L. Wyatt. One-shot learning and generation of dexterous grasps for novel objects. Int. J. Robotics Res., 35(8):959–976, 2016.
  • [30] I. Lenz, H. Lee, and A. Saxena. Deep learning for detecting robotic grasps. Int. J. Robotics Res., 34(4-5):705–724, 2015.
  • [31] B. León, S. Ulbrich, R. Diankov, G. Puche, M. Przybylski, A. Morales, T. Asfour, S. Moisio, J. Bohg, J. Kuffner, et al. Opengrasp: A toolkit for robot grasping simulation.
  • [32] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int. J. Robotics Res., 37(4-5):421–436, 2018.
  • [33] M. Li, K. Hang, D. Kragic, and A. Billard. Dexterous grasping under shape uncertainty. Robot. Autonom. Syst., 75:352–364, 2016.
  • [34] Y. Li, A. Dai, L. Guibas, and M. Niessner. Database-assisted object retrieval for real-time 3d reconstruction. Comput. Graph. Forum, 34(2):435–446, 2015.
  • [35] S. Liu, W. Chen, T. Li, and H. Li. Soft rasterizer: Differentiable rendering for unsupervised single-view mesh reconstruction. CoRR, 2019.
  • [36] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. In RSS, 2017.
  • [37] J. Mahler, F. T. Pokorny, B. Hou, M. Roderick, M. Laskey, M. Aubry, K. Kohlhoff, T. Kröger, J. Kuffner, and K. Goldberg. Dex-net 1.0: A cloud-based network of 3d objects for robust grasp planning using a multi-armed bandit model with correlated rewards. In ICRA, 2016.
  • [38] L. Montesano and M. Lopes. Active learning of visual descriptors for grasping using non-parametric smoothed beta distributions. Robot. Autonom. Syst., 60(3):452–462, 2012.
  • [39] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohi, J. Shotton, S. Hodges, and A. Fitzgibbon. Kinectfusion: Real-time dense surface mapping and tracking. In ISMAR, pages 127–136, 2011.
  • [40] D. T. Nguyen, B. Hua, M. Tran, Q. Pham, and S. Yeung. A field model for repairing 3d shapes. In CVPR, pages 5676–5684, 2016.
  • [41] E. Nikandrova and V. Kyrki. Category-based task specific grasping. Robot. Autonom. Syst., 70:25–35, 2015.
  • [42] T. Osa, J. Peters, and G. Neumann. Experiments with hierarchical reinforcement learning of multiple grasping policies. In ISER, pages 160–172. Springer, 2016.
  • [43] V. M. Patel, R. Gopalan, R. Li, and R. Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53–69, 2015.
  • [44] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In ICRA, 2016.
  • [45] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, pages 652–660, 2017.
  • [46] R. Rubinstein and D. Kroese. The cross-entropy method: A unified approach to combinatorial optimization, Monte-Carlo simulation, and machine learning. Springer, 2004.
  • [47] A. Saxena, J. Driemeyer, and A. Y. Ng. Robotic grasping of novel objects using vision. Int. J. Robotics Res., 27(2):157–173, 2008.
  • [48] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. A. Funkhouser. Semantic scene completion from a single depth image. In CVPR, 2017.
  • [49] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. IROS, pages 23–30, 2017.
  • [50] N. Vahrenkamp, L. Westkamp, N. Yamanobe, E. E. Aksoy, and T. Asfour. Part-based grasp planning for familiar objects. In Humanoid Robots (Humanoids), pages 919–925, 2016.
  • [51] J. Varley, C. DeChant, A. Richardson, A. Nair, J. Ruales, and P. Allen. Shape completion enabled robotic grasping. In IROS, 2017.
  • [52] U. Viereck, A. ten Pas, K. Saenko, and R. Platt. Learning a visuomotor controller for real world robotic grasping using easily simulated depth images. CoRR, abs/1706.04652, 2017.
  • [53] C. Wang, D. Xu, Y. Zhu, R. Martín-Martín, C. Lu, L. Fei-Fei, and S. Savarese. Densefusion: 6d object pose estimation by iterative dense fusion. In CVPR, 2019.
  • [54] N. Wang, Y. Zhang, Z. Li, Y. Fu, W. Liu, and Y.-G. Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. In ECCV, 2018.
  • [55] S. Wang, J. Wu, X. Sun, W. Yuan, W. T. Freeman, J. B. Tenenbaum, and E. H. Adelson. 3d shape perception from monocular vision, touch, and shape priors. In IROS. IEEE, 2018.
  • [56] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NeurIPS, 2016.
  • [57] D. Xu, D. Anguelov, and A. Jain. Pointfusion: Deep sensor fusion for 3d bounding box estimation. In CVPR, pages 244–253, 2018.
  • [58] X. Yan, J. Hsu, M. Khansari, Y. Bai, A. Pathak, A. Gupta, J. Davidson, and H. Lee. Learning 6-dof grasping interaction via deep geometry-aware 3d representations. In ICRA, 2018.
  • [59] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In NeurIPS, 2016.
  • [60] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017.