GPR: Grasp Pose Refinement Network for Cluttered Scenes

05/18/2021 · by Wei Wei, et al.

Object grasping in cluttered scenes is a widely investigated field of robot manipulation. Most current works focus on estimating grasp poses from point clouds with an efficient single-shot grasp detection network. However, because such networks lack geometric awareness of the local grasping area, they may produce severe collisions and unstable grasp configurations. In this paper, we propose a two-stage grasp pose refinement network that detects grasps globally while fine-tuning low-quality grasps and filtering noisy grasps locally. Furthermore, we extend the 6-DoF grasp with an extra dimension, the grasp width, which is critical for collisionless grasping in cluttered scenes. The network takes a single-view point cloud as input and predicts dense and precise grasp configurations. To enhance generalization ability, we build a synthetic single-object grasp dataset including 150 commodities of various shapes, and a multi-object cluttered scene dataset including 100k point clouds with robust, dense grasp pose and mask annotations. Experiments conducted on a Yumi IRB-1400 robot demonstrate that the model trained on our dataset performs well in real environments and outperforms previous methods by a large margin.


I Introduction

Robotic grasping is a fundamental problem in the robotics community with many applications in industry and household service. It has shown promising results in industrial applications, especially for grasping in structured environments such as automated bin picking [20]. However, it remains an open problem due to the variety of objects in complex scenarios: objects have different 3D shapes, and their shapes and appearances are affected by lighting conditions, clutter, and mutual occlusions.

Fig. 1:

Comparison with state-of-the-art methods. Instead of exhaustively searching and evaluating possible grasp candidates in the point cloud, our method generates grasp candidates efficiently in stage 1, as in single-shot grasp detection pipelines. Moreover, in stage 2 our model refines low-quality grasp candidates and filters noisy ones based on a discriminative feature representation of the local grasping area.

Traditionally, the problem of object grasping in cluttered scenes is tackled by estimating the 6D object pose [45, 42, 40] and selecting a grasp from a grasp database. As a result, these approaches are not applicable to unseen objects. In order to generalize to unseen objects, many recent works [15, 6, 46, 22, 25] formulate grasp pose detection as rectangle detection in 2D space with CNNs, and their models perform well on novel objects. However, planar grasping with 3/4 DoF (degrees of freedom) is inevitably inflexible, since the gripper is forced to approach objects vertically. Besides the DoF constraint, these works take 2D images as input and therefore ignore the gripper's contact with the object in 3D space. Some recent works suggest that 3D geometric structure is highly relevant to grasp quality [37, 18]. PointNetGPD [18] evaluates grasp quality in 3D space by exhaustively searching in point clouds. S4G [33] and PointNet++Grasping [28] propose efficient single-shot grasp pose detection network architectures, but their results may be noisy and suffer from collisions with surrounding objects. The main reasons can be attributed to: 1) a lack of shape awareness of the local contextual geometry of the gripper closing area; 2) grasping with the maximum opening width, which is more likely to cause collisions with surrounding objects in dense clutter.

Considering the above problems, we propose to detect grasp poses globally and refine them locally. Single-shot feature representation avoids exhaustive searching in the point cloud, but it cannot learn a discriminative local feature representation without further inspection of the local grasping area. To address this limitation, we focus on the local grasping area and design a two-stage grasp pose refinement network (GPR) for estimating stable and collisionless grasps from point clouds. As illustrated in Fig. 1, our model predicts coarse and noisy grasp proposals in the first stage. Then, points inside the proposals are cropped out and transformed into local gripper coordinates in the second stage. Finally, these points are used to encode a discriminative local feature representation for grasp proposal refinement and classification. Remarkably, our model takes a single-view point cloud as input and extends the 6-DoF grasp with an extra dimension, the grasp width, which adjusts the gripper opening width and avoids unnecessary collisions. Furthermore, our two-stage network is trained in an end-to-end fashion.

For most data-driven methods, it is common to boost generalization performance with a large-scale dataset. However, manually annotating 6D grasps is time-consuming [10]. Most current works generate grasp annotations based on traditional analytic methods [27, 11] or physics simulators [24, 8, 39]. In [18, 22], researchers built datasets for individual objects while ignoring multi-object cluttered scenes. [33, 28] propose to generate grasps in cluttered scenes; however, almost all of their object models come from the YCB object dataset [5], which may lead to insufficient shape coverage. We collect 150 objects with various shapes and build large-scale synthetic datasets for both individual objects and objects in dense clutter. Experimental results show that the model trained on our dataset performs well on a real robot platform and achieves promising results.

In summary, our primary contributions are:

  • An end-to-end grasp pose refinement network for high-quality grasp pose prediction in cluttered scenes, which detects globally while refining locally.

  • An extension of the 6-DoF grasp with grasp width, yielding a 7-DoF grasp that improves dexterity and reduces collisions in dense clutter.

  • A densely annotated synthetic single-object grasp dataset including 150 object models, and a large-scale cluttered multi-object dataset of 100k point clouds with detailed annotations. We will release the dataset.

II Related Work

Deep Learning Based Grasp Configuration Detection. [4] gives a thorough survey of robotic grasping based on deep learning. Given the object model and grasp annotations, [7, 45] tackle this problem as template matching and 6-DoF pose retrieval. However, template matching methods show low generalization ability for unknown objects. [37] designs several projection features as the input of a CNN-based grasp quality evaluation model. [18] instead takes the raw, irregular point cloud as input and trains PointNet [31] for grasp classification. These methods rely on detailed local geometry for constructing grasps that are both collision-free and force-closure. [15, 6, 46, 34, 14] tackle this problem as grasp rectangle detection in 2D images, from single-object to multi-object scenarios, but these methods only perform 3/4-DoF grasps. [33] proposes a single-shot grasp proposal framework to regress 6-DoF grasp configurations directly from the point cloud. [28] follows a similar setting, but generates grasps under the assumption that the approaching direction of a grasp is along the surface normal of the object. It is worth noting that [22] collects numerous object models for GQ-CNN training and obtains state-of-the-art performance. Among the above methods, GPD can also estimate grasp width with a geometry prior; however, it relies on multi-view point cloud input. In this paper, we revisit grasp width as a critical element of the grasp configuration, and our model directly predicts grasp width with high accuracy.

Grasping Dataset Synthesis. [12, 6, 46] manually annotate rectangle representations for grasp detection in images. [29, 16] collect annotations with a real robot. Since supervised deep learning requires an enormous amount of annotated data, manual grasp configuration annotation is impractical because it is too time-consuming. Given an object model, a gripper model, and environment constraints, grasp configurations can generally be synthesized in two ways. One is based on analytic methods [1], which derive from force closure [27] and the Ferrari-Canny metric [11]; [2] gives a detailed survey of these methods, and [33, 28, 18, 23, 22, 10] generate datasets in this way. The other is based on physics simulators such as [8, 39], which handle force contacts better than analytic methods; [26, 9, 3, 43, 44] generate their datasets in simulated environments.

Deep Learning on Point Cloud Data. PointNet [31] and PointNet++ [32] are two pioneering frameworks for extracting feature representations directly from point cloud data. Many methods [41, 38, 17, 21, 36, 30] extend these frameworks to point cloud classification, detection, and segmentation. In this paper, we utilize PointNet++ as the backbone.

III Problem Statement

In this work, we focus on planning robust two-fingered parallel-jaw grasps from point clouds. Our two-stage refinement network takes the whole cluttered scene as input and outputs dense grasp poses of high quality and robustness. Some key definitions are introduced here:

Object States: The state of an object in a grasp scene comprises its surface model, its mass and centroid properties, its 6D pose, and its friction coefficient.

Point Clouds: The input is the point cloud of the scene captured by the depth camera.

Grasps: A cluttered scene is annotated with a set of grasp configurations. Each grasp configuration is defined by its origin, which lies at the midpoint of the line segment connecting the two fingertips, the approach direction n and closing direction r of the gripper, the grasp width, and the two contact points.

Grasp Metric: We adopt the widely used Ferrari-Canny metric [11] for labelling grasp quality.
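For concreteness, a minimal sketch of this 7-DoF grasp representation is shown below. The class and field names (GraspConfig, center, approach, closing, width, contacts, quality) are illustrative choices, not identifiers from the paper or any released code; the frame convention in rotation() follows the canonical gripper frame described later in Sec. V-B.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class GraspConfig:
        """Illustrative container for the 7-DoF parallel-jaw grasp defined in Sec. III."""
        center: np.ndarray    # (3,) origin at the midpoint between the two fingertips
        approach: np.ndarray  # (3,) unit approach direction n
        closing: np.ndarray   # (3,) unit closing direction r, orthogonal to the approach
        width: float          # gripper opening width in meters
        contacts: np.ndarray  # (2, 3) the two contact points on the object surface
        quality: float = 0.0  # Ferrari-Canny quality label

        def rotation(self) -> np.ndarray:
            """Gripper frame used in Sec. V-B: X = approach, Y = closing, Z = X x Y."""
            z = np.cross(self.approach, self.closing)
            return np.stack([self.approach, self.closing, z], axis=1)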

IV Dataset Generation

In this section, we introduce our dataset generation method for grasp pose annotation, covering both individual objects and objects in dense clutter. The overall pipeline is illustrated in Fig. 2. We take the following procedure to obtain dense grasp annotations: first, we label single-object grasp annotations; then we transfer these annotations into cluttered scenes according to the 6D object poses; finally, we apply collision filtering to all grasp configurations.

IV-A Single-Object Grasp Dataset Generation

For single-object grasp dataset generation, we collect 150 objects of various shapes and categories. Half of these objects come from the BOP-Challenge dataset and the YCB-Video dataset [5]; the others are collected from the internet.

Given a specific object model, the goal is to generate dense grasp annotations, including the grasp configurations and the corresponding grasp metric mentioned above.

Fig. 2: Overview of our dataset generation procedure. (a) Example single-object models. (b) Example single-object grasps with quality labels. For each object, 15 grasps are sampled for visualization; colors from red to green represent quality from low to high. (c) Illustration of a cluttered scene. (d) Example grasps in a cluttered scene.

First, candidate contact points are sampled on the object surface model and their outward normals are computed. Based on the force-closure principle, antipodal grasp directions are then sampled inside the friction cone of each contact point. An antipodal grasp candidate is classified as positive if it satisfies the following rules: 1) at least one antipodal contact point is found on the backward surface of the object; 2) the force-closure property holds. Otherwise, the antipodal grasp candidate is classified as negative.

Second, for each positive antipodal grasp candidate of a contact point, a collision check is applied between the gripper and the object. Grasp candidates that fail the collision check are classified as negative grasps. If no positive antipodal grasp candidate remains, the corresponding sampled point is classified as a negative point, i.e., an unsuitable contact point.

Third, the grasp quality of each remaining positive grasp candidate is computed with the Ferrari-Canny metric.
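To make the first two checks concrete, the sketch below tests whether a two-finger grasp is antipodal (force-closure) by verifying that the grasp axis lies inside the friction cone at both contacts. The cone test and the default friction coefficient of 0.3 are assumptions consistent with the parameters quoted later in this section, not the paper's exact implementation.

    import numpy as np

    def is_antipodal(p1, n1, p2, n2, mu=0.3):
        """Simplified two-finger force-closure test.
        p1, p2: contact points; n1, n2: outward unit surface normals; mu: friction coefficient.
        The closing force at each contact (along the grasp axis) must lie inside the
        friction cone around the inward normal."""
        axis = p2 - p1
        axis = axis / (np.linalg.norm(axis) + 1e-12)
        half_angle = np.arctan(mu)
        in_cone_1 = np.arccos(np.clip(np.dot(axis, -n1), -1.0, 1.0)) <= half_angle
        in_cone_2 = np.arccos(np.clip(np.dot(-axis, -n2), -1.0, 1.0)) <= half_angle
        return bool(in_cone_1 and in_cone_2)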

Finally, we apply the Non-Maximum Suppression (NMS) algorithm to prune redundant grasps, where the distance between two sampled grasps is computed according to Eq. (1). In our experiments, the corresponding parameters are set to 16384 and 0.3, and the weights in Eq. (1) are set to 1, 0.03, and 0.03. For all objects, the resulting annotations form our single-object grasp dataset. Examples are shown in Fig. 2(b).
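A sketch of this grasp NMS is shown below. The exact form of Eq. (1) is not reproduced here; the distance is assumed to combine the center distance with the angular differences of the approach and closing directions, using the weights 1, 0.03, and 0.03 quoted above, and the grasp objects reuse the illustrative GraspConfig fields from the Sec. III sketch.

    import numpy as np

    def grasp_distance(g1, g2, w_c=1.0, w_n=0.03, w_r=0.03):
        """Assumed weighted grasp distance: one translation term plus two rotation terms."""
        d_center = np.linalg.norm(g1.center - g2.center)
        d_approach = np.arccos(np.clip(np.dot(g1.approach, g2.approach), -1.0, 1.0))
        d_closing = np.arccos(np.clip(np.dot(g1.closing, g2.closing), -1.0, 1.0))
        return w_c * d_center + w_n * d_approach + w_r * d_closing

    def grasp_nms(grasps, threshold):
        """Greedy NMS: keep the highest-quality grasp, drop neighbours within `threshold`."""
        kept = []
        for g in sorted(grasps, key=lambda g: g.quality, reverse=True):
            if all(grasp_distance(g, k) > threshold for k in kept):
                kept.append(g)
        return kept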

IV-B Multi-Object Grasp Dataset Generation

To simulate densely cluttered scenes for the multi-object grasp dataset, we adopt the following procedure using the PyBullet simulator [8]:

First, objects are randomly sampled; these objects are then initialized with random poses and dropped into a static bin one after another in the simulator, as shown in Fig. 2(c).

Then, the 6D pose of each object is recorded after all sampled objects have fallen into the bin and reached a stable state. Each unsuitable grasp point of each object is added to the negative point set. We then apply a collision check to each grasp of each object obtained from single-object grasp generation. If no collision occurs, the contact points of the grasp are added to the positive grasp contact point set and the corresponding grasp annotation is added to the positive grasp set; otherwise, the contact points are added to the negative grasp contact point set.
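A minimal PyBullet [8] sketch of this scene synthesis is shown below. The bin.urdf asset, the pose ranges, and the settling step counts are placeholders rather than values from the paper.

    import random
    import pybullet as p

    def simulate_clutter(object_urdfs, n_objects=10, settle_steps=2000):
        """Drop randomly sampled objects into a static bin and record their stable 6D poses."""
        p.connect(p.DIRECT)                      # headless physics simulation
        p.setGravity(0, 0, -9.81)
        p.loadURDF("bin.urdf", basePosition=[0, 0, 0], useFixedBase=True)  # placeholder asset

        body_ids = []
        for urdf in random.sample(object_urdfs, n_objects):
            pos = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1), 0.5]
            orn = p.getQuaternionFromEuler([random.uniform(0, 3.14) for _ in range(3)])
            body_ids.append(p.loadURDF(urdf, basePosition=pos, baseOrientation=orn))
            for _ in range(240):                 # let each object settle before the next drop
                p.stepSimulation()

        for _ in range(settle_steps):            # final settling of the whole pile
            p.stepSimulation()

        poses = [p.getBasePositionAndOrientation(b) for b in body_ids]
        p.disconnect()
        return poses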

Fig. 3: An example of the point mask labels in our multi-object grasp dataset. Points in light blue denote positive grasp contact points. Points in dark blue denote contact points that are negative due to collision. Points in orange denote unsuitable contact points on foreground objects.

The point cloud within the bin is cropped to generate per-point labels and masks, defined in Eq. (2), where an indicator function is used to generate the point mask. For each annotated contact point, a KD-tree search is applied to find the nearby scene points within a query radius, and each of these neighbors is assigned the same label and mask. Each point keeps only the label and mask with the highest score, and the same process is applied to the negative contact points. An example is shown in Fig. 3.
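A sketch of this label broadcasting with a KD-tree radius query (here via SciPy's cKDTree) is shown below; the query radius and the last-write-wins tie-breaking are simplifications, since the paper keeps the highest-scoring label per point.

    import numpy as np
    from scipy.spatial import cKDTree

    def broadcast_point_labels(scene_points, contact_points, contact_labels, radius=0.005):
        """Propagate each annotated contact point's label to nearby scene points.
        scene_points: (N, 3); contact_points: (M, 3); contact_labels: (M,) ints.
        Unlabelled points keep -1."""
        labels = np.full(len(scene_points), -1, dtype=np.int64)
        tree = cKDTree(scene_points)
        for point, label in zip(contact_points, contact_labels):
            neighbors = tree.query_ball_point(point, r=radius)
            labels[neighbors] = label  # paper keeps the highest-scoring label; last write wins here
        return labels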

Fig. 4: Overview of our GPR network for grasp pose detection and refinement in point clouds. Stage 1 generates 7-DoF grasp proposals; stage 2 refines the proposals with further geometric awareness of the local grasping area.

V Grasp Pose Refinement Network

In this section, we present our proposed two-stage grasp pose refinement network (GPR) for grasp pose detection in cluttered scenes. The overall structure is illustrated in Fig.4.

V-A Grasp Proposal Generation

Existing 6-DoF grasp pose detection methods can be classified into one-stage and two-stage methods. One-stage methods [34, 14, 23, 22, 33, 28] are generally faster but directly predict grasp poses without local geometry awareness. Two-stage methods [15, 6, 46] mostly depend on the anchor mechanism [35] developed for 2D object detection, which generates proposals first and then refines the proposals and their confidences in a second stage. However, directly applying the anchor mechanism to grasp pose prediction in 3D space is non-trivial due to the huge search space and the irregular format of the point cloud.

Therefore, inspired by [33, 28], we directly estimate grasp poses in a bottom-up manner to avoid exhaustive searching over 3D positions and rotations. We predict a mask and a coarse 7-DoF grasp proposal for each point in the scene, as shown in the stage-1 sub-network of Fig. 4.

Feature representation and segmentation. We design the backbone based on PointNet++ [32], a robust learning model for sparse point clouds with non-uniform point density, and use the PointNet++ network with the multi-scale grouping strategy as the backbone.

Given the point-wise features encoded by the backbone, we append two heads: a segmentation head for predicting the grasp contact point mask, and a grasp pose regression head for generating 7-DoF grasp proposals. We use focal loss [19] to handle the severe class imbalance in grasp contact segmentation, as shown in Fig. 4.
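The segmentation head is trained with the standard focal loss [19]; a PyTorch sketch with the commonly used defaults (alpha = 0.25, gamma = 2, which the paper does not report) is shown below.

    import torch
    import torch.nn.functional as F

    def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        """Binary focal loss for grasp-contact segmentation.
        logits, targets: tensors of shape (N,), targets in {0, 1} (float)."""
        prob = torch.sigmoid(logits)
        ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p_t = prob * targets + (1.0 - prob) * (1.0 - targets)
        alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
        return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()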

Bin-based grasp pose regression. Directly regressing a 7-DoF grasp configuration is difficult, as shown in previous literature [13, 36, 30]. We therefore develop a bin-based regression method similar to [36]. Specifically, a 7-DoF grasp is represented by the grasp center, the approach direction n and closing direction r of the gripper, and the gripper opening width. Regressing the gripper directions is converted into angle prediction, as shown in Fig. 5: the approach vector n is denoted jointly by its azimuth and elevation angles, while the finger closing direction r is projected onto the X-Y plane and denoted by its azimuth angle.

For each of these angles, we divide the target angle of a grasp contact point into bins of uniform size and compute a bin classification target and a residual regression target within the classified bin, as formalized in Eq. (3): the target angle is offset by the starting angle of its range, assigned to its ground-truth bin, and the residual within the assigned bin, normalized by the unit bin angle, is used for further regression. The loss for each angle therefore consists of one term for bin classification and one term for residual regression within the classified bin.

For grasp center and grasp width prediction, we adopt the same bin-based formulation, given in Eq. (4): the grasp center of the grasp configuration associated with an interest contact point is expressed relative to that point's coordinates and, together with the grasp width, is assigned to a ground-truth bin within a predefined search range and refined by a residual normalized by the bin length.
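As an illustration of this bin-plus-residual encoding, the sketch below converts a scalar target (an angle, a center coordinate, or the grasp width) into a bin index and a normalized residual. Measuring the residual from the bin center follows the PointRCNN-style scheme [36] that the paper builds on, but the exact convention and ranges here are assumptions.

    import numpy as np

    def encode_bin_residual(value, start, bin_size, num_bins):
        """Classify `value` into a bin over [start, start + num_bins * bin_size) and
        compute the residual within that bin, normalized by the bin size."""
        offset = np.clip(value - start, 0.0, num_bins * bin_size - 1e-6)
        bin_idx = int(offset // bin_size)
        residual = (offset - (bin_idx + 0.5) * bin_size) / bin_size
        return bin_idx, residual

    def decode_bin_residual(bin_idx, residual, start, bin_size):
        """Inverse mapping used at inference time."""
        return start + (bin_idx + 0.5) * bin_size + residual * bin_size

For instance, an azimuth angle in [-π, π) split into 12 uniform bins would use start = -π and bin_size = π/6; the actual bin counts are not specified in this text.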

Fig. 5: An illustration of bin-based angle regression. (a) The grasp approach vector n is denoted by its azimuth and elevation angles; the closing vector r is projected onto the X-Y plane and denoted by its azimuth angle. (b) The ranges of the azimuth and elevation angles are split into a series of bins, where res denotes the normalized residual value within a bin.

The overall loss of the grasp proposal generation sub-network, given in Eq. (5), consists of two terms: a grasp pose prediction loss and a grasp contact point segmentation loss. The segmentation loss is the focal loss over the predicted probability of each point being a positive grasp contact point. The pose prediction loss is averaged over the positive grasp contact points and combines a classification loss over the predicted bin assignments and a regression loss over the predicted residuals, both computed against their ground-truth values.
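As a rough illustration of how these terms could be combined, the PyTorch sketch below sums a focal segmentation loss (reusing the binary_focal_loss sketch above) with a cross-entropy bin-classification loss and a smooth-L1 residual loss averaged over positive contact points. The equal term weights, the smooth-L1 choice, and the single set of bin logits (in practice there is one per binned quantity) are assumptions.

    import torch
    import torch.nn.functional as F

    def stage1_loss(seg_logits, seg_targets, bin_logits, bin_targets,
                    res_preds, res_targets, pos_mask):
        """seg_logits/seg_targets: (N,); bin_logits: (N, num_bins); bin_targets: (N,) long;
        res_preds/res_targets: (N, D); pos_mask: (N,) float mask of positive contacts."""
        l_seg = binary_focal_loss(seg_logits, seg_targets)          # segmentation term
        n_pos = pos_mask.sum().clamp(min=1.0)                       # number of positive contacts
        l_bin = (F.cross_entropy(bin_logits, bin_targets, reduction="none") * pos_mask).sum() / n_pos
        l_res = (F.smooth_l1_loss(res_preds, res_targets, reduction="none").sum(dim=-1) * pos_mask).sum() / n_pos
        return l_seg + l_bin + l_res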

V-B Grasp Proposal Refinement

Non-maximum suppression and grasp proposal sampling. Since the stage-1 sub-network generates one proposal per point, there is a large number of proposals around each ground-truth grasp. Non-maximum suppression (NMS) is applied to keep the local maxima.

Region grouping and grasp canonical transformation. Given the grasp proposals generated by stage 1, the point cloud within the gripper closing area is cropped out for further feature representation learning. A unified local coordinate system is used to eliminate the ambiguity caused by absolute coordinates for objects with various poses and locations. Specifically, we apply a canonical transformation to the points within the gripper closing area, as shown in Fig. 4: the approaching, closing, and orthogonal directions of the gripper are set as the X, Y, and Z axes respectively, and the origin is located at the gripper bottom center. In our experiments, the gripper closing area is enlarged by a scalar factor to capture more contextual information, which helps proposal refinement.
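A minimal sketch of this cropping and canonical transformation is shown below. The box extents, the enlargement factor, and placing the local origin at the grasp center (the paper uses the gripper bottom center) are simplifying assumptions, and the grasp argument reuses the illustrative GraspConfig fields from the Sec. III sketch.

    import numpy as np

    def to_gripper_frame(points, grasp, enlarge=1.2):
        """Express scene points in the canonical gripper frame (X = approach, Y = closing,
        Z = X x Y) and keep only those inside an enlarged box around the closing region."""
        x = grasp.approach / np.linalg.norm(grasp.approach)
        y = grasp.closing / np.linalg.norm(grasp.closing)
        z = np.cross(x, y)
        R = np.stack([x, y, z], axis=1)            # columns are the gripper axes
        local = (points - grasp.center) @ R        # world -> gripper coordinates
        half = enlarge * np.array([0.06, grasp.width / 2.0, 0.01])  # hypothetical extents (m)
        mask = np.all(np.abs(local) <= half, axis=1)
        return local[mask]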

Feature learning for grasp proposal refinement. After the canonical transformation, fine-grained local features within each proposal are learned as follows. First, for each point within the enlarged 3D grasp proposal, we obtain its canonical coordinates and the corresponding global semantic feature learned in stage 1. Then, for each grasp proposal, the canonical coordinates and the semantic feature of every inside point are combined, and the concatenated per-point features are fed into a point cloud encoder to fuse global and local information. This yields a discriminative feature representation for refining each grasp proposal along with its grasp width and confidence.

The overall loss for training the grasp proposal refinement sub-network is similar to that of the grasp proposal generation sub-network.

VI Experiments

We evaluate our GPR network both in simulation and on the Yumi IRB-1400 robot platform. In simulation, ablation studies show that our model predicts grasp configurations with high precision. On the real robot platform, experimental results show that our model generalizes well.

VI-A Implementation Details

For each point cloud grasp scene, 16384 points are sampled as input. The learning rate is initially set to 0.02 and is divided by 10 when the error plateaus. During training, 256 proposals are sampled after proposal NMS for stage 2, while 100 proposals are used at inference. Of the 150 object models, 120 are selected for training, and of the 100k point clouds, 80k are used as training data.

VI-B Simulation Experiments

VI-B1 Extend 6-DoF Grasp with Grasp Width

We first evaluate our proposed method in terms of grasp width. To demonstrate the precision of grasp width prediction, we report a quantitative analysis over 20k scenes with around 2M synthetic grasps. We measure grasp width error as the absolute difference between the predicted and ground-truth grasp widths, and evaluate it under four thresholds: a prediction is counted as positive if its absolute width error is smaller than the threshold, and negative otherwise. We select 100 proposals after NMS and filter out the negative samples. The results in Tab. I show that our model estimates grasp width with high precision, reaching 82.2% accuracy under a 5 mm threshold. Fig. 6 shows the overall grasp width distribution in our dataset, uniformly divided into 8 groups with an interval of 5 mm; only about a quarter of the grasps require a width close to the maximum opening. Grasping with the maximum opening width can be problematic in cluttered scenes, because it may cause collisions with surrounding objects. Fig. 6 also shows an example in which an adaptive grasp width is critical for dexterous grasping in clutter.
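The width-accuracy metric described above reduces to a thresholded absolute error; a small sketch, assuming widths are given in millimeters:

    import numpy as np

    def width_accuracy(pred_widths, gt_widths, thresholds=(2.5, 5.0, 7.5, 10.0)):
        """Fraction of grasps whose absolute width error falls below each threshold (mm)."""
        err = np.abs(np.asarray(pred_widths) - np.asarray(gt_widths))
        return {t: float((err < t).mean()) for t in thresholds}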

Fig. 6: (a) Ground-truth and predicted grasp width distributions. (b) An example of adaptive grasp width in a cluttered scene.
Grasp Width Threshold (mm)    stage-1 Accuracy (%)    stage-2 Accuracy (%)
2.5                           42.0                    52.5
5.0                           76.0                    82.2
7.5                           87.5                    90.2
10.0                          92.9                    93.1
TABLE I: Comparison of Grasp Width Accuracy

VI-B2 One-Stage vs. Two-Stage

To illustrate the effectiveness of our proposed grasp pose refinement network, we evaluate the quality of the generated grasp proposals at both stages.

As shown in Tab. I, grasp width accuracy after refinement improves over stage 1 by 25% and 8% under the 2.5 mm and 5 mm thresholds, respectively, and the improvement saturates at higher tolerances. For grasp pose accuracy, we adopt the grasp pose distance of Eq. (1): a predicted grasp is classified as positive when its distance to a ground-truth grasp is smaller than the predefined threshold, and negative otherwise. The results in Tab. II show that proposals after refinement outperform stage 1 by a large margin.

Grasp Pose Threshold    stage-1 Accuracy (%)    stage-2 Accuracy (%)
0.005                   25.3                    52.5
0.01                    29.1                    61.2
0.015                   31.8                    63.9
0.02                    33.5                    65.2
TABLE II: Comparison of Grasp Pose Accuracy

VI-C Robotic Experiments

We validate the reliability and efficiency of our proposed GPR network on an ABB Yumi IRB-1400 robot with a PhoXi industrial sensor. Objects are presented to the robot in dense clutter, as shown in Fig. 7. We keep a setting similar to the simulation environment: 1) the camera is placed about 1.3 m above the bin; 2) the point cloud within the bin is cropped out as input. 20 similar and 20 novel objects are selected to test the generalization ability of our network, as shown in Fig. 7.

Fig. 7: Real-world setup of our robotic grasping experiments. (a) Cluttered-scene grasping setup with the ABB Yumi robotic arm. (b) Objects used in our robotic experiments: the left set shows novel objects that are absent from the training dataset, and the right set shows similar objects.

We compare GPR to two state-of-the-art, open-sourced 6-DoF grasp baselines, GPD [37] and PointNetGPD [18]. We train GPD and PointNetGPD on our dataset with their default settings, using their released code.

The experiment procedure is as follows: 1) 10 of the 20 objects are randomly sampled and poured into the bin; 2) the robot attempts grasps until all objects are grasped or 15 grasps have been attempted; 3) each algorithm is tested 10 times. The results are shown in Tab. III, with Success Rate (SR) and Completion Rate (CR) as evaluation metrics.
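For reference, the two metrics can be computed as below under their usual definitions (successful grasps over attempted grasps, and removed objects over objects initially in the bin); the paper does not spell these formulas out, so they are stated as assumptions.

    def clutter_removal_metrics(successful_grasps, attempted_grasps,
                                objects_removed, objects_total):
        """Success Rate (SR) and Completion Rate (CR) for one clutter-removal run."""
        sr = successful_grasps / attempted_grasps
        cr = objects_removed / objects_total
        return sr, cr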

Method                          Similar objects (SR / CR)    Novel objects (SR / CR)
GPD (3 channels) [37]           60% / 84%                    50% / 66%
GPD (15 channels) [37]          52.7% / 78%                  36% / 54%
PointNetGPD (3 classes) [18]    64.6% / 84%                  54.8% / 80%
Ours                            78.3% / 94%                  69.2% / 90%
TABLE III: Results of Clutter Removal Experiments

As shown in Tab. III, our method outperforms the baselines in terms of both Success Rate and Completion Rate, which demonstrates its superiority. In our observation, our algorithm also performs better at avoiding collisions with surrounding objects and at producing stable grasp configurations.

VII Conclusions

In this paper, we proposed an end-to-end grasp pose refinement network that fine-tunes low-quality grasps and filters noisy ones, detecting globally and refining locally. In addition, we built a single-object grasp dataset consisting of 150 objects with various shapes and a large-scale dataset for cluttered scenes. Experiments show that our model trained on the synthetic dataset performs well in real-world scenarios and achieves state-of-the-art performance.

References

  • [1] A. Bicchi and V. Kumar (2000) Robotic grasping and contact: a review. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §II.
  • [2] J. Bohg, A. Morales, T. Asfour, and D. Kragic (2014) Data-driven grasp synthesis - A survey. IEEE Transactions on Robotics (TRO). Cited by: §II.
  • [3] S. Brahmbhatt, A. Handa, J. Hays, and D. Fox (2019) ContactGrasp: functional multi-finger grasp synthesis from contact. In IEEE International Conference on Intelligent Robots and Systems (IROS), Cited by: §II.
  • [4] S. Caldera, A. Rassau, and D. Chai (2018) Review of deep learning methods in robotic grasp detection. Multimodal Technologies and Interaction. Cited by: §II.
  • [5] B. Calli, A. Singh, A. Walsman, S. Srinivasa, P. Abbeel, and A. M. Dollar (2015) The ycb object and model set: towards common benchmarks for manipulation research. In IEEE International conference on advanced robotics (ICAR), Cited by: §I, §IV-A.
  • [6] F. Chu, R. Xu, and P. A. Vela (2018) Real-world multiobject, multigrasp detection. IEEE Robotics and Automation Letters (RAL). Cited by: §I, §II, §II, §V-A.
  • [7] A. Collet, M. Martinez, and S. S. Srinivasa (2011) The moped framework: object recognition and pose estimation for manipulation. The international journal of robotics research (IJRR). Cited by: §II.
  • [8] E. Coumans and Y. Bai (2016–2020) PyBullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org. Cited by: §I, §II, §IV-B.
  • [9] A. Depierre, E. Dellandréa, and L. Chen (2018) Jacquard: a large scale dataset for robotic grasp detection. In IEEE International Conference on Intelligent Robots and Systems (IROS), Cited by: §II.
  • [10] H. Fang, C. Wang, M. Gou, and C. Lu (2020) GraspNet-1Billion: a large-scale benchmark for general object grasping. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §I, §II.
  • [11] C. Ferrari and J. F. Canny (1992) Planning optimal grasps. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §I, §II, §III.
  • [12] Y. Jiang, S. Moseson, and A. Saxena (2011) Efficient grasping from rgbd images: learning using a new rectangle representation. In IEEE International conference on robotics and automation (ICRA), Cited by: §II.
  • [13] W. Kehl, F. Manhardt, F. Tombari, S. Ilic, and N. Navab (2017) SSD-6D: making rgb-based 3d detection and 6d pose estimation great again. In IEEE International Conference on Computer Vision (ICCV), Cited by: §V-A.
  • [14] S. Kumra and C. Kanan (2017) Robotic grasp detection using deep convolutional neural networks. In IEEE International Conference on Intelligent Robots and Systems (IROS), Cited by: §II, §V-A.
  • [15] I. Lenz, H. Lee, and A. Saxena (2015) Deep learning for detecting robotic grasps. The International Journal of Robotics Research (IJRR). Cited by: §I, §II, §V-A.
  • [16] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen (2018) Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research (IJRR). Cited by: §II.
  • [17] Y. Li, R. Bu, M. Sun, W. Wu, X. Di, and B. Chen (2018) Pointcnn: convolution on x-transformed points. In Advances in neural information processing systems (NIPS), Cited by: §II.
  • [18] H. Liang, X. Ma, S. Li, M. Görner, S. Tang, B. Fang, F. Sun, and J. Zhang (2019) Pointnetgpd: detecting grasp configurations from point sets. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §I, §I, §II, §II, §VI-C, TABLE III.
  • [19] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In IEEE international conference on computer vision (ICCV), Cited by: §V-A.
  • [20] M. Liu, O. Tuzel, A. Veeraraghavan, Y. Taguchi, T. K. Marks, and R. Chellappa (2012) Fast object localization and pose estimation in heavy clutter for robotic bin picking. The International Journal of Robotics Research (IJRR). Cited by: §I.
  • [21] Y. Liu, B. Fan, G. Meng, J. Lu, S. Xiang, and C. Pan (2019) DensePoint: learning densely contextual representation for efficient point cloud processing. In IEEE International Conference on Computer Vision (ICCV), Cited by: §II.
  • [22] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg (2017) Dex-net 2.0: deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. In Robotics: Science and Systems (RSS), Cited by: §I, §I, §II, §II, §V-A.
  • [23] J. Mahler, F. T. Pokorny, B. Hou, M. Roderick, M. Laskey, M. Aubry, K. Kohlhoff, T. Kröger, J. Kuffner, and K. Goldberg (2016) Dex-net 1.0: a cloud-based network of 3d objects for robust grasp planning using a multi-armed bandit model with correlated rewards. In IEEE international conference on robotics and automation (ICRA), Cited by: §II, §V-A.
  • [24] A. T. Miller and P. K. Allen (2004) Graspit! A versatile simulator for robotic grasping. IEEE Robotics & Automation Magazine. Cited by: §I.
  • [25] D. Morrison, J. Leitner, and P. Corke (2018) Closing the loop for robotic grasping: A real-time, generative grasp synthesis approach. In Robotics: Science and Systems (RSS), Cited by: §I.
  • [26] A. Mousavian, C. Eppner, and D. Fox (2019) 6-dof graspnet: variational grasp generation for object manipulation. In IEEE International Conference on Computer Vision (ICCV), Cited by: §II.
  • [27] V. Nguyen (1988) Constructing force-closure grasps. The International Journal of Robotics Research (IJRR). Cited by: §I, §II.
  • [28] P. Ni, W. Zhang, X. Zhu, and Q. Cao (2020) PointNet++ grasping: learning an end-to-end spatial grasp generation algorithm from sparse point clouds. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §I, §I, §II, §II, §V-A, §V-A.
  • [29] L. Pinto and A. Gupta (2016) Supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §II.
  • [30] C. R. Qi, O. Litany, K. He, and L. J. Guibas (2019) Deep hough voting for 3d object detection in point clouds. In IEEE International Conference on Computer Vision (ICCV), Cited by: §II, §V-A.
  • [31] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) PointNet: deep learning on point sets for 3d classification and segmentation. In IEEE conference on computer vision and pattern recognition (CVPR), Cited by: §II, §II.
  • [32] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems (NIPS), Cited by: §II, §V-A.
  • [33] Y. Qin, R. Chen, H. Zhu, M. Song, J. Xu, and H. Su (2019) S4G: amodal single-view single-shot SE(3) grasp detection in cluttered scenes. In Conference on robot learning (CoRL), Cited by: §I, §I, §II, §II, §V-A, §V-A.
  • [34] J. Redmon and A. Angelova (2015) Real-time grasp detection using convolutional neural networks. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §II, §V-A.
  • [35] S. Ren, K. He, R. B. Girshick, and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in neural information processing systems (NIPS), Cited by: §V-A.
  • [36] S. Shi, X. Wang, and H. Li (2019) PointRCNN: 3d object proposal generation and detection from point cloud. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II, §V-A.
  • [37] A. ten Pas, M. Gualtieri, K. Saenko, and R. Platt (2017) Grasp pose detection in point clouds. The International Journal of Robotics Research (IJRR). Cited by: §I, §II, §VI-C, TABLE III.
  • [38] H. Thomas, C. R. Qi, J. Deschaud, B. Marcotegui, F. Goulette, and L. J. Guibas (2019) KPConv: flexible and deformable convolution for point clouds. In IEEE International Conference on Computer Vision (ICCV), Cited by: §II.
  • [39] E. Todorov, T. Erez, and Y. Tassa (2012) Mujoco: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Cited by: §I, §II.
  • [40] C. Wang, D. Xu, Y. Zhu, R. Martín-Martín, C. Lu, L. Fei-Fei, and S. Savarese (2019) Densefusion: 6d object pose estimation by iterative dense fusion. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §I.
  • [41] W. Wu, Z. Qi, and F. Li (2019) PointConv: deep convolutional networks on 3d point clouds. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II.
  • [42] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox (2018) PoseCNN: A convolutional neural network for 6d object pose estimation in cluttered scenes. In Robotics: Science and Systems (RSS), Cited by: §I.
  • [43] X. Yan, J. Hsu, M. Khansari, Y. Bai, A. Pathak, A. Gupta, J. Davidson, and H. Lee (2018) Learning 6-dof grasping interaction via deep geometry-aware 3d representations. In IEEE International Conference on Robotics and Automation (ICRA), Cited by: §II.
  • [44] X. Yan, M. Khansari, J. Hsu, Y. Gong, Y. Bai, S. Pirk, and H. Lee (2019) Data-efficient learning for sim-to-real robotic grasping using deep point cloud prediction networks. arXiv preprint arXiv:1906.08989. Cited by: §II.
  • [45] A. Zeng, K. Yu, S. Song, D. Suo, E. Walker, A. Rodriguez, and J. Xiao (2017) Multi-view self-supervised deep learning for 6d pose estimation in the amazon picking challenge. In IEEE international conference on robotics and automation (ICRA), Cited by: §I, §II.
  • [46] H. Zhang, X. Lan, S. Bai, X. Zhou, Z. Tian, and N. Zheng (2019) ROI-based robotic grasp detection for object overlapping scenes. In IEEE International Conference on Intelligent Robots and Systems (IROS), Cited by: §I, §II, §II, §V-A.