Learning to Grasp 3D Objects using Deep Residual U-Nets

by Yikun Li, et al.
University of Groningen

Affordance detection is one of the challenging tasks in robotics because it must predict the grasp configuration for the object of interest in real time to enable the robot to interact with the environment. In this paper, we present a new deep learning approach to detect object affordances for a given 3D object. The method trains a Convolutional Neural Network (CNN) to learn a set of grasping features from RGB-D images. We named our approach Res-U-Net since the architecture of the network is based on the U-Net structure and residual-network-styled blocks. It is designed to be robust and efficient to compute and use. A set of experiments has been performed to assess the performance of the proposed approach in terms of grasp success rate in simulated robotic scenarios. The experiments validate the promising performance of the proposed architecture on a subset of the ShapeNetCore dataset and in simulated robot scenarios.




I Introduction

Traditional object grasping approaches have been widely used in service robots, factory assembly lines, and many other areas. In such domains, robots largely work in tightly controlled conditions to perform object manipulation tasks. Nowadays, robots are entering human-centric environments. In such places, generating a stable grasp pose configuration for the object of interest is a challenging task due to the high demand for accurate and real-time responses under changing and unpredictable environmental conditions [3]. In human-centric environments, an object may have many affordances, where each one can be used to accomplish a specific task. As an example, consider a robotic cutting task using a knife. The knife has two affordance parts: the handle and the blade. The blade is used to cut through material, and the handle is used for grasping the knife. Therefore, the robot must be able to identify all object affordances and choose the right one to plan the grasp and complete the task appropriately.

In this paper, we approach the problem of learning deep affordance features for 3D objects using a novel deep Convolutional Neural Network and RGB-D data. Our goal is to detect robust object affordances from rich deep features and to show that the robot can successfully perform grasp actions in the environment using the extracted features. Towards this goal, we propose a novel neural network architecture, namely Res-U-Net, designed to be robust and efficient to compute and use. In addition, we propose a grasping approach that uses the generated affordances to produce grasping trajectories for a parallel-plate robotic gripper. We carry out experiments to evaluate the performance of the proposed approaches in a simulation environment. Fig. 1 shows six examples of our approach.

The remainder of this paper is organized as follows. In the next section, related work is discussed. Three CNN-based grasp affordance detection approaches are then introduced in section III. The detailed methodology of the grasping approach is presented in section IV; we then apply the neural network with the proposed grasp approach in a simulation environment and explain the experimental evaluation in section V. Finally, conclusions are presented, and future directions are discussed in section VI.

Fig. 1: Examples of affordance detection results using the proposed Res-U-Net network.

II Related Work

Object grasping has been under investigation for a long time in robotics. Although an exhaustive survey is beyond the scope of this paper, we will review a few recent efforts.

Herzog et al. [5] assumed that similarly shaped objects can be grasped similarly and introduced a novel grasp selection algorithm that generates object grasp poses based on previously recorded grasps. Vahrenkamp et al. [18] presented a system that decomposes novel object models by shape and local volumetric information, labels them with semantic information, and then plans the corresponding grasps. Song et al. [17] developed a framework for estimating grasp affordances from 2D images (taking texture and object category into consideration). Kopicki et al. [10] presented a method for one-shot learning of dexterous grasps and grasp generation for novel objects. They trained five basic grasps at the beginning and grasped new objects by generating grasp candidates with a contact model and a hand-configuration model. Kasaei et al. [6] introduced an interactive open-ended learning approach to recognize multiple objects and their grasp affordances. When grasping a new object, they compute the dissimilarity between the new object and known objects and find the most similar object; they then adopt the corresponding grasp configuration. If the dissimilarity is larger than a preset threshold, a new class is created and learned. Kasaei et al. [7] proposed a data-driven approach to grasping household objects using top and side grasp strategies. It has been reported that such approaches cannot grasp challenging objects, e.g., objects that should be grasped by their handle or grasped vertically, such as a plate [15].

Over the past few years, extraordinary progress has been made in robotic applications with the emergence of deep learning approaches. Nguyen et al. [12] investigated detecting grasp affordances using RGB-D images and obtained satisfactory results. They trained a deep Convolutional Neural Network to learn depth features for object grasp affordances from camera images, which was shown to outperform other state-of-the-art methods. Qi et al. [13] studied deep learning on point sets and showed that deep neural networks can efficiently and robustly learn from point-set features. Kokic et al. [9] utilized convolutional neural networks for encoding and detecting object grasp affordances, class, and orientation to formulate grasp constraints. Mahler et al. [11] used a synthetic dataset to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that predicts the probability of grasp success from depth images.

III Affordance Detection

The input to our CNN is the point cloud of an object, which is extracted from a 3D scene using object detection algorithms such as [8, 16]. The point cloud of the object is then fed into a CNN to detect an appropriate grasp affordance of the object. Our approach consists of two main processes: data representation of 3D objects and training of the CNN on the represented data. We use three types of neural networks to learn object affordance features from 3D objects. In the following subsections, we describe each process in detail.

III-A Data Representation

A point cloud of an object is represented as a set of points, where each point is described by its 3D coordinates and RGB information. In this work, we only use the geometric information of the object; the input and output data are therefore point clouds stored in three-dimensional arrays. Towards this end, we first represent an object as a volumetric grid and then use the obtained representation as the input to a CNN with 3D filter banks. Considering our computational power limit, we use a fixed-size voxel occupancy grid as the input to the networks.
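The voxelization step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid resolution (`GRID = 32`) and the shift-and-scale normalization are assumptions, since the paper's exact grid size did not survive extraction.

```python
import numpy as np

GRID = 32  # assumed grid resolution per axis (the paper's exact size is not stated here)

def voxelize(points: np.ndarray, grid: int = GRID) -> np.ndarray:
    """Map an (N, 3) point cloud into a (grid, grid, grid) binary occupancy grid."""
    # Shift the cloud so its bounding box starts at the origin.
    shifted = points - points.min(axis=0)
    # Scale the longest side to fit the grid, preserving the aspect ratio.
    scale = (grid - 1) / shifted.max()
    idx = np.floor(shifted * scale).astype(int)
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0  # mark occupied cells
    return vox

cloud = np.random.rand(1000, 3) * [0.1, 0.2, 0.3]  # toy object, meters
grid = voxelize(cloud)
print(grid.shape, int(grid.sum()))
```

The resulting grid is what the 3D filter banks of the networks consume; the network output has the same spatial shape, labeling each voxel as affordance or not.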

III-B Baseline Networks

To make our contribution transparent, we build two baseline networks, based on the encoder-decoder network [19] and U-Net [14], to compare with the proposed network architecture, and we highlight the similarities and differences between them. All the networks contain two essential parts: one is the encoder network, and the other is the decoder network.

Fig. 2: Structure of the encoder-decoder network. Each grey box stands for a multi-channel feature map. The number of channels is shown on the top of the feature map box. The shape of each feature map is denoted at the lower left of the box. The different color arrows represent various operations shown in the legend.
Fig. 3: Structure of the proposed Res-U-Net: compared to the U-Net, we replace the plain 3D convolutional layers with residual blocks, i.e., convolutions with connections that skip over layers. These skip connections effectively simplify the network and speed up learning by reducing the impact of vanishing gradients.
Fig. 4: Structure of the U-network: compared to the encoder-decoder network, the last feature map of each layer in the encoder part is copied and concatenated to the first feature map of the same layer in the decoder part.

The architecture of the encoder-decoder network [19] is depicted in Fig. 2. This architecture is the lightest one among the selected architectures in terms of the number of parameters and computation, making the network easier and faster to train. The encoder part of this network has nine 3D convolutional layers, each followed by batch normalization and a ReLU layer. At the end of each encoder layer, a 3D max-pooling layer produces a denser feature map. Each encoder layer corresponds to a decoder layer; the decoder likewise has nine 3D convolutional layers. The difference is that, instead of 3D max-pooling layers, an up-sampling layer at the beginning of each decoder layer produces a higher-resolution feature map. In addition, a convolutional layer and a sigmoid layer are attached after the final decoder to reduce the multi-channel feature map to a single output channel.

The architecture of U-Net [14] is shown in Fig. 4. The basic structures of the U-Net and the described encoder-decoder network are almost the same. The main difference is that, in the U-Net architecture, the dense feature map is first copied from the end of each encoder layer to the beginning of the corresponding decoder layer, and then the copied map and the up-sampled map are concatenated.
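The copy-and-concatenate skip connection can be illustrated with plain arrays. The shapes below are purely illustrative (channels-last layout assumed); the actual network builds this with Keras layers.

```python
import numpy as np

# Illustrative feature-map shapes only (batch, x, y, z, channels).
encoder_map = np.random.rand(1, 8, 8, 8, 64)  # saved at the end of an encoder layer
decoder_map = np.random.rand(1, 8, 8, 8, 64)  # up-sampled map entering the matching decoder layer

# U-Net skip connection: concatenate along the channel axis, doubling the channels
# so the decoder sees both coarse (decoder) and fine (encoder) features.
merged = np.concatenate([encoder_map, decoder_map], axis=-1)
print(merged.shape)
```

The subsequent decoder convolutions then fuse the two sources of information.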

III-C Proposed Network

In this section, we propose a new network architecture to tackle the problem of grasp affordances detection for 3D objects using a volumetric grid representation and 3D deep CNN. In particular, our approach is a combination of U-Net and residual network [4].

The architecture of our approach is illustrated in Fig. 3. We call this network Res-U-Net. Inspired by the residual network [4], we designed this architecture to retain more information from the input layer and extract richer features. Compared to the U-Net, we replace the plain 3D convolutional layers with residual blocks, i.e., convolutions with connections that skip over layers. The main motivation is to avoid the problem of vanishing gradients by reusing activations from a previous layer until the adjacent layer has learned its weights. Benefiting from the residual blocks, the network can go deeper, since the skip connections simplify the network by effectively using fewer layers in the initial training stages.
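The identity-shortcut idea can be sketched in a few lines of numpy. This is a toy stand-in, not the network's 3D convolutional block: a single linear transform plays the role of the learned branch, and the weight values are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, weight):
    """Toy residual block: a linear transform stands in for the 3D convolutions,
    plus the identity shortcut: output = ReLU(F(x) + x)."""
    fx = relu(x @ weight)   # the "learned" branch F(x)
    return relu(fx + x)     # shortcut adds the input back before the activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))
w = rng.standard_normal((16, 16)) * 0.001  # near-zero weights, as at initialization
y = residual_block(x, w)
# With near-zero weights the block approximates ReLU(x): information and
# gradients flow through the shortcut even before the branch has learned anything.
```

This is exactly why the text says the network behaves like a shallower one in the initial training stages: untrained blocks are close to identity mappings.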

Fig. 5: An illustrative example of detecting affordance for a Mug object: (a) a Mug object in our simulation environment; (b) point cloud of the object; (c) feeding the point cloud to Res-U-Net to detect the graspable part of the object (highlighted in orange); (d) the identified graspable area is then segmented into three clusters using the K-means algorithm. The centroid of each cluster is considered a graspable point. The point cloud of the object is then further processed in three pipelines to find an appropriate grasp configuration (end-effector position and orientation) for each graspable point. In particular, inside each pipeline, a set of approaching paths is first generated based on the Fibonacci sphere (shown by red lines) and the table plane information (shown by a dark blue plane); we then eliminate those paths that pass through the table plane. Afterward, we find the principal axis of the graspable part by PCA (the green line shows the main axis), which is used to define the goodness of each approaching path. The best approaching path is finally selected and (e) used to perform grasping; (f) this snapshot shows a successful grasp execution.

IV Grasp Approach

As mentioned in the previous section, we assume the given object is lying on a surface, e.g., a table. The object is then extracted from the scene and fed to the Res-U-Net, as shown in Fig. 5 (a-c). After detecting the graspable area of the given object, the point cloud of the object is further processed to determine grasp points and an appropriate grasp configuration (i.e., grasp point and end-effector position and orientation) for each grasp point. In particular, the detected affordance part of the object is first segmented into k clusters using the K-means algorithm, where k is defined based on the size of the affordance part and the robot's gripper. The centroid of each cluster indicates one grasp candidate (Fig. 5 (d)) and is considered one end of the approaching path. We create a pipeline for each grasp candidate and process the object further to define the other end of the approaching path. Inside each pipeline, we generate a Fibonacci sphere centered at the grasp candidate and randomly select n points on the sphere. We then define linear approaching paths by calculating the lines through each selected point and the grasp candidate (i.e., the center of the sphere). In our current setup, n has been set to 256 points, shown by red lines in Fig. 5. In this study, we use the following set of procedures to define the best approaching path:

  • Removing approaching paths that start from under the table: using the table plane information, we remove infeasible approaching paths, i.e., those whose start point is below the table (see the second image in each pipeline).

  • Computing the main axis of the affordance part: Principal Component Analysis (PCA) is used to compute the axes of minimum and maximum variance in the affordance part. The maximum-variance axis is taken as the main axis (shown by a green line in the third image of each pipeline).

  • Calculating a score for each approaching path: the following equation is used to calculate a score for each approaching path:

    score = Σ_{i=1}^{n} [1 / (d_i + ε)] · |cos θ|

    where n represents the number of points of the object, d_i stands for the distance between the approaching path and the i-th point of the point cloud model, ε is equal to 0.01, and θ is the angle between the approaching path and the main axis of the affordance part, ranging from 0 to π/2. Since [1] has shown that humans tend to grasp objects orthogonally to the principal axis, the |cos θ| factor reduces the score when the path is orthogonal to the principal axis. A lower score means the approaching path stays farther from the points of the object; therefore, the path with the lowest score is selected as the final approaching path for each grasp point candidate. The scored approaching paths are shown in the fourth image of each pipeline, where deeper colors indicate better approaching paths. Finally, the best approaching path is selected for the given grasp point (last figure in each pipeline).
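The steps above (Fibonacci sphere sampling, table-plane filtering, PCA main axis, and path scoring) can be sketched as follows. This is a sketch under assumptions: the score uses the formula as reconstructed in the text, the table is idealized as the plane z = 0, and the toy elongated object is invented for illustration.

```python
import numpy as np

def fibonacci_sphere(n=256):
    """n roughly evenly spaced unit directions on a sphere (golden-angle spiral)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def main_axis(points):
    """Principal (maximum-variance) axis of the affordance part via PCA."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def path_score(direction, grasp_point, cloud, axis, eps=0.01):
    """Score one approaching path: nearness to object points raises the score,
    and |cos(theta)| lowers it when the path is orthogonal to the main axis."""
    v = cloud - grasp_point
    # Perpendicular distance of every object point to the path line.
    d = np.linalg.norm(v - np.outer(v @ direction, direction), axis=1)
    return np.sum(1.0 / (d + eps)) * abs(direction @ axis)

# Toy example: an elongated object lying along x, grasp candidate at its centroid.
rng = np.random.default_rng(1)
cloud = rng.standard_normal((500, 3)) * [1.0, 0.05, 0.05]
grasp = cloud.mean(axis=0)
axis = main_axis(cloud)
dirs = fibonacci_sphere(256)
dirs = dirs[dirs[:, 2] > 0.0]          # drop paths that start below the "table"
scores = [path_score(d, grasp, cloud, axis) for d in dirs]
best = dirs[int(np.argmin(scores))]    # lowest score = best approaching path
print(best)
```

For this toy object, the selected direction ends up roughly orthogonal to the principal axis, matching the human-grasping observation cited above.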

After calculating a proper approaching path, we instruct the robot to follow it. Towards this end, we first transform the approaching path from the object frame to the world frame and then dispatch the planned trajectory to the robot for execution (Fig. 5 (e and f)). It is worth mentioning that, in some situations, the fingers of the gripper may come into contact with the table (which stops the gripper from moving forward). To handle this, we apply a slight roll rotation to the gripper to find a better angle between the gripper and the table, allowing the gripper to keep moving forward. An illustrative example of the proposed grasp affordance detection is depicted in Fig. 5.
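The object-to-world transformation of the path waypoints is a standard homogeneous transform. The 4x4 matrix below (a rotation about z plus a translation) is hypothetical, purely to show the mechanics.

```python
import numpy as np

# Hypothetical object-to-world pose: 30-degree rotation about z, then a translation.
theta = np.deg2rad(30.0)
T = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, 0.4],
    [np.sin(theta),  np.cos(theta), 0.0, 0.1],
    [0.0,            0.0,           1.0, 0.8],
    [0.0,            0.0,           0.0, 1.0],
])

def to_world(points_obj, T):
    """Transform (N, 3) path waypoints from the object frame to the world frame."""
    homo = np.hstack([points_obj, np.ones((len(points_obj), 1))])  # homogeneous coords
    return (homo @ T.T)[:, :3]

path_obj = np.array([[0.0, 0.0, 0.2],   # approach start, 20 cm above the grasp point
                     [0.0, 0.0, 0.0]])  # grasp point at the object-frame origin
path_world = to_world(path_obj, T)
print(path_world)
```

The transformed waypoints are what gets dispatched to the robot's trajectory executor.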

V Experiments and Results

A set of experiments was carried out to evaluate the proposed approach. In this section, we first describe our experimental setup and then discuss the obtained results.

V-A Dataset and Evaluation Metrics

In these experiments, we mainly used a subset of ShapeNetCore [2] containing 500 models from five categories: Mug, Chair, Knife, Guitar, and Lamp. For each category, we randomly selected 100 object models and converted them into complete point clouds using the pyntcloud package. We then shift and resize the point cloud data and convert it into a fixed-size voxel array matching the input size of the networks.

To the best of our knowledge, no similar research exists. Therefore, we manually labeled an affordance part for each object to provide ground-truth data. Part annotations are represented as point labels. A set of examples of labeled affordance parts for different objects is depicted in Fig. 6 (affordance parts are highlighted in orange). It should be noted that we augment the training and validation data by rotating the point clouds about the z-axis by 90, 180, and 270 degrees and by flipping the point clouds vertically and horizontally from the top view. We obtain 2580 training, 588 validation, and 100 test samples for evaluation.
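The augmentation scheme above can be sketched on a voxel grid. A minimal numpy version, assuming axis order (x, y, z) so that rotation about z acts in the (x, y) plane:

```python
import numpy as np

def augment(vox):
    """Return the voxel grid plus its z-axis rotations (90/180/270 degrees)
    and the two top-view flips described in the text."""
    variants = [vox]
    for k in (1, 2, 3):                       # 90, 180, 270 degrees about z
        variants.append(np.rot90(vox, k=k, axes=(0, 1)))
    variants.append(vox[::-1, :, :].copy())   # horizontal flip (top view)
    variants.append(vox[:, ::-1, :].copy())   # vertical flip (top view)
    return variants

vox = np.zeros((8, 8, 8), dtype=np.float32)
vox[1, 2, 3] = 1.0          # a single occupied voxel to track
aug = augment(vox)
print(len(aug))
```

Each object thus yields six geometric variants; the same transforms are applied to the label grids so voxel and label stay aligned.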

We mainly used average Intersection over Union (IoU) as the evaluation metric. We first compute IoU for each affordance part of each object. Afterwards, for each category, IoU is computed by averaging the per-part IoU across all parts of all objects in the category.
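For concreteness, the per-part IoU between a predicted and a ground-truth binary affordance mask can be computed as below; the empty-union convention is an assumption.

```python
import numpy as np

def part_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between predicted and ground-truth
    binary affordance masks (point or voxel labels)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both empty: treat as a perfect match (assumed convention)
    return np.logical_and(pred, gt).sum() / union

pred = np.zeros(10); pred[2:6] = 1   # predicted affordance labels
gt = np.zeros(10);   gt[4:8] = 1     # ground-truth labels
print(part_iou(pred, gt))            # intersection {4,5} over union {2..7} = 2/6
```

Category IoU is then the mean of `part_iou` over all parts of all objects in that category.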

Fig. 6: Examples of affordance part labeling for one instance of guitar, lamp, mug and chair categories: point cloud of the object is shown by dark blue and labeled affordance part of each object is highlighted by orange color.

V-B Training

We start by explaining the training setup. All the proposed networks are trained from scratch with the RMSprop optimizer, with its decay parameter set to 0.9. We initially set the learning rate to 0.001. If the validation loss does not decrease for 5 epochs, the learning rate is decayed by multiplying it by the square root of 0.1, until it reaches a preset minimum learning rate. Binary cross-entropy loss is used for training, and the batch size is set to 16. We mainly use Python and the Keras library in this study. The training process takes around two days on an NVIDIA Tesla K40m GPU, depending on the complexity of the network.
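The learning-rate schedule can be made concrete with a small pure-Python simulation (in Keras this corresponds to the ReduceLROnPlateau callback). The minimum learning rate `MIN_LR` is an assumed floor, since the paper's exact value did not survive extraction.

```python
import math

FACTOR = math.sqrt(0.1)  # decay multiplier when the validation loss plateaus
PATIENCE = 5             # epochs without improvement before decaying
MIN_LR = 1e-6            # assumed floor; the paper's exact minimum is not stated here

def schedule(val_losses, lr=1e-3):
    """Return the learning rate used at each epoch for a given loss trace."""
    best, wait, lrs = float("inf"), 0, []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0          # improvement: reset the patience counter
        else:
            wait += 1
            if wait >= PATIENCE:          # plateau: decay the learning rate
                lr = max(lr * FACTOR, MIN_LR)
                wait = 0
        lrs.append(lr)
    return lrs

# A plateau of five stagnant epochs triggers exactly one decay step.
losses = [1.0, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.7]
lrs = schedule(losses)
print(lrs)
```

After the plateau the rate drops from 1e-3 to roughly 3.16e-4 and stays there until the next plateau.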

Fig. 7: Train and validation learning curves of different approaches: (left) Line plots of IoU over training epochs; (right) Line plots of IoU over validation epochs.

V-C Affordance Detection Results

Figure 7 shows the results of affordance detection by the three neural networks on our dataset. Comparing all the experiments, it is clear that the encoder-decoder network performs much worse than its two counterparts. In particular, the final Intersection over Union (IoU) of the encoder-decoder network was 28.9% and 22.3% on training and validation data, respectively. The U-Net performs much better than the encoder-decoder network; its final IoU is 80.1% and 71.4% on the training and validation sets, respectively. Our approach, Res-U-Net, clearly outperformed the others by a large margin. The final IoU of Res-U-Net was 95.5% and 77.6% on the training and validation sets, respectively. In particular, on training data it was 15.4 percentage points (p.p.) better than U-Net and 66.6 p.p. better than the encoder-decoder network; on validation data it was 6.2 p.p. and 55.3 p.p. better than U-Net and the encoder-decoder network, respectively.

V-D Grasping Results

We empirically evaluate our grasp methodology using a simulated robot. In particular, we built a simulation environment, based on the Bullet physics engine, to verify the capability of our grasp approach. We only consider the end-effector pose, to reduce complexity and concentrate on evaluating the proposed approach.

We design a grasping scenario in which the simulated robot first grasps the object and then lifts it to a certain height to see whether the object slips due to a bad grasp. A grasp was considered successful if the robot was able to complete this task. In this experiment, we randomly selected 20 different objects from each of the five categories. In each trial, we randomly placed the object on the table region and rotated it about the z-axis. It is worth mentioning that none of the test objects were used to train the neural networks. Table I shows the experimental grasp success rates. Figure 1 shows the grasp detection results for ten example objects. A video of this experiment is available online at http://youtu.be/5_yAJCc8owo.

Fig. 8: The robustness of the Res-U-Net to different level of Gaussian noise and varying point cloud density: (left) grasp success rate against down-sampling probability; (right) grasp success rate against Gaussian noise sigma.

Two sets of experiments were carried out to examine the robustness of the proposed approach with respect to varying point cloud density and Gaussian noise. In particular, in the first set of experiments, the original density of the training objects was kept, while the density of the testing objects was reduced (downsampled) from 1 to 0.5. In the second set of experiments, nine levels of Gaussian noise, with standard deviations from 1 to 9 mm, were added to the test data. The results are summarized in Fig. 8.

From the experiments with reduced test-data density (Fig. 8 (left)), we found that our approach is robust to low-level downsampling: at 0.9 point density, the success rate remains the same. For mid-level downsampling (point density between 0.6 and 0.8), the grasp success rate drops by around 20%. Fig. 8 (left) also shows that when the downsampling level reaches 0.5, the grasp success rate drops rapidly, to 57%.

In the second round of experiments, Gaussian noise is independently added to the x, y, and z axes of the given test object. As shown in Figure 8 (right), performance decreases as the standard deviation of the Gaussian noise increases, with the success rate dropping progressively at higher sigma values.
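The two test-time perturbations can be reproduced in a few lines; the cloud below and the sample parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.random((2000, 3))  # toy test point cloud, coordinates in meters

def downsample(points, keep=0.9, rng=rng):
    """Randomly keep a fraction of the points (density 1.0 down to 0.5 in the paper)."""
    mask = rng.random(len(points)) < keep
    return points[mask]

def add_noise(points, sigma_mm=3.0, rng=rng):
    """Add independent Gaussian noise to the x, y and z axes (sigma given in mm)."""
    return points + rng.normal(0.0, sigma_mm / 1000.0, size=points.shape)

sparse = downsample(cloud, keep=0.5)   # heaviest downsampling level tested
noisy = add_noise(cloud, sigma_mm=9.0)  # strongest noise level tested
print(len(sparse), noisy.shape)
```

Each perturbed cloud is then voxelized and run through the full detection-and-grasping pipeline exactly like clean data.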

Our approach was trained to grasp five object categories. In this experiment, we examine the performance of our grasp approach on a set of ten completely unknown objects. In most cases, the robot could detect an appropriate grasp configuration for the given object and complete the grasping scenario. This observation shows that the proposed Res-U-Net can use the learned knowledge to correctly grasp some never-before-seen objects. In particular, we believe that new objects that are similar to known ones (i.e., familiar objects) can be grasped similarly. Figure 9 shows the steps taken by the robot to grasp a set of unknown objects in our experiments.

In both experiments, we encountered two types of failure modes. First, Res-U-Net may fail to detect an appropriate part of the object for grasping (e.g., Mug). Second, grasping may fail because of collisions between the gripper, the object, and the table: if the detected affordance of the given object is too small (e.g., Knife) or too large to fit in the robot's gripper, or if the object is too big or slippery (e.g., Guitar and Lamp).

Fig. 9: Examples of grasping unknown objects by recognizing the appropriate affordance part and approaching path.
Category Success rate (%) Success / Total
Mug 75 15 / 20
Chair 85 17 / 20
Knife 95 19 / 20
Guitar 85 17 / 20
Lamp 85 17 / 20
Average 85 85 / 100
TABLE I: Grasp success rate

Another set of experiments was performed to estimate the execution time of the proposed approach. Three components make up most of the execution time: perception, affordance detection, and finding a suitable grasp configuration. We measured the run-time over ten instances of each. Perceiving the environment and converting the point cloud of the object to the voxel-based representation takes 0.15 seconds on average. Affordance detection by Res-U-Net requires an average of 0.13 seconds, and finding a suitable grasp configuration demands another 1.32 seconds. Therefore, finding a complete grasp configuration for a given object takes about 1.60 seconds on average.

VI Conclusion and Future Work

In this paper, we have presented a novel deep convolutional neural network named Res-U-Net to detect grasp affordances of 3D objects. The point cloud of the object is further processed to determine an appropriate grasp configuration for the selected graspable point. To validate our approach, we built a simulation environment and conducted an extensive set of experiments. The results show that the overall performance of our affordance detection is clearly better than the best results obtained with the U-Net and encoder-decoder approaches. We also tested our approach on a set of never-before-seen objects. We observed that, in most cases, our approach was able to detect grasp affordance parts correctly and perform the proposed grasp scenario completely. In the continuation of this work, we plan to evaluate the proposed approach in cluttered scenarios, such as clearing a pile of toy objects. Furthermore, we will train the network on more object categories and evaluate its generalization power on a large set of unknown objects. We would also like to investigate the possibility of using Res-U-Net for task-informed grasping scenarios.


  • [1] R. Balasubramanian, L. Xu, P. D. Brook, J. R. Smith, and Y. Matsuoka (2010) Human-guided grasp measures improve grasp robustness on physical robot. In 2010 IEEE International Conference on Robotics and Automation, pp. 2294–2301. Cited by: 3rd item.
  • [2] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu (2015) ShapeNet: An Information-Rich 3D Model Repository. Technical report Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago. Cited by: §V-A.
  • [3] J. J. Gibson (2014) The ecological approach to visual perception: classic edition. Psychology Press. Cited by: §I.
  • [4] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §III-C.
  • [5] A. Herzog, P. Pastor, M. Kalakrishnan, L. Righetti, T. Asfour, and S. Schaal (2012) Template-based learning of grasp selection. In 2012 IEEE International Conference on Robotics and Automation, pp. 2379–2384. Cited by: §II.
  • [6] S. H. Kasaei, M. Oliveira, G. H. Lim, L. S. Lopes, and A. M. Tomé (2018) Towards lifelong assistive robotics: a tight coupling between object perception and manipulation. Neurocomputing 291, pp. 151–166. Cited by: §II.
  • [7] S. H. Kasaei, N. Shafii, L. S. Lopes, and A. M. Tomé (2016) Object learning and grasping capabilities for robotic home assistants. In Robot World Cup, pp. 279–293. Cited by: §II.
  • [8] S. H. Kasaei, J. Sock, L. S. Lopes, A. M. Tomé, and T. Kim (2018) Perceiving, learning, and recognizing 3d objects: an approach to cognitive service robots. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §III.
  • [9] M. Kokic, J. A. Stork, J. A. Haustein, and D. Kragic (2017) Affordance detection for task-specific grasping using deep learning. In 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pp. 91–98. Cited by: §II.
  • [10] M. Kopicki, R. Detry, M. Adjigble, R. Stolkin, A. Leonardis, and J. L. Wyatt (2016) One-shot learning and generation of dexterous grasps for novel objects. The International Journal of Robotics Research 35 (8), pp. 959–976. Cited by: §II.
  • [11] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg (2017) Dex-net 2.0: deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312. Cited by: §II.
  • [12] A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis (2016) Detecting object affordances with convolutional neural networks. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2765–2770. Cited by: §II.
  • [13] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660. Cited by: §II.
  • [14] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §III-B, §III-B, §III-B.
  • [15] N. Shafii, S. H. Kasaei, and L. S. Lopes (2016) Learning to grasp familiar objects using object view recognition and template matching. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2895–2900. Cited by: §II.
  • [16] J. Sock, S. Hamidreza Kasaei, L. Seabra Lopes, and T. Kim (2017) Multi-view 6D object pose estimation and camera motion planning using RGBD images. In The IEEE International Conference on Computer Vision (ICCV) Workshops. Cited by: §III.
  • [17] H. O. Song, M. Fritz, D. Goehring, and T. Darrell (2015) Learning to detect visual grasp affordance. IEEE Transactions on Automation Science and Engineering 13 (2), pp. 798–809. Cited by: §II.
  • [18] N. Vahrenkamp, L. Westkamp, N. Yamanobe, E. E. Aksoy, and T. Asfour (2016) Part-based grasp planning for familiar objects. In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pp. 919–925. Cited by: §II.
  • [19] J. Yang, B. Price, S. Cohen, H. Lee, and M. Yang (2016) Object contour detection with a fully convolutional encoder-decoder network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 193–202. Cited by: §III-B.