A Novel Geometry-based Algorithm for Robust Grasping in Extreme Clutter Environment

07/27/2018
by Olyvia Kundu et al.
Tata Consultancy Services

This paper looks into the problem of grasping unknown objects in a cluttered environment using 3D point cloud data obtained from a range or an RGBD sensor. The objective is to identify graspable regions and detect suitable grasp poses from a single-view, possibly partial, 3D point cloud without any a priori knowledge of the object geometry. The problem is solved in two steps: (1) identifying and segmenting various object surfaces and (2) searching for suitable grasping handles on these surfaces by applying the geometric constraints of the physical gripper. The first step is solved using a modified version of the region growing algorithm that uses a pair of thresholds for the smoothness constraint on local surface normals to find the natural boundaries of object surfaces. In this process, a novel concept of edge point is introduced that allows us to distinguish between different surfaces of the same object. The second step is solved by converting a 6D pose detection problem into a 1D linear search problem by projecting 3D cloud points onto the principal axes of the object surface. The graspable handles are then localized by applying the physical constraints of the gripper. The resulting method allows us to grasp all kinds of objects, including rectangular or box-type objects with flat surfaces, which have so far been difficult to deal with in the grasping literature. The proposed method is simple, can be implemented in real time, and does not require any off-line training phase for finding these affordances. The improvement achieved is demonstrated through comparison with another state-of-the-art grasping algorithm on various publicly available and self-created datasets.



1 Introduction

A robot that can manipulate its environment is much more useful than one that can only perceive it. Such robots can act as active agents which may someday relieve humans of all kinds of dull, dangerous and dirty work, thereby freeing them for more creative pursuits. Grasping is an important capability necessary for realizing this end. Solving the grasping problem involves two steps. The first step uses a perception module to estimate the pose (position and orientation) of the object and hence the gripper pose needed for picking it. The second step then uses a motion planner to generate the robot and gripper movements necessary to make contact with the object. The usual approach, currently employed in industry, uses accurate knowledge of the geometry of the objects as well as the environment to solve the grasping problem off-line for all objects, and then applies it to pick objects based on their recognition during online operation. Many recent approaches instead solve the grasping problem independently of the object identity, thanks to the availability of 3D or RGBD point clouds from low-cost depth and range sensors. In general, grasping unknown objects in a cluttered environment still remains an open problem and has attracted a considerable amount of interest in the recent past.

There are several approaches to solving the grasping problem, which are reviewed briefly in the next section. In this paper, we are primarily interested in solving the first part of the problem, namely, finding graspable regions and suitable grasp poses (together known as graspable affordances) for a two-finger parallel-jaw gripper. This is more formally termed robotic grasp detection JainICRA16 or simply grasp pose detection (GPD) plattgrasppose2016 . The graspable affordances are to be detected in an RGBD or 3D point cloud obtained from a single view of the range or depth sensor, without requiring any a priori knowledge of the object geometry. Our work is inspired by the approach presented in Pas2013LocalizingGA ten2016localizing which uses surface curvature to localize graspable regions in the point cloud. The advantage of this approach lies in its simplicity, which allows real-time implementation and does not require the time-consuming and data-intensive training phase common to most learning-based methods saxena2008 lenz2015deepgrasp johns2016deep . However, this approach suffers from several limitations, which form the basis for the work presented in this paper. For instance, it cannot be used for grasping objects with flat surfaces, such as boxes or books. This is partially remedied in pas2015using where the authors use Histogram of Oriented Gradients (HoG) features dalal2005histograms to generate multiple hand hypotheses and then train an SVM to detect the valid grasps among these hypotheses. Secondly, their algorithm necessitates creating several spherical regions of fixed, user-defined radius to search for graspable regions. Hence, it cannot localize handles of varying sizes without adjusting the radii of these spherical regions. Because of these limitations, the above algorithm performs poorly in an extreme clutter environment where there could be multiple objects adjacent to each other.

In this paper, we propose a new approach based on surface normals to overcome these limitations. It primarily involves two steps. The first step uses the continuity of surface normals to identify the natural boundaries of objects vosselman2004recognising rabbani2006segmentation . A modified version of the region-growing algorithm is proposed that can distinguish between different surfaces of the same object having a large variation in the direction of their surface normals. This is done by defining edge points and introducing a pair of thresholds on the similarity criterion used by the region growing algorithm. This novel concept dramatically improves the performance of the region growing algorithm in removing spurious edges and thereby identifying the natural boundaries of each object, even in clutter. One clear advantage of this approach is that it allows us to find graspable affordances for box-type objects with flat surfaces, which is considered a difficult problem in the vision-based grasping literature. Also, unlike Pas2013LocalizingGA ten2016localizing , the proposed algorithm requires far fewer user-defined parameters and does not require initialization using spherical regions at multiple locations.

Once the surfaces of different objects are segmented, the second step uses the gripper geometry to localize the graspable regions. The six-dimensional grasp pose detection problem is simplified by making the practical assumption that the gripper approaches the object in a direction opposite to the surface normal at the centroid of the segment, with its gripper closing plane coplanar with the minor axis of the segment. The principal axes of each segment (major, minor and normal axes) are computed using Principal Component Analysis (PCA) wold1987principal . Essentially, the valid grasping regions are localized by carrying out a one-dimensional search along the principal axes of the segment and imposing the geometrical constraints of the gripper. This is made possible by projecting the original 3D Cartesian points of the surface onto the principal axes of the surface segment. This approach is mathematically much simpler than other methods vezzani2017grasping tableTopGrasping which require more complex processes to make such decisions. The proposed grasping method is found to be more robust in localizing graspable affordances in extreme clutter scenarios than the state-of-the-art algorithms. This improvement is achieved without compromising the real-time performance of the algorithm.
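For illustration, the following Python sketch shows how the centroid and principal axes of a surface segment could be computed with PCA; the helper name and the use of numpy are our own assumptions, not part of the paper.

```python
import numpy as np

def segment_axes(points):
    """Minimal PCA sketch: return the centroid, major axis, minor axis and
    surface normal of a segment given its Nx3 array of 3D points."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)      # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # least variance -> surface normal
    minor = eigvecs[:, 1]                    # intermediate variance -> minor axis
    major = eigvecs[:, 2]                    # largest variance -> major axis
    return centroid, major, minor, normal
```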

In short, the major contributions made in this paper are as follows: (1) A new method is proposed that exploits segmentation based on the smoothness of surface normals rabbani2006segmentation to find graspable affordances in a 3D point cloud. This allows us to grasp box-type objects with flat surfaces, which is considered a difficult problem in the grasping literature. (2) The introduction of the novel concept of edge points and a modified region growing algorithm that uses a pair of thresholds on the smoothness constraint allows us to distinguish between different surfaces of the same object and helps in finding the natural boundaries of objects in clutter by removing spurious edges. (3) The 6D pose detection problem is simplified by constraining the search space using the gripper geometry and is converted into a one-dimensional search problem through scalar projections of 3D points onto the principal axes of the surface segment. The resulting method is quite simple, can be implemented in real time, and does not require any data-intensive, off-line training phase. (4) In this process, we contribute a new dataset that can be used for analyzing the performance of various grasping algorithms. This dataset exhibits various real-world scenarios including extreme clutter, constrained views within a bin, etc. Finally, the improvement achieved by the proposed algorithm is demonstrated through rigorous experiments on several publicly available datasets, including our own.

The rest of this paper is organized as follows. A brief overview of the related literature is provided in the next section. The symbols and notation used for explaining the method are provided in Section 3. The proposed method is explained in Sections 4 and 5, and the experimental results are provided in Section 6. The summary and conclusions are presented in Section 7.

2 Related Work

Grasping is a challenging problem which has attracted a considerable amount of interest over the last couple of decades. This section provides an overview of various related work in this domain. We particularly focus on precision grasping, where the object is held with the tips of the fingers, providing higher sensitivity and dexterity compared to power grasping, which involves a large area of contact between the hand and the object gori2013ranking . If an accurate 3D model of the object is available, one can leverage force-closure and form-closure to find stable grasp configurations for holding the object, but this presupposes the availability of contact points on the object bicchi2000robotic . These contact points are, however, not easily available in real-world scenarios and hence form the subject matter of our interest in this paper. The rapid advancement of computer vision algorithms has enabled researchers to use vision sensors to identify graspable regions on objects. These methods extract 2D image features and combine them with geometric methods to compute either the size or the shape of the object saxena2008 kehoe2013cloud song2011grasp . The availability of low-cost range and RGBD sensors has further simplified the grasping problem by providing 2.5D or 3D point cloud data. In the latter category, a number of approaches have been tried to extract object shape information. For instance, Fischinger and Vincze fischinger2012empty use a novel Height Accumulated Feature (HAF) to detect shape and find grasping regions in a cluttered environment. Similarly, the authors in vezzani2017grasping varadarajan2011object use superquadric functions to model objects. There are other methods that exploit geometric properties of the gripper and object surfaces to detect suitable grasping regions, as in tableTopGrasping ten2016localizing . Some methods try to fit a primitive shape around the object point cloud, which is then used for deciding a suitable grasp pose for the robotic gripper jain2016grasp somani2014shape behnke2012shape kragic2008boxprimitive . Many of these methods produce multiple grasp hypotheses based on certain heuristics and then use machine learning methods to evaluate them geidenstam2009learning plattgrasppose2016 saxena2008learning . Quite recently, deep learning methods have been increasingly used for solving the grasping problem lenz2015deepgrasp johns2016deep pinto2016supersizing Schwarz:7139363 . These methods require time-consuming data gathering and off-line training processes, which limits their application to many real-world problems.

We are particularly inspired by the work of ten Pas and Platt ten2016localizing where the grasping regions (or affordances) are computed directly from a single-view 3D point cloud obtained from an RGBD sensor, without requiring any a priori knowledge of the object geometry. They use a fixed-radius sphere to cluster 3D points in the environment at multiple locations. This radius is decided based on the geometry of the gripper and the clearance required for avoiding collision with neighboring objects. In a sense, the authors do not make use of surface attributes to find object boundaries. The success of their method relies on the careful selection of this radius, which cannot be generalized to wide varieties of objects in a cluttered scenario. Secondly, it is biased towards identifying curved surfaces and hence cannot deal with objects with flat surfaces. This is partially remedied in pas2015using where such rectangular edges are identified using HoG features and a trained SVM classifier. They further improve the accuracy of their method in plattgrasppose2016 by training a deep network with multi-view images of the same object.

In contrast to the above approaches, we use a region growing algorithm vosselman2004recognising to cluster 3D points based on the orientation of surface normals mitra2003estimating . Such methods are popular for segmenting point clouds based on a smoothness constraint rabbani2006segmentation vosselman2004recognising . We modify the existing region growing algorithm by introducing the concept of edge points and incorporating two user-defined thresholds, which remarkably improves the detection of distinct boundaries even on the same object. In other words, we exploit the discontinuity of surface normals to identify the natural object boundaries, unlike Platt’s approach ten2016localizing which merely uses pre-defined spheres to define object boundaries. This forms the first main contribution of this paper and allows us to detect graspable regions for a wide variety of objects, including flat surfaces, in a heavily cluttered scene, as will be demonstrated later in this paper.

(a) Gripper Geometry (b) Grasping planes (c) Clearance between objects
Figure 1: Grasp configuration for a two-finger parallel-jaw gripper. For a successful grasp, the clearance between objects must be greater than the width of each finger ($c_{min} > w_f$). The gripper approaches the object along a direction opposite to the surface normal $\mathbf{n}$ with its gripper closing plane coplanar with the minor axis $\mathbf{a}_2$ of the surface segment, as shown in (b). Objects 1 and 2 in (c) are obstacles which are to be avoided by the gripper while grasping the target object 3. The 6D pose detection problem becomes a 1-D linear search along the major axis $\mathbf{a}_1$ for a volume satisfying the gripper constraints.

Once these object boundaries are identified, the geometrical attributes (size, pose etc.) of these objects are computed using Principal Component Analysis wold1987principal . The information obtained from this step is further used to localize the graspable regions on a given object surface. The grasping problem involves a search in a 6D space, which is a computationally intensive task. The dimensionality of this search space is commonly reduced by imposing constraints through primitive shape fitting jain2016grasp or by using superquadrics vezzani2017grasping Decomposition:Grasp or other functions to model the object shape. In contrast to these methods, we convert the 6D search problem into a one-dimensional search problem by exploiting the geometrical attributes of the surface as well as the gripper. This simplification is achieved without any complex mathematical machinery and generalizes to any kind of object without requiring a priori knowledge. The problem formulation and the solution details are described next.

3 Problem Definition

In this paper, we look into the problem of finding graspable affordances for a two-finger parallel-jaw gripper in a 3D point cloud obtained from a single view of a range or RGBD sensor. The affordances are to be computed in an extreme clutter scenario where many objects could be partially occluded. The problem is solved by taking a geometric approach in which the geometry of the robot gripper is utilized to simplify the problem.

The various geometrical parameters corresponding to the gripper and the object to be grasped are shown in Figure 1(a). The maximum hand aperture, denoted by $a_{max}$, is the maximum diameter that can be grasped by the robot hand. It should be greater than the diameter of a cylinder encircling the object. It is further assumed that each finger of the gripper has a width $w_f$, thickness $t_f$ and total length $l_f$. The minimum length of finger contact needed for grasping an object successfully is assumed to be $l_{min}$. There has to be sufficient clearance between objects so that a gripper can make contact with the target object without colliding with its neighbors. Let this minimum clearance needed between two objects be $c_{min}$; it should be more than the width of each finger, i.e., $c_{min} > w_f$, to avoid collision with non-target objects while making a grasping manoeuvre. This clearance is shown as a blue ring in Figure 1(c). Each object surface is associated with three principal axes, namely, the normal $\mathbf{n}$ to the surface and two in-plane principal axes: the major axis $\mathbf{a}_1$, orthogonal to the plane of finger motion (the gripper closing plane), and the minor axis $\mathbf{a}_2$, which is orthogonal to the other two axes, as shown in Figure 1(b). Readers can refer to pas2015using to understand some of the terms which are used here without being defined, to avoid repetition.
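As a minimal illustration of how these parameters are used, the sketch below collects them in a small Python structure with the two basic feasibility checks; the class and method names are our own assumptions, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class GripperGeometry:
    a_max: float   # maximum hand aperture
    w_f: float     # finger width
    t_f: float     # finger thickness
    l_f: float     # total finger length
    l_min: float   # minimum finger length needed on the object

    def clearance_ok(self, gap: float) -> bool:
        # A finger can slide between two objects only if the gap between
        # them exceeds the finger width (c_min > w_f).
        return gap > self.w_f

    def fits_aperture(self, extent: float) -> bool:
        # The object's extent along the closing direction must be smaller
        # than the maximum hand aperture a_max.
        return extent < self.a_max
```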

The proposed grasp pose detection algorithm takes a 3D point cloud and a geometric model of the robot hand as input and produces a six-dimensional grasp pose, or handle. The six-dimensional grasp pose is represented by the vector $h = (\mathbf{p}, \mathbf{o})$, where $\mathbf{p} = (x, y, z)$ is the point where a closing plane of the gripper and the object surface seen by the robot camera intersect, and $\mathbf{o} = (\theta_x, \theta_y, \theta_z)$ is the orientation of the gripper handle with respect to a global coordinate frame. Searching for a suitable 6-DOF grasp pose is a computationally intensive task and hence a practical approach is taken where the search space is reduced by applying several constraints. For instance, it is assumed that the gripper approaches the object along a plane which is orthogonal to the object surface seen by the robot camera. In other words, the closing plane of the gripper is normal to the object surface, as shown in Figure 1(b). Since the mean depth of the object surface is known, the pose detection problem becomes a search for three-dimensional bands of depth $l_{min}$ along the major axis $\mathbf{a}_1$, where $l_{min}$ is the minimum depth necessary for holding the object. Hence, grasp pose detection becomes a one-dimensional search problem once an object surface is identified.
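Under these assumptions, the 6D pose follows directly from the segment axes computed earlier; the sketch below assembles it as a homogeneous transform (the function name and the sign conventions are our own illustrative choices).

```python
import numpy as np

def grasp_pose_from_segment(centroid, major, minor, normal):
    """Build a 4x4 grasp pose from the segment axes, assuming the gripper
    approaches along -normal with its closing plane containing the minor
    axis. Axis signs may need flipping to keep a right-handed frame."""
    R = np.column_stack([major, minor, -normal])  # columns: gripper x, y, z
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = centroid                           # grasp point p = (x, y, z)
    return T
```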

Hence, the problem of computing graspable affordances (grasp pose detection) boils down to two steps: (1) creating surfaces in the 3D point cloud and (2) applying the geometric constraints of a two-finger parallel-jaw gripper to reduce the search space for finding a suitable gripper hand pose. The details of the proposed method for solving these two problems are described in the next section.

4 Proposed Method

As explained in the previous section, the proposed method for finding graspable affordances involves two steps: (1) creating continuous surfaces in the 3D point cloud and (2) applying geometrical constraints to search for suitable gripper poses on these surfaces. These steps are described in the subsections that follow.

4.1 Creating Continuous Surfaces in 3D point cloud

Figure 2: Defining an edge point. It is a point on the surface around which the surface normals are widely scattered in different directions.

The method involves creating several surface patches in the 3D point cloud using the region growing algorithm vosselman2004recognising rabbani2006segmentation . The angle between surface normals is taken as the smoothness condition and is denoted by the symbol $\theta$. The process starts from one seed point, and the points in its neighbourhood are added to the current region (or label) if the angle between the surface normal of the new point and that of the seed point is less than a user-defined threshold. The procedure is then repeated with these neighboring points as the new seed points. This process continues until every point has been labeled to one region or another. The quality of the segmentation heavily depends on the choice of this threshold value: a very low value may lead to over-segmentation and a very high value may lead to under-segmentation. The presence of sensor noise further exacerbates this problem, leading to spurious edges when only one threshold is used. This limitation of the standard region growing algorithm is overcome by introducing a concept called edge points and using a pair of thresholds instead of one. The use of two thresholds is inspired by a similar technique used in the Canny edge detector chen2008double jie2012improved and is demonstrated to provide robustness against spurious edges. This modified version of the region growing algorithm is described next.

Figure 3: The update criteria of the proposed region growing algorithm illustrated on a cuboid. Two cases are shown: one on a face (A) and the other on a boundary (B). Noisy data leads to errors in the normal directions as shown in (c), compared to the ideal input data in (b), resulting in either spurious boundaries or undetected boundaries. Our method combines the boundary condition with two thresholds (d) in order to achieve better performance.

To begin, we first describe the concept of edge points and then explain how a pair of thresholds on the smoothness condition can improve the performance of the standard region growing algorithm. Some of the notation used for describing the proposed method is as follows; refer also to Figure 2. Let us consider a seed point $s$ with its own spherical neighborhood, shown as a circle in Figure 2. It is further assumed that this neighborhood consists of $n$ points $p_i$, $i = 1, 2, \dots, n$, of the 3D point cloud $\mathcal{P}$. Mathematically, this neighborhood may be written as follows:

$$\mathcal{N}(s) = \{\, p_i \in \mathcal{P} : \| p_i - s \| < r \,\} \qquad (1)$$

where $r$ is a user-defined radius of the spherical neighborhood. Each neighboring point $p_i$ has an associated surface normal $\mathbf{n}_i$ which makes an angle $\theta_i$ with the normal $\mathbf{n}_s$ associated with the seed $s$. As stated earlier, $\theta$ is the smoothness condition for the region growing algorithm. In this context, we define two thresholds $\theta_{low}$ and $\theta_{high}$ (with $\theta_{low} < \theta_{high}$) which are used for deciding the region label of a neighboring point and for creating new seeds for further propagation. Let $Q$ be the set of new seeds which will be used in the next iteration of the region growing algorithm. Before describing the modification to the standard region growing algorithm, it is necessary to introduce the concept of edge points, defined as follows:

Definition 1 (Edge Point)

Let $\mathcal{N}_e(s)$ be the set of those neighbors of the seed point $s$ for which $\theta_i > \theta_{high}$. In other words,

$$\mathcal{N}_e(s) = \{\, p_i \in \mathcal{N}(s) : \theta_i > \theta_{high} \,\} \qquad (2)$$

Let $n_e$ be the cardinality of the set $\mathcal{N}_e(s)$, i.e., $n_e = |\mathcal{N}_e(s)|$. Then, a seed point $s$ will be called an edge point if the following condition is satisfied:

$$n_e > \kappa \, n \qquad (3)$$

where $\kappa \in (0, 1)$ is a user-defined fraction. The set of all edge points for a given point cloud is denoted by the symbol $\mathcal{E}$, where $\mathcal{E} \subset \mathcal{P}$.

An empirically chosen value of $\kappa$ is found to be effective in providing better segmentation of surfaces, as will be shown later in this section. Essentially, an edge point is a point on the edge of a surface where a majority of its neighbors have surface normals scattered in all directions; for such a seed point, many neighboring points have angles $\theta_i > \theta_{high}$ as defined above. One such edge point is shown in Figure 3 as point B. An edge point differs from a non-edge point in that the latter lies away from an edge and its neighbors have surface normals pointing in more or less the same direction. One such non-edge point is shown as point A in Figure 3. Even with sensor noise, the neighboring points around such a seed point will have surface normals with small angles with respect to the surface normal of the seed point, i.e., $\theta_i < \theta_{high}$ for most neighbors.
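Following this definition, a seed can be classified by counting how many of its neighbors' normals deviate beyond $\theta_{high}$. The sketch below illustrates the test; the helper name and the absolute-value guard against normal sign ambiguity are our own assumptions.

```python
import numpy as np

def is_edge_point(seed_normal, neighbor_normals, theta_high, kappa):
    """Return True if the seed qualifies as an edge point: more than a
    fraction kappa of its neighbors have normals that deviate from the
    seed normal by more than theta_high (radians)."""
    cos_angles = np.clip(np.abs(neighbor_normals @ seed_normal), 0.0, 1.0)
    angles = np.arccos(cos_angles)              # theta_i for each neighbor
    n_e = int(np.sum(angles > theta_high))      # |N_e(s)| as in Eq. (2)
    return n_e > kappa * len(neighbor_normals)  # condition (3)
```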

Now, in the region growing algorithm starting with the seed point $s$, the label $\mathcal{L}(p_i)$ of a neighboring point $p_i$ is assigned as follows:

$$\text{if } \theta_i < \theta_{low}: \quad \mathcal{L}(p_i) \leftarrow \mathcal{L}(s), \quad Q \leftarrow Q \cup \{p_i\} \qquad (4)$$

where the notation $Q \leftarrow Q \cup \{p_i\}$ indicates that the point $p_i$ is added to the list of seed points which will be used by the region growing algorithm in the next iteration. However, if the angle between the normals lies between the two thresholds, i.e., $\theta_{low} \le \theta_i \le \theta_{high}$, the label of the neighboring point is assigned as follows:

$$\mathcal{L}(p_i) \leftarrow \mathcal{L}(s); \quad Q \leftarrow Q \cup \{p_i\} \text{ only if } s \notin \mathcal{E} \qquad (5)$$
The above equation states that while the neighboring point $p_i$ is assigned the same label as the seed point $s$, it is not considered as a new seed point if the current seed is an edge point. This allows the region growing algorithm to terminate at the edges of each surface, where there is a sudden and large change in the direction of the surface normals, thereby recovering the natural boundaries of the objects. The above process for deciding the labels of neighboring points is demonstrated pictorially in Figure 3, which also shows how a pair of thresholds is effective in dealing with sensor noise, thereby eliminating spurious edges. The effect of this modified version of the region growing algorithm on object segmentation can be seen clearly in Figure 4. Figures 4(a) and (b) show the segmentation obtained with only one threshold: in the first case, the lower cut-off threshold $\theta_{low}$ is used, while in the latter, the upper cut-off threshold $\theta_{high}$ is used. As discussed earlier, a low threshold value leads to over-segmentation and may generate multiple patches even on the same surface. On the other hand, a high threshold value leads to under-segmentation, where different surfaces of a rectangular box may be identified as a single surface patch. In contrast to these two cases, the use of two thresholds provides better segmentation, leading to the creation of two separate surfaces, one for each face of the rectangular box.
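Putting Eqs. (4)-(5) together, a minimal sketch of the modified region growing loop might look as follows. It reuses the `is_edge_point` helper from the previous sketch; the `neighbors` callback (e.g., backed by a k-d tree) and all other names are our own assumptions.

```python
import numpy as np
from collections import deque

def region_growing(normals, neighbors, theta_low, theta_high, kappa):
    """Label points into smooth segments with the double-threshold rule:
    angles below theta_low always propagate (Eq. 4); angles between the
    thresholds are labeled but propagate only from non-edge seeds (Eq. 5).
    neighbors(i) returns the indices of the points in N(p_i)."""
    n = len(normals)
    labels = -np.ones(n, dtype=int)           # -1 means "not yet labeled"
    edge = np.array([is_edge_point(normals[i], normals[neighbors(i)],
                                   theta_high, kappa) for i in range(n)])
    region = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        labels[start] = region
        queue = deque([start])                # Q, the set of new seeds
        while queue:
            s = queue.popleft()
            for j in neighbors(s):
                if labels[j] != -1:
                    continue
                theta = np.arccos(np.clip(abs(normals[s] @ normals[j]),
                                          0.0, 1.0))
                if theta < theta_low:         # Eq. (4): label and propagate
                    labels[j] = region
                    queue.append(j)
                elif theta <= theta_high:     # Eq. (5): label, but an edge
                    labels[j] = region        # seed does not grow further
                    if not edge[s]:
                        queue.append(j)
        region += 1
    return labels
```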

This modified version of the region growing algorithm allows us to find graspable affordances for rectangular box-type objects, which has hitherto been difficult. For instance, the authors in Pas2013LocalizingGA ten2016localizing find graspable affordances only for objects with cylindrical or spherical shapes as they rely on curve-fitting methods. In pas2015using plattgrasppose2016 , the authors use a trained SVM to identify rectangular edges using HoG features, and pre-defined hand poses are used for grasping objects at these detected regions. Compared to these approaches, the proposed method is much simpler: it does not require any training phase and can be implemented in real time. More details about the real-time implementation are provided in the experiment section later in this paper. The surfaces identified in this section are then used to find valid graspable regions on the object, as described in the next section.

Figure 4: Effect of double thresholds on the smoothness condition in the region growing algorithm. (a) Using only the lower threshold leads to over-segmentation: multiple, discontinuous patches on the same surface. (b) Using only the upper threshold leads to under-segmentation: different surfaces (having different normals) are merged into a single surface. (c) Using double thresholds provides better surface segmentation than either single-threshold case. (d) The case when only one threshold is used: two orthogonal surfaces of the object get merged into one continuous surface. (e) The edge points, shown in blue. (f) The use of double thresholds leads to the creation of two surfaces for the rectangular object.

5 Finding Graspable Affordances

Once the surface segments are created, the grasping algorithm needs to find suitable handles which can be used by the gripper for picking objects. This is otherwise known as the problem of grasp pose detection plattgrasppose2016 , which essentially aims at finding the six-dimensional pose of the gripper necessary for making a stable grasping contact with the object. This is a computationally intensive task, as one has to search in a six-dimensional pose space. The search is broadly handled in two ways. In one approach, the object to be picked is matched with its CAD model; once a match is found, the geometric parameters of the object model are used to compute the 6-DOF gripper pose directly. As CAD models may not always be available, the objects are generally approximated with basic shape primitives jain2016grasp somani2014shape behnke2012shape or superquadric models vezzani2017grasping . While these methods take a 3D point cloud as input, other methods work with RGBD data: they take the color and depth information as images and apply a sliding-window search at different scales to find valid grasping regions lenz2015deepgrasp johns2016deep . We simplify this search problem, first by grouping similar points based on the boundaries obtained in the previous step and, second, by making some practical assumptions about the grasping task. As described earlier in Section 3, the gripper is assumed to approach the object in a direction opposite to the surface normal of the object. It is also assumed that the gripper closing plane coincides with the minor axis of the surface segment under consideration, as shown in Figure 1(b). In this way, the orientation part of the 6D pose problem is fixed in a single step and can be computed in real time. However, it is still necessary to identify suitable regions on the surface segments that can fit within the fingers of the gripper while ensuring that the gripper does not collide with neighbouring objects. In other words, one still needs to search for a three-dimensional volume, determined by the gripper dimensions, around the centroid of the object segment, as shown in Figure 1(b). This requires carrying out a linear search along the three principal axes of the surface to find regions that meet this bounding-box constraint. These regions are the graspable affordances of the object to be picked by the gripper. The details of the search process are described next.

Let us assume that the region growing algorithm described in the previous section leads to the creation of $m$ segments $S_j$, $j = 1, \dots, m$, in the 3D point cloud $\mathcal{P}$. As a first step, we extract the following parameters for each of these segments $S_j$:

  • The centroid of the segment, $\mathbf{c}_j$.

  • The associated surface normal vector, $\mathbf{n}_j$.

  • The first two dominant directions obtained from Principal Component Analysis (PCA) and their corresponding lengths. These two axes correspond to the vectors $\mathbf{a}_1$ and $\mathbf{a}_2$, respectively, in Figure 1(b).

The search for suitable handles starts from the centroid of the surface and proceeds along the three principal axes, i.e., the major axis $\mathbf{a}_1$, the minor axis $\mathbf{a}_2$ and the surface normal $\mathbf{n}$. In order to do this, the 3D points in the original point cloud corresponding to the surface segment under consideration are projected onto these new axes $(\mathbf{a}_1, \mathbf{a}_2, \mathbf{n})$, as shown in Figure 5. Every point in the original coordinate system that lies within a sphere of radius $R$ around the centroid thus yields a vector in the new coordinate system $(\mathbf{a}_1, \mathbf{a}_2, \mathbf{n})$. The radius of the sphere is selected to be half of the maximum hand aperture of the gripper, i.e., $R = a_{max}/2$. The remaining axes are normal to the plane of the paper and hence are not displayed in the figure.
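The change of coordinates is a plain scalar projection; a minimal numpy sketch (with our own helper name) is given below.

```python
import numpy as np

def project_to_segment_frame(points, centroid, major, minor, normal, R):
    """Project the 3D points lying within a sphere of radius R (= a_max/2)
    around the centroid onto the segment axes (a1, a2, n). Returns a Kx3
    array of coordinates in the new frame."""
    rel = points - centroid
    inside = np.linalg.norm(rel, axis=1) < R   # keep points inside the sphere
    axes = np.column_stack([major, minor, normal])
    return rel[inside] @ axes                  # scalar projections per axis
```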

Through this scalar projection, the three-dimensional search problem is converted into three one-dimensional search problems, which is computationally much simpler. The search is first performed along the directions $\mathbf{a}_1$ and $\mathbf{n}$. All the points that lie within a distance $w_f/2$ of the centroid along the major axis are considered to be part of the gripper handle. Similarly, all points of the surface that lie within a distance $l_{min}$ from the top surface along the direction $\mathbf{n}$ are considered to be part of the gripper handle. Note that $w_f$ and $l_{min}$ are the width and the length of the gripper fingers needed for holding the object. Once these two boundaries are defined, we obtain a horizontal patch of points extending along the minor axis, as shown in Figure 6 (c), (e) and (f). We then need to find the boundary along the minor axis to see if the patch would fit within the gripper finger gap. This is done by searching for a gap along the minor axis which is at least as large as a given user-defined threshold, which itself depends on the thickness of the gripper finger. The idea is that there should be sufficient clearance between two objects to avoid collision with the neighboring objects. This is illustrated in Figure 6, and the working of the search process can be understood by analyzing this figure, as explained in the following paragraph.

Figure 6(a) shows two objects which have been kept adjacent to each other such that their boundaries touch. Figure 6(b) shows the surface segments obtained using the proposed region growing algorithm. The objective is to find a suitable graspable handle for the cylindrical object. Figure 6(c) shows the horizontal patch obtained using the linear search explained above. Since there is no gap along the minor axis (shown in red), the regions belonging to both objects within the yellow band get included in the graspable region. The total horizontal length of this band may exceed the maximum hand aperture $a_{max}$ of the gripper, making it an invalid grasping handle for the object. The next band on top of the last band is then taken into consideration. In this case, a gap is found immediately around the boundary of the cylindrical surface along the minor axis, as shown in Figure 6(e). Since this length along the red axis fits within the gripper hand, it is considered a valid handle for the object. Figures 6(g)-(h) show the case when the two objects are kept apart. In this case, the gap is found along the minor axis and hence the handle for the bottle is detected successfully without any further search. Hence the search process involves four steps (a code sketch follows the list):

  1. Project all the points of the surface segment within a spherical radius of $R = a_{max}/2$ onto the axes $\mathbf{a}_1$, $\mathbf{a}_2$ and $\mathbf{n}$.

  2. Fix the boundary along the major axis at a distance of $w_f/2$ on either side of the centroid of the patch under consideration.

  3. Fix the boundary along the normal axis at a distance of $l_{min}$ from the top surface.

  4. Search for a gap along the minor axis on either side of the centroid. If this gap is greater than or equal to the clearance threshold, the search is stopped. The resulting patch is considered a valid grasping handle if its total length along the minor axis is less than the maximum hand aperture $a_{max}$ of the gripper.
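A hedged sketch of these four steps, operating on the projected coordinates from `project_to_segment_frame` and a `GripperGeometry` instance from the earlier sketches; the gap-scanning logic is our own simplified reading of the procedure.

```python
import numpy as np

def find_handle(proj, gripper, gap_threshold):
    """Search the projected Kx3 points (columns: a1, a2, n) for a valid
    handle following steps 1-4; returns the (lo, hi) extent of the handle
    along the minor axis, or None if no valid handle exists."""
    a1, a2, nrm = proj[:, 0], proj[:, 1], proj[:, 2]
    # Steps 2-3: band of half-width w_f/2 about the centroid along the
    # major axis, within depth l_min of the top surface along the normal.
    band = (np.abs(a1) <= gripper.w_f / 2.0) & (nrm >= nrm.max() - gripper.l_min)
    if not band.any():
        return None
    m = np.sort(a2[band])                     # minor-axis coordinates
    # Step 4: cut the band at the nearest gap >= gap_threshold on each
    # side of the centroid (coordinate 0 in the segment frame).
    lo, hi = m[0], m[-1]
    for g in np.flatnonzero(np.diff(m) >= gap_threshold):
        if m[g + 1] <= 0.0:
            lo = m[g + 1]                     # last gap left of the centroid
        elif m[g] >= 0.0 and hi == m[-1]:
            hi = m[g]                         # first gap right of the centroid
    # Valid only if the remaining extent fits inside the hand aperture.
    return (lo, hi) if (hi - lo) < gripper.a_max else None
```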

If the current patch fails to satisfy the gripper constraints, a new patch along the major axis, on either side of the center patch, is analyzed for validity. It is therefore possible to obtain multiple handles on the same object, which is very useful, as the robot motion planner may not be able to provide a valid end-effector trajectory for a given graspable affordance. The gripper approaches the object at the centroid of the yellow patch shown in Figure 6(c) or (e), along the direction opposite to the surface normal (shown in blue in Figure 6(d) or (f) respectively), with its gripper closing plane coinciding with the minor axis (shown in red). As one can appreciate, the pose detection problem is solved by a simple method that converts a 6D search problem into a 1-D search problem. This is much faster than other methods such as vezzani2017grasping that use complex optimization to arrive at the same conclusion. The proposed method provides a remarkable improvement over the state-of-the-art method Pas2013LocalizingGA ten2016localizing , which performs much worse in a cluttered environment, as will be shown in the next section.

Figure 5: Scalar projection of 3D cloud points onto a new coordinate frame. The 3D points of the surface segment lying within the sphere of radius $R$ are projected onto the axes of the new coordinate frame $(\mathbf{a}_1, \mathbf{a}_2, \mathbf{n})$. These projected points are used for finding suitable graspable affordances for the object. The remaining axes are perpendicular to the plane of the paper and point outward.
Figure 6: Searching for a suitable grasping handle. (a) Actual picture of a case where objects are stacked very close to each other; (b) segmented point cloud obtained after applying region growing on surface normals; (c)-(d) a suitable handle is not found, as no discontinuity is detected along the horizontal axis; (e)-(f) a suitable handle is found for another patch on the same object; (g)-(h) a suitable handle is found when the objects are separated, as a discontinuity is detected along the red axis.

6 Experimental Results

In this section, we provide the results of various experiments performed to establish the usefulness of the proposed algorithm in comparison to existing state-of-the-art methods. As explained before, our focus is on finding suitable graspable affordances for various household items. The input to our algorithm is a 3D point cloud obtained from an RGBD or range sensor and the output is a set of graspable affordances comprising graspable regions and the gripper poses required to pick the objects. We have tested our algorithm on datasets obtained using Kinect zhang2012microsoft , RealSense draelos2015intel and Ensenso ensenso depth sensors. An additional smoothing pre-processing step is applied to the Ensenso point clouds, which are otherwise quite noisy compared to those obtained using either the Kinect or the RealSense sensor. As we will demonstrate shortly, we consider grasping of individual objects as well as objects in an extremely cluttered environment. The performance of the proposed algorithm is compared with other methods on seven different datasets, namely, (1) the BigBIRD dataset singh2014bigbird , (2) the Cornell Grasping dataset lenz2015deepgrasp , (3) the ECCV dataset Aldoma2012 , (4) the Kinect dataset SegIROS11 , (5) the Willow Garage dataset willow , (6) the TCS Grasping Dataset-1 and (7) the TCS Grasping Dataset-2. The last two were created by us as a part of this work and are made available online tcsgraspdata2018 along with the program source code for the convenience of users. A snapshot of images from these two datasets is shown in Figure 7. The first TCS dataset contains 382 frames, each having only a single object in view inside the bin of a rack, where the view could be slightly constrained due to poor illumination. The second dataset consists of 40 frames with multiple objects in an extreme clutter environment. Each dataset contains RGB images, point cloud data (as .pcd files) and annotations in text format. These datasets exhibit more difficult real-world scenarios than what is available in the existing datasets. The algorithm is implemented on a Linux laptop with an i7 processor and 16 GB RAM.

6.1 Performance Measure

Different authors use different parameters to evaluate the performance of their algorithms. For instance, the authors in plattgrasppose2016 use recall at high precision as a measure while a few others, as in lenz2015deepgrasp , use accuracy. In some cases, accuracy may not be a good measure for grasping algorithms because the number of true negatives in a grasping dataset is usually much larger than the number of true positives. So the accuracy could be high even when the number of true positives (actual handles detected) is low (i.e., the precision is low). Other researchers, as in pas2015using jain2016grasp vezzani2017grasping , use success rate as a performance measure, defined as the number of times a robot is able to successfully pick an object in a physical experiment. The success rate is usually directly linked to the precision of the algorithm, as false detections or mistakes could be detrimental to the robot operation. In other words, a grasping algorithm with high precision is expected to yield a high success rate. Precision is usually defined as the fraction of the total number of detected handles which are true. However, in a cluttered scenario, precision may not always be an effective measure of the performance of a grasping algorithm. For instance, it is possible to detect multiple handles for some objects and no handles at all for others without affecting the total precision score: the fact that no handles are detected for a set of objects may not have any effect on the final score as long as there are other objects for which more than one handle is detected.

In our case, the precision is considered to be 100% as any handle that does not satisfy the gripper and environment constraints is rejected. In order to address the concerns mentioned above, we use recall at high precision as the measure of performance of our algorithm, defined as the fraction of the total number of graspable objects for which at least one valid handle is detected. Mathematically, it can be written as

$$\text{recall} = \frac{\text{number of graspable objects with at least one valid handle detected}}{\text{total number of graspable objects}} \qquad (6)$$

The total number of graspable objects includes the objects which could actually be picked up by the robot gripper in a real-world experiment; it excludes the objects in the clutter which cannot be picked up due to substantial occlusion. This forms the ground truth for the experiment. Note that the above definition is slightly different from the conventional definition of recall in the sense that the latter may count multiple handles for a given object, which are not counted in our definition. We analyze and compare the performance of our algorithm with an existing state-of-the-art algorithm using this new metric, as described in the next section.
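For concreteness, a tiny sketch of Eq. (6) as code (the mapping-based interface is our own choice):

```python
def recall_at_high_precision(handles_per_object):
    """Eq. (6): fraction of graspable (ground-truth) objects for which at
    least one valid handle was detected. handles_per_object maps each
    graspable object id to its number of detected handles."""
    detected = sum(1 for count in handles_per_object.values() if count >= 1)
    return detected / len(handles_per_object)
```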

Figure 7: Snapshot of frames in TCS Grasping datasets 1 and 2. Each dataset consists of images and point cloud data files along with annotations in text files.
(a) Toothpaste
(b) Fevicol
(c) Battery
(d) Sprout Brush
(e) Cup
(f) Cleaning Brush
Figure 8: Finding graspable affordances for a few objects inside a rack. The objects in (d), (e) and (f) show a few cases where it is difficult to find graspable affordances.

6.2 Grasping of Individual Objects

First, we demonstrate the performance of the proposed algorithm in picking individual objects. Table 1 shows the performance of the proposed algorithm on TCS dataset 1. This dataset has 382 frames along with the corresponding 3D point cloud data and ground-truth annotations. A snapshot of the objects present in this dataset is shown in Figure 7. The performance of our proposed algorithm on this dataset is compared with Platt’s algorithm reported in Pas2013LocalizingGA ten2016localizing . As one can see in Table 1, the proposed algorithm is able to find graspable affordances for objects in a larger number of frames and hence is more robust than the previous approach. On average, our algorithm is able to detect handles in 94% of the frames, compared to Platt’s approach which detects handles in only 51% of the frames. This can be attributed to the fact that Platt’s algorithm relies primarily on surface curvature to find handles and hence cannot deal with rectangular objects with flat surfaces. They try to overcome this limitation in pas2015using by training an SVM classifier to detect valid grasps out of a number of hypotheses created using HoG features. Compared to this approach, our proposed method is much simpler to implement, as it does not require any training, and can be implemented in real time. It also does not depend on image features, which are more susceptible to various photometric effects. Some of the handles detected by our algorithm for individual objects are shown in Figure 8. Examples (a)-(c) show a few instances of simple objects where it is easier to find affordances, while (d)-(f) show a few difficult objects for which finding a suitable handle is challenging.

Object | Total number of frames | % frames with a valid handle (Platt’s Method ten2016localizing) | % frames with a valid handle (Proposed Method)
Toothpaste | 40 | 38 | 90
Cup | 50 | 70 | 96
Dove Soap | 40 | 25 | 100
Fevicol | 40 | 75 | 92
Battery | 50 | 36 | 98
Clips | 21 | 45 | 90
CleaningBrush | 40 | 30 | 90
SproutBrush | 21 | 63 | 95
Devi Coffee | 40 | 76 | 93
Tissue Paper | 40 | 40 | 96
Total | 382 | 51 | 94
Table 1: Performance Comparison for TCS Grasping Dataset 1 - Individual Objects

6.3 Grasping Objects in a Clutter

In this section, we demonstrate the performance of our proposed algorithm in a cluttered environment. A new dataset, called ‘TCS Grasp Dataset 2’, was created for this purpose; it contains 40 frames, each showing multiple objects in an extreme clutter situation. The objects in the clutter have different shapes and sizes and may exhibit partial or full occlusion. The performance of our algorithm on some of these frames is shown in Figure 9, and the comparison with Platt’s algorithm ten2016localizing Pas2013LocalizingGA is shown in Figure 10. As one can see in Figure 9, the proposed algorithm is successful in finding graspable affordances for rectangular objects with flat surfaces, such as books, in addition to objects with curved surfaces. It also finds multiple handles for some of the objects. All handles which do not satisfy the geometric constraints of the gripper are rejected and hence not shown in this figure. The maximum hand aperture considered for finding these affordances is 8 cm. In contrast, Platt’s algorithm ten2016localizing Pas2013LocalizingGA fails to detect any handles for flat rectangular objects, as shown in Figure 10. Table 2 provides a more quantitative comparison between the two algorithms: the proposed algorithm is able to detect at least 86% of the unique handles in the dataset, compared to the 36% recall achieved with Platt’s algorithm. The performance of the two algorithms on various publicly available datasets is summarized in Table 3. The Cornell Grasping Dataset lenz2015deepgrasp contains a single object per frame with grasping rectangles as ground truth. Their best reported result (93.7%) is in terms of accuracy, whereas the recall of our method is 96% at 100% precision. The BigBIRD dataset singh2014bigbird consists of segmented individual objects and yields a maximum recall of 99%. This high level of performance is due to the fact that the object point cloud is segmented and processed for noise removal. This dataset, as such, does not include clutter and has been included in this section for the sake of completeness. The ECCV dataset Aldoma2012 , the Kinect dataset SegIROS11 and the Willow Garage dataset willow have multiple objects per frame and may exhibit a low level of clutter. All of these datasets were created for either segmentation or pose estimation purposes, so ground truth for grasping is not provided; we have evaluated the performance reported in Table 3 using manual annotation. The extent of clutter in these datasets is not comparable to what one encounters in a real-world scenario, which is one of the reasons why we had to create our own dataset. As one can see in Figure 7(b), the TCS Grasp Dataset 2 exhibits extreme clutter scenarios. As one can observe in Table 2, the proposed algorithm provides better grasping performance than the current state-of-the-art reported in the literature.

Figure 9: Finding graspable affordances in extreme clutter. The proposed algorithm is capable of finding graspable affordances for rectangular objects as well as objects with curved surfaces. The maximum hand aperture ($a_{max}$) considered here is 8 cm.
Figure 10: Visual comparison of the performance of the proposed algorithm with Platt’s algorithm Pas2013LocalizingGA on TCS Dataset 2. The cyan coloured patches in the left-hand figures are the handles detected using Platt’s algorithm. The patches in the right-hand figures, along with the gripper poses, show the affordances obtained using the proposed algorithm.
Frame No. | No. of graspable objects in the frame | Platt’s Method Pas2013LocalizingGA ten2016localizing: max no. of handles detected | % Recall | Proposed Method: max no. of handles detected | % Recall
#1 | 8 | 2 | 25 | 6 | 75
#3 | 8 | 3 | 38 | 6 | 75
#5 | 6 | 3 | 50 | 6 | 100
#7 | 7 | 2 | 28 | 7 | 100
#10 | 6 | 3 | 50 | 5 | 83
#12 | 7 | 2 | 28 | 7 | 100
#13 | 7 | 2 | 28 | 7 | 100
#16 | 8 | 1 | 13 | 6 | 75
#20 | 8 | 2 | 25 | 6 | 75
#23 | 9 | 2 | 22 | 8 | 89
#24 | 6 | 3 | 50 | 5 | 83
#26 | 5 | 3 | 60 | 3 | 60
#28 | 5 | 2 | 40 | 5 | 100
#30 | 6 | 2 | 33 | 6 | 100
#32 | 6 | 2 | 33 | 5 | 83
#36 | 5 | 1 | 20 | 4 | 80
#37 | 5 | 1 | 20 | 5 | 100
#38 | 2 | 2 | 100 | 2 | 100
#39 | 4 | 2 | 50 | 3 | 75
Total | 118 | 40 | 33 | 102 | 86
Table 2: Performance Comparison for TCS Dataset 2 - Multiple Objects in a Cluttered Environment
S. No. | Dataset | % Recall (Proposed Method) | % Recall (Platt’s Algorithm Pas2013LocalizingGA ten2016localizing)
1 | BigBIRD singh2014bigbird | 99% | 85% plattgrasppose2016
2 | Cornell Dataset lenz2015deepgrasp | 95.7% | 93.7% lenz2015deepgrasp
3 | ECCV Aldoma2012 | 93% | 53%
4 | Kinect Dataset SegIROS11 | 91% | 52%
5 | Willow Garage willow | 98% | 60%
6 | TCS Dataset-1 tcsgraspdata2018 | 94% | 48%
7 | TCS Dataset-2 tcsgraspdata2018 | 85% | 34%
Table 3: Performance Comparison on Various Publicly Available Datasets

6.4 Computation Time

The computational performance of the algorithm can be assessed from Table 4, which shows the average computation time per frame for the two TCS datasets. As one can observe, the bulk of the time is taken by the region growing algorithm, the first step of our proposed method; this time is proportional to the size of the point cloud. The second stage of our algorithm detects valid handles by applying the geometric constraints to the surface segments found in the first step and is considerably faster. Many of the segments created in the first step are rejected in the second step while identifying valid grasping handles, as can be seen in the 4th and 5th columns of this table. The computation time per valid handle for the two datasets is approximately 5.5 ms and 4 ms respectively.

The total processing time for a complete frame with around 40K data points is approximately 800 ms to 1 second. This is quite reasonable in the sense that the robot can process around 60 frames per minute, which is adequate for most industrial applications. This time can be further reduced by detecting a particular region of interest (ROI) within the image, thereby reducing the number of points to be processed in the frame. The computation time per frame can also be reduced significantly by downsampling the point cloud, although there is a limit to the extent of downsampling allowed, as it directly affects the quality and quantity of the handles detected. For high-speed applications, one may use an FPGA- or GPU-based embedded computing platform.
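As a rough illustration of the downsampling trade-off mentioned above, a simple voxel-grid filter can be written in a few lines of numpy; the function name and the default leaf size are our own assumptions.

```python
import numpy as np

def voxel_downsample(points, leaf=0.005):
    """Keep one representative point per (leaf x leaf x leaf) voxel.
    Larger leaf sizes speed up region growing but degrade handle quality."""
    keys = np.floor(points / leaf).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```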

Dataset | # points in cloud | Region growing time (sec) | # segments detected | # valid handles detected | Handle detection time (sec)
TCS Dataset 1 | 37050 | 0.729 | 77 | 10 | 0.055
TCS Dataset 2 | 42461 | 0.82 | 182 | 43 | 0.171
Table 4: Average computation time per frame. All values are on a per-frame basis and are averaged over all frames.

7 Conclusion

This paper looks into the problem of finding graspable affordances (suitable grasp poses) needed for picking various household objects using a two-finger parallel-jaw gripper in an extreme clutter environment. These affordances are extracted from a single-view 3D point cloud obtained from an RGBD or range sensor, without any a priori knowledge of the object geometry. The problem is solved by first creating surface segments using a modified version of the region growing algorithm based on a surface smoothness condition. This modified version makes use of a pair of user-defined thresholds and the concept of edge points to discard false boundaries arising from sensor noise. The problem of real-time pose detection is simplified by transforming the 6D search problem into a 1D search problem through scalar projection, exploiting the geometry of the two-finger gripper. Through experiments on several datasets, it is demonstrated that the proposed algorithm outperforms the existing state-of-the-art methods in this field. In the process, we have also contributed a new dataset that demonstrates operation in extreme clutter environments; it is being made available online for use by the research community.

References

  • (1) S. Jain, B. Argall, Grasp detection for assistive robotic manipulation, in: 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 2015–2021. doi:10.1109/ICRA.2016.7487348.
  • (2) M. Gualtieri, A. ten Pas, K. Saenko, R. Platt, High precision grasp pose detection in dense clutter, CoRR abs/1603.01564.
    URL http://arxiv.org/abs/1603.01564
  • (3) A. ten Pas, R. Platt, Localizing grasp affordances in 3-D point clouds using Taubin quadric fitting, CoRR abs/1311.3192.
  • (4) A. Ten Pas, R. Platt, Localizing handle-like grasp affordances in 3d point clouds, in: Experimental Robotics, Springer, 2016, pp. 623–638.
  • (5) A. Saxena, J. Driemeyer, A. Y. Ng, Robotic grasping of novel objects using vision, Int. J. Rob. Res. 27 (2) (2008) 157–173. doi:10.1177/0278364907087172.
    URL http://dx.doi.org/10.1177/0278364907087172
  • (6) I. Lenz, H. Lee, A. Saxena, Deep learning for detecting robotic grasps, Int. J. Rob. Res. 34 (4-5) (2015) 705–724. doi:10.1177/0278364914549607.
    URL http://dx.doi.org/10.1177/0278364914549607
  • (7) E. Johns, S. Leutenegger, A. J. Davison, Deep learning a grasp function for grasping under gripper pose uncertainty, in: Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, IEEE, 2016, pp. 4461–4468.
  • (8) A. ten Pas, R. Platt, Using geometry to detect grasps in 3d point clouds, arXiv preprint arXiv:1501.03100.
  • (9) N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in: Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, Vol. 1, IEEE, 2005, pp. 886–893.
  • (10) G. Vosselman, B. G. Gorte, G. Sithole, T. Rabbani, Recognising structure in laser scanner point clouds, International archives of photogrammetry, remote sensing and spatial information sciences 46 (8) (2004) 33–38.
  • (11) T. Rabbani, F. Van Den Heuvel, G. Vosselmann, Segmentation of point clouds using smoothness constraint, International archives of photogrammetry, remote sensing and spatial information sciences 36 (5) (2006) 248–253.
  • (12) S. Wold, K. Esbensen, P. Geladi, Principal component analysis, Chemometrics and intelligent laboratory systems 2 (1-3) (1987) 37–52.
  • (13) G. Vezzani, U. Pattacini, L. Natale, A grasping approach based on superquadric models, in: Robotics and Automation (ICRA), 2017 IEEE International Conference on, IEEE, 2017, pp. 1579–1586.
  • (14) M. Richtsfeld, M. Vincze., Grasping of unknown objects from a table top, in: Workshop on Vision in Action, 2008.
  • (15) I. Gori, U. Pattacini, V. Tikhanoff, G. Metta, Ranking the good points: A comprehensive method for humanoid robots to grasp unknown objects, in: Advanced Robotics (ICAR), 2013 16th International Conference on, IEEE, 2013, pp. 1–7.
  • (16) A. Bicchi, V. Kumar, Robotic grasping and contact: A review, in: Robotics and Automation, 2000. Proceedings. ICRA’00. IEEE International Conference on, Vol. 1, IEEE, 2000, pp. 348–353.
  • (17) B. Kehoe, A. Matsukawa, S. Candido, J. Kuffner, K. Goldberg, Cloud-based robot grasping with the google object recognition engine, in: Robotics and Automation (ICRA), 2013 IEEE International Conference on, IEEE, 2013, pp. 4263–4270.
  • (18) H. O. Song, M. Fritz, C. Gu, T. Darrell, Visual grasp affordances from appearance-based cues, in: Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, 2011, pp. 998–1005. doi:10.1109/ICCVW.2011.6130360.
  • (19) D. Fischinger, M. Vincze, Empty the basket-a shape based learning approach for grasping piles of unknown objects, in: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, pp. 2051–2057.
  • (20) K. M. Varadarajan, M. Vincze, Object part segmentation and classification in range images for grasping, in: Advanced Robotics (ICAR), 2011 15th International Conference on, IEEE, 2011, pp. 21–27.
  • (21) S. Jain, B. Argall, Grasp detection for assistive robotic manipulation, in: Robotics and Automation (ICRA), 2016 IEEE International Conference on, IEEE, 2016, pp. 2015–2021.
  • (22) N. Somani, C. Cai, A. Perzylo, M. Rickert, A. Knoll, Object Recognition Using Constraints from Primitive Shape Matching, Springer International Publishing, Cham, 2014, pp. 783–792. doi:10.1007/978-3-319-14249-4_75.
    URL http://dx.doi.org/10.1007/978-3-319-14249-4_75
  • (23) M. Nieuwenhuisen, J. Stueckler, A. Berner, R. Klein, S. Behnke, Shape-primitive based object recognition and grasping, in: Robotics; Proceedings of ROBOTIK 2012; 7th German Conference on, 2012, pp. 1–5.
  • (24) K. Huebner, D. Kragic, Selection of robot pre-grasps using box-based shape approximation, in: 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008, pp. 1765–1770. doi:10.1109/IROS.2008.4650722.
  • (25) S. Geidenstam, K. Huebner, D. Banksell, D. Kragic, Learning of 2d grasping strategies from box-based 3d object approximations., in: Robotics: Science and Systems, Vol. 2008, 2009.
  • (26) A. Saxena, L. L. Wong, A. Y. Ng, Learning grasp strategies with partial shape information., in: AAAI, Vol. 3, 2008, pp. 1491–1494.
  • (27) L. Pinto, A. Gupta, Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours, in: Robotics and Automation (ICRA), 2016 IEEE International Conference on, IEEE, 2016, pp. 3406–3413.
  • (28) M. Schwarz, H. Schulz, S. Behnke, RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features, in: 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 1329–1335. doi:10.1109/ICRA.2015.7139363.
  • (29) N. J. Mitra, A. Nguyen, Estimating surface normals in noisy point cloud data, in: Proceedings of the nineteenth annual symposium on Computational geometry, ACM, 2003, pp. 322–328.
  • (30) C. Goldfeder, P. K. Allen, C. Lackner, R. Pelossof, Grasp planning via decomposition trees, in: Proceedings 2007 IEEE International Conference on Robotics and Automation, 2007, pp. 4679–4684. doi:10.1109/ROBOT.2007.364200.
  • (31) Q. Chen, Q.-s. Sun, P. A. Heng, D.-s. Xia, A double-threshold image binarization method based on edge detector, Pattern Recognition 41 (4) (2008) 1254–1267.
  • (32) G. Jie, L. Ning, An improved adaptive threshold canny edge detection algorithm, in: Computer Science and Electronics Engineering (ICCSEE), 2012 International Conference on, Vol. 1, IEEE, 2012, pp. 164–168.
  • (33) Z. Zhang, Microsoft Kinect sensor and its effect, IEEE MultiMedia 19 (2) (2012) 4–10.
  • (34) M. Draelos, Q. Qiu, A. Bronstein, G. Sapiro, Intel RealSense = real low cost gaze, in: Image Processing (ICIP), 2015 IEEE International Conference on, IEEE, 2015, pp. 2520–2524.
  • (35) IDS Imaging, Ensenso 3D cameras, https://en.ids-imaging.com/ensenso-stereo-3d-camera.html (2018).
  • (36) A. Singh, J. Sha, K. S. Narayan, T. Achim, P. Abbeel, Bigbird: A large-scale 3d database of object instances, in: Robotics and Automation (ICRA), 2014 IEEE International Conference on, IEEE, 2014, pp. 509–516.
  • (37) A. Aldoma, F. Tombari, L. Di Stefano, M. Vincze, A Global Hypotheses Verification Method for 3D Object Recognition, Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
  • (38) S. G. F. Tombari, L. Di Stefano, RGB-D semantic segmentation dataset, http://vision.deis.unibo.it/fede/kinectDataset.html (2011).
  • (39) Willowgarage test set dataset, https://repo.acin.tuwien.ac.at/tmp/permanent/dataset_index.php.
  • (40) O. Kundu, TCS grasping datasets for individual and multiple objects, https://sites.google.com/view/grasping-tcs-research/home (2018).