Using Synthetic Data and Deep Networks to Recognize Primitive Shapes for Object Grasping

09/12/2019 ∙ by Yunzhi Lin, et al. ∙ Georgia Institute of Technology

A segmentation-based architecture is proposed to decompose objects into multiple primitive shapes from monocular depth input for robotic manipulation. The backbone deep network is trained on synthetic data with 6 classes of primitive shapes generated by a simulation engine. Each primitive shape is designed with parametrized grasp families, permitting the pipeline to identify multiple grasp candidates per shape primitive region. The grasps are priority ordered via a proposed ranking algorithm, with the first feasible one chosen for execution. On task-free grasping of individual objects, the method achieves a 94% success rate. Overall, the method supports the hypothesis that shape primitives can support task-free and task-relevant grasp prediction.


I Introduction

Manipulation is a multi-step task consisting of sequential actions applied to an object, including: perception, path planning, and closure of the gripper, followed by a task-relevant motion with the grasped object [1]. Due to the diversity of objects that a robotic arm could grasp, the grasping process remains an open problem in the field of robotics. Analytical or model-based approaches have trouble addressing this diversity. Recent research has turned to deep learning as a robust means to score or detect grasps.

Deep learning is a data-driven approach typically requiring large datasets to recover the desired input/output function, which for grasping is often an image/grasp pair (permitting multiple grasps per image). Manual annotation for dataset creation tends to be limited in volume [2], leading roboticists to exploit robotic simulators [3, 4] or actual deployment [5] for automatic generation of training data. Many of these emphasize the grasp as opposed to the object, with good reason as object-centric approaches would require creating 3D models or scanning real objects in large volume [6, 7], and concomitantly require a high accuracy detector. Instead, the deep networks aim to generalize across object classes to establish general grasping rules from visual evidence. Implicitly, the deep network is to learn the notion of shape within its internal feature representation.

This paper proposes to complement that idea with a more geometric and explicitly shape-centric approach by employing the notion of primitive shapes. Primitive shapes offer a powerful means to alleviate the data inefficiency problem [8, 9, 10] by abstracting target objects to primitive shapes with a priori known grasp configurations. Most primitive shape methods represent objects as a single shape from a small library [9] or apply model-based rules to deconstruct objects [8]. As a result, they do not handle novel shapes having unmodeled geometry or being the union of primitive shapes.

Building from Yamanobe et al.’s model decomposition idea [8], this paper aims to permit a primitive shapes detector to generalize its grasp strategies to household objects. The detector exploits the state-of-the-art instance segmentation deep network Mask R-CNN [11] trained to segment a depth image according to the primitive shapes it contains. Model-based matching of the primitive shape’s point cloud, from a primitive shape database, recovers candidate grasp families associated to the shape. Grasp candidates are ranked then tested in order to find a feasible grasp.

Synthetic ground truth data based on parametrized sets of primitive shape classes and their grasp families avoids extensive manual annotation. The shape classes are sufficiently representative of object parts associated to household objects yet low enough in cardinality that grasp family modeling is quick. Domain randomization [10, 12] over the parametrized shapes generates a rich set of ground truth input/output data using a robotics simulation engine [13]. The result is a large synthetic dataset composed of different primitive shape combinations, quantities, and layouts.

Modern robotics simulation engines [13, 14] render and simulate virtual environments quite cleanly, often producing sensor imagery with fewer defects than real sensors. Depth sensors like the Kinect v1 lose detail and introduce noise during depth capture [15, 16], leading to a distribution shift or gap between the simulated data and depth sensor output. The gap is addressed in a bi-directional manner by lightly corrupting the simulated data and denoising the depth image data. The intent is to introduce into the simulated images the real sensor artifacts that cannot be removed from the sensor output, and to denoise the real images to better match the simulated images.

Employing the deep network for grasping enables a robotic arm with depth camera to identify primitive shapes within the scene, and to identify suitable grasp options for an object from the shapes. To arrive at the overall system, from training to deployment, the paper covers the following contributions:

- An automated ground truth generation strategy to rapidly generate input/output data. It uses 6 classes of primitive shapes with parametrized grasp families in concert with a robotics simulation engine and domain randomization;

- A deep network, segmentation-based pipeline that first decomposes objects into multiple primitive shapes from a depth image, followed by surface model-fitting and grasp prioritization/selection via a proposed ranking algorithm;

- Experimental testing and evaluation of grasping accuracy using a 7-DoF robotic arm plus depth sensor setup, including both standard and task-oriented benchmarking.

II Related Work

Grasping is a mechanical process which can be described mathematically given prior knowledge of the target object’s properties (geometry, hardness, etc.), the hand contact model, and the hand dynamics. Mechanics-based approaches with analytical solutions work well for some objects but cannot successfully apply to other, often novel, targets [17, 18, 19]. With advances in machine learning, these methods gave way to purely data-driven approaches [20] or combined approaches employing analytic scoring with image-to-grasp learning [3]. Contemporary solutions employ deep learning [21] and leverage available training data.

Deep learning strategies primarily take one of three forms. The first exploits the strong detection or classification capabilities of deep networks to recognize candidate structured grasping representations [22, 23, 24, 25]. The most common representation is the grasp representation associated to a parallel plate gripper. As a computer vision problem, recognition accuracy is high (up to around 95%), with a performance drop during robotic implementation (to around 90% or less). Training involves image/grasp datasets obtained from manual annotation [26] or simulated grasping [4, 25]. Within this category there is also a mixed approach, DexNet [3], which uses random sampling and analytical scoring followed by deep network regression to output refined, learnt grasp quality scores for grasp selection. By using simulation with an imitation learning methodology, tens of thousands to millions of annotations support DexNet regression training. Success rates vary from 80% to 93% depending on the task. When sufficient resources are available, the second strategy replaces simulation with actual experiential data coupled to deep network reinforcement learning methods [5, 27]. Often, methods based on simulation or experience are configuration-dependent; they learn for specific robot and camera setups. The third strategy is based on object detection or recognition [20]. Recent work employed deep learning to detect objects and relative poses to inform grasp planning [28], while other work learnt to perform object-agnostic scene segmentation to differentiate objects [29] and aid the DexNet grasp selection process. Like [29], this paper focuses on where to find candidate grasps as opposed to quality scoring candidate grasps.

Deep learning grasp methods suffer from two related problems: sparse grasp annotations and insufficiently rich data (i.e., covariate shift). The former can be seen in Figure 1, which shows an image from the Cornell dataset [2] and another from the Jacquard dataset [4]. Both lack annotations in graspable regions due to missing manual annotation or a false negative in the simulated scenario (either due to poor sampling or incorrect physics). Sampling insufficiency can be seen in [25], where the DexNet training policy was augmented with an improved (on-policy) oracle providing a richer sampling space. Yet, sampling from a continuous space is bound to under-represent the space of possible options, especially as the dimension of the parametric grasp space increases. This paper proposes to more fully consider shape primitives [30] due to their known, parametrized grasp families [8, 31]. The parameterized families provide a continuum of grasp options rather than a sparse sampling. A complex object can be decomposed into parts representing distinct surface categories based on established primitives.

Deep network approaches for shape primitive segmentation to inform grasping do not appear to have been studied. Past research explored shape primitive approaches in the context of traditional point cloud processing and fitting for the cases of superquadric [32] and box surfaces [33]. Approaches were also proposed to simply model each object as a single primitive shape [9, 34], which did not exploit the potential of primitive shapes to generalize to unseen/novel objects.

Fig. 1: Grasp annotation data with missing grasp candidates (Cornell and Jacquard datasets).

III Grasping from Primitive Shapes Recognition

Fig. 2: The proposed deep network, segmentation-based pipeline. From monocular depth input, objects are segmented into primitive shape classes, with the object to grasp extracted and converted into primitive shape point clouds. Surface model-fitting, grasp scoring, and grasp selection processes follow. (a) Depth input is segmented into primitive shapes; (b) The best matching shape and pose per primitive shape is identified; (c) Candidate grasps are priority ranked and tested for feasibility, with the first feasible grasp chosen for physical robot execution.

Fig. 3: Sample segmentation outcomes for test scenarios consisting of individual and multiple objects (zoomed and cropped images); the leftmost panel shows the setup without zoom.

The intent behind this investigation is to explore the potential value of using deep networks to segment a scene according to the surface primitives contained within it, in order to establish how one may grasp. Once the object or region to grasp is known, post-processing recovers the shape geometry and the grasp family associated to the shape. The state-of-the-art instance segmentation deep network Mask R-CNN [11] serves as the backbone network for converting depth images into primitive shape segmentation images. Importantly, a synthetically generated training set using only shape primitives in concert with domain randomization [10] covers a large set of scene visualizations. The ability to decompose unseen/novel objects into distinct shape regions, often with explicitly distinct manipulation affordances, permits task-oriented grasping [34].

The vision-based robotic grasping problem here presumes the existence of a depth image $I_D \in \mathbb{R}^{H \times W}$ ($H$ and $W$ are the image height and width) capturing a scene containing an object to grab. The objective is to abstract the scene into a set of primitive shapes and generate grasp configurations from them. A complete solution involves establishing a routine or process $f$ mapping the depth image to a grasp, $f : I_D \mapsto g \in SE(3)$. The grasp configuration $g$ specifies the final pose in the world frame of the end-effector.

Per Figure 2, the process is divided into three stages. In the first stage, the depth image is segmented according to the defined primitive shape categories, indexed by the set $\mathcal{C}$. The primitive shape segmentation images are $S_c$ for $c \in \mathcal{C}$. The segmentation and the depth image generate segmented point clouds in 3D space for the primitive surfaces attached to each label $c$. In the second stage, once the grasp target is established, the surface primitives attached to the target grasp region are converted into a corresponding set of primitive shapes $\{\mathcal{P}_i\}$ in 3D space, where $i$ indexes the different surface primitive segments. In the third stage, the parametrized grasp families of the surface primitives are used to generate grasp configurations for each $\mathcal{P}_i$. A prioritization process leads to rank ordered grasps, with the first feasible grasp being the one to execute. This section details the three stages and the deep network training method.
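To make the hand-off from the first stage to the second concrete, the minimal sketch below back-projects the depth pixels selected by one primitive shape mask into a camera-frame point cloud, assuming a pinhole camera model with intrinsics fx, fy, cx, cy (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def segment_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project the depth pixels selected by a primitive-shape mask into
    a 3D point cloud in the camera frame (pinhole model). depth is in meters;
    mask is a boolean array of the same shape as depth."""
    v, u = np.nonzero(mask)               # pixel rows (v) and columns (u) in the segment
    z = depth[v, u]
    valid = z > 0                         # discard missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)    # (N, 3) points
```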

III-A Primitive Shape Segmentation Using Mask R-CNN

The proposed approach hypothesizes that commonly seen household objects can be decomposed into one or more primitive shapes and represented by pre-defined shape parameters. After studying several household object datasets [7, 35, 36, 37], the set of primitive shapes was decided to be: cylinder, cuboid, ring, sphere, semi-sphere, and stick.

TABLE I: Primitive Shape Classes. Columns: Class, Parameters, Shape Template (unit: cm). Classes: Cylinder (wide, tall), Ring, Cuboid (wide, tall), Stick, Semi-sphere, Sphere. Parameter symbols: r - radius, r_i - inner radius, r_o - outer radius; h - height, w - width, d - depth, l - length.

The value of using primitive shapes is in the ability to automatically synthesize a vast library of shapes through gridded sampling within the parametric domain of each class. Table I lists the parameter coordinates (middle column) for each primitive shape class. Sections IV-A and III-D detail the training method used to synthetically generate depth images and known segmentations from the parametric shape classes. Once trained, the Mask R-CNN [11] network decomposes an input depth image into a set of segmentations reflecting hypothesized primitive shapes, as shown in Figure 2(a). Segmentations for different input depth images, overlaid on the corresponding cropped RGB images, are shown in Figure 3. The color coding is red: cylinder, orange: ring, green: cuboid, yellow: stick, purple: semi-sphere, and blue: sphere. For individual, sparsely distributed, and clustered objects, the segmentation method captures the primitive shape regions of the sensed objects.
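For illustration, a comparable segmentation backbone can be instantiated with the Mask R-CNN implementation in recent torchvision releases (an assumption; the paper does not state which implementation was used), with the single depth channel replicated to three channels as described in Section IV-B:

```python
import torch
import torchvision

# 6 primitive shape classes + background; weights are trained from scratch.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, num_classes=7)
model.eval()

depth = torch.rand(480, 640)                 # stand-in for a normalized depth image
image = depth.unsqueeze(0).repeat(3, 1, 1)   # duplicate the depth channel 3x
with torch.no_grad():
    out = model([image])[0]                  # dict with 'boxes', 'labels', 'scores', 'masks'
```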

III-B Shape Extraction and Estimation, and Grasp Family

Given a target object region to grasp, the intersecting shape primitive regions are collected and converted into separate partial point clouds. Each point cloud needs to be associated to a parametric model of the known shape. Using an object and grasp database of exemplar shapes, a multi-start Iterative Closest Point (ICP) algorithm matches the exemplar shapes to the point cloud. The multiple starting points prevent local minima issues by providing different orientation guesses for the initial estimate. The final match with the lowest error score is selected as the object model for hypothesizing grasps. In addition to the model, the transformation aligning the point cloud with the shape is kept for mapping grasps to the world frame. This matching step is depicted in Figure 2(b). Given the identified type of shape and its corresponding shape parameters, one or more families of grasps are recovered. The geometric primitives do not need to match the target region exactly, as long as the error between the object and the best-matching primitive shape is small relative to the gripper approach properties.
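A minimal sketch of the multi-start matching step, assuming Open3D's point-to-point ICP (the library, the number of starts, and the axis used for the orientation guesses are assumptions rather than details from the paper):

```python
import numpy as np
import open3d as o3d

def match_primitive(segment_pts, exemplar_pts, n_starts=8, max_corr_dist=0.01):
    """Multi-start ICP: try several initial orientations of the exemplar shape
    and keep the fit with the lowest inlier RMSE. Returns (best 4x4 transform,
    best rmse)."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(exemplar_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(segment_pts))
    best = (None, np.inf)
    for k in range(n_starts):
        ang = 2 * np.pi * k / n_starts        # initial guess: rotate about the vertical axis
        init = np.eye(4)
        init[:3, :3] = o3d.geometry.get_rotation_matrix_from_axis_angle(
            np.array([0.0, 0.0, ang]))
        init[:3, 3] = tgt.get_center() - src.get_center()   # rough center alignment
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, max_corr_dist, init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if result.fitness > 0 and result.inlier_rmse < best[1]:
            best = (result.transformation, result.inlier_rmse)
    return best
```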

Each object class has a family of grasps based on its geometry, with each member of the family corresponding to a set of grasps related by a group action arising from the symmetry of the primitive. For grabbing wide cylinders, there are two members corresponding to grabbing from the top or from the bottom, with the free parameter being rotation about the cylinder axis. For grabbing a thin cylinder, there is one two-parameter set corresponding to translation along or rotation about the cylinder axis. For the tall cuboid there are four sets in the family, each one associated to grasping from one edge and translating parallel to the edge. Example grasps from these described families are depicted in Figure 4. Under ideal conditions (i.e., no occlusion, no collision, and reachable) all grasps are possible (e.g., the object is floating). In reality, only a subset of all possible grasps will be feasible. Using the shape primitive grasp family avoids the problem of incomplete annotations or sparse sampling from regions of the grasp space. Since the geometry of each shape is known, the predicted grasp family is robust (modulo the weight distribution of the object).

Fig. 4: Grasp family for wide cylinder, tall cylinder, and semi-sphere.
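As an illustration of a parametrized grasp family, the sketch below enumerates a discretized two-parameter family for a thin (tall) cylinder: translation along and rotation about the cylinder axis. The nominal grasp frame and discretization counts are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def thin_cylinder_grasp_family(T_shape, height, n_rot=8, n_trans=5):
    """Discretized two-parameter grasp family for a thin cylinder: each
    candidate rotates a nominal grasp frame about the cylinder (z) axis and
    slides it along that axis. T_shape is the 4x4 pose of the fitted cylinder
    in the world frame; returned poses are 4x4 world-frame grasp candidates."""
    grasps = []
    for z in np.linspace(-height / 2, height / 2, n_trans):
        for ang in np.linspace(0.0, 2 * np.pi, n_rot, endpoint=False):
            c, s = np.cos(ang), np.sin(ang)
            g = np.eye(4)
            g[:3, :3] = np.array([[c, -s, 0.0],
                                  [s,  c, 0.0],
                                  [0.0, 0.0, 1.0]])   # rotation about the axis
            g[2, 3] = z                               # slide along the axis
            grasps.append(T_shape @ g)                # express in the world frame
    return grasps
```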

III-C Grasp Prioritization and Selection

The final step prior to execution is to select one grasp from the set of candidates as the one to use. DexNet 2.0 [3] was explored as a means to score the grasps, but its performance degraded for the angled camera perspective of the setup here. Instead, a simple geometric grasp prioritization scoring function was used (unrelated to existing grasp quality scoring functions). It considers the required pose of the hand relative to the world frame, which is located at the manipulator base. The prioritization scheme prefers grasps that minimize translation and favor approaching from above. The first collision-free grasp, when tested in the prioritized ordering, is the grasp selected.

The grasp prioritization score consists of contributions from the translational and rotational components of a test grasp pose $g = (R, T)$. The translation contribution depends on the length of the translation element (i.e., the distance from the world/base frame). Define the translation cost to be $c_T(g) = \lVert T \rVert$, where $T$ is the translation interpreted as a vector in $\mathbb{R}^3$. The rotational contribution regards the equivalent unit quaternion $q$ as a vector in $\mathbb{R}^4$ and applies the following positive, scalar binary operation against a reference top-down orientation $q^{\downarrow}$ to obtain the orientation grasp cost:

$c_R(g) = 1 - \lvert \langle q, q^{\downarrow} \rangle \rvert$,   (1)

with $\langle \cdot, \cdot \rangle$ the Euclidean inner product on $\mathbb{R}^4$. This cost prioritizes vertical grasps by penalizing grasps that do not point up/down. Alternative weightings are possible depending on the given task or the robot-to-workspace configuration.

The costs are computed for all grasp candidates, then converted into scores by normalizing them over the range of obtained values,

$s_T(g) = \frac{c_T^{\max} - c_T(g)}{c_T^{\max} - c_T^{\min}}, \qquad s_R(g) = \frac{c_R^{\max} - c_R(g)}{c_R^{\max} - c_R^{\min}}$,   (2)

where the $\max$ and $\min$ superscripts denote the maximum and minimum over all grasp costs. Two methods are tested for combining $s_T$ and $s_R$ into a final grasp prioritization score:

- Mixed Criteria Score Ranking (MC): Simply generate the weighted sum of the two scores,

$s_{MC}(g) = w_T\, s_T(g) + w_R\, s_R(g)$,   (3)

for weights $w_T, w_R \ge 0$.

- Two Stage Filtered Ranking (TS): The two-stage approach first selects the top $k$ grasps based on their translation score $s_T$, then re-ranks them based on the rotation score $s_R$. Denote this ranking method by TS.

After ordering the grasps according to their grasp prioritization score, the grasp actually applied is the first one to be feasible when a grasp plan is made from the current end-effector pose to the target grasp pose. This third and final step in the grasp identification process is shown in Figure 2(c).
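A minimal sketch of the two ranking schemes, using the reconstructed costs above (the reference top-down orientation, equal weights, and quaternion convention are illustrative assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation

R_DOWN = np.array([[1.0, 0.0, 0.0],
                   [0.0, -1.0, 0.0],
                   [0.0, 0.0, -1.0]])   # example reference: gripper z-axis pointing down
Q_DOWN = Rotation.from_matrix(R_DOWN).as_quat()

def rank_grasps(grasps, w_t=0.5, w_r=0.5, top_k=10, mode="MC"):
    """Order candidate grasp poses (4x4 world-frame transforms) by the MC or
    TS scheme described above; the first feasible grasp in the returned order
    would then be executed."""
    c_t = np.array([np.linalg.norm(g[:3, 3]) for g in grasps])         # translation cost
    c_r = np.array([1.0 - abs(np.dot(Rotation.from_matrix(g[:3, :3]).as_quat(), Q_DOWN))
                    for g in grasps])                                   # orientation cost

    def to_score(c):                       # normalize cost -> score in [0, 1]
        return (c.max() - c) / (c.max() - c.min() + 1e-9)

    s_t, s_r = to_score(c_t), to_score(c_r)
    if mode == "MC":                       # mixed criteria: weighted sum of scores
        order = np.argsort(-(w_t * s_t + w_r * s_r))
    else:                                  # TS: keep top-k by s_t, re-rank by s_r
        top = np.argsort(-s_t)[:top_k]
        order = top[np.argsort(-s_r[top])]
    return [grasps[i] for i in order]
```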

Fig. 5: Bi-directional image filtering to align training data and real data. An oil painting filter applied to training imagery simulates the noise of the Kinect depth sensor. Temporal averaging and spatial median filtering regularize the Kinect depth image during run-time.

III-D Domain Alignment Between Simulation and Reality

State-of-the-art simulators [13, 14] benefit data generation by automating data collection in virtual environments, but do so using idealized physics or sensing. Some physical effects are too burdensome to model. To alleviate this problem, the images from both sources, the simulation and the depth sensor, are modified to better match. The objective is to minimize the corrections applied, therefore the first step was to reduce or eliminate the sources of discrepancies. Discrepancy reduction involves configuring both environments to match, which includes the camera’s intrinsic and extrinsic parameters, and the background scene. Comparing images from both sources, the main gap remaining is the sensing noise introduced by the low-fidelity Kinect v1 depth sensor [15, 16]. The Kinect has occlusion artifacts arising from the baseline between the active illuminator and the imaging sensor, plus from measurement noise. The denoising process includes temporal averaging, boundary cropping, and median filtering, in that order. Once the Kinect depth imagery is denoised, the next step is to corrupt the simulated depth imagery to better match the visual characteristics of the Kinect. The primary source of uncertainty is at the depth edges or object boundaries due to the properties of the illuminator/sensor combination. The simulated imagery should be corrupted at these same locations. The simulated environment has both a color image and a depth image. The color image is designed to provide both the shape primitive label and the object ID, thereby permitting the extraction of object-wise boundaries. After establishing the object boundary pixels, they are dilated to obtain an enlarged object boundaries region, then an oil painting filter [38, 39] corrupts the depth data in this region. Considering that manipulation is only possible within a certain region about the robotic arm, the depth values from both sources were clipped and scaled to map to a common interval. Figure 5 depicts this bi-directional process showing how it improves alignment between the two sources.
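A minimal sketch of the bi-directional alignment, assuming OpenCV; the oil painting filter of [38, 39] is replaced here by a simple neighborhood blur applied only inside a dilated object-boundary band, so the filter choice and parameters are illustrative:

```python
import numpy as np
import cv2

def denoise_kinect_depth(frames, crop=10, ksize=5):
    """Regularize real Kinect depth: temporal averaging over a burst of frames,
    boundary cropping, then spatial median filtering (in that order)."""
    avg = np.mean(np.stack(frames, axis=0), axis=0).astype(np.float32)
    avg = avg[crop:-crop, crop:-crop]            # drop the unreliable image border
    return cv2.medianBlur(avg, ksize)

def corrupt_simulated_depth(depth, object_ids, band_px=5, ksize=5):
    """Corrupt clean simulated depth only near object boundaries, where a real
    structured-light sensor is least reliable. object_ids is the per-pixel
    object ID image rendered by the simulator."""
    kernel = np.ones((3, 3), np.uint8)
    edges = cv2.morphologyEx(object_ids.astype(np.uint8), cv2.MORPH_GRADIENT, kernel)
    band = cv2.dilate((edges > 0).astype(np.uint8),
                      np.ones((band_px, band_px), np.uint8)) > 0
    blurred = cv2.medianBlur(depth.astype(np.float32), ksize)
    out = depth.astype(np.float32).copy()
    out[band] = blurred[band]                    # stand-in for the oil painting filter
    return out
```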

IV Training, Experiments, and Evaluation

This section describes the automated ground truth synthesis process, the training process, the robot arm setup, and experimental evaluation criteria. Code will be open-sourced.

IV-A Dataset Generation

Based on the hypothesis that a dataset with diverse combinations of primitive shapes could induce learning generalizable to household objects, the dataset generation procedure covers the following degrees of freedom: (1) primitive shape parameters; (2) placement order; (3) assigned initial pose; and (4) mode of placement, for which there are three modes: free-fall, straight up from the table-top, and floating in the air. For simplicity, the shape primitives from Table I are reduced to one-parameter families after analyzing the household object databases. Each is denoted by a fixed parameter vector and a scaling factor, as described in the third column of Table I. Both cylinders and cuboids are modeled to have a wide-and-short category and a tall-and-thin category (denoted wide and tall based on the largest parameter) by defining a default parameter set for each category. The scaling factor defines the one-parameter family per class. The ring, semi-sphere, and sphere vectors are scaled in all coordinates, while the stick category is scaled only in the length coordinate $l$. When combined with the other domain randomization aspects, the process samples a sufficiently rich set of visualized shapes once self-occlusion and object-object occlusion effects are factored in.

For creating instances of the world, the scaling factor is discretized into 10 steps, so a uniformly random scaling takes one of 10 possible values. Every image has one instance of each primitive class, uniformly selected from the possible options. The insertion order of the 6 objects is randomly determined. The initial poses are uniformly randomly determined within a bounding volume above the table-top. The placement type is uniformly randomly determined. Through random selection, 100,000 scenes of different primitive shape combinations are generated. For each instance, RGB and depth images are collected. Shape color coding provides the segmentation ground truth and primitive shape ID.
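For concreteness, a minimal sketch of the per-scene randomization just described; the workspace bounds, pose parameterization, and dictionary fields are illustrative placeholders for whatever the simulation engine expects:

```python
import random

PRIMITIVES = ["cylinder", "cuboid", "ring", "sphere", "semi_sphere", "stick"]
PLACEMENT_MODES = ["free_fall", "upright_on_table", "floating"]

def sample_scene(n_scale_steps=10, workspace=((-0.3, 0.3), (-0.3, 0.3), (0.05, 0.4))):
    """Sample one randomized training scene: one instance of each primitive
    class with a discretized scale, random insertion order, random initial
    pose inside a bounding volume, and a random placement mode."""
    order = random.sample(PRIMITIVES, k=len(PRIMITIVES))      # random insertion order
    scene = []
    for shape in order:
        scene.append({
            "class": shape,
            "scale_step": random.randrange(n_scale_steps),    # 1 of 10 scale steps
            "position": [random.uniform(lo, hi) for lo, hi in workspace],
            "yaw_deg": random.uniform(0.0, 360.0),            # simplified orientation
            "placement": random.choice(PLACEMENT_MODES),
        })
    return scene
```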

IV-B Data Preprocessing and Training

Per §III-D, the simulated depth images are re-scaled then corrupted by a region-specific oil-painting filter. To align with the input of Mask R-CNN, the single depth channel is duplicated across the three input channels. A ResNet-50-FPN backbone is trained from scratch on the corrupted depth images in PyTorch 1.0. Training runs for 100,000 iterations with 4 images per mini-batch. The primitive shape dataset is divided into a 75%/25% training and testing split. The learning rate is set to 0.01 and divided by 10 at iterations 25,000, 40,000, and 80,000. The workstation consists of a single NVIDIA 1080Ti (Pascal architecture) with cuDNN 7.5 and CUDA 9.0. Dataset generation takes 72 hours and training takes 24 hours (4 days total).
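A runnable check of the stated learning-rate schedule using PyTorch's MultiStepLR; the dummy parameter stands in for the Mask R-CNN weights, and the optimizer type beyond the stated learning rate is not specified in the paper:

```python
import torch

# LR 0.01 divided by 10 at iterations 25k, 40k, 80k over a 100k-iteration run.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.SGD(params, lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[25_000, 40_000, 80_000], gamma=0.1)

for it in range(100_000):
    optimizer.step()          # the actual training step (4 images per batch) goes here
    scheduler.step()          # per-iteration, not per-epoch, schedule
    if it in (24_999, 39_999, 79_999):
        print(it + 1, scheduler.get_last_lr())   # -> 0.001, 1e-4, 1e-5
```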

IV-C Robotic Arm Experimental Setup and Parameters

The robotic arm and RGB-D camera setup used for evaluating the outcomes of the proposed pipeline is shown in Figure 6 (left); the camera is mounted eye-to-hand. The camera-to-manipulator base frame transformation is established from an ArUco tag captured by the camera. Both the described method and the implemented baseline methods are tested on this setup. A set of 3D printed shapes designed to fit the training shapes is used for known objects testing and evaluation, Figure 6 (top right). An additional set of objects is subsequently used for novel objects testing and evaluation, Figure 6 (bottom right).

The primitive shape and grasp family database is populated by selecting 10 exemplars from each primitive shape type. Each continuous grasp set is likewise discretized according to the dimensions of our gripper so that neighboring grasps are not too similar. For MC ranking, the weights in (3) are set equal, $w_T = w_R$. For TS ranking, the top $k = 10$ grasps are chosen. Collision checking for grasp feasibility is done by the Planning Scene module in MoveIt! [40]. Open-loop execution is performed with the plan of the top grasp.
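A minimal sketch of the feasibility-checked execution loop, assuming a running ROS/MoveIt! environment accessed through moveit_commander; the move group name and the pose conversion helper are assumptions about the setup, not details from the paper:

```python
import moveit_commander                      # assumes roscpp/rospy already initialized
from geometry_msgs.msg import Pose
from scipy.spatial.transform import Rotation

def matrix_to_pose(g):
    """Convert a 4x4 homogeneous transform to a geometry_msgs/Pose."""
    p = Pose()
    p.position.x, p.position.y, p.position.z = g[:3, 3]
    p.orientation.x, p.orientation.y, p.orientation.z, p.orientation.w = \
        Rotation.from_matrix(g[:3, :3]).as_quat()
    return p

def execute_first_feasible(ranked_grasps, group_name="manipulator"):
    """Try the priority-ordered grasps in turn; MoveIt planning (which collision
    checks against the Planning Scene [40]) rejects infeasible ones, and the
    first grasp yielding a valid plan is executed open-loop."""
    group = moveit_commander.MoveGroupCommander(group_name)
    for g in ranked_grasps:
        group.set_pose_target(matrix_to_pose(g))
        if group.go(wait=True):              # plans and executes; False if no valid plan
            return g
        group.clear_pose_targets()
    return None
```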

Fig. 6: Experimental setup and images of known and novel object sets.

IV-D Evaluation Metrics

Evaluation of the proposed pipeline consists of testing on novel input data as purely a visual recognition problem, followed by testing on the experimental system. For the visual segmentation evaluation, the segmentation accuracy is computed by the weighted F-measure of [41]:

$F_\beta^{w} = \frac{(1 + \beta^2)\, P^{w} \cdot R^{w}}{\beta^2\, P^{w} + R^{w}}$,   (4)

where $P^{w}$ is the weighted precision, $R^{w}$ is the weighted recall, $\beta$ balances the two, and the weighting involves a Gaussian smoothing factor $\sigma$. For the robotic arm testing, only the final outcomes of the grasping tests are scored. Each experiment consists of 10 repeated runs for a given configuration. A run or attempt is considered a success if the target object is grasped, lifted, and held for at least 10 seconds. The scoring metric is the success rate (percentage).
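For reference, a minimal sketch of the plain F-measure between binary masks; the metric of [41] additionally applies location-dependent (Gaussian-weighted) precision and recall, which is omitted here, and the choice $\beta^2 = 0.3$ is the common convention in that literature rather than a value stated in this paper:

```python
import numpy as np

def f_beta(pred, gt, beta2=0.3):
    """Plain F-measure between boolean masks: F = (1+b^2) P R / (b^2 P + R)."""
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```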

V Results

This section details the experiments performed and their outcomes, starting with a segmentation output test and then several manipulation tests. The first manipulation test is an in-class grasping test consisting of individual objects whose shape matches one of the primitive shape classes. The second is an out-of-class grasping test consisting of novel objects, some of which are composed of multiple shape classes. Grasping outcomes for individual objects are quantitatively compared to other methods on the same test, as well as to published methods for similar tests. The third experiment is a task-oriented grasping test where a specific primitive shape must be grasped, as would be done when performing a specific task with the object. The fourth is a stress test of the system: a multi-object clearing task.

The baseline implementations are [9] and [24]. The method in [9] is a primitive-shape-based system for household objects, while [24] is a publicly available deep network RGB-D grasping approach. The implementation of [9] was reproduced with spherical, cylindrical, and box-like primitives; each object is approximated by the best-fitting primitive shape.

V-A Vision

Segmentation tests are performed on a set of 3D printed primitive shapes and compared to manual segmentations. The tested implementations include a network trained with the original simulated images (no corruption) and one trained with the oil-painting corrupted images. Per Table II, the segmentation accuracy improved from 0.835 to 0.872 (the measure ranges over [0, 1]), with the primary improvement coming from the sphere shape class followed by the stick class. The segmentation accuracy is sufficient to capture and label significant portions of an object’s graspable shape regions; see Figure 3.

TABLE II: Performance of PS-CNN on 3D printed primitive shapes

              Original  Corrupted                Original  Corrupted
Cylinder      0.918     0.917      Ring          0.903     0.904
Cuboid        0.822     0.824      Stick         0.787     0.838
Semi-sphere   0.919     0.913      Sphere        0.661     0.835
All           0.835     0.872

TABLE III: Primitive Shapes Grasping (Known)

              PS-CNN (MC)  PS-CNN (TS)  [9]    [24]
Cuboid        8            9            8      5
Cylinder      10           10           9      8
Semi-sphere   9            10           5      7
Stick         10           8            7      6
Ring          9            9            6      5
Sphere        9            9            8      8
Success (%)   91.7         91.7         71.7   65.0

TABLE IV: Real Objects Grasping (Unknown)

              PS-CNN (MC)  PS-CNN (TS)  [9]    [24]
Bowl          10           10           6      7
Mug           10           10           6      8
Box           10           10           9      9
Baseball      8            7            8      8
Tape          9            8            9      8
Bottle        9            6            7      8
Success (%)   93.3         85.0         75.0   80.0

V-B Physical Grasping of Known and Unknown Objects

These tests evaluate the ability of the pipeline to pick up an individual object in the absence of obstacles or clutter. The outcomes for the known objects grasping test are presented in Table III. The two variants (MC and TS) of the primitive shape Mask R-CNN segmentation (PS-CNN) approach outperformed the two baseline methods. Trained purely on synthetic primitive shape data, the proposed approach shows the ability to bridge the gap between simulation and real-world application. While both grasp prioritization schemes gave the same success rate, they differed with regard to which objects led to more failures. The baseline [9] also achieved a competitive success rate on shapes easily approximated by spherical, cylindrical, and box-like primitives, but less so for the other shapes. The baseline [24] had the lowest success rate, perhaps an indication of the limitation of the image-based grasp representation.

The outcomes for the unknown objects grasping test are presented in Table IV. Here there is a difference between the two grasp prioritization schemes, with MC outperforming TS. This is explained by the filtering effect of the TS prioritization mechanism: in selecting a small set of grasps based on translation alone, the resulting subset consists of less successful grasp orientations, and the top-scoring one may not map to the most robust grasp possible for the object in question. This suggests that mixed prioritization is better than sequential prioritization; ultimately, a true grasp quality scoring implementation is needed. The baseline methods also showed improved performance, but their success rates remained lower than the proposed approach. The larger boost for [24] over [9] may be attributed to its deep network model being trained on a real object dataset [2].

TABLE V: Grasping Comparison from Published Works

Approach             Year   Objects   Trials   Success Rate (%)
Lenz et al. [26]     2015   30        100      84/89*
Pinto et al. [42]    2016   15        150      66
Watson et al. [43]   2017   10        -        62
DexNet 2.0 [3]       2017   10        50       80
Lu et al. [44]       2018   10        -        84
DexNet 2.0 [45]      2019   50        5        50
Satish et al. [25]   2019   8         80       87.5
PS-CNN               -      10**      100      94

* Success rate of 84% / 89% achieved on Baxter / PR2 robot.
** Combines the unique objects from Tables IV and VI.

Table V collects success rate statistics and testing details for other state-of-the-art approaches employing only a single gripper (no suction-based end-effector). Even though the test objects may differ from ours, the general object classes are similar, so a rough comparison provides some context for the relative performance of the proposed segmentation-based system on household objects. Organized chronologically, the top performing results are the earliest (Lenz et al.) and the latest (Satish et al.) approaches, achieving below 90% over up to 100 trials. The proposed approach has a higher success rate, suggesting that identifying primitive shape regions as the source of candidate grasps is beneficial.

V-C Task-Oriented Grasping on Real Objects

The value of segmenting objects is that the distinct shape regions may correspond to grasp preferences based on the task to be accomplished. Almost all baselines examined in this paper focus on task-free grasping (or simply pick-and-place operations). In this set of tests, each connected primitive shape region is presumed to correspond to a specific functional part. A grasping test is performed to compare task-free grasping versus task-oriented grasping, where in the latter a specific region must be grasped. Each object consists of more than one functional part. Only [9] was tested as a baseline since it is shape-based. Table VI reports the performance of the systems on these grasping tests. The method [9] did not perform as well as in prior tests on account of the more complex shape profiles of some of the objects; since it applies a single shape model, it cannot perform task-oriented grasping. Overall, the proposed approach did well but experienced a 20% drop in success rate when limited to a specific object part, suggesting that more research should be directed at precision grasping of specific object parts in addition to general grasping (again, grasp quality scoring would help). Nevertheless, the task-oriented approach scores comparably to some of the outcomes of Table V, suggesting that strong task-oriented performance should be possible soon. Achieving task-oriented grasping permits follow-up research on realizing more advanced semantic grasping of objects.

TABLE VI: Task-Oriented and Task-Free Grasp

                    Task-Free            Task-Oriented
Target Objects      [9]       Ours       [9]       Ours
Mug                 6/10      10/10      -         8/10
Pot                 3/10      9/10       -         7/10
Pan                 5/10      9/10       -         7/10
Basket              6/10      10/10      -         8/10
Handbag             9/10      10/10      -         8/10
Success Rate (%)    58.0      96.0       0         76.0

TABLE VII: Multi-Object Grasping Comparison

Method    #Obj.   #Sel.   #Trials   TC      S      OC     C
[42]      21      10      5         None    38     100    100
[46]      10      10      15        3Seq    84     77     -
[3]       25      5       20        5Seq    92     100    100
[5]       25      25      4         31G     82.1   99     75
[47]      16      8       15        +2      86.1   87.5   -
[45]      25      5       20        5Seq    94     100    100
PS-CNN    10      3-5     10        +2      50     80     70

V-D Multi-Object Grasping on Real Objects

The current study is aimed at establishing grasp candidates for individual objects rather than clutter removal or bin-picking; however, additional testing was performed to gauge the limits of the implementation. Performance is not expected to be high, since the current version would be equivalent to DexNet 1.0 or 2.0 in terms of maturity (vs. DexNet 2.1). From the set of 10 objects, a smaller set of 3 to 5 objects is randomly selected and placed on the workspace, see Figure 3. Following [47], the robot has $n+2$ grasp attempts to remove $n$ objects ([5] used up to 31 attempts; see Table VII). We then calculate the grasp success percentage (S), the object clearance percentage (OC), and the completion percentage (C). Table VII compares these statistics to other published works. There are some differences regarding trial termination criteria (TC), with $k$G signifying up to $k$ grasp attempts, $+2$ signifying $n+2$ allowed grasps for $n$ objects, and $k$Seq meaning termination after $k$ sequential failures. As noted earlier, combining the PS-CNN with a downstream grasp quality CNN (GQ-CNN) trained to recognize grasps from a richer set of camera perspectives would improve object grasping in clutter. So would closed-loop operation to correct the tracking error of the low-cost servomotors (the arm build cost is under $3k).

VI Conclusion

This paper leverages recent advances in deep learning to realize a shape primitive segmentation based approach to grasping. Having shape primitive knowledge permits grasp recovery from known grasp families. It returns to one of the classical paradigms for grasping and shows that high performance grasp candidates can be learnt from simulated visual data without simulating grasp attempts. The segmentation-based approach permits task-oriented object grasping, in contrast to current approaches emphasizing grasp learning. Robotic grasping experiments indicate a 94% grasp success rate for task-free grasping and 76% for task-oriented grasping. Future work aims to improve grasping success by exploring the use of contemporary grasp networks to score the grasp candidates for robustness or success likelihood, which the current approach lacks. It should then succeed in clutter removal or bin-picking style problems.

References

  • [1] M. Ciocarlie, K. Hsiao, E. G. Jones, S. Chitta, R. B. Rusu, and I. A. Şucan, “Towards reliable grasping and manipulation in household environments,” in Experimental Robotics, 2014, pp. 241–252.
  • [2] Y. Jiang, S. Moseson, and A. Saxena, “Efficient grasping from rgbd images: Learning using a new rectangle representation,” in IEEE International Conference on Robotics and Automation, 2011, pp. 3304–3311.
  • [3] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, “Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” in Robotics: Science and Systems, 2017.
  • [4] A. Depierre, E. Dellandréa, and L. Chen, “Jacquard: A large scale dataset for robotic grasp detection,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 3511–3516.
  • [5] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” The International Journal of Robotics Research, vol. 37, no. 4-5, pp. 421–436, 2018.
  • [6] W. Wohlkinger, A. Aldoma, R. B. Rusu, and M. Vincze, “3dnet: Large-scale object class recognition from cad models,” in IEEE International Conference on Robotics and Automation, 2012, pp. 5384–5391.
  • [7] B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar, “Yale-cmu-berkeley dataset for robotic manipulation research,” The International Journal of Robotics Research, vol. 36, no. 3, pp. 261–268, 2017.
  • [8] N. Yamanobe and K. Nagata, “Grasp planning for everyday objects based on primitive shape representation for parallel jaw grippers,” in IEEE International Conference on Robotics and Biomimetics, 2010, pp. 1565–1570.
  • [9] S. Jain and B. Argall, “Grasp detection for assistive robotic manipulation,” in IEEE International Conference on Robotics and Automation, 2016, pp. 2015–2021.
  • [10] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, “Domain randomization for transferring deep neural networks from simulation to the real world,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 23–30.
  • [11] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in IEEE International Conference on Computer Vision, 2017, pp. 2961–2969.
  • [12] J. Tobin, L. Biewald, R. Duan, M. Andrychowicz, A. Handa, V. Kumar, B. McGrew, A. Ray, J. Schneider, P. Welinder et al., “Domain randomization and generative models for robotic grasping,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 3482–3489.
  • [13] E. Rohmer, S. P. Singh, and M. Freese, “V-rep: A versatile and scalable robot simulation framework,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 1321–1326.
  • [14] B. O. Community, Blender - a 3D modelling and rendering package, Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018.
  • [15] B. Planche, Z. Wu, K. Ma, S. Sun, S. Kluckner, O. Lehmann, T. Chen, A. Hutter, S. Zakharov, H. Kosch et al., “Depthsynth: Real-time realistic synthetic data generation from cad models for 2.5 d recognition,” in IEEE International Conference on 3D Vision, 2017, pp. 1–10.
  • [16] C. Sweeney, G. Izatt, and R. Tedrake, “A supervised approach to predicting noise in depth images,” in IEEE International Conference on Robotics and Automation, 2019, pp. 796–802.
  • [17] C.-P. Tung and A. C. Kak, “Fast construction of force-closure grasps,” IEEE Transactions on Robotics and Automation, vol. 12, no. 4, pp. 615–626, 1996.
  • [18] D. Prattichizzo, M. Malvezzi, M. Gabiccini, and A. Bicchi, “On the manipulability ellipsoids of underactuated robotic hands with compliance,” Robotics and Autonomous Systems, vol. 60, no. 3, pp. 337–346, 2012.
  • [19] C. Rosales, R. Suárez, M. Gabiccini, and A. Bicchi, “On the synthesis of feasible and prehensile robotic grasps,” in IEEE International Conference on Robotics and Automation, 2012, pp. 550–556.
  • [20] J. Bohg, A. Morales, T. Asfour, and D. Kragic, “Data-driven grasp synthesis—a survey,” IEEE Transactions on Robotics, vol. 30, no. 2, pp. 289–309, 2014.
  • [21] S. Caldera, A. Rassau, and D. Chai, “Review of deep learning methods in robotic grasp detection,” Multimodal Technologies and Interaction, vol. 2, no. 3, p. 57, 2018.
  • [22] J. Watson, J. Hughes, and F. Iida, “Real-world, real-time robotic grasping with convolutional neural networks,” in Annual Conference Towards Autonomous Robotic Systems, 2017, pp. 617–626.
  • [23] D. Park and S. Y. Chun, “Classification based grasp detection using spatial transformer network,” arXiv preprint arXiv:1803.01356, 2018.
  • [24] F. Chu, R. Xu, and P. A. Vela, “Real-world multiobject, multigrasp detection,” IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 3355–3362, 2018.
  • [25] V. Satish, J. Mahler, and K. Goldberg, “On-policy dataset synthesis for learning robot grasping policies using fully convolutional deep networks,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1357–1364, 2019.
  • [26] I. Lenz, H. Lee, and A. Saxena, “Deep learning for detecting robotic grasps,” The International Journal of Robotics Research, vol. 34, no. 4-5, pp. 705–724, 2015.
  • [27] A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser, “Learning synergies between pushing and grasping with self-supervised deep reinforcement learning,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 4238–4245.
  • [28] J. Tremblay, T. To, B. Sundaralingam, Y. Xiang, D. Fox, and S. Birchfield, “Deep object pose estimation for semantic robotic grasping of household objects,” in Conference on Robot Learning, 2018.
  • [29] M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3d objects from real depth images using mask r-cnn trained on synthetic data,” in IEEE International Conference on Robotics and Automation, 2019, pp. 7283–7290.
  • [30] A. T. Miller, S. Knoop, H. I. Christensen, and P. K. Allen, “Automatic grasp planning using shape primitives,” in IEEE International Conference on Robotics and Automation, 2003, pp. 1824–1829.
  • [31] Y. Shiraki, K. Nagata, N. Yamanobe, A. Nakamura, K. Harada, D. Sato, and D. N. Nenchev, “Modeling of everyday objects for semantic grasp,” in IEEE International Symposium on Robot and Human Interactive Communication, 2014, pp. 750–755.
  • [32] C. Goldfeder, P. K. Allen, C. Lackner, and R. Pelossof, “Grasp planning via decomposition trees,” IEEE International Conference on Robotics and Automation, 2007.
  • [33] K. Huebner and D. Kragic, “Selection of robot pre-grasps using box-based shape approximation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008, pp. 1765–1770.
  • [34] K. Fang, Y. Zhu, A. Garg, A. Kuryenkov, V. Mehta, L. Fei-Fei, and S. Savarese, “Learning task-oriented grasping for tool manipulation from simulated self-supervision,” in Robotics: Science and Systems, 2018.
  • [35] D.-Y. Chen, X.-P. Tian, Y.-T. Shen, and M. Ouhyoung, “On visual similarity based 3d model retrieval,” in Computer Graphics Forum, vol. 22, no. 3, 2003, pp. 223–232.
  • [36] T. Funkhouser, P. Min, M. Kazhdan, J. Chen, A. Halderman, D. Dobkin, and D. Jacobs, “A search engine for 3d models,” ACM Transactions on Graphics, vol. 22, no. 1, pp. 83–105, 2003.
  • [37] P. Shilane, P. Min, M. Kazhdan, and T. Funkhouser, “The princeton shape benchmark,” in Shape Modeling Applications, 2004, pp. 167–178.
  • [38] A. C. Sparavigna and R. Marazzato, “Cld-shaped brushstrokes in non-photorealistic rendering,” arXiv preprint arXiv:1002.4317, 2010.
  • [39] S. Mukherjee, “Study on performance improvement of oil paint image filter algorithm using parallel pattern library,” Computer Science & Information Technology, p. 39, 2014.
  • [40] S. Chitta, I. Sucan, and S. Cousins, “MoveIt! [ROS topics],” IEEE Robotics & Automation Magazine, vol. 19, no. 1, pp. 18–19, 2012.
  • [41] R. Margolin, L. Zelnik-Manor, and A. Tal, “How to evaluate foreground maps,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 248–255.
  • [42] L. Pinto and A. Gupta, “Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours,” in IEEE International Conference on Robotics and Automation, 2016, pp. 3406–3413.
  • [43] J. Watson, J. Hughes, and F. Iida, “Real-world, real-time robotic grasping with convolutional neural networks,” in Annual Conference Towards Autonomous Robotic Systems, 2017, pp. 617–626.
  • [44] Q. Lu, K. Chenna, B. Sundaralingam, and T. Hermans, “Planning multi-fingered grasps as probabilistic inference in a learned deep network,” in International Symposium on Robotics Research, 2017.
  • [45] J. Mahler, M. Matl, V. Satish, M. Danielczuk, B. DeRose, S. McKinley, and K. Goldberg, “Learning ambidextrous robot grasping policies,” Science Robotics, vol. 4, no. 26, pp. 49–84, 2019.
  • [46] M. Gualtieri, A. Ten Pas, K. Saenko, and R. Platt, “High precision grasp pose detection in dense clutter,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016, pp. 598–605.
  • [47] P. Ni, W. Zhang, W. Bai, M. Lin, and Q. Cao, “A new approach based on two-stream cnns for novel objects grasping in clutter,” Journal of Intelligent & Robotic Systems, vol. 94, no. 1, pp. 161–177, 2019.