A Joint Network for Grasp Detection Conditioned on Natural Language Commands

04/01/2021 · by Yiye Chen, et al. · Georgia Institute of Technology

We consider the task of grasping a target object based on a natural language command query. Previous work primarily focused on localizing the object given the query, which requires a separate grasp detection module to grasp it. The cascaded application of two pipelines incurs errors in overlapping multi-object cases due to ambiguity in the individual outputs. This work proposes a model named Command Grasping Network (CGNet) that directly outputs command-satisficing grasps from RGB image and textual command inputs. A dataset with ground truth (image, command, grasps) tuples is generated based on the VMRD dataset to train the proposed network. Experimental results on the generated test set show that CGNet outperforms a cascaded object-retrieval and grasp detection baseline by a large margin. Three physical experiments demonstrate the functionality and performance of CGNet.


I Introduction

The ability to understand natural language instructions, written or spoken, is a desirable skill for an intelligent robot, as it allows non-experts to communicate their demands. For manipulation, grasping the target is an indispensable first step in fulfilling the request. This paper addresses the problem of object grasp recognition based on a natural language query. The problem involves two aspects: identifying the target object required by the command, and determining how to grasp it. A substantial body of research focuses on the first aspect [karpathy2014deep, socher2014grounded, Hu_2016_CVPR, rohrbach2016grounding, nguyen2019object, Nguyen-RSS-20, hatori2018interactively], termed natural language object retrieval, but few explicitly and/or jointly consider the second.

A naive solution for the second is to first locate the target and then plan the grasp in 3D space [Shridhar-RSS-18]. An alternative is to first apply task-agnostic multiple grasp detection, then select a grasp coinciding with the object using a suitable heuristic. Both are prone to error in visual clutter due to imprecise target localization or the inability to disambiguate grasps when objects overlap (see §IV for examples).

In this paper, we delay the explicit object retrieval process and propose a natural language grasp retrieval solution that directly detects grasps based on command understanding, as depicted by the input/output structure of Fig. 1. We argue that doing so is more efficient than cascaded solutions and reduces the aforementioned errors: it removes the burden of accurate object segmentation, and it is less influenced by distractor objects since they are rarely included in graspable regions, as shown in the output grasp boxes of Fig. 1. Our hypothesis is that the retrieval and understanding of the target object can be done implicitly within a deep network, such that semantic and object information is encoded into the grasp feature representation. Confirming the hypothesis involves designing a natural language grasp retrieval deep network that outputs grasps conditioned on the input command. The network outputs the grasp box location and a probability distribution over a classification space of discretized orientations, a retrieval state, and a background state. Two garbage classes induce decision competition to reject unsuitable candidates. Training the network requires constructing a multi-object dataset with associated command and grasp annotations, generated by semi-automatically augmenting the VMRD dataset [roiGrasp2019] using a template-parsing approach with paraphrase augmentation for the command set. Evaluating trained CGNet performance on a vision dataset and via physical experiments validates the hypothesis.

I-A Related Work

Fig. 1: Depiction of the deep network processing structure. Given an image and a text command: (a) multiple visual grasp features are extracted; (b) the command is decoded into a command feature; (c) the visual and command outputs are merged; (d) and then processed to identify applicable grasps.

I-A1 Natural Language Object Retrieval

Natural Language Object Retrieval addresses the problem of locating a target object in an image when specified by a natural language instruction. The main challenge lies in interpreting visual and textual input to capture their correspondence. Early works parsed perceptual and textual input with human-designed rules, which yield less expressive embeddings [krishnamurthy2013jointly, tellex11].

The advent of deep learning provided tools to extract high-level semantic meaning from both image and language signals. Text-based image retrieval research adopted convolutional neural networks (CNNs) and recurrent neural networks (RNNs) as image and text encoders [socher2014grounded]. Inspired by the success of object detection, more recent work focuses on aligning a specific object to the text by segmenting the image and encoding each object region [guadarrama2014open, karpathy2014deep, rohrbach2016grounding, nguyen2019object]. Segmentation modules used include external region proposal algorithms [guadarrama2014open, rohrbach2016grounding], object detectors [karpathy2014deep], or a built-in object region proposal network [nguyen2019object]. Work in [Hu_2016_CVPR] also investigated incorporating both local and global information for instance-level queries in the text. The correspondence between visual and textual input can then be obtained by a grounding-via-generation strategy [guadarrama2014open, Hu_2016_CVPR, rohrbach2016grounding], or by designing a proximity-based similarity score function in the shared image-and-text embedding space, such as the inner product [guadarrama2014open, karpathy2014deep] or cosine similarity [Nguyen-RSS-20]. Instead of segmenting object-scale regions, our work aims to retrieve grasp regions. Though such regions are smaller in scale and contain object parts, our work demonstrates that sufficient object information is encoded in the grasp region representations to reason with commands and differentiate objects.

More specific topics studied in robotics include addressing object attribute specifications in commands, such as shape [cohen2019grounding], spatial relations [Shridhar-RSS-18], or functionality [Nguyen-RSS-20]. Sometimes ambiguous commands need clarification through human-robot dialogue [hatori2018interactively, Shridhar-RSS-18]. Again, the main difference is our desire to bypass the object retrieval stage and have commands influence grasp predictions directly. The research is complementary.

I-A2 Task-agnostic Grasp Detection

Task-agnostic grasp detection aims to detect all graspable locations from the perceptual input. While classical approaches are based on mechanical constraints [mechanic1, mechanic2], recent studies resort to CNNs to capture geometric information, either to predict quality scores for sampled grasps [Dexnet2], to directly output grasp configurations [lenz2015deep, redmon2015real, watson2017real, kumra2017robotic], or to capture collision information [murali20206]. We leverage this research to design a command-influenced grasp detection network by augmenting a two-stage grasp detection network [chu2018real] whose architecture consists of grasp region proposals followed by grasp prediction.

I-A3 Semantic Grasp Detection

Semantic grasp detection seeks functionally suitable grasps for specific downstream tasks [dang2012semantic, rao2018learning, liu2020cage, fang2020learning], such as identifying different grasps on an object for different purposes. More closely related to our task is grasping a specific target in a cluttered environment [guo2016object, danielczuk2019mechanical, jang2017end, roiGrasp2019], where shared encoding of object and grasp content is shown to be important. Relying on task-agnostic grasp detection increases the sequential complexity of the solution for object retrieval [guo2016object, danielczuk2019mechanical]. As noted earlier, the problem at hand also needs to address textual comprehension of the input command and its alignment with the grasp regions. While [roiGrasp2019] investigated reasoning over object-scale regions, our hypothesis allows omitting object classification in the image-text feature fusion.

I-B Problem Statement

Given an RGB image $I$ and a corresponding natural language command $c$ (e.g., "Give me the banana") requesting an object, the command-based grasp selection problem is to generate a set of grasp configurations $G$ capable of grasping and retrieving the object if it is present in the image. It requires establishing a function $f$ such that $f(I, c) = G$. The envisioned end-effector is a gripper with two parallel plates, such that grasping is executed with a vertically downward grasp approach direction relative to a horizontal surface. Outputting grasps as a 5D vector, $g = (x, y, \theta, w, h)$, defined with respect to the 2D image is sufficient to plan such a grasping action. The 5D vector describes the region in the image plane that the manipulator gripper should project to prior to closing the parallel plates (or equivalent) on the object. The coordinates $(x, y)$ and $\theta$ are the grasp center and orientation (relative to the world $x$-axis), and $(w, h)$ are the width and height of the pre-closure grasp region.

The scene may contain a single object or multiple objects. The target object can be partially occluded, and the function should still provide a grasp if enough of the object is visible and can be grasped. The function should also determine from the input command whether a detected grasp is suitable or not.
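To make the parameterization concrete, here is a minimal Python sketch (hypothetical names, not code from the paper) of the 5D grasp and the oriented rectangle it induces in the image plane:

```python
from dataclasses import dataclass
import math

@dataclass
class Grasp2D:
    """Oriented grasp rectangle in image coordinates (a sketch of the 5D parameterization)."""
    x: float      # grasp center, pixels
    y: float
    theta: float  # gripper orientation, radians
    w: float      # width of the pre-closure grasp region, pixels
    h: float      # height of the pre-closure grasp region, pixels

    def corners(self):
        """Return the four corners of the oriented grasp rectangle."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        offsets = [(-self.w / 2, -self.h / 2), (self.w / 2, -self.h / 2),
                   (self.w / 2, self.h / 2), (-self.w / 2, self.h / 2)]
        return [(self.x + c * dx - s * dy, self.y + s * dx + c * dy) for dx, dy in offsets]
```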

II Proposed Approach

The network architecture for grasp recognition based on visual and text query inputs is depicted in Figure 1. Inspired by the success of feature merging for joint image-and-text reasoning problems [huk2018multimodal, Kim2017, nguyen2019object], we integrate natural language command interpretation with grasp detection via merged visual and textual representations. To facilitate feature merging, a two-stage grasp detection model provides the base network structure [chu2018real], so that the merger occurs in the second stage. Consequently, the pipeline in Figure 1 first depicts two independent and parallel processes: an image feature extractor that produces a set of visual grasp-and-object sensitive features from the input RGB image, and a language feature extractor that produces the command feature representation. The image feature extractor relies on a region proposal network to identify a set of potential graspable regions from which the visual features are obtained. The final stage fuses both feature spaces and predicts the 5D parameters of candidate grasps plus the suitability of each grasp for the given command. This section covers the details of the network structure.

II-A Grasp Region Feature Extraction

The first stage of the visual processing pipeline proposes a set of grasp regions of interest (ROIs) [chu2018real, roiGrasp2019] and their feature representations, which are expected to contain not only geometric information for determining the grasp configuration, but also semantic information for reasoning with commands. At the end of the grasp proposal pipeline [chu2018real], the fixed-size feature maps are passed to a shallow convolutional network to produce a set of fixed-dimension vectors, which are interpreted as the embeddings of the candidate grasp regions. The output is a set of such vectors, $\mathcal{R} = \{r_i\}$, where the coordinates of each $r_i$ consist of the proposal probability, the predicted position, and the grasp region feature representation.

A natural approach would be to sequentially apply object detection then grasp recognition in a detect-then-grasp pipeline, where the object of interest comes from the command interpreter. Sequential use with independent training does not exploit a deep network's ability to learn joint representations. Moving to a detect-to-grasp paradigm [roiGrasp2019] creates multiple branches after ROI pooling for learning joint representations, similar to the detect-to-caption pipeline of [nguyen2019object]. In our case the object category is not given, nor visually derived, but must be decoded from the command. Since the object category is still unknown, it should not have primacy; the decision should be delayed. To give grasps primacy, the ROI detector and feature space primarily key on grasp regions, not object regions. Filtering grasp candidates by object category occurs when the visual grasp and command feature spaces merge. By virtue of the multi-task objective and joint training, the process of detecting the target object is implicitly accomplished within the network. A benefit is that the grasp ROIs have object-specific attention-like characteristics, which follows from our observation that a grasp coordinate output generally targets a specific object and rarely includes other objects (see §IV). As a result, it can reduce confusion in occlusion cases where multiple objects appear in the scene. This deep network design reflects a grasp-to-detect paradigm. The results in §IV will demonstrate that it is feasible to encode the object information into these smaller regions, which is critical for reasoning with commands.
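As an illustration of the first-stage head described above, the following sketch maps ROI-pooled feature maps to per-region embeddings; the channel sizes and layer choices are assumptions rather than the paper's exact configuration:

```python
import torch.nn as nn

class GraspROIHead(nn.Module):
    """Sketch of a shallow head mapping ROI-pooled feature maps to grasp-region embeddings."""
    def __init__(self, in_channels=1024, feat_dim=2048):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling over each ROI map

    def forward(self, roi_maps):              # roi_maps: (num_rois, C, H, W)
        x = self.conv(roi_maps)
        return self.pool(x).flatten(1)        # (num_rois, feat_dim)
```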

II-B Command Feature Extraction

Encoding a natural language command into a vector feature involves first mapping the command into a sequence of vectors using a trainable word embedding table. The vector sequence is then passed to a command encoder to map the sequence to a single vector representation.

II-B1 Word Embedding

A word embedding table is a set of vectors, each representing a single word, so that every word $w$ is encoded as a vector $e(w)$. The set of known words and their vector representations is called the dictionary. For robustness to out-of-dictionary words, an unknown word token is appended to the dictionary. The embedding vectors are learnable so that they can be optimized for the problem described in §I-B.
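A minimal sketch of such a learnable table with an unknown-word token, assuming a whitespace tokenizer and an illustrative embedding size:

```python
import torch
import torch.nn as nn

UNK = "<unk>"

class WordEmbedder(nn.Module):
    """Sketch of a learnable word-embedding table with an unknown-word token."""
    def __init__(self, vocab, embed_dim=128):
        super().__init__()
        self.word2idx = {w: i for i, w in enumerate(list(vocab) + [UNK])}
        self.table = nn.Embedding(len(self.word2idx), embed_dim)

    def forward(self, command: str) -> torch.Tensor:
        # Out-of-dictionary words fall back to the unknown-word token.
        idx = [self.word2idx.get(w.lower(), self.word2idx[UNK]) for w in command.split()]
        return self.table(torch.tensor(idx))   # (num_words, embed_dim)
```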

II-B2 Command Encoder

The encoder is a 2-layer LSTM [LSTM] with hyperbolic tangent (tanh) activation. LSTM networks extract content or knowledge from sequences, which makes them effective for joint vision and language problems [huk2018multimodal, Kim2017]. An additional fully connected layer (FC 512) with ReLU activation is added after the second LSTM block to increase the model's capacity. The LSTM maps each input vector (the embedding of a word in the command) to an output vector that factors in earlier inputs through its feedback connection. After the entire command has been fed in sequentially, the last output vector defines the command feature representation. The command feature inference is summarized as: $c = \mathrm{FC}\big(\mathrm{LSTM}(e(w_1), \dots, e(w_T))\big)$, taking the LSTM output at the final word.
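A sketch of this encoder, assuming illustrative hidden sizes and taking the output at the final word as the command feature:

```python
import torch.nn as nn

class CommandEncoder(nn.Module):
    """Sketch of the 2-layer LSTM command encoder with a trailing FC + ReLU layer.
    Hidden sizes are assumptions; the paper's exact dimensions may differ."""
    def __init__(self, embed_dim=128, hidden_dim=512, out_dim=512):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(hidden_dim, out_dim), nn.ReLU(inplace=True))

    def forward(self, word_embeds):            # (batch, num_words, embed_dim)
        outputs, _ = self.lstm(word_embeds)
        last = outputs[:, -1, :]               # output at the final word summarizes the command
        return self.fc(last)                   # (batch, out_dim)
```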

II-C Feature Merging and Output Layer

The second stage takes the command vector $c$ and the set of grasp region features $\mathcal{R}$ as input, then predicts grasp configurations and their matching probability for the command query: for each region it outputs a classification probability and a grasp vector.

The objective is to merge the grasp candidate information with the command information, conditioning the retrieved grasps on both the visual and textual inputs. Prior to merging the two intermediate output streams, a fully-connected layer performs dimension reduction of the visual signal to match the dimension of the command signal when there is a mismatch. The Hadamard product (point-wise product) then merges the features: $m_i = \mathrm{FC}(v_i) \odot c$, giving the merged feature set $\mathcal{M} = \{m_i\}$, where $\odot$ is the element-wise product and $v_i$ is the visual feature of the $i$-th grasp ROI.

The final computation involves two sibling, single-layer, fully-connected networks for grasp fitting, with a position regression branch and an orientation classification branch [chu2018real]. For the position regression branch (location block), the position of a grasp should not be conditioned on the command query. Accordingly, this branch receives the image branch outputs as inputs and outputs a 4D position $(x, y, w, h)$ for each orientation class. The orientation classification branch (lower block) includes output classes for rejecting candidates and is where the command feature influences the outcome. Its input is the merged feature $m_i$. The output space consists of the discretized orientation classes plus two additional classes for rejecting grasp candidates that are not sensible. The first, from [chu2018real], is a background (BG) class. The second is a not_target (NT) class for feasible grasps that do not belong to the target object associated with the command request. The two classes differ since the BG class indicates regions where it is not possible to grasp (e.g., no object should be there). Although a double-stream setup (i.e., singling out the language retrieval score prediction as a separate branch) is also an option [nguyen2019object, jang2017end], combining commands and grasps into the same outcome space (i.e., the orientation classes) eliminates the need for a retrieval confidence threshold and employs decision competition to determine the preferred outcome, resulting in a set of candidate grasps $G$.
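The following sketch illustrates this merge-and-predict stage; the feature dimensions and the number of orientation classes are assumptions, and the layer names are ours:

```python
import torch.nn as nn

class MergeAndPredict(nn.Module):
    """Sketch of the second stage: reduce visual ROI features, merge with the command
    feature via the Hadamard product, and predict positions and orientation/BG/NT classes."""
    def __init__(self, visual_dim=2048, command_dim=512, num_orient=19):
        super().__init__()
        self.reduce = nn.Linear(visual_dim, command_dim)        # match command dimension
        self.loc_head = nn.Linear(visual_dim, 4 * num_orient)   # position: visual features only
        self.cls_head = nn.Linear(command_dim, num_orient + 2)  # + BG and NT garbage classes

    def forward(self, roi_feats, cmd_feat):     # (N, visual_dim), (command_dim,)
        merged = self.reduce(roi_feats) * cmd_feat   # Hadamard product with the command feature
        loc = self.loc_head(roi_feats)               # command-independent position regression
        cls_logits = self.cls_head(merged)           # command-conditioned classification
        return loc, cls_logits
```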

II-D Loss

Though the network structure includes a second command branch that merges with the main grasping branch, the loss function does not differ significantly from that of [chu2018real]. It consists of proposal and grasp configuration losses, $L = L_{prop} + L_{grasp}$, which propagate corrections back through both branches during training.

The proposal loss $L_{prop}$ primarily affects the grasp ROI branch. A ground truth (GT) binary class label $p_i^*$ and a GT position $t_i^*$ (for positive ROIs) are defined for each ROI $(p_i, t_i)$, where the GT binary class label is True if a grasp is specified, and False if not. Both $t_i$ and $t_i^*$ are 4-dimensional vectors specifying center location and size: $(x, y, w, h)$. The loss is:

$$L_{prop} = \frac{1}{N_{cls}} \sum_{i=1}^{N_s} L_{cls}(p_i, p_i^*) + \frac{1}{N_{reg}} \sum_{i=1}^{N_s} p_i^* \, L_{loc}(t_i, t_i^*) \qquad (1)$$

where $N_{cls}$ and $N_{reg}$ are normalization constants, and $N_s$ is a hyperparameter specifying the number of ROIs sampled for the loss calculation. The grasp binary label loss $L_{cls}$ is the cross entropy loss, and the grasp location loss $L_{loc}$ is the smooth L1 loss [Fast-RCNN].

The term $L_{grasp}$ guides the final grasp detection output $G$. The GT position and class for each ROI are denoted $g_i^*$ and $o_i^*$ respectively. $o_i^*$ is assigned in the following way: (1) if the ROI is assigned as negative in the proposal stage, then $o_i^*$ is set to the BG class; (2) if the positive ROI is associated with a non-target object, then $o_i^*$ is set to the NT class; (3) otherwise, $o_i^*$ is set to the corresponding orientation class according to the GT orientation angle. The grasp loss is:

$$L_{grasp} = \frac{1}{N_s} \sum_{i=1}^{N_s} L_{cls}(o_i, o_i^*) + \lambda \frac{1}{N_s} \sum_{i=1}^{N_s} \mathbb{1}[o_i^* \neq \mathrm{BG}] \, L_{loc}(g_i, g_i^*) \qquad (2)$$

where $\lambda$ is a hyperparameter, $(o_i, g_i)$ are the predicted class and grasp position for the $i$-th ROI, and $\mathbb{1}[\cdot]$ is the indicator function that is one when the condition is satisfied and zero otherwise.
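A sketch of how Eq. (2) might be computed, assuming the 4D regressor has already been gathered for each ROI's ground-truth class and that regression is skipped only for background ROIs (our assumption on the mask):

```python
import torch.nn.functional as F

def grasp_loss(cls_logits, loc_pred, gt_cls, gt_loc, bg_class, lam=1.0):
    """Sketch of the grasp loss: cross entropy over orientation/NT/BG classes plus
    smooth L1 on the 4D position for non-background ROIs."""
    n = cls_logits.shape[0]
    cls_loss = F.cross_entropy(cls_logits, gt_cls, reduction="sum") / n
    fg = gt_cls != bg_class                    # regression term ignored for background ROIs
    loc_loss = cls_logits.new_zeros(())
    if fg.any():
        loc_loss = F.smooth_l1_loss(loc_pred[fg], gt_loc[fg], reduction="sum") / n
    return cls_loss + lam * loc_loss
```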

III Methodology

This section covers the network configuration and training process for instantiating CGNet. Training will require an annotated dataset compatible with the network’s input-output structure, whose construction is described here. Since the network will be evaluated as an individual perception module and integrated into a manipulation pipeline, this section also details the experiments and their evaluation criteria.

III-A Network Structure and Parameters

The visual processing backbone network is ResNet-50. The base layer and the first three blocks of ResNet-50 are used as the image encoder, while the last block and the final global average pooling layer are used to extract vector representations after ROI pooling, giving 2048-dimensional visual features. For the command processing pipeline, the word embeddings are learned, and the 2-layer LSTM is followed by the FC 512 layer, giving a 512-dimensional textual command feature. The grasp orientation is discretized into equally spaced classes (with symmetry, so that orientations differing by 180° share a class).

III-B Annotated Training Data

Training the proposed network requires a dataset with tuples $(I, c, G)$, where the grasp configurations in $G$ are associated with the command $c$. Not being aware of such a dataset, we created one by applying template-based command generation to the multi-object VMRD dataset [roiGrasp2019]. The VMRD dataset provides labelled objects and grasps, where each grasp is associated with an object present in the image $I$. We convert an object label to a natural language command by filling a template randomly chosen from a pre-defined set [Nguyen-RSS-20], such as "Pass me the <object>". This generation is limited to commands whose object categories are explicitly stated and belong to the VMRD classes; in principle, the network could learn more free-form commands, such as those requesting a function instead of an object [Nguyen-RSS-20], given suitable data.
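A minimal sketch of this template-filling step (the template strings here are illustrative, not the full set from [Nguyen-RSS-20]):

```python
import random

# Illustrative templates; each contains one object slot to be filled with a VMRD label.
TEMPLATES = [
    "Pass me the {obj}",
    "Give me the {obj}",
    "I need the {obj}",
]

def make_command(object_label: str, rng=random) -> str:
    """Fill a randomly chosen template with a VMRD object label."""
    return rng.choice(TEMPLATES).format(obj=object_label)

# e.g. make_command("banana") might return "Give me the banana"
```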

III-B1 Command Augmentation

The 11 templates adopted from [Nguyen-RSS-20] only include subject-verb-object and verb-object structures. To enrich the vocabulary and syntactic diversity, we augment the template set using an automatic paraphraser. We first append 7 templates with different grammatical structures (e.g., "Fetch that <object> for me"). Then a group of commands is generated using the initial template set and VMRD object labels, one paraphrase is obtained for each command using the paraphraser, meaningless paraphrases are filtered out manually, and finally the new templates are acquired by removing the object label from the paraphrases. This step is repeated 10 times, with 35 commands generated each time. The paraphraser used is QuillBot. In the end, 123 templates that differ from the initial set are generated, such as "Grab the <object> and bring it to me" and "The <object> is the one I must have".

III-B2 VMRD+C Dataset

The base VMRD dataset consists of 4683 images with over 100k grasps and approximately 17000 objects labelled into 31 categories. It is split into 4233 training and 450 testing images. For each object in a source image, an $(I, c, G)$ tuple is generated with the strategy described above. Ground truth orientation classes for grasps associated with the target object are set according to their angle, with the rest labelled NT. Furthermore, for each image, command requests whose target objects are not present are added to the dataset. Such a command is generated from a random object label excluding the ground truth objects present in the image, with all grasps set to NT. The strategy results in 17,387 training instances (12,972 have-target and 4,415 no-target) and 1,852 testing instances (1,381 have-target and 471 no-target).

III-C Training and Testing

III-C1 Training Details

Merging all ROI vision features with textual features slows down training. Applying feature fusion only for positive regions [nguyen2019object] is inconsistent with the inference procedure, where background features will also be merged since no ground truth label is available. Instead, we sample an equal number of negative ROIs as positive ones for feature fusion. The rest are retained for training the visual branch of the network. This strategy improves convergence.
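A sketch of this balanced sampling, assuming per-ROI class labels are available and that a background label marks the negatives:

```python
import torch

def sample_rois_for_fusion(labels, bg_class):
    """Sketch of balanced sampling before feature fusion: keep all positive ROIs plus an
    equal number of randomly chosen negatives for merging with the command feature;
    the remaining negatives still train the visual branch."""
    pos = torch.nonzero(labels != bg_class).flatten()
    neg = torch.nonzero(labels == bg_class).flatten()
    perm = neg[torch.randperm(neg.numel())]
    neg_fused, neg_rest = perm[:pos.numel()], perm[pos.numel():]
    fused_idx = torch.cat([pos, neg_fused])    # indices of ROIs merged with the command feature
    return fused_idx, neg_rest
```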

The initial network is ResNet-50 pretrained on ImageNet, with the other layers and the word embeddings randomly initialized. To train the unknown word token, random word dropout is applied with probability 0.1. The number of ROIs sampled for the loss calculation, $N_s$, is fixed as a hyperparameter. The Adam optimizer [adamOptimizer] is used with a fixed initial learning rate, and training runs for a fixed number of iterations with a batch size of 1.

III-C2 Testing Details

During testing, the 300 ROIs with the top proposal scores are sent to the final grasp prediction layer. After obtaining the prediction results, the higher of the two garbage-class scores is used as a threshold to reject each candidate independently. Non-maximum suppression over the remaining results yields the output grasp set $G$.
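The test-time selection could be sketched as follows, using axis-aligned NMS as a simplification of the oriented grasp boxes and an assumed IoU threshold:

```python
import torch
from torchvision.ops import nms

def select_grasps(boxes, scores, bg_idx, nt_idx, iou_thresh=0.5):
    """Sketch of test-time selection: keep a candidate only if some orientation class
    outscores both garbage classes (BG and NT), then apply NMS."""
    garbage = torch.maximum(scores[:, bg_idx], scores[:, nt_idx])
    orient = scores.clone()
    orient[:, [bg_idx, nt_idx]] = float("-inf")
    best, _ = orient.max(dim=1)
    keep = best > garbage                       # decision competition against the garbage classes
    boxes, best = boxes[keep], best[keep]
    kept = nms(boxes, best, iou_thresh)         # axis-aligned NMS as a simplification
    return boxes[kept]
```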

III-D Physical Experiments

Three physical experiments are designed to demonstrate the effectiveness and the application value of the proposed method in a perception-execution pipeline. The objects used are unseen instances of the known categories.

III-D1 Single Unseen Object Grasping

The unseen single object grasping experiment evaluates the generalization ability of CGNet. An unseen instance is randomly placed on the table, and a command requesting an object type is given. For each presented object, we run 10 trials where the requested and presented objects match, and 5 trials where there is no match (i.e., the requested object is not the presented one). The command is automatically generated using a random template and an input object label. The experiment uses the same 8 objects as in [roiGrasp2019].

III-D2 Multiple Unseen Objects Grasping

The aim is to select and grasp the target from amongst multiple objects based on the command query. For each trial, a target object and 4 other interfering objects are presented, and the target is requested by command. Each target is tested for 10 trials, covering both cases where the target is on top and where it is partially covered. The target set and the command generation method are the same as in the single-object experiment.

III-D3 Voice Command Handover

A human verbally submits a command to test CGNet's ability to generalize to unknown words or sentence structures. After the voice command is translated to text using the Google Speech-to-Text API, the experiment follows the multi-object protocol. The robot arm executes the predicted grasp and passes the object to the human operator. The experiment is repeated for 33 trials, with 17 visible targets and 16 partially occluded.

III-E Metrics

The perception-only experiments test the correctness of the grasp output, while the manipulation experiments test how well the outputs function in a perceive-plan-act pipeline with an embodied manipulator. The scoring is described below.

III-E1 Perception

We evaluate the model's response to the natural language command query. Scoring adopts the object retrieval top-k recall (R@k) and top-k precision (P@k) metrics to evaluate multiple grasp detections [Hu_2016_CVPR]. R@k is the percentage of cases where at least one of the top-k detections is correct. P@k computes the correct rate over all top-k selections. A correctly detected grasp has a Jaccard index (intersection over union) greater than 0.25 and an absolute orientation error of less than 30° relative to at least one of the ground truth grasps of the target object.
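A sketch of this correctness check, using shapely as one way to compute rectangle overlap and assuming the thresholds stated above:

```python
from shapely.geometry import Polygon  # one implementation choice for rectangle IoU

def grasp_correct(pred, gts, iou_thresh=0.25, angle_thresh_deg=30.0):
    """Sketch of the correctness check: Jaccard index above the threshold and absolute
    orientation error below the threshold w.r.t. at least one ground-truth grasp.
    `pred` and each gt are (corner_list, theta_deg) pairs."""
    p_poly, p_theta = Polygon(pred[0]), pred[1]
    for corners, theta in gts:
        g_poly = Polygon(corners)
        iou = p_poly.intersection(g_poly).area / p_poly.union(g_poly).area
        diff = abs(p_theta - theta) % 180.0
        diff = min(diff, 180.0 - diff)          # orientations are symmetric modulo 180 degrees
        if iou > iou_thresh and diff < angle_thresh_deg:
            return True
    return False
```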

III-E2 Physical Experiment

To evaluate the physical experiments, we separate the pipeline into stages and record their success rates separately. The three stages considered are object retrieval, grasp detection, and grasp execution, to identify which step leads to trial failure. Success of the first two is visually confirmed from the bounding box, while the last requires the target to be grasped, picked up, and held for 3 seconds. The voice command handover experiment additionally records the percentage of spoken commands translated exactly. The percentage of successful trials, termed the overall success rate, is also recorded.

Method R@1 R@3 R@5 R@10 P@1 P@3 P@5 P@10 FPS
Agn-Rnd 25.4 25.1 26.5 24.9 27.0 26.1 25.6 25.6 9.1
Ret-Gr 63.1 73.1 76.5 80.8 67.4 68.5 68.3 66.5 4.5
CGNet 74.9 88.2 91.0 93.2 76.1 75.3 74.8 72.2 8.7
CG+Ret 74.6 86.8 88.7 90.1 78.0 77.5 76.2 73.3 4.4
TABLE I: VMRD Multi-Grasp Evaluation

IV Results

Fig. 2: Grasp detection results for CGNet and Ret-Gr. The red box denotes the top grasp detection, and the green box the object retrieval result. CGNet avoids mistakes caused by: (a)(b)(c) inaccurate localization of the target object; and (d)(e) target bounding boxes that include interfering objects.

This section discusses the evaluation results of both the perception module and the designed physical experiments. Three baselines are compared with our method:
1) Random (Agn-Rnd): a state-of-the-art task-agnostic grasp detection network, MultiGrasp [chu2018real], followed by a random choice. The model is re-trained on VMRD.
2) Cascade (Ret-Gr): a cascade of a state-of-the-art natural language object retrieval model [nguyen2019object] and MultiGrasp [chu2018real]. The target object is the retrieval region with the highest score. Grasps within the retrieval region are kept and ranked by center-to-center distance (minimum first). Both models were re-trained on VMRD+C.
3) Obj-Gr: a grasp detection network with object classification from ROI-based grasp detection [roiGrasp2019]. The results are taken from the published work, which we treat as an upper bound since it skips the need to interpret commands.

We also evaluate CG+Ret, which takes CGNet output and ignores grasps not within the object retrieval region.

IV-A Vision

Table I lists the visual accuracy and efficiency of the evaluated methods. All methods outperform Agn-Rnd. The hypothesized value of encoding semantic information into grasp regions and skipping object detection is evident from the higher values of CGNet over Ret-Gr, by around 10% on average. Examples presented in Fig. 2 demonstrate some of the problems of the cascade baseline. One class of errors is inaccurate target localization, where the natural language object retrieval model yields imprecise locations, which leads to false grasp detections. CGNet avoids this step and this type of error. Another class of errors arises from overlapping or occluding objects, whereby the object boxes include distractor objects. Grasp regions are small enough that usually a single object is attached to them, thereby avoiding confusion from overlapping objects.

Objects CGNet Ret-Gr Obj-Gr1
Obj Grp Exe Obj Grp Exe Obj Grp Exe
Apple - 10 9 10 10 8 - 10 9
Banana - 10 10 10 10 10 - 10 10
Wrist Developer - 10 9 10 9 9 - 10 10
Tape - 10 10 10 10 10 - 8 8
Toothpaste - 10 10 9 9 9 - 10 10
Wrench - 9 9 5 5 5 - 10 8
Pliers - 10 8 1 1 1 - 10 10
Screwdriver - 10 9 5 5 5 - 10 9
Mean - 9.86 9.25 7.5 7.38 7.13 - 9.75 9.25
1 Results adopted from the original paper.

TABLE II: Unseen Single Object Grasping

Testing CGNet on the 471 no-target instances, for which no grasps should be proposed, yields an imperfect success rate, indicating that CGNet sometimes fails to recognize objects at the grasp level. Applying the object retrieval information as a prior helps distinguish between objects, as evidenced by the improved P@k of CG+Ret over CGNet alone. The trade-off is lower recall, which causes a drop in the R@k values.

IV-B Physical Experiments

IV-B1 Single Unseen Object Grasping

The generalization ability of CGNet is evident in Table II. Though the tested objects were not in the training data, the overall detection and execution success rates match Obj-Gr, which does not have to perform command interpretation. The Ret-Gr baseline is expected to have strong results as well, but there is a significant performance drop for the Wrench, Pliers, and Screwdriver. This may result from the domain shift of the unseen objects.

IV-B2 Multiple Unseen Objects Grasping

Here, all algorithms experience a performance drop, as seen in Table III. CGNet performs closer to Obj-Gr than to Ret-Gr, indicating that CGNet has learnt to encode similar object-level content in the grasp feature descriptors. However, the reduced performance also indicates that object-level discrimination is not as strong as it could be. Some form of non-local attention is most likely needed, or loose coupling to object-level feature descriptors.

Objects CGNet Ret-Gr Obj-Gr1
Obj Grp Exe Obj Grp Exe Obj Grp Exe
Apple - 10 8 10 10 10 - 10 9
Banana - 9 9 9 7 7 - 10 10
Wrist Developer - 8 8 9 6 6 - 7 6
Tape - 7 7 9 7 7 - 7 7
Toothpaste - 8 8 5 4 4 - 8 8
Wrench - 7 6 5 4 4 - 10 8
Pliers - 7 7 5 2 2 - 9 9
Screwdriver - 7 7 3 3 3 - 10 10
Mean - 7.88 7.50 6.88 5.38 5.38 - 8.88 8.38
1 Results adopted from the original paper.

TABLE III: Unseen Multiple Objects Grasping

IV-B3 Voice Command Handover

For the voice command version, the command interpreter does a better job than the voice-to-text module, as seen in Table IV. Out-of-vocabulary words are input to CGNet due to translation errors ("want" to "walked", "Help" to "How"), unexpected descriptions ("… on the table"), or colloquial words ("… please"). Under these challenges, CGNet still extracts the key information and achieves a reasonable task execution rate. It outperforms Ret-Gr on multiple objects and almost matches its single-object performance.

All of the outcomes support the value of task prioritization (here, grasps) for contextual interpretation of action commands. They also indicate that non-local information is important in cases of clutter, where other objects may have similar grasp-level feature encodings.

Voice Translation Grasp Prediction Execution
19/33 25/33 23/33
TABLE IV: Voice Command Handover

V Conclusion

This paper presents the Command Grasping Network (CGNet), a network that detects grasps conditioned on text input corresponding to a natural language command. By skipping the object retrieval step and directly detecting grasps, CGNet avoids the errors incurred by inaccurate object localization and post-processing in cascaded object retrieval and grasp detection models. Vision dataset evaluation and three physical experiments demonstrate the effectiveness and the generalization ability of CGNet.

Future work will explore implicit commands where the object is not named in the command proper but one of its properties is referenced. We would also like to incorporate higher-level or non-local visual cues to enhance grasp recognition rates. With both improvements, we envision that the system would be more effective at interacting with a human.

References