Learning 6-DoF Object Poses to Grasp Category-level Objects by Language Instructions

05/09/2022
by   Chilam Cheang, et al.

This paper studies the task of grasping any object from known categories by following free-form language instructions. The task demands techniques from computer vision, natural language processing, and robotics; we bring these disciplines together on this open challenge, which is essential to human-robot interaction. Critically, the key challenge lies in inferring the category of the target object from the linguistic instruction and accurately estimating the 6-DoF pose of unseen instances from known classes. In contrast, previous works infer the pose of object candidates at the instance level, which significantly limits their applicability in real-world scenarios. In this paper, we propose a language-guided 6-DoF category-level object localization model that achieves robotic grasping by comprehending human intention. To this end, we propose a novel two-stage method. The first stage grounds the target in the RGB image through the language description of names, attributes, and spatial relations of objects. The second stage extracts and segments point clouds from the cropped depth image and estimates the full 6-DoF object pose at the category level. In this manner, our approach can locate a specific object by following human instructions and estimate the full 6-DoF pose of a category-known but unseen instance that was not used to train the model. Extensive experimental results show that our method is competitive with state-of-the-art language-conditioned grasping methods. Importantly, we deploy our approach on a physical robot to validate the usability of our framework in real-world applications. Please refer to the supplementary material for demo videos of our robot experiments.
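The bridge between the two stages described above is a point cloud extracted from the depth crop returned by the language-grounding step. As a minimal sketch of that hand-off, the function below back-projects a cropped depth image into 3D points using the standard pinhole camera model; the function name, the bounding-box convention, and the intrinsics values in the usage example are illustrative assumptions, not the authors' API.

```python
def depth_to_points(depth, bbox, fx, fy, cx, cy):
    """Back-project a depth crop into a 3D point cloud (camera frame).

    depth: 2D list of metric depth values (rows of pixels).
    bbox:  (u0, v0, w, h) box from the grounding stage (assumed format).
    fx, fy, cx, cy: pinhole camera intrinsics.
    Returns a list of (x, y, z) points, one per valid depth pixel.
    """
    u0, v0, w, h = bbox
    points = []
    for dv in range(h):
        for du in range(w):
            z = depth[v0 + dv][u0 + du]
            if z <= 0:  # skip missing/invalid depth readings
                continue
            u, v = u0 + du, v0 + dv
            # Standard pinhole back-projection of pixel (u, v) at depth z.
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points


# Usage with toy values: a 1x2 depth crop where the first pixel is invalid.
pts = depth_to_points([[0.0, 1.0]], (0, 0, 2, 1),
                      fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

In the full method, these points would be segmented to isolate the object before the category-level 6-DoF pose is regressed; this sketch covers only the geometric back-projection step.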


Related research

A Joint Modeling of Vision-Language-Action for Target-oriented Grasping in Clutter (02/24/2023)
We focus on the task of language-conditioned grasping in clutter, in whi...

DONet: Learning Category-Level 6D Object Pose and Size Estimation from Depth Observation (06/27/2021)
We propose a method of Category-level 6D Object Pose and Size Estimation...

Learning Task-Oriented Grasping from Human Activity Datasets (10/25/2019)
We propose to leverage a real-world, human activity RGB datasets to teac...

DemoGrasp: Few-Shot Learning for Robotic Grasping with Human Demonstration (12/06/2021)
The ability to successfully grasp objects is crucial in robotics, as it ...

WALL-E: Embodied Robotic WAiter Load Lifting with Large Language Model (08/30/2023)
Enabling robots to understand language instructions and react accordingl...

Object-centric Inference for Language Conditioned Placement: A Foundation Model based Approach (04/06/2023)
We focus on the task of language-conditioned object placement, in which ...

Leveraging Explainability for Comprehending Referring Expressions in the Real World (07/12/2021)
For effective human-robot collaboration, it is crucial for robots to und...
