Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model

05/18/2023
by Siyuan Huang, et al.

Foundation models have made significant strides in various applications, including text-to-image generation, panoptic segmentation, and natural language processing. This paper presents Instruct2Act, a framework that utilizes Large Language Models to map multi-modal instructions to sequential actions for robotic manipulation tasks. Specifically, Instruct2Act employs an LLM to generate Python programs that constitute a comprehensive perception, planning, and action loop for robotic tasks. In the perception section, pre-defined APIs are used to access multiple foundation models, in which the Segment Anything Model (SAM) accurately locates candidate objects and CLIP classifies them. In this way, the framework combines the expertise of foundation models with robotic capabilities to convert complex high-level instructions into precise policy code. Our approach is adjustable and flexible, accommodating various instruction modalities and input types while catering to specific task demands. We validated the practicality and efficiency of our approach by assessing it on robotic tasks in different scenarios within tabletop manipulation domains. Furthermore, our zero-shot method outperformed many state-of-the-art learning-based policies in several tasks. The code for our proposed approach is available at https://github.com/OpenGVLab/Instruct2Act, serving as a robust benchmark for high-level robotic instruction tasks with assorted modality inputs.
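To make the perception-planning-action loop concrete, below is a minimal, illustrative sketch of the kind of Python program the LLM is prompted to generate under this framework. All function names (get_observation, sam_segment, clip_retrieve, pick_place) are hypothetical placeholders standing in for the pre-defined APIs that wrap SAM, CLIP, and the robot's motion primitives; the actual interfaces are defined in the repository linked above.

```python
# Hedged sketch of an Instruct2Act-style generated program.
# Every function below is a hypothetical placeholder for the framework's
# pre-defined perception and action APIs, not the repository's actual code.

from typing import List


def get_observation() -> "Image":
    """Placeholder: capture the current top-down tabletop image."""
    raise NotImplementedError


def sam_segment(image) -> List["Mask"]:
    """Placeholder: Segment Anything Model proposes candidate object masks."""
    raise NotImplementedError


def clip_retrieve(image, masks: List["Mask"], query: str) -> "Mask":
    """Placeholder: CLIP scores each masked crop against the text query
    and returns the mask of the best-matching object."""
    raise NotImplementedError


def pick_place(source_mask: "Mask", target_mask: "Mask") -> None:
    """Placeholder: low-level primitive that grasps the source object
    and places it at the target object's location."""
    raise NotImplementedError


def rearrange(source_query: str, target_query: str) -> None:
    # Perception: segment the scene, then ground both language queries.
    image = get_observation()
    masks = sam_segment(image)
    source = clip_retrieve(image, masks, source_query)
    target = clip_retrieve(image, masks, target_query)
    # Action: execute the manipulation primitive on the grounded objects.
    pick_place(source, target)


# Example of the kind of call a generated program might end with:
# rearrange("the red block", "the green bowl")
```

The design point this illustrates is that the LLM never outputs raw robot commands; it composes calls to a fixed set of perception and action primitives, so the foundation models handle grounding while the generated code handles task logic.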
