Embodied Executable Policy Learning with Language-based Scene Summarization

06/09/2023
by Jielin Qiu, et al.

Large language models (LLMs) have shown remarkable success in assisting robot learning tasks, e.g., complex household planning. However, the performance of pretrained LLMs heavily relies on domain-specific templated text data, which may be infeasible in real-world robot learning tasks with image-based observations. Moreover, existing LLMs with text inputs lack the capability to evolve through non-expert interactions with environments. In this work, we introduce a novel learning paradigm that generates robots' executable actions in the form of text, derived solely from visual observations, using language-based summarization of these observations as the bridge between the two domains. Our proposed paradigm stands apart from previous works, which utilized either language instructions or a combination of language and visual data as inputs. Moreover, our method does not require oracle text summarization of the scene, eliminating the need for human involvement in the learning loop and making it more practical for real-world robot learning tasks. Our proposed paradigm consists of two modules: the SUM module, which interprets the environment using visual observations and produces a text summary of the scene, and the APM module, which generates executable action policies based on the natural language descriptions provided by SUM. We demonstrate that our proposed method can employ two fine-tuning strategies, imitation learning and reinforcement learning, to adapt effectively to target test tasks. We conduct extensive experiments involving various SUM/APM model selections, environments, and tasks across 7 house layouts in the VirtualHome environment. Our experimental results demonstrate that our method surpasses existing baselines, confirming the effectiveness of this novel learning paradigm.
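The abstract outlines a two-stage pipeline: SUM maps an image-based observation to a natural-language scene summary, and APM maps that summary to executable action text. The following is a minimal sketch of that data flow, assuming hypothetical interfaces; the class names (SceneSummarizer, ActionPolicyModel), the Observation type, and the example action strings are illustrative placeholders, not the authors' actual code.

```python
# Illustrative sketch (not the authors' implementation) of the SUM -> APM
# pipeline described in the abstract.

from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    """An image-based observation from the environment (e.g., VirtualHome)."""
    rgb: bytes  # raw image data; a real system would hold an array/tensor


class SceneSummarizer:
    """SUM: interprets a visual observation and produces a text summary."""

    def summarize(self, obs: Observation) -> str:
        # A pretrained vision-language captioning model would run here.
        raise NotImplementedError


class ActionPolicyModel:
    """APM: generates executable action text from a scene summary."""

    def act(self, scene_summary: str) -> List[str]:
        # A pretrained language model, fine-tuned with imitation or
        # reinforcement learning, would generate executable actions here,
        # e.g. ["[WALK] <kitchen>", "[GRAB] <mug>"] (hypothetical format).
        raise NotImplementedError


def rollout_step(sum_module: SceneSummarizer,
                 apm_module: ActionPolicyModel,
                 obs: Observation) -> List[str]:
    """One step of the paradigm: observation -> text summary -> action text."""
    summary = sum_module.summarize(obs)
    return apm_module.act(summary)
```

The key design choice reflected in the sketch is that APM never sees pixels: the language summary produced by SUM is the only bridge between the visual and textual domains, which is what allows a text-only policy model to be fine-tuned with imitation or reinforcement learning.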
