Code as Policies: Language Model Programs for Embodied Control

09/16/2022
by Jacky Liang, et al.

Large language models (LLMs) trained on code completion have been shown to be capable of synthesizing simple Python programs from docstrings [1]. We find that these code-writing LLMs can be re-purposed to write robot policy code, given natural language commands. Specifically, policy code can express functions or feedback loops that process perception outputs (e.g., from object detectors [2], [3]) and parameterize control primitive APIs. When provided with several example language commands (formatted as comments) followed by corresponding policy code (via few-shot prompting) as input, LLMs can take in new commands and autonomously re-compose API calls to generate new policy code accordingly. By chaining classic logic structures and referencing third-party libraries (e.g., NumPy, Shapely) to perform arithmetic, LLMs used in this way can write robot policies that (i) exhibit spatial-geometric reasoning, (ii) generalize to new instructions, and (iii) prescribe precise values (e.g., velocities) for ambiguous descriptions ("faster") depending on context (i.e., behavioral commonsense). This paper presents code as policies: a robot-centric formalization of language-model-generated programs (LMPs) that can represent reactive policies (e.g., impedance controllers) as well as waypoint-based policies (vision-based pick and place, trajectory-based control), demonstrated across multiple real robot platforms. Central to our approach is prompting hierarchical code-gen (recursively defining undefined functions), which can write more complex code and also improves the state of the art, solving 39.8% of problems on the HumanEval [1] benchmark. Code and videos are available at https://code-as-policies.github.io
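To make the prompting pattern concrete, here is a minimal illustrative sketch (not the paper's actual prompt or APIs): example commands appear as comments paired with policy code, and new policy code composes assumed perception/control primitives (`get_obj_pos` and `put_first_on_second` are hypothetical stand-ins, stubbed out here so the snippet runs) with NumPy arithmetic for spatial reasoning.

```python
import numpy as np

# Hypothetical perception/control primitives standing in for a robot API;
# a real system would back these with an object detector and motion primitives.
OBJECTS = {"red block": np.array([0.10, 0.20]),
           "blue bowl": np.array([0.40, 0.50])}

def get_obj_pos(name):
    # Perception stub: 2D tabletop position of a named object.
    return OBJECTS[name]

def put_first_on_second(src_name, target_pos):
    # Control stub: "move" the source object to the target position.
    OBJECTS[src_name] = np.asarray(target_pos, dtype=float).copy()

# Few-shot examples in the prompt pair a comment (the command) with policy code:
# "put the red block on the blue bowl"
put_first_on_second("red block", get_obj_pos("blue bowl"))

# Given a new command, the LLM re-composes the same API calls, here resolving
# a relational phrase into arithmetic on a detected position:
# "put the red block a bit to the left of the blue bowl"
target = get_obj_pos("blue bowl") + np.array([-0.05, 0.0])
put_first_on_second("red block", target)

print(get_obj_pos("red block"))
```

The point of the pattern is that spatial relations ("a bit to the left of") become explicit NumPy offsets on perception outputs, which the LLM can generalize to unseen commands.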


