Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents

01/18/2022
by Wenlong Huang, et al.

Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (e.g. "make breakfast"), to a chosen set of actionable steps (e.g. "open fridge"). While prior work focused on learning from explicit step-by-step examples of how to act, we find, surprisingly, that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot be mapped precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. A human evaluation reveals a trade-off between executability and correctness, but shows a promising sign towards extracting actionable knowledge from language models. Website: https://huangwl18.github.io/language-planner
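As a rough illustration of the two-stage pipeline the abstract describes (not the authors' released code), the sketch below first prompts a frozen LLM with a demonstration so its continuation decomposes a new task into free-form steps, then semantically translates each step to the closest admissible action by cosine similarity in a sentence-embedding space. The `generate` call, the demonstration prompt, and the toy action list are hypothetical placeholders; the embedding step uses the sentence-transformers library.

```python
# Minimal sketch of the plan-then-translate idea, under stated assumptions.
from sentence_transformers import SentenceTransformer, util

# Hypothetical admissible action set, in the spirit of VirtualHome.
ADMISSIBLE_ACTIONS = [
    "walk to kitchen",
    "open fridge",
    "grab milk",
    "close fridge",
    "switch on stove",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
action_embeddings = embedder.encode(ADMISSIBLE_ACTIONS, convert_to_tensor=True)

def translate_step(free_form_step: str) -> str:
    """Map a free-form LLM-generated step to the most similar
    admissible action via cosine similarity of sentence embeddings."""
    step_embedding = embedder.encode(free_form_step, convert_to_tensor=True)
    scores = util.cos_sim(step_embedding, action_embeddings)[0]
    return ADMISSIBLE_ACTIONS[int(scores.argmax())]

# Stage 1 (sketched): condition a frozen LLM on one demonstration so that
# its continuation decomposes the new task into numbered steps.
prompt = (
    "Task: make coffee\n"
    "Step 1: walk to kitchen\n"
    "Step 2: switch on coffee maker\n\n"
    "Task: make breakfast\n"
    "Step 1:"
)
# raw_plan = generate(prompt)  # hypothetical LLM call; output parsed to steps
raw_plan = ["go to the kitchen", "open the refrigerator", "take out the milk"]

# Stage 2: translate each free-form step into an admissible action.
executable_plan = [translate_step(step) for step in raw_plan]
print(executable_plan)
# e.g. ['walk to kitchen', 'open fridge', 'grab milk']
```

The sketch collapses several of the paper's design choices (e.g., how the conditioning demonstration is selected and which models perform generation versus translation) into fixed placeholders; it is meant only to show the shape of the procedure.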


Related research

04/04/2022
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
Large language models can encode a wealth of semantic knowledge about th...

10/10/2022
Generating Executable Action Plans with Environmentally-Aware Language Models
Large Language Models (LLMs) trained using massive text datasets have re...

11/17/2022
Planning with Large Language Models via Corrective Re-prompting
Extracting the common sense knowledge present in Large Language Models (...

09/18/2023
Conformal Temporal Logic Planning using Large Language Models: Knowing When to Do What and When to Ask for Help
This paper addresses a new motion planning problem for mobile robots tas...

09/29/2020
Visually-Grounded Planning without Vision: Language Models Infer Detailed Plans from High-level Instructions
The recently proposed ALFRED challenge task aims for a virtual robotic a...

05/26/2023
Learning and Leveraging Verifiers to Improve Planning Capabilities of Pre-trained Language Models
There have been widespread claims in the literature about the emergent ...

07/10/2023
Large Language Models as General Pattern Machines
We observe that pre-trained large language models (LLMs) are capable of ...
