Chain-of-Symbol Prompting Elicits Planning in Large Language Models

05/17/2023
by   Hanxu Hu, et al.
In this paper, we take the initiative to investigate the performance of LLMs on complex planning tasks that require an LLM to understand a virtual spatial environment simulated via natural language and act correspondingly in text. We propose a benchmark named Natural Language Planning (NLP) composed of a set of novel tasks: Brick World, NLVR-based Manipulations, and Natural Language Navigation. We find that current popular LLMs such as ChatGPT still lack the ability to perform complex planning. This raises a question: do LLMs have a good understanding of environments described in natural language, or might alternatives such as symbolic representations be more concise and hence easier for LLMs to understand? To this end, we propose a novel method called CoS (Chain-of-Symbol Prompting) that represents complex environments with condensed symbolic spatial representations during the chained intermediate thinking steps. CoS is easy to use and does not require additional training of LLMs. Extensive experiments indicate that CoS clearly surpasses the performance of Chain-of-Thought (CoT) Prompting on all three planning tasks, while using even fewer input tokens than CoT on ChatGPT and InstructGPT. The performance gain is strong, up to 60.8 on Brick World for ChatGPT. CoS also markedly reduces the number of tokens in the prompt, by up to 65.8 from the demonstrations on Brick World.
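The contrast between CoT and CoS can be illustrated with a small sketch. The symbolic notation below (e.g. `C/B` for "brick C is on top of brick B") is an illustrative guess at what a condensed spatial representation might look like, not the authors' exact format, and the whitespace token count is only a rough stand-in for a real tokenizer:

```python
# Hypothetical sketch contrasting Chain-of-Thought (CoT) and
# Chain-of-Symbol (CoS) demonstrations for a Brick World task.
# The "X/Y" notation (X on top of Y) is an assumed, illustrative encoding.

def cot_demo() -> str:
    # CoT: intermediate planning steps spelled out in full natural language.
    return (
        "Brick B is on top of brick A. Brick C is on top of brick B.\n"
        "To get brick A, first remove brick C, then remove brick B, "
        "then grab brick A."
    )

def cos_demo() -> str:
    # CoS: the same spatial relations and plan condensed into symbols,
    # shortening the prompt while keeping the planning structure intact.
    return (
        "C/B, B/A\n"
        "A: remove C, remove B, grab A"
    )

def token_estimate(text: str) -> int:
    # Crude whitespace tokenization, just to illustrate the savings.
    return len(text.split())

if __name__ == "__main__":
    print("CoT tokens:", token_estimate(cot_demo()))
    print("CoS tokens:", token_estimate(cos_demo()))
```

In practice, such symbolic demonstrations would be prepended as few-shot examples to the model's input; the point of the sketch is simply that the symbolic form conveys the same spatial relations in far fewer tokens.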

