Neuro-Symbolic Causal Language Planning with Commonsense Prompting

06/06/2022
by Yujie Lu, et al.

Language planning aims to implement complex high-level goals by decomposing them into sequences of simpler low-level steps. Such procedural reasoning ability is essential for applications such as household robots and virtual assistants. Although language planning is a basic skill for humans in daily life, it remains a challenge for large language models (LLMs), which lack deep-level commonsense knowledge of the real world. Previous methods require either manual exemplars or annotated programs to acquire such ability from LLMs. In contrast, this paper proposes the Neuro-Symbolic Causal Language Planner (CLAP), which elicits procedural knowledge from LLMs with commonsense-infused prompting. Pre-trained knowledge in LLMs is essentially an unobserved confounder that causes spurious correlations between tasks and action plans. Through the lens of a Structural Causal Model (SCM), we propose an effective strategy in CLAP to construct prompts as a causal intervention on our SCM. Using graph sampling techniques and symbolic program executors, our strategy formalizes structured causal prompts from commonsense knowledge bases. CLAP obtains state-of-the-art performance on WikiHow and RobotHow, achieving a relative improvement of 5.28%. This indicates the superiority of CLAP in causal language planning, both semantically and sequentially.
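To make the prompting idea concrete, the sketch below illustrates one way commonsense knowledge could be sampled from a graph and serialized into a structured planning prompt for an LLM. It is a minimal illustration under assumptions, not the authors' CLAP implementation: the toy knowledge graph, the `sample_subgraph` and `build_prompt` helpers, and the commented-out `complete()` LLM call are hypothetical stand-ins.

```python
# Minimal, illustrative sketch of commonsense-infused prompting for language
# planning. NOT the authors' CLAP system: the knowledge graph, sampling
# heuristic, and LLM call below are hypothetical stand-ins.

import random

# Toy commonsense knowledge graph of (head, relation, tail) triples,
# in the spirit of ConceptNet-style resources.
KG = [
    ("make coffee", "HasSubevent", "boil water"),
    ("make coffee", "HasSubevent", "grind beans"),
    ("boil water", "RequiresTool", "kettle"),
    ("grind beans", "RequiresTool", "grinder"),
    ("make coffee", "AtLocation", "kitchen"),
]

def sample_subgraph(task, kg, max_hops=2, max_triples=4):
    """Collect triples reachable from the task node: a crude stand-in for
    the graph-sampling step that selects task-relevant knowledge."""
    frontier, selected = {task}, []
    for _ in range(max_hops):
        hits = [t for t in kg if t[0] in frontier and t not in selected]
        random.shuffle(hits)
        selected.extend(hits[: max_triples - len(selected)])
        frontier |= {t[2] for t in hits}
        if len(selected) >= max_triples:
            break
    return selected

def build_prompt(task, triples):
    """Serialize the sampled knowledge into a structured prompt. Conditioning
    the LLM on knowledge retrieved outside its own parameters plays the role
    of the intervention that the paper uses to bypass spurious task-plan
    correlations induced by pre-trained knowledge."""
    facts = "\n".join(f"- {h} {r} {t}" for h, r, t in triples)
    return (
        f"Task: {task}\n"
        f"Relevant commonsense:\n{facts}\n"
        f"Step-by-step plan:\nStep 1:"
    )

prompt = build_prompt("make coffee", sample_subgraph("make coffee", KG))
print(prompt)
# plan = complete(prompt)  # hypothetical call to a large language model
```

The sketch only covers the mechanical prompt-construction side; the paper's contribution is framing such prompts as causal interventions within an SCM and grounding them with symbolic program executors.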


