Learning and Leveraging Verifiers to Improve Planning Capabilities of Pre-trained Language Models

05/26/2023
by Daman Arora, et al.

There have been widespread claims in the literature about the emergent reasoning capabilities of pretrained large language models. However, recent studies have found that their ability to plan remains questionable. Through our experiments using GPT-2, we empirically demonstrate that the performance of a finetuned baseline remains poor because it violates the preconditions of actions in the plans it generates. To improve the planning capabilities of a finetuned LLM, we train a verifier that can classify actions as valid or invalid in a given state. By randomly sampling actions from the same dataset, we generate examples of invalid actions, which are then used to train the verifier to check action applicability. With diverse sampling from the generator and a verifier that prunes invalid trajectories, we show significant gains in success rate on the Blocksworld domain. Additionally, we show that finetuning the GPT-2 generator itself to create the verifier generalizes better than finetuning the base GPT-2. Lastly, we investigate the role of the sampling temperature, which can be used to control the exploration-exploitation tradeoff.
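The abstract describes a generate-then-verify pipeline: a finetuned GPT-2 samples candidate plans at some temperature, a verifier trained on randomly corrupted actions checks each step for applicability, and trajectories containing invalid actions are pruned. Below is a minimal sketch of that loop using Hugging Face transformers; the checkpoint names, the "[ACTION]" prompt encoding, the one-action-per-line plan format, and the valid/invalid label convention are illustrative assumptions, not the authors' actual implementation. Raising the temperature passed to the sampler trades exploitation of likely plans for exploration of more diverse ones.

```python
import random
import torch
from transformers import (
    GPT2LMHeadModel,
    GPT2Tokenizer,
    GPT2ForSequenceClassification,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Hypothetical checkpoints: a GPT-2 finetuned to emit plans, and a GPT-2
# finetuned as a binary classifier over (state/prefix, action) inputs.
generator = GPT2LMHeadModel.from_pretrained("finetuned-gpt2-planner")
verifier = GPT2ForSequenceClassification.from_pretrained("finetuned-gpt2-verifier")
verifier.config.pad_token_id = tokenizer.pad_token_id


def make_verifier_examples(pairs):
    """Build (context, action, label) training pairs for the verifier.

    `pairs` is assumed to be a list of (context, valid_action) tuples from the
    planning dataset; an action drawn at random from another pair serves as a
    negative (presumed invalid) example, mirroring the random-sampling idea above.
    """
    all_actions = [a for _, a in pairs]
    examples = []
    for context, action in pairs:
        examples.append((context, action, 1))                      # valid
        examples.append((context, random.choice(all_actions), 0))  # presumed invalid
    return examples


def action_is_valid(context, action):
    """Ask the verifier whether `action` is applicable given `context`."""
    inputs = tokenizer(context + " [ACTION] " + action, return_tensors="pt")
    with torch.no_grad():
        logits = verifier(**inputs).logits
    return logits.argmax(dim=-1).item() == 1  # label 1 = valid (assumed convention)


def sample_plans(problem, k=16, temperature=1.0):
    """Draw k candidate plans; higher temperature favours exploration."""
    inputs = tokenizer(problem, return_tensors="pt")
    outputs = generator.generate(
        **inputs,
        do_sample=True,
        temperature=temperature,
        num_return_sequences=k,
        max_new_tokens=128,
        pad_token_id=tokenizer.pad_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    return [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]


def first_verified_plan(problem, temperature=1.0):
    """Return the first sampled plan in which every action passes the verifier."""
    for plan in sample_plans(problem, temperature=temperature):
        context, ok = problem, True
        for action in plan.splitlines():  # one action per line (assumed format)
            if not action_is_valid(context, action):
                ok = False  # prune trajectories containing an inapplicable action
                break
            context = context + "\n" + action  # verifier conditions on the prefix so far
        if ok:
            return plan
    return None
```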

