
What do Large Language Models Learn about Scripts?

by Abhilasha Sancheti et al.

Script knowledge (Schank and Abelson, 1975) has long been recognized as crucial for language understanding, as it can help fill in unstated information in a narrative. However, such knowledge is expensive to produce manually and difficult to induce from text due to reporting bias (Gordon and Van Durme, 2013). In this work, we are interested in the scientific question of whether explicit script knowledge is present in, and accessible through, pre-trained generative language models (LMs). To this end, we introduce the task of generating full event sequence descriptions (ESDs) given a scenario in the form of a natural language prompt. In zero-shot probing experiments, we find that generative LMs produce poor ESDs, with events that are mostly omitted, irrelevant, repeated, or misordered. To address this, we propose a pipeline-based script induction framework (SIF) that can generate good-quality ESDs for unseen scenarios (e.g., bake a cake). SIF is a two-stage framework: the first stage fine-tunes an LM on a small set of ESD examples. In the second stage, the ESD generated for an unseen scenario is post-processed using RoBERTa-based models to filter out irrelevant events, remove repetitions, and reorder temporally misordered events. Through automatic and manual evaluations, we demonstrate that SIF yields substantial improvements (1-3 BLEU points) over a fine-tuned LM. However, manual analysis shows that there is great room for improvement, offering a new research direction for inducing script knowledge.
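The second-stage pipeline described above can be sketched as three composable steps: filter, deduplicate, reorder. The sketch below is illustrative only; the function names and the toy relevance and ordering callables are hypothetical stand-ins for the fine-tuned RoBERTa-based models the abstract mentions.

```python
# Hypothetical sketch of SIF's post-processing stage. In the paper, the
# relevance check and the temporal-ordering score come from RoBERTa-based
# models; here they are passed in as plain callables.

def filter_irrelevant(events, is_relevant):
    """Drop events that the relevance model rejects."""
    return [e for e in events if is_relevant(e)]

def deduplicate(events):
    """Remove repeated events, keeping the first occurrence in order."""
    seen, out = set(), []
    for e in events:
        if e not in seen:
            seen.add(e)
            out.append(e)
    return out

def reorder(events, order_key):
    """Sort events by a temporal-ordering score."""
    return sorted(events, key=order_key)

def postprocess_esd(events, is_relevant, order_key):
    """Full pipeline: filter -> deduplicate -> reorder."""
    return reorder(deduplicate(filter_irrelevant(events, is_relevant)),
                   order_key)

# Toy usage for the "bake a cake" scenario from the abstract.
raw_esd = ["preheat oven", "walk the dog", "mix batter",
           "mix batter", "bake", "preheat oven"]
relevant = lambda e: e != "walk the dog"          # stub relevance model
order = {"preheat oven": 0, "mix batter": 1, "bake": 2}
clean_esd = postprocess_esd(raw_esd, relevant, order.get)
```

The pipeline shape (generation followed by discriminative post-processing) keeps each correction step independently trainable and evaluable, which matches the paper's framing of filtering, repetition removal, and reordering as separate models.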



