Goal-Directed Story Generation: Augmenting Generative Language Models with Reinforcement Learning

12/16/2021
by Amal Alabdulkarim, et al.

The advent of large pre-trained generative language models has provided a common framework for AI story generation: sample the model to produce sequences that continue the story. However, sampling alone is insufficient for story generation; in particular, it is hard to direct a language model to create stories that reach a specific goal event. We present two automated techniques grounded in deep reinforcement learning and reward shaping to control the plot of computer-generated stories. The first uses proximal policy optimization to fine-tune an existing transformer-based language model so that its text continuations are also goal-seeking. The second extracts a knowledge graph from the unfolding story, which a policy network with graph attention uses to select a candidate continuation generated by a language model. We report automated metrics on how often stories achieve a given goal event, as well as human participant rankings of coherence and overall story quality against baselines and ablations.
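The abstract only names the two techniques, so the sketches below are illustrative reconstructions rather than the authors' code. The first is a minimal sketch of reward shaping plus the standard PPO clipped surrogate that could drive goal-seeking fine-tuning of a language-model policy; the cosine-similarity goal reward and all function names are assumptions, not the paper's actual reward design.

```python
# Hypothetical sketch: goal-proximity reward shaping and the PPO clipped
# objective for fine-tuning a language-model policy. The embedding-based
# reward is an illustrative stand-in for the paper's reward shaping.
import torch
import torch.nn.functional as F


def goal_proximity_reward(story_emb: torch.Tensor, goal_emb: torch.Tensor) -> torch.Tensor:
    """Reward each generated continuation by its similarity to the goal event.

    story_emb: (batch, dim) embeddings of generated continuations.
    goal_emb:  (dim,)       embedding of the target goal event.
    """
    return F.cosine_similarity(story_emb, goal_emb.unsqueeze(0), dim=-1)


def ppo_clipped_loss(logp_new: torch.Tensor,
                     logp_old: torch.Tensor,
                     advantages: torch.Tensor,
                     clip_eps: float = 0.2) -> torch.Tensor:
    """Standard PPO clipped surrogate objective (returned as a loss to minimize)."""
    ratio = torch.exp(logp_new - logp_old)          # importance ratio per sample
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

In such a setup, the advantages would typically be the shaped rewards minus a learned or running-average baseline, and the loss would be applied to the log-probabilities the fine-tuned model assigns to its own sampled continuations.

The second sketch, also an assumption rather than the authors' implementation, shows how a policy network with graph attention might encode a knowledge graph extracted from the unfolding story and score candidate continuations produced by a language model. Layer sizes, the mean-pooling readout, and the bilinear compatibility score are illustrative choices.

```python
# Hypothetical sketch: a graph-attention policy that scores K candidate
# continuations against a knowledge graph of the story so far.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, global_mean_pool


class GraphPolicy(nn.Module):
    def __init__(self, node_dim: int, cand_dim: int, hidden: int = 128, heads: int = 4):
        super().__init__()
        self.gat1 = GATConv(node_dim, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, hidden, heads=1)
        # Bilinear compatibility between the graph summary and each candidate.
        self.score = nn.Bilinear(hidden, cand_dim, 1)

    def forward(self, x, edge_index, batch, cand_emb):
        """x:          (N, node_dim) node features of the story knowledge graph.
        edge_index: (2, E)        graph connectivity.
        batch:      (N,)          graph id per node (all zeros for one story).
        cand_emb:   (K, cand_dim) embeddings of K candidate continuations.
        Returns a probability distribution over the K candidates."""
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        g = global_mean_pool(h, batch)                       # (1, hidden) graph summary
        logits = self.score(g.expand(cand_emb.size(0), -1), cand_emb).squeeze(-1)
        return torch.softmax(logits, dim=-1)                 # policy over candidates
```

The resulting distribution can be sampled to pick the continuation appended to the story, with the same shaped goal reward used to train the policy's selections.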


