A Plug-and-Play Method for Controlled Text Generation

09/20/2021
by Damian Pascual, et al.

Large pre-trained language models have repeatedly shown their ability to produce fluent text. Yet even when starting from a prompt, generation can continue in many plausible directions. Current decoding methods that aim to control generation, e.g., to ensure specific words are included, either require additional models or fine-tuning, or work poorly when the task at hand is semantically unconstrained, e.g., story generation. In this work, we present a plug-and-play decoding method for controlled language generation that is so simple and intuitive it can be described in a single sentence: given a topic or keyword, we shift the probability distribution over the vocabulary towards semantically similar words. We show how annealing this distribution can be used to impose hard constraints on language generation, something no other plug-and-play method currently achieves with state-of-the-art language generators. Despite its simplicity, the approach works remarkably well in practice: decoding from GPT-2 yields diverse and fluent sentences while guaranteeing the appearance of given guide words. We perform two user studies, which reveal that (1) our method outperforms competing methods in human evaluations, and (2) forcing the guide words to appear has no impact on the fluency of the generated text.
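
To make the idea concrete, below is a minimal sketch of such a shift in code. It is not the paper's implementation: it assumes similarity is measured with GPT-2's own input embeddings (external word vectors could be used instead), applies a simple additive bias on the next-token logits scaled by a hypothetical "strength" parameter, decodes greedily, and simply drops the bias once the guide word has been produced rather than annealing it over the remaining generation length.

# Sketch: keyword-guided decoding by shifting the next-token distribution
# toward tokens semantically similar to a guide word.
# Assumptions (not from the paper): model-internal embeddings for similarity,
# additive logit bias, greedy decoding.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def guided_generate(prompt, guide_word, strength=5.0, max_new_tokens=30):
    # Embedding of the guide word (first sub-token) from the input embedding matrix.
    emb = model.get_input_embeddings().weight              # (vocab_size, hidden)
    guide_id = tokenizer.encode(" " + guide_word)[0]
    guide_vec = emb[guide_id]

    # Cosine similarity of every vocabulary token to the guide word.
    sim = torch.nn.functional.cosine_similarity(emb, guide_vec.unsqueeze(0), dim=-1)

    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(input_ids).logits[0, -1]        # next-token logits
        # Plug-and-play shift: bias logits toward semantically similar tokens.
        shifted = logits + strength * sim
        next_id = torch.argmax(shifted).unsqueeze(0).unsqueeze(0)
        input_ids = torch.cat([input_ids, next_id], dim=1)
        if next_id.item() == guide_id:
            strength = 0.0                                 # guide word reached; stop shifting
    return tokenizer.decode(input_ids[0])

print(guided_generate("The weather today is", "snow"))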

