Control Prefixes for Text Generation

10/15/2021
by Jordan Clive, et al.

Prompt learning methods adapt pre-trained language models to downstream applications by prepending a task-specific prompt to the input. Most current work on prompt learning for text generation relies on a single dataset-level prompt shared by all examples. We extend this approach and propose Control Prefixes, a dynamic method that allows conditional, input-dependent information to be included in each prompt. Control Prefixes sits at the intersection of prompt learning and controlled generation, giving the model finer-grained control during text generation. The method incorporates attribute-level learnable representations into different layers of a pre-trained transformer, allowing the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation (NLG). We present state-of-the-art results on several data-to-text datasets, including WebNLG.
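
The core mechanism the abstract describes, a shared task-level prefix combined with attribute-level key/value prefixes injected at every transformer layer, can be sketched as follows. This is a rough illustration rather than the authors' implementation: it assumes a GPT-2 backbone and the Hugging Face `past_key_values` interface (the paper evaluates encoder-decoder models on GEM tasks), and the class name, attribute labels, and prefix lengths are invented for the example.

```python
# Minimal sketch of Control-Prefix-style conditioning, assuming GPT-2.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

class ControlPrefixes(nn.Module):
    """A shared, dataset-level prefix plus one learnable prefix per
    attribute value, injected as key/value pairs at every layer."""

    def __init__(self, config, attributes, task_len=10, attr_len=3):
        super().__init__()
        self.n_layer = config.n_layer
        self.n_head = config.n_head
        self.head_dim = config.n_embd // config.n_head

        def init(length):
            # (n_layer, 2 [key/value], n_head, length, head_dim)
            return nn.Parameter(
                0.02 * torch.randn(self.n_layer, 2, self.n_head,
                                   length, self.head_dim))

        self.task_prefix = init(task_len)       # shared by all examples
        self.attr_prefix = nn.ParameterDict(    # selected per example
            {a: init(attr_len) for a in attributes})

    def past_key_values(self, attribute, batch_size):
        # Concatenate the task-level and attribute-level prefixes along
        # the sequence axis, then expand to the batch.
        p = torch.cat([self.task_prefix, self.attr_prefix[attribute]], dim=3)
        p = p.unsqueeze(0).expand(batch_size, -1, -1, -1, -1, -1)
        # GPT-2 expects a tuple of per-layer (key, value) tensors of
        # shape (batch, n_head, prefix_len, head_dim).
        return tuple((p[:, i, 0], p[:, i, 1]) for i in range(self.n_layer))

model = GPT2LMHeadModel.from_pretrained("gpt2")
for param in model.parameters():
    param.requires_grad = False   # the LM stays frozen; only prefixes train

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
prefixes = ControlPrefixes(model.config, attributes=["Airport", "Food"])

enc = tokenizer("(Aarhus_Airport, cityServed, Aarhus)", return_tensors="pt")
past = prefixes.past_key_values("Airport", batch_size=1)
prefix_len = past[0][0].size(2)
# The attention mask must also cover the virtual prefix positions.
mask = torch.cat([torch.ones(1, prefix_len, dtype=enc["attention_mask"].dtype),
                  enc["attention_mask"]], dim=1)
out = model(input_ids=enc["input_ids"],
            past_key_values=past,
            attention_mask=mask,
            labels=enc["input_ids"])
out.loss.backward()               # gradients flow only into the prefixes
```

Because the backbone is frozen, only the small set of prefix parameters is updated, and swapping the attribute key at inference time steers generation toward a different attribute without retraining the model.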

Related research

03/20/2021 - Attribute Alignment: Controlling Text Generation from Pre-trained Language Models
06/30/2020 - Technical Report: Auxiliary Tuning and its Application to Conditional Text Generation
01/16/2020 - Multimodal Story Generation on Plural Images
09/11/2019 - CTRL: A Conditional Transformer Language Model for Controllable Generation
06/05/2020 - CoCon: A Self-Supervised Approach for Controlled Text Generation
09/06/2021 - Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization
05/10/2020 - Posterior Control of Blackbox Generation
