POINTER: Constrained Text Generation via Insertion-based Generative Pre-training

05/01/2020
by Yizhe Zhang, et al.

Large-scale pre-trained language models, such as BERT and GPT-2, have achieved excellent performance in language representation learning and free-form text generation. However, these models cannot be directly employed to generate text under specified lexical constraints. To address this challenge, we present POINTER, a simple yet novel insertion-based approach for hard-constrained text generation. The proposed method operates by progressively inserting new tokens between existing tokens, in parallel. This procedure is applied recursively until the sequence is complete. The resulting coarse-to-fine hierarchy makes the generation process intuitive and interpretable. Since our training objective resembles that of masked language modeling, BERT can be naturally utilized for initialization. We pre-train our model with the proposed progressive insertion-based objective on a 12GB Wikipedia dataset, and fine-tune it on downstream hard-constrained generation tasks. Non-autoregressive decoding yields logarithmic time complexity at inference. Experimental results on both News and Yelp datasets demonstrate that POINTER achieves state-of-the-art performance on constrained text generation. We intend to release the pre-trained model to facilitate future research.
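The insertion loop described in the abstract can be made concrete with a short sketch. Everything below is illustrative: `pointer_decode` and `predict_insertions` are hypothetical names, the real model scores insertions with a Transformer over subword tokens rather than this string-level interface, and `None` stands in for the paper's special no-insertion token.

```python
from typing import Callable, List, Optional

def pointer_decode(
    constraints: List[str],
    predict_insertions: Callable[[List[str]], List[Optional[str]]],
    max_stages: int = 10,
) -> List[str]:
    """Expand a hard-constrained token sequence by repeated parallel insertion.

    `predict_insertions` returns one prediction per gap (before, between,
    and after the current tokens): either a token to insert into that gap,
    or None for a "no insertion" decision.
    """
    tokens = list(constraints)  # stage 0: the lexical constraints themselves
    for _ in range(max_stages):
        gaps = predict_insertions(tokens)  # len(tokens) + 1 slots
        if all(g is None for g in gaps):
            break  # every slot predicts "no insertion": sequence complete
        merged: List[str] = []
        for gap, tok in zip(gaps, tokens + [None]):
            if gap is not None:
                merged.append(gap)   # newly inserted token fills the gap
            if tok is not None:
                merged.append(tok)   # existing token is kept in place
        tokens = merged
    return tokens
```

Because every gap can be filled simultaneously, a single stage can nearly double the sequence length; generating n tokens therefore takes on the order of log2(n) stages rather than n left-to-right steps, which is where the logarithmic inference complexity claimed above comes from.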

Related research

10/24/2022
ELMER: A Non-Autoregressive Pre-trained Language Model for Efficient and Effective Text Generation
We study the text generation task under the approach of pre-trained lang...

09/26/2021
Parallel Refinements for Lexically Constrained Text Generation with BART
Lexically constrained text generation aims to control the generated text...

03/17/2021
ENCONTER: Entity Constrained Progressive Sequence Generation via Insertion-based Transformer
Pretrained using large amount of data, autoregressive language models ar...

06/14/2021
Straight to the Gradient: Learning to Use Novel Tokens for Neural Text Generation
Advanced large-scale neural language models have led to significant succ...

12/08/2022
Momentum Calibration for Text Generation
The input and output of most text generation tasks can be transformed to...

10/10/2022
Leveraging Key Information Modeling to Improve Less-Data Constrained News Headline Generation via Duality Fine-Tuning
Recent language generative models are mostly trained on large-scale data...

03/12/2021
Constrained Text Generation with Global Guidance – Case Study on CommonGen
This paper studies constrained text generation, which is to generate sen...
