Extract, Denoise, and Enforce: Evaluating and Predicting Lexical Constraints for Conditional Text Generation

04/18/2021
by Yuning Mao, et al.

Recently, pre-trained language models (PLMs) have dominated conditional text generation tasks. Given the impressive performance and prevalence of PLMs, it is tempting to assume that they can figure out what to attend to in the input and what to include in the output through seq2seq learning alone, with no guidance beyond the training input/output pairs. However, a rigorous study of this assumption is still lacking. In this paper, we present a systematic analysis of conditional generation to study whether current PLMs are good enough at preserving important concepts in the input and to what extent explicitly guiding generation with lexical constraints is beneficial. We conduct extensive analytical experiments on a range of conditional generation tasks and examine in which scenarios guiding generation with lexical constraints works well, and why. We then propose a framework for automatic constraint extraction, denoising, and enforcement that is shown to perform comparably to or better than unconstrained generation. We hope that our findings can serve as a reference when determining whether it is appropriate and worthwhile to use explicit constraints for a specific task or dataset. Our code is available at https://github.com/morningmoni/LCGen-eval.
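To make the extract-then-enforce idea concrete, the sketch below pairs a naive frequency-based keyword extractor with constrained beam search in the Hugging Face transformers library (version 4.17 or later), whose force_words_ids argument to generate guarantees that every forced phrase appears in the output. This is a minimal illustration, not the paper's implementation: the extraction heuristic, stopword list, model choice, and example input are all assumptions standing in for the paper's learned extraction and denoising components.

```python
# Minimal "extract, then enforce" sketch for lexically constrained generation.
# Assumes transformers >= 4.17 (constrained beam search via force_words_ids).
# The frequency-based extractor is a hypothetical stand-in for a learned
# constraint extraction/denoising model.
from collections import Counter
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are", "that", "for"}

def extract_constraints(text, k=3):
    """Pick the k most frequent non-stopword words as lexical constraints.
    (Illustrative heuristic only; a real system would denoise these.)"""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [w for w, _ in counts.most_common(k)]

def constrained_generate(source, constraints):
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    # Encode each constraint without special tokens; the leading space makes
    # the BPE tokenizer produce the mid-sentence form of each word.
    force_words_ids = [
        tokenizer(f" {w}", add_special_tokens=False).input_ids for w in constraints
    ]
    output_ids = model.generate(
        **inputs,
        force_words_ids=force_words_ids,  # hard lexical enforcement
        num_beams=5,                      # force_words_ids requires beam search
        max_length=60,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

source = (
    "Scientists at the research lab announced a breakthrough in battery "
    "technology that could double the range of electric vehicles."
)
constraints = extract_constraints(source)
print("constraints:", constraints)
print(constrained_generate(source, constraints))
```

Hard enforcement of this kind guarantees constraint coverage in the output, but noisy constraints can degrade fluency, which is exactly why a denoising step between extraction and enforcement matters.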


Related research

10/24/2020 · NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints
Conditional text generation often requires lexical constraints, i.e., wh...

04/15/2023 · Tractable Control for Autoregressive Language Generation
Despite the success of autoregressive large language models in text gene...

06/21/2023 · Towards Accurate Translation via Semantically Appropriate Application of Lexical Constraints
Lexically-constrained NMT (LNMT) aims to incorporate user-provided termi...

06/30/2020 · Technical Report: Auxiliary Tuning and its Application to Conditional Text Generation
We introduce a simple and efficient method, called Auxiliary Tuning, for...

03/17/2021 · ENCONTER: Entity Constrained Progressive Sequence Generation via Insertion-based Transformer
Pretrained using large amount of data, autoregressive language models ar...

04/11/2022 · Uniform Complexity for Text Generation
Powerful language models such as GPT-2 have shown promising results in t...

05/08/2023 · HistAlign: Improving Context Dependency in Language Generation by Aligning with History
Language models (LMs) can generate hallucinations and incoherent outputs...
