CoCon: A Self-Supervised Approach for Controlled Text Generation

06/05/2020
by   Alvin Chan, et al.

Pretrained Transformer-based language models (LMs) display remarkable natural language generation capabilities. Given their immense potential, controlling the text generation of such LMs is attracting growing attention. While several studies seek to control high-level attributes (such as sentiment and topic) of generated text, more precise control over generated content at the word and phrase level is still lacking. Here, we propose Content-Conditioner (CoCon) to control an LM's output text with target content at a fine-grained level. In our self-supervised approach, the CoCon block learns to help the LM complete a partially-observed text sequence by conditioning on content inputs that are withheld from the LM. Through experiments, we show that CoCon can naturally incorporate target content into generated texts and control high-level text attributes in a zero-shot manner.
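The self-supervised setup described above can be made concrete with a small sketch. The PyTorch-style code below is only an illustrative approximation of the idea in the abstract, not the authors' implementation: a training sequence is split into an observed prefix and a withheld continuation, and a toy conditioning block attends from the prefix's hidden states to the continuation's hidden states so the LM can be trained to reconstruct the withheld text. All names (ContentConditioner, make_self_supervised_example), dimensions, and architectural details here are hypothetical.

```python
# Illustrative sketch of a CoCon-style self-supervised setup (hypothetical code).
import torch
import torch.nn as nn


class ContentConditioner(nn.Module):
    """Toy stand-in for the CoCon block: attends from the prompt's hidden
    states to the hidden states of the withheld content, then mixes the
    result back into the prompt representation."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, prompt_h: torch.Tensor, content_h: torch.Tensor) -> torch.Tensor:
        # prompt_h:  (batch, prompt_len, d_model)  - what the LM has observed
        # content_h: (batch, content_len, d_model) - target content withheld from the LM
        mixed, _ = self.attn(query=prompt_h, key=content_h, value=content_h)
        return prompt_h + self.proj(mixed)


def make_self_supervised_example(token_ids: torch.Tensor, split: int):
    """Split one sequence into (observed prefix, withheld continuation).
    The continuation doubles as the content input and the reconstruction target."""
    prefix = token_ids[:split]        # fed to the LM as usual
    continuation = token_ids[split:]  # seen only through the conditioning block
    return prefix, continuation


if __name__ == "__main__":
    d_model = 64
    block = ContentConditioner(d_model)
    prompt_h = torch.randn(2, 10, d_model)   # stand-in LM hidden states for the prefix
    content_h = torch.randn(2, 6, d_model)   # stand-in hidden states for the withheld content
    out = block(prompt_h, content_h)
    print(out.shape)  # (2, 10, 64): conditioned states passed on to later LM layers
```

In this sketch, training would minimize the usual language-modeling loss on the withheld continuation, so the block only succeeds if it injects the withheld content into the LM's representation of the prefix.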


Related research:

- Control Prefixes for Text Generation (10/15/2021): Prompt learning methods adapt pre-trained language models to downstream ...
- An Empirical Study of Extrapolation in Text Generation with Scalar Control (04/16/2021): We conduct an empirical evaluation of extrapolation performance when con...
- Plug-and-Play Recipe Generation with Content Planning (12/09/2022): Recent pre-trained language models have shown promising capabilities in ...
- Topical Language Generation using Transformers (03/11/2021): Large-scale transformer-based language models (LMs) demonstrate impressi...
- CaM-Gen: Causally-aware Metric-guided Text Generation (10/24/2020): Content is created for a well-defined purpose, often described by a metr...
- Changing the Mind of Transformers for Topically-Controllable Language Generation (03/29/2021): Large Transformer-based language models can aid human authors by suggest...
- Diffusion-LM Improves Controllable Text Generation (05/27/2022): Controlling the behavior of language models (LMs) without re-training is...
