GeDi: Generative Discriminator Guided Sequence Generation

09/14/2020
by Ben Krause, et al.

Class-conditional language models (CC-LMs) can be used to generate natural language with specific attributes, such as style or sentiment, by conditioning on an attribute label, or control code. However, we find that these models struggle to control generation when applied to out-of-domain prompts or unseen control codes. To overcome these limitations, we propose generative discriminator (GeDi) guided contrastive generation, which uses CC-LMs as generative discriminators (GeDis) to efficiently guide generation from a (potentially much larger) LM towards a desired attribute. In our human evaluation experiments, we show that GeDis trained for sentiment control on movie reviews are able to control the tone of book text. We also demonstrate that GeDis are able to detoxify generation and control topic while maintaining the same level of linguistic acceptability as direct generation from GPT-2 (1.5B parameters). Lastly, we show that a GeDi trained on only 4 topics can generalize to new control codes from word embeddings, allowing it to guide generation towards a wide array of topics.
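The core mechanism is a Bayes-rule reweighting of the base LM's next-token distribution: the CC-LM scores candidate tokens under the desired control code and an opposing code, and the resulting class posterior boosts tokens that signal the target attribute. Below is a minimal sketch of that per-step computation, assuming Hugging Face transformers; the model names, the plain-text control codes "positive"/"negative", the omega value, and the helper functions are illustrative assumptions (an off-the-shelf GPT-2 stands in for a trained CC-LM), not the authors' released setup.

```python
# Sketch of GeDi-style guided decoding. Not the authors' release; models and
# control-code strings below are stand-ins for a fine-tuned CC-LM setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = GPT2Tokenizer.from_pretrained("gpt2")
base_lm = GPT2LMHeadModel.from_pretrained("gpt2-large").to(device).eval()
# A real GeDi is a smaller CC-LM fine-tuned with control codes prepended
# to each training sequence; plain GPT-2 is used here only as a placeholder.
gedi_lm = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

POS, NEG = "positive", "negative"  # hypothetical control codes
OMEGA = 30.0                       # posterior weight; the paper uses omega > 1

@torch.no_grad()
def next_token_logprobs(model, ids):
    """Log-probabilities over the vocabulary for the next token."""
    return model(ids).logits[0, -1].log_softmax(-1)

@torch.no_grad()
def gedi_step(prompt_ids):
    """One greedy decoding step with GeDi-style class-posterior reweighting."""
    base_lp = next_token_logprobs(base_lm, prompt_ids)
    # CC-LM next-token distributions under each control code.
    pos_ids = torch.cat([tok(POS, return_tensors="pt").input_ids.to(device),
                         prompt_ids], dim=1)
    neg_ids = torch.cat([tok(NEG, return_tensors="pt").input_ids.to(device),
                         prompt_ids], dim=1)
    pos_lp = next_token_logprobs(gedi_lm, pos_ids)
    neg_lp = next_token_logprobs(gedi_lm, neg_ids)
    # Bayes rule over the two codes gives a per-token P(desired class | x);
    # the paper additionally accumulates log-likelihoods over earlier steps.
    log_post = pos_lp - torch.logsumexp(torch.stack([pos_lp, neg_lp]), dim=0)
    # Reweight the base LM: P_guided(x_t) proportional to P_base * P(c|x)^OMEGA.
    return (base_lp + OMEGA * log_post).argmax()

ids = tok("The book was", return_tensors="pt").input_ids.to(device)
for _ in range(20):
    ids = torch.cat([ids, gedi_step(ids).view(1, 1)], dim=1)
print(tok.decode(ids[0]))
```

Because the class posterior comes from two forward passes of the small CC-LM rather than a per-candidate rescoring, this reweighting adds little overhead on top of sampling from the large base LM, which is the efficiency argument the abstract makes.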


Related research

02/27/2022 · Controllable Natural Language Generation with Contrastive Prefixes
To guide the generation of large pretrained language models (LM), previo...

09/24/2021 · Style Control for Schema-Guided Natural Language Generation
Natural Language Generation (NLG) for task-oriented dialogue systems foc...

07/07/2021 · Deep Extrapolation for Attribute-Enhanced Generation
Attribute extrapolation in sample generation is challenging for deep neu...

05/12/2022 · Sampling with Attribute-Related Information for Controlling Language Models
The dominant approaches for controlling language models are based on fin...

10/18/2022 · DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation
Prompt learning with immensely large Causal Language Models (CLMs) has b...

10/06/2022 · Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models
We explore the idea of compressing the prompts used to condition languag...

06/02/2023 · PassGPT: Password Modeling and (Guided) Generation with Large Language Models
Large language models (LLMs) successfully model natural language from va...
