Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models

10/06/2022
by David Wingate, et al.

We explore the idea of compressing the prompts used to condition language models, and show that compressed prompts can retain a substantial amount of information about the original prompt. For severely compressed prompts, fine-grained information is lost, but abstract information and general sentiments can be retained with surprisingly few parameters, which is useful for decode-time algorithms for controllability and toxicity reduction. We explore contrastive conditioning to steer language model generation toward desirable text and away from undesirable text, and find that some complex prompts can be effectively compressed into a single token to guide generation. We also show that compressed prompts are largely compositional: they can be constructed so that they control independent aspects of generated text.
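To make the compression idea concrete, here is a minimal sketch of one way a long hard prompt could be distilled into a handful of trainable soft-embedding vectors, assuming a Hugging Face-style causal LM: the soft prompt is optimized so the model's next-token distributions match those produced under the full prompt. The model choice, sample text, hyperparameters, and names (`NUM_SOFT_TOKENS`, `soft_prompt`) are illustrative assumptions, not the paper's exact training setup.

```python
# Sketch: distill a hard prompt into a few soft tokens via KL matching.
# Assumptions: GPT-2 as the base model, a single illustrative sample text,
# and a simple Adam loop; hyperparameters are placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # base model stays frozen; only the soft prompt is trained
for p in model.parameters():
    p.requires_grad_(False)

NUM_SOFT_TOKENS = 1  # even a single token can carry a complex prompt
embed = model.get_input_embeddings()
soft_prompt = torch.nn.Parameter(
    torch.randn(1, NUM_SOFT_TOKENS, embed.embedding_dim) * 0.02
)

hard_prompt = "Write in a cheerful, polite, non-toxic style. "  # illustrative
hard_ids = tokenizer(hard_prompt, return_tensors="pt").input_ids
sample = tokenizer("Today I went to the park and", return_tensors="pt").input_ids

optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)
for step in range(100):
    # Teacher: next-token distributions conditioned on the full hard prompt.
    with torch.no_grad():
        teacher = model(torch.cat([hard_ids, sample], dim=-1)).logits
        teacher = teacher[:, -sample.size(1):, :]
    # Student: same continuation, conditioned on the soft prompt instead.
    student_inputs = torch.cat([soft_prompt, embed(sample)], dim=1)
    student = model(inputs_embeds=student_inputs).logits[:, -sample.size(1):, :]
    loss = F.kl_div(F.log_softmax(student, -1), F.softmax(teacher, -1),
                    reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```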
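And a hedged sketch of decode-time contrastive conditioning: next-token logits under a desirable conditioning context are pushed away from those under an undesirable one. The steering formula, mixing weight `alpha`, and the prompt strings below are assumptions for illustration; the paper's exact decoding rule may differ, and a learned compressed prompt could replace either hard prompt.

```python
# Sketch: steer greedy decoding toward a "good" context and away from a
# "bad" one by contrasting their next-token logits.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def contrastive_step(generated_ids, good_ids, bad_ids, alpha=0.5):
    """Next-token logits boosted where the desirable context outscores
    the undesirable one (a simple guidance-style rule; an assumption)."""
    with torch.no_grad():
        good = model(torch.cat([good_ids, generated_ids], dim=-1)).logits[:, -1, :]
        bad = model(torch.cat([bad_ids, generated_ids], dim=-1)).logits[:, -1, :]
    return good + alpha * (good - bad)

good_ids = tokenizer("The following text is polite and friendly:",
                     return_tensors="pt").input_ids
bad_ids = tokenizer("The following text is rude and toxic:",
                    return_tensors="pt").input_ids
generated = tokenizer("My day was", return_tensors="pt").input_ids

for _ in range(20):
    logits = contrastive_step(generated, good_ids, bad_ids)
    next_id = torch.argmax(logits, dim=-1, keepdim=True)  # greedy decode
    generated = torch.cat([generated, next_id], dim=-1)

print(tokenizer.decode(generated[0]))
```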


Related research

05/24/2023  Text encoders are performance bottlenecks in contrastive vision-language models
Performant vision-language (VL) models like CLIP represent captions usin...

05/12/2023  Surfacing Biases in Large Language Models using Contrastive Input Decoding
Ensuring that large language models (LMs) are fair, robust and useful re...

04/25/2023  Semantic Compression With Large Language Models
The rise of large language models (LLMs) is revolutionizing information ...

09/14/2022  Out of One, Many: Using Language Models to Simulate Human Samples
We propose and explore the possibility that language models can be studi...

09/14/2020  GeDi: Generative Discriminator Guided Sequence Generation
Class-conditional language models (CC-LMs) can be used to generate natur...

02/08/2020  Blank Language Models
We propose Blank Language Model (BLM), a model that generates sequences ...

05/04/2023  Multi-Modality Deep Network for JPEG Artifacts Reduction
In recent years, many convolutional neural network-based models are desi...
