Extracting Latent Steering Vectors from Pretrained Language Models

05/10/2022
by Nishant Subramani, et al.

Prior work on controllable text generation has focused on learning how to control language models through trainable decoding, smart-prompt design, or fine-tuning based on a desired objective. We hypothesize that the information needed to steer the model to generate a target sentence is already encoded within the model. Accordingly, we explore a different approach altogether: extracting latent vectors directly from pretrained language model decoders without fine-tuning. Experiments show that there exist steering vectors, which, when added to the hidden states of the language model, generate a target sentence nearly perfectly (> 99 BLEU) for English sentences from a variety of domains. We show that vector arithmetic can be used for unsupervised sentiment transfer on the Yelp sentiment benchmark, with performance comparable to models tailored to this task. We find that distances between steering vectors reflect sentence similarity when evaluated on a textual similarity benchmark (STS-B), outperforming pooled hidden states of models. Finally, we present an analysis of the intrinsic properties of the steering vectors. Taken together, our results suggest that frozen LMs can be effectively controlled through their latent steering space.
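The abstract describes the mechanism only at a high level. The sketch below is a minimal illustration of the idea, not the authors' released implementation: a frozen GPT-2, a single trainable vector z, a forward hook that adds z to the hidden states of one transformer block, and gradient descent on the target sentence's negative log-likelihood. The injection layer, learning rate, and step count are illustrative assumptions.

```python
# Minimal sketch of steering-vector extraction (assumptions noted inline;
# not the authors' code). Requires: torch, transformers.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)  # the LM stays frozen; only z is optimized

target = "The food was delicious and the service was excellent."
ids = tok(tok.bos_token + target, return_tensors="pt").input_ids.to(device)

layer = 6  # hypothetical injection layer, chosen for illustration
z = torch.zeros(model.config.n_embd, device=device, requires_grad=True)

def add_z(module, inputs, output):
    # Add the steering vector to this block's hidden states at every position.
    return (output[0] + z,) + output[1:]

hook = model.transformer.h[layer].register_forward_hook(add_z)
opt = torch.optim.Adam([z], lr=0.1)  # illustrative hyperparameters

for step in range(500):
    opt.zero_grad()
    loss = model(ids, labels=ids).loss  # NLL of the target under the steered LM
    loss.backward()
    opt.step()

# If extraction succeeded, greedy decoding from BOS with z injected
# should reproduce the target sentence (near-100 BLEU in the paper).
with torch.no_grad():
    gen = model.generate(ids[:, :1], max_length=ids.shape[1],
                         do_sample=False, pad_token_id=tok.eos_token_id)
print(tok.decode(gen[0], skip_special_tokens=True))
hook.remove()
```

For the sentiment-transfer result, the paper applies vector arithmetic in this same latent space; with vectors extracted as above, a transfer of the form z' = z + (mean positive z − mean negative z) is the natural analogue, though the exact offset construction is the paper's.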


