Self-conditioning pre-trained language models

09/30/2021
by Xavier Suau et al.

We study the presence of expert units in pre-trained Transformer-based Language Models (TLMs), and how they can be used to condition text generation so that it contains specific concepts. We define expert units as neurons that detect a concept in the input with a given average precision. A concept is represented by a set of sentences that either do or do not contain it. Leveraging the OneSec dataset, we compile a dataset of 1,344 concepts that allows diverse expert units to be discovered in TLMs. Our experiments demonstrate that off-the-shelf pre-trained TLMs can be conditioned on their own knowledge (self-conditioning) to generate text that contains a given concept. To this end, we intervene on the top expert units by fixing their output during inference, and we show experimentally that this is an effective way to condition TLMs. Our method requires neither fine-tuning nor additional parameters, which allows large TLMs to be conditioned with minimal compute resources. Furthermore, by intervening on a small number of experts in GPT2, we can achieve parity between two concepts at generation time. We explore the specific case of gender bias and show that, for given contexts, gender parity is achieved while preserving the model's perplexity.
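The two steps of the method, scoring neurons as concept detectors and clamping the top experts during generation, lend themselves to a compact illustration. Below is a minimal sketch, assuming GPT-2 from the Hugging Face transformers library. The specific choices here are illustrative assumptions, not the paper's exact recipe: each neuron's response is taken as its maximum activation over the sentence, only a single MLP layer is hooked, and the top-k experts are clamped to their mean activation on positive sentences.

```python
# Minimal sketch of expert-unit discovery and self-conditioning.
# Assumptions (not from the paper's released code): neuron response =
# max activation over tokens; experts are searched in one MLP layer;
# intervention clamps experts to their mean response on positive sentences.

import numpy as np
import torch
from sklearn.metrics import average_precision_score
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def neuron_responses(sentences, layer):
    """One row per sentence: max activation of every neuron in one MLP layer."""
    acts = []
    def hook(_, __, out):
        acts.append(out.max(dim=1).values.squeeze(0))  # max over token positions
    h = model.transformer.h[layer].mlp.c_fc.register_forward_hook(hook)
    with torch.no_grad():
        for s in sentences:
            model(**tok(s, return_tensors="pt"))
    h.remove()
    return torch.stack(acts)  # shape: (n_sentences, n_neurons)

def find_experts(pos, neg, layer, k=10):
    """Rank neurons by their average precision as detectors of the concept."""
    z = neuron_responses(pos + neg, layer).numpy()
    y = np.array([1] * len(pos) + [0] * len(neg))
    ap = np.array([average_precision_score(y, z[:, j]) for j in range(z.shape[1])])
    top = np.argsort(ap)[::-1][:k]              # indices of the top-k experts
    target = z[: len(pos), top].mean(axis=0)    # mean response on positive sentences
    return top, target

def condition(prompt, layer, experts, values, max_new_tokens=30):
    """Generate text while the expert units are fixed to the given values."""
    idx = torch.tensor(experts.copy())
    val = torch.tensor(values, dtype=torch.float32)
    def hook(_, __, out):
        out[..., idx] = val  # fix expert outputs during inference
        return out
    h = model.transformer.h[layer].mlp.c_fc.register_forward_hook(hook)
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True,
                         pad_token_id=tok.eos_token_id)
    h.remove()
    return tok.decode(out[0], skip_special_tokens=True)

# Hypothetical usage: pos_sents contain the concept, neg_sents do not.
# experts, target = find_experts(pos_sents, neg_sents, layer=10)
# print(condition("The doctor said", 10, experts, target))
```

Note that the paper searches for expert units across the whole model rather than a single layer; hooking one layer simply keeps the sketch short and self-contained.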


