Finding Skill Neurons in Pre-trained Transformer-based Language Models

11/14/2022
by Xiaozhi Wang, et al.

Transformer-based pre-trained language models have demonstrated superior performance on various natural language processing tasks. However, it remains unclear how the skills required to handle these tasks are distributed among model parameters. In this paper, we find that after prompt tuning for specific tasks, the activations of some neurons within pre-trained Transformers are highly predictive of the task labels. We dub these neurons skill neurons and confirm that they encode task-specific skills by showing that: (1) Skill neurons are crucial for handling tasks; the performance of pre-trained Transformers on a task drops significantly when the corresponding skill neurons are perturbed. (2) Skill neurons are task-specific; similar tasks tend to have similar distributions of skill neurons. Furthermore, we demonstrate that skill neurons are most likely generated in pre-training rather than fine-tuning, since the skill neurons found with prompt tuning are also crucial for other fine-tuning methods that freeze neuron weights, such as adapter-based tuning and BitFit. We also explore applications of skill neurons, including accelerating Transformers with network pruning and building better transferability indicators. These findings may promote further research on understanding Transformers. The source code can be obtained from https://github.com/THU-KEG/Skill-Neuron.
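The abstract does not spell out how "highly predictive of the task labels" is measured, so the following is only a minimal sketch, not the authors' implementation (see the repository above for that). It scores each neuron by how well thresholding its activation at its own mean predicts a binary task label, and returns the highest-scoring neurons as candidate skill neurons. The function names, the mean-activation threshold, and the toy data are assumptions made for illustration.

```python
# Minimal sketch (assumed, not the authors' code): rank neurons by how well a
# simple threshold on their activation predicts a binary task label.
import numpy as np


def neuron_predictivity(acts: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """acts: [num_examples, num_neurons] activations; labels: [num_examples] in {0, 1}."""
    baseline = acts.mean(axis=0)                   # per-neuron threshold (assumed: mean activation)
    preds = (acts > baseline).astype(int)          # predict 1 when activation exceeds the threshold
    acc = (preds == labels[:, None]).mean(axis=0)  # per-neuron accuracy of that rule
    return np.maximum(acc, 1.0 - acc)              # allow either polarity of the neuron


def top_skill_neurons(acts: np.ndarray, labels: np.ndarray, k: int = 10) -> np.ndarray:
    """Indices of the k most label-predictive neurons (candidate skill neurons)."""
    return np.argsort(-neuron_predictivity(acts, labels))[:k]


if __name__ == "__main__":
    # Toy demonstration: plant one label-correlated neuron and recover it.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=200)
    acts = rng.normal(size=(200, 3072))
    acts[:, 42] += 2.0 * labels                    # neuron 42 carries the "skill"
    print(top_skill_neurons(acts, labels, k=3))    # neuron 42 should rank first
```

In this sketch, activations would come from a frozen pre-trained Transformer run with a tuned prompt, and the same per-neuron scores could also drive the perturbation and pruning analyses the abstract mentions.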


