Emergent Modularity in Pre-trained Transformers

05/28/2023
by Zhengyan Zhang, et al.

This work examines the presence of modularity in pre-trained Transformers, a feature commonly found in human brains and thought to be vital for general intelligence. In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes; (2) function-based neuron grouping: we explore whether there exists a structure that groups neurons into modules by function, such that each module works for its corresponding function. Given the enormous number of possible structures, we focus on Mixture-of-Experts as a promising candidate, which partitions neurons into experts and usually activates different experts for different inputs. Experimental results show that there are functional experts, in which the neurons specialized in a certain function are clustered. Moreover, perturbing the activations of functional experts significantly affects the corresponding function. Finally, we study how modularity emerges during pre-training and find that the modular structure stabilizes at an early stage, earlier than the neurons themselves. This suggests that Transformers first construct the modular structure and then learn fine-grained neuron functions. Our code and data are available at https://github.com/THUNLP/modularity-analysis.
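To make the Mixture-of-Experts view and the perturbation experiment concrete, the following is a minimal sketch in PyTorch, not the authors' implementation: it partitions the hidden neurons of a standard Transformer feed-forward layer into contiguous expert groups and zeroes out one group's activations to see how much the output shifts. The dimensions, the contiguous partition, and names such as PartitionedFFN and mask_expert are illustrative assumptions.

# Minimal sketch (assumptions, not the paper's code): split FFN hidden neurons
# into expert groups and ablate one expert to measure its effect on the output.
import torch
import torch.nn as nn


class PartitionedFFN(nn.Module):
    """A Transformer FFN whose hidden neurons are split into contiguous expert groups."""

    def __init__(self, d_model=768, d_hidden=3072, num_experts=8):
        super().__init__()
        assert d_hidden % num_experts == 0
        self.w_in = nn.Linear(d_model, d_hidden)
        self.w_out = nn.Linear(d_hidden, d_model)
        self.act = nn.GELU()
        self.num_experts = num_experts
        self.expert_size = d_hidden // num_experts

    def forward(self, x, mask_expert=None):
        h = self.act(self.w_in(x))                      # per-neuron activations
        if mask_expert is not None:                     # perturb one expert by zeroing its neurons
            start = mask_expert * self.expert_size
            h = h.clone()
            h[..., start:start + self.expert_size] = 0.0
        return self.w_out(h)


if __name__ == "__main__":
    torch.manual_seed(0)
    ffn = PartitionedFFN()
    x = torch.randn(2, 16, 768)                         # (batch, sequence, d_model)
    base = ffn(x)
    # Ablate each expert in turn; in the paper's setting, a functional expert would
    # show a much larger change on inputs that exercise its particular function.
    for e in range(ffn.num_experts):
        delta = (ffn(x, mask_expert=e) - base).norm().item()
        print(f"expert {e}: output change {delta:.3f}")

In practice the paper studies learned or clustered expert partitions rather than an arbitrary contiguous split; the sketch only illustrates the mechanics of grouping neurons and perturbing one group's activations.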


