Analyzing Individual Neurons in Pre-trained Language Models

10/06/2020
by   Nadir Durrani, et al.

While a lot of analysis has been carried out to demonstrate the linguistic knowledge captured by the representations learned within deep NLP models, very little attention has been paid to individual neurons. We carry out a neuron-level analysis using core linguistic tasks of predicting morphology, syntax and semantics on pre-trained language models, with questions like: i) do individual neurons in pre-trained models capture linguistic information? ii) which parts of the network learn more about certain linguistic phenomena? iii) how distributed or focused is the information? and iv) how do various architectures differ in learning these properties? We found small subsets of neurons that can predict linguistic tasks, with lower-level tasks (such as morphology) localized in fewer neurons than the higher-level task of predicting syntax. Our study also reveals interesting cross-architectural comparisons. For example, we found neurons in XLNet to be more localized and disjoint when predicting properties, compared to BERT and others, where they are more distributed and coupled.
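The abstract describes a neuron-level probing setup: a linear classifier is trained on per-token activations to predict a linguistic property, and the probe's weights are used to rank neurons and check how small a subset suffices for the task. The sketch below is a minimal illustration of that general idea, not the paper's exact method; the synthetic data, the regularization strength, and the top-50 cutoff are all assumptions for demonstration only.

```python
# Hypothetical sketch of neuron-level probing: train a linear probe on
# contextual activations for a linguistic task (e.g., POS tagging), rank
# neurons by probe weights, and re-evaluate with only the top-k neurons.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: in practice X would be per-token activations extracted
# from a pre-trained model (e.g., 768 neurons per BERT layer) and y the
# linguistic labels (POS tags, morphological features, etc.).
n_tokens, n_neurons, n_tags = 5000, 768, 12
X = rng.normal(size=(n_tokens, n_neurons))
y = rng.integers(0, n_tags, size=n_tokens)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Linear probe; regularization keeps the weight vector over neurons sparse
# enough to be interpretable.
probe = LogisticRegression(max_iter=1000, C=0.1)
probe.fit(X_train, y_train)
print("full-probe accuracy:", probe.score(X_test, y_test))

# Rank neurons by total absolute weight across classes, then retrain on only
# the top-k neurons to see how localized the property is.
salience = np.abs(probe.coef_).sum(axis=0)
top_k = np.argsort(salience)[::-1][:50]
small_probe = LogisticRegression(max_iter=1000, C=0.1)
small_probe.fit(X_train[:, top_k], y_train)
print("top-50-neuron accuracy:", small_probe.score(X_test[:, top_k], y_test))
```

If a property is highly localized, the top-k probe recovers most of the full probe's accuracy with a small fraction of the neurons; comparing this curve across layers and architectures gives the kind of localization and distribution analysis the abstract describes.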
