Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations

09/13/2021
by Mohsen Fayyaz, et al.

Most recent work on probing representations has focused on BERT, under the presumption that its findings carry over to other models. In this work, we extend layer-wise probing to two other members of the family, ELECTRA and XLNet, and show that variations in pre-training objectives or architectural choices can result in different behaviors in how linguistic information is encoded in the representations. Most notably, we observe that ELECTRA tends to encode linguistic knowledge in the deeper layers, whereas XLNet concentrates it in the earlier layers. Moreover, the former undergoes only a slight change during fine-tuning, whereas the latter experiences significant adjustments. We also show that drawing conclusions based on the weight-mixing evaluation strategy, which is widely used in layer-wise probing, can be misleading given the norm disparity of the representations across different layers. Instead, we adopt an alternative, information-theoretic probing with minimum description length, which has recently been shown to provide more reliable and informative results.
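To make the norm-disparity point concrete, the following is a minimal sketch, not the authors' code: it uses the Hugging Face transformers library to measure per-layer hidden-state norms and shows why ELMo-style scalar weight mixing can be dominated by a high-norm layer. The model name and the example sentence are illustrative assumptions.

    # Minimal sketch (assumed setup, not the paper's implementation).
    import torch
    from transformers import AutoModel, AutoTokenizer

    # Illustrative model choice; the paper also probes ELECTRA and XLNet.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

    inputs = tokenizer("A short probe sentence.", return_tensors="pt")
    with torch.no_grad():
        # Tuple of tensors: embedding output plus one per transformer layer.
        hidden = model(**inputs).hidden_states

    # Average token-level L2 norm per layer; if these differ widely,
    # the learned mixing weights alone do not reflect a layer's contribution.
    norms = torch.stack([h.norm(dim=-1).mean() for h in hidden])
    print("per-layer mean norms:", norms.tolist())

    # Scalar weight mixing: even with uniform softmax weights, the layer
    # with the largest norm dominates the mixed representation.
    weights = torch.softmax(torch.zeros(len(hidden)), dim=0)
    mixed = sum(w * h for w, h in zip(weights, hidden))
    contributions = weights * norms  # effective per-layer contribution
    print("effective contribution shares:", (contributions / contributions.sum()).tolist())

Under uniform weights the contribution shares would all be equal if the layers had comparable norms; any skew in the printed shares comes purely from norm disparity, which is why the abstract argues that weight-mixing scores can mislead.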


Related research

05/31/2021
How transfer learning impacts linguistic knowledge in deep NLP models?
Transfer learning from pre-trained neural language models towards downst...

04/29/2020
What Happens To BERT Embeddings During Fine-tuning?
While there has been much recent work studying how linguistic informatio...

03/20/2022
How does the pre-training objective affect what large language models learn about linguistic properties?
Several pre-training objectives, such as masked language modeling (MLM),...

06/10/2023
What Can an Accent Identifier Learn? Probing Phonetic and Prosodic Information in a Wav2vec2-based Accent Identification Model
This study is focused on understanding and quantifying the change in pho...

03/27/2020
Information-Theoretic Probing with Minimum Description Length
To measure how well pretrained representations encode some linguistic pr...

05/09/2022
EigenNoise: A Contrastive Prior to Warm-Start Representations
In this work, we present a naive initialization scheme for word vectors ...

04/07/2020
Information-Theoretic Probing for Linguistic Structure
The success of neural networks on a diverse set of NLP tasks has led res...
