On the Language-specificity of Multilingual BERT and the Impact of Fine-tuning

09/14/2021
by   Marc Tanti, et al.

Recent work has shown evidence that the knowledge acquired by multilingual BERT (mBERT) has two components: a language-specific one and a language-neutral one. This paper analyses the relationship between them in the context of fine-tuning on two tasks – POS tagging and natural language inference – which require the model to bring to bear different degrees of language-specific knowledge. Visualisations reveal that mBERT loses the ability to cluster representations by language after fine-tuning, a result supported by evidence from language identification experiments. However, further experiments on 'unlearning' language-specific representations using gradient reversal and iterative adversarial learning show no improvement to the language-independent component beyond what fine-tuning already achieves. The results presented here suggest that fine-tuning causes a reorganisation of the model's limited representational capacity, enhancing language-independent representations at the expense of language-specific ones.
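The gradient-reversal technique mentioned above is commonly implemented as a layer that acts as the identity in the forward pass but negates gradients in the backward pass, so that a language-identification head drives the encoder towards language-neutral features. A minimal sketch, assuming a PyTorch-style setup (the names and the scaling parameter `lambd` are illustrative, not the authors' actual code):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) the gradient
    on the backward pass, so the upstream encoder is pushed to make
    the downstream language classifier's job harder."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd  # scaling factor for the reversed gradient
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negate the incoming gradient; None for the lambd argument.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```

In such a setup, mBERT's hidden states would pass through `grad_reverse` before a language classifier, while the task head (e.g. POS tagging) sees them unmodified.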


research
01/26/2021

First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT

Multilingual pretrained language models have demonstrated remarkable zer...
research
06/24/2023

Comparison of Pre-trained Language Models for Turkish Address Parsing

Transformer based pre-trained models such as BERT and its variants, whic...
research
04/30/2020

Modular Representation Underlies Systematic Generalization in Neural Natural Language Inference Models

In adversarial (challenge) testing, we pose hard generalization tasks in...
research
04/13/2021

Zhestyatsky at SemEval-2021 Task 2: ReLU over Cosine Similarity for BERT Fine-tuning

This paper presents our contribution to SemEval-2021 Task 2: Multilingua...
research
06/10/2020

Revisiting Few-sample BERT Fine-tuning

We study the problem of few-sample fine-tuning of BERT contextual repres...
research
12/18/2021

Improving Learning-to-Defer Algorithms Through Fine-Tuning

The ubiquity of AI leads to situations where humans and AI work together...
research
03/09/2022

PALI-NLP at SemEval-2022 Task 4: Discriminative Fine-tuning of Deep Transformers for Patronizing and Condescending Language Detection

Patronizing and condescending language (PCL) has a large harmful impact ...
