On the Prunability of Attention Heads in Multilingual BERT

09/26/2021
by   Aakriti Budhraja, et al.

Large multilingual models, such as mBERT, have shown promise in crosslingual transfer. In this work, we employ pruning to quantify the robustness of mBERT and to interpret its layer-wise importance. On four GLUE tasks, the relative drops in accuracy due to pruning are almost identical for mBERT and BERT, suggesting that the reduced per-language attention capacity of the multilingual model does not affect its robustness to pruning. For the crosslingual task XNLI, we report higher drops in accuracy with pruning, indicating lower robustness in crosslingual transfer. Moreover, the importance of the encoder layers depends sensitively on the language family and the pre-training corpus size. The top layers, which are relatively more influenced by fine-tuning, encode important information for languages similar to English (SVO), while the bottom layers, which are relatively less influenced by fine-tuning, are particularly important for agglutinative and low-resource languages.
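To make the pruning setup concrete, here is a minimal sketch of removing attention heads from mBERT using the Hugging Face Transformers `prune_heads` API. The specific layers and head indices below are arbitrary assumptions chosen for illustration, not the paper's selection procedure, and this library is not necessarily the toolkit the authors used.

```python
# A minimal sketch (not the authors' exact procedure): pruning attention
# heads in mBERT with Hugging Face Transformers' built-in prune_heads API.
from transformers import BertModel, BertTokenizer

model_name = "bert-base-multilingual-cased"  # standard mBERT checkpoint
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)

# Illustrative choice (assumption, not from the paper): remove heads 0-5
# in the bottom encoder layer (0) and in the top encoder layer (11).
heads_to_prune = {0: [0, 1, 2, 3, 4, 5], 11: [0, 1, 2, 3, 4, 5]}
model.prune_heads(heads_to_prune)

# The pruned model can then be fine-tuned on a downstream task (e.g., an
# XNLI language) and its accuracy compared against the unpruned baseline.
inputs = tokenizer("Multilingual BERT has 12 layers of 12 heads.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # hidden states still (1, seq, 768)
```

Pruning heads this way physically shrinks the attention projection matrices in the affected layers, so the accuracy drop after re-evaluation directly measures how much the removed heads mattered.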


Related research

12/25/2019 · A Study of Multilingual Neural Machine Translation
Multilingual neural machine translation (NMT) has recently been investig...

04/13/2022 · Multilingual Language Model Adaptive Fine-Tuning: A Study on African Languages
Multilingual pre-trained language models (PLMs) have demonstrated impres...

02/19/2020 · Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning
Universal feature extractors, such as BERT for natural language processi...

04/13/2021 · Zhestyatsky at SemEval-2021 Task 2: ReLU over Cosine Similarity for BERT Fine-tuning
This paper presents our contribution to SemEval-2021 Task 2: Multilingua...

04/03/2019 · 75 Languages, 1 Model: Parsing Universal Dependencies Universally
We present UDify, a multilingual multi-task model capable of accurately ...

09/10/2021 · Block Pruning For Faster Transformers
Pre-training has improved model accuracy for both classification and gen...

10/12/2022 · SQuId: Measuring Speech Naturalness in Many Languages
Much of text-to-speech research relies on human evaluation, which incurs...
