On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment

10/06/2020
by Zirui Wang et al.

Modern multilingual models are trained on concatenated text from multiple languages in hopes of conferring benefits to each (positive transfer), with the most pronounced benefits accruing to low-resource languages. However, recent work has shown that this approach can degrade performance on high-resource languages, a phenomenon known as negative interference. In this paper, we present the first systematic study of negative interference. We show that, contrary to previous belief, negative interference also impacts low-resource languages. While parameters are maximally shared to learn language-universal structures, we demonstrate that language-specific parameters do exist in multilingual models and that they are a potential cause of negative interference. Motivated by these observations, we also present a meta-learning algorithm that obtains better cross-lingual transferability and alleviates negative interference by adding language-specific layers as meta-parameters and training them in a manner that explicitly improves the shared layers' generalization on all languages. Overall, our results show that negative interference is more common than previously known, suggesting new directions for improving multilingual representations.
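
To make the training scheme concrete, below is a minimal, first-order sketch in PyTorch of the idea the abstract describes: language-specific layers act as meta-parameters that are adapted per language in an inner loop, and the shared layers are updated in an outer loop so that they generalize across all languages after that adaptation. Everything here is an illustrative assumption, not the paper's actual setup: the tiny MLP stands in for a Transformer encoder, sample_batch is a hypothetical data loader, and the dimensions and learning rates are arbitrary toy values.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Assumed toy sizes; the paper works with Transformer encoders, not this MLP.
    D_IN, D_HID, N_CLASSES, N_LANGS = 16, 32, 4, 3
    INNER_LR = 0.1  # assumed inner-loop step size

    # Shared layers: intended to capture language-universal structure.
    shared = nn.Sequential(nn.Linear(D_IN, D_HID), nn.Tanh())
    # One language-specific layer per language: the meta-parameters.
    heads = nn.ModuleList([nn.Linear(D_HID, N_CLASSES) for _ in range(N_LANGS)])

    loss_fn = nn.CrossEntropyLoss()
    meta_opt = torch.optim.Adam(
        list(shared.parameters()) + list(heads.parameters()), lr=1e-3
    )

    def sample_batch(lang: int, n: int = 8):
        """Stand-in for a minibatch of language `lang` (random toy data here)."""
        return torch.randn(n, D_IN), torch.randint(0, N_CLASSES, (n,))

    for step in range(100):
        meta_opt.zero_grad()
        for lang in range(N_LANGS):
            # Inner step: adapt this language's head on a support batch.
            x_s, y_s = sample_batch(lang)
            head = heads[lang]
            support_loss = loss_fn(head(shared(x_s)), y_s)
            g_w, g_b = torch.autograd.grad(support_loss, [head.weight, head.bias])
            fast_w = head.weight - INNER_LR * g_w  # first-order: grads are detached
            fast_b = head.bias - INNER_LR * g_b

            # Outer step: evaluate the adapted head on a query batch and
            # backpropagate into both shared and language-specific parameters,
            # pushing the shared layers toward weights that generalize across
            # all languages after language-specific adaptation.
            x_q, y_q = sample_batch(lang)
            query_loss = loss_fn(F.linear(shared(x_q), fast_w, fast_b), y_q)
            query_loss.backward()  # gradients accumulate, i.e. sum over languages
        meta_opt.step()

Summing the query losses over languages before the meta-update is what discourages the shared layers from overfitting any single language; the paper's algorithm may differ in the inner-loop details (e.g., number of steps or second-order gradients).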

Related research

07/11/2022 · HLT-MT: High-resource Language-specific Training for Multilingual Neural Machine Translation
Multilingual neural machine translation (MNMT) trained in multiple langu...

03/13/2020 · Interference and Generalization in Temporal Difference Learning
We study the link between generalization and interference in temporal-di...

05/18/2022 · Persian Natural Language Inference: A Meta-learning approach
Incorporating information from other languages can improve the results o...

04/16/2021 · MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning
The combination of multilingual pre-trained representations and cross-li...

07/20/2021 · More Parameters? No Thanks!
This work studies the long-standing problems of model capacity and negat...

10/11/2022 · Shapley Head Pruning: Identifying and Removing Interference in Multilingual Transformers
Multilingual transformer-based models demonstrate remarkable zero and fe...

04/15/2021 · Demystify Optimization Challenges in Multilingual Transformers
Multilingual Transformer improves parameter efficiency and crosslingual ...
