ALIGN-MLM: Word Embedding Alignment is Crucial for Multilingual Pre-training

11/15/2022
by Henry Tang, et al.

Multilingual pre-trained models exhibit zero-shot cross-lingual transfer, where a model fine-tuned on a source language achieves surprisingly good performance on a target language. While prior studies have attempted to understand this transfer, they focus only on MLM, and the large number of differences between natural languages makes it hard to disentangle the importance of different properties. In this work, we specifically highlight the importance of word embedding alignment by proposing a pre-training objective (ALIGN-MLM) whose auxiliary loss guides similar words in different languages to have similar word embeddings. ALIGN-MLM either outperforms or matches three widely adopted objectives (MLM, XLM, DICT-MLM) when we evaluate transfer between pairs of natural languages and their counterparts created by systematically modifying specific properties such as the script. In particular, ALIGN-MLM outperforms XLM and MLM by 35 and 30 F1 points on POS-tagging for transfer between languages that differ both in their script and in their word order (left-to-right vs. right-to-left). We also show a strong correlation between alignment and transfer for all objectives (e.g., rho = 0.727 for XNLI), which, together with ALIGN-MLM's strong performance, calls for explicitly aligning word embeddings in multilingual models.
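To make the idea of an auxiliary alignment loss concrete, the sketch below shows one plausible way to combine a standard MLM loss with a term that pulls the embeddings of translation pairs together. This is only an illustration, not the authors' implementation: the cosine-similarity penalty, the bilingual `word_pairs` input, and the `align_weight` hyperparameter are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def align_mlm_loss(mlm_loss, embedding_layer, word_pairs, align_weight=1.0):
    """Combine a standard MLM loss with an auxiliary word-embedding
    alignment term (illustrative sketch, not the paper's code).

    mlm_loss:        scalar masked-language-modeling loss, already computed
    embedding_layer: torch.nn.Embedding shared across languages
    word_pairs:      LongTensor of shape (N, 2); each row holds the vocab ids
                     of a source word and its translation, e.g. taken from a
                     bilingual dictionary (hypothetical input format)
    align_weight:    assumed hyperparameter balancing the two terms
    """
    src_vecs = embedding_layer(word_pairs[:, 0])   # (N, d)
    tgt_vecs = embedding_layer(word_pairs[:, 1])   # (N, d)

    # Encourage translation pairs to have high cosine similarity; the
    # auxiliary term shrinks as aligned pairs point in the same direction.
    align_loss = 1.0 - F.cosine_similarity(src_vecs, tgt_vecs, dim=-1).mean()

    return mlm_loss + align_weight * align_loss
```

In this formulation the extra term is minimized when the embeddings of dictionary-translated word pairs coincide in direction, which is one straightforward way to operationalize the goal that similar words in different languages should have similar word embeddings.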

research 10/27/2021
When is BERT Multilingual? Isolating Crucial Ingredients for Cross-lingual Transfer
While recent work on multilingual language models has demonstrated their...

research 06/03/2021
Bilingual Alignment Pre-training for Zero-shot Cross-lingual Transfer
Multilingual pre-trained models have achieved remarkable transfer perfor...

research 05/22/2023
Machine-Created Universal Language for Cross-lingual Transfer
There are two types of approaches to solving cross-lingual transfer: mul...

research 10/15/2020
Explicit Alignment Objectives for Multilingual Bidirectional Encoders
Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) a...

research 10/14/2019
Updating Pre-trained Word Vectors and Text Classifiers using Monolingual Alignment
In this paper, we focus on the problem of adapting word vector-based mod...

research 09/01/2018
Why is unsupervised alignment of English embeddings from different algorithms so hard?
This paper presents a challenge to the community: Generative adversarial...

research 02/10/2020
Multilingual Alignment of Contextual Word Representations
We propose procedures for evaluating and strengthening contextual embedd...