UNKs Everywhere: Adapting Multilingual Language Models to New Scripts

12/31/2020
by Jonas Pfeiffer, et al.

Massively multilingual language models such as multilingual BERT (mBERT) and XLM-R offer state-of-the-art cross-lingual transfer performance on a range of NLP tasks. However, due to their limited capacity and large differences in pretraining data, there is a profound performance gap between resource-rich and resource-poor target languages. The ultimate challenge is dealing with under-resourced languages not covered at all by the models, which are also written in scripts unseen during pretraining. In this work, we propose a series of novel data-efficient methods that enable quick and effective adaptation of pretrained multilingual models to such low-resource languages and unseen scripts. Relying on matrix factorization, our proposed methods capitalize on the existing latent knowledge about multiple languages already available in the pretrained model's embedding matrix. Furthermore, we show that learning a new dedicated embedding matrix for the target language can be improved by leveraging a small number of vocabulary items (i.e., the so-called lexically overlapping tokens) shared between mBERT's vocabulary and the target-language vocabulary. Our adaptation techniques offer substantial performance gains for languages with unseen scripts. We also demonstrate that they can yield improvements for low-resource languages written in scripts covered by the pretrained model.
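The abstract names two ingredients: factorizing the pretrained embedding matrix to reuse its latent multilingual structure, and copying information for tokens that lexically overlap between the source and target vocabularies. The sketch below is a minimal NumPy illustration of how these pieces could fit together; it uses truncated SVD as one possible factorization, and the function names (factorize_embeddings, init_target_embeddings) and hyperparameters are hypothetical, not the authors' released implementation.

```python
# Minimal sketch: factorization-based initialization of an embedding matrix
# for a new target-language vocabulary. Illustrative only; names and choices
# here are assumptions, not the paper's exact method.

import numpy as np


def factorize_embeddings(E_src: np.ndarray, k: int):
    """Factorize the pretrained embedding matrix E_src (V_src x d) into
    token-specific coefficients F (V_src x k) and shared latent components
    G (k x d) via truncated SVD, so that E_src is approximately F @ G."""
    U, S, Vt = np.linalg.svd(E_src, full_matrices=False)
    F = U[:, :k] * S[:k]   # per-token coefficients in the latent space
    G = Vt[:k, :]          # shared latent components across the vocabulary
    return F, G


def init_target_embeddings(F_src, G, src_vocab, tgt_vocab, seed=0):
    """Build an embedding matrix for the target-language vocabulary.

    Lexically overlapping tokens copy their latent coefficients from F_src;
    the remaining tokens are initialized randomly and would be learned
    during continued (masked LM) training on target-language text."""
    rng = np.random.default_rng(seed)
    k = F_src.shape[1]
    F_tgt = rng.normal(scale=0.02, size=(len(tgt_vocab), k))

    src_index = {tok: i for i, tok in enumerate(src_vocab)}
    n_overlap = 0
    for j, tok in enumerate(tgt_vocab):
        if tok in src_index:              # lexically overlapping token
            F_tgt[j] = F_src[src_index[tok]]
            n_overlap += 1

    E_tgt = F_tgt @ G                     # map back to the embedding space
    return E_tgt, n_overlap


if __name__ == "__main__":
    # Toy example with random "pretrained" embeddings and tiny vocabularies.
    V_src, d, k = 1000, 768, 100
    E_src = np.random.randn(V_src, d).astype(np.float32)
    src_vocab = [f"tok{i}" for i in range(V_src)]
    tgt_vocab = ["tok3", "tok42", "new_piece_a", "new_piece_b"]

    F_src, G = factorize_embeddings(E_src, k)
    E_tgt, n_overlap = init_target_embeddings(F_src, G, src_vocab, tgt_vocab)
    print(E_tgt.shape, "overlapping tokens reused:", n_overlap)
```

One reading of the abstract is that the data efficiency comes from reusing the shared latent components and only learning the much smaller per-token coefficients for the new vocabulary, with overlapping tokens providing a further anchor; the sketch reflects that reading rather than a confirmed implementation detail.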


Related research

04/18/2023  Romanization-based Large-scale Adaptation of Multilingual Language Models
Large multilingual pretrained language models (mPLMs) have become the de...

05/23/2023  FOCUS: Effective Embedding Initialization for Specializing Pretrained Multilingual Models on a Single Language
Using model weights pretrained on a high-resource language as a warm sta...

05/18/2020  Are All Languages Created Equal in Multilingual BERT?
Multilingual BERT (mBERT) trained on 104 languages has shown surprisingl...

02/15/2023  Meeting the Needs of Low-Resource Languages: The Value of Automatic Alignments via Pretrained Models
Large multilingual models have inspired a new class of word alignment me...

11/22/2021  Knowledge Based Multilingual Language Model
Knowledge enriched language representation learning has shown promising ...

05/28/2023  Breaking Language Barriers with a LEAP: Learning Strategies for Polyglot LLMs
Large language models (LLMs) are at the forefront of transforming numero...

09/29/2020  Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank
Pretrained multilingual contextual representations have shown great succ...
