Language Contamination Explains the Cross-lingual Capabilities of English Pretrained Models

04/17/2022
by Terra Blevins, et al.

English pretrained language models, which make up the backbone of many modern NLP systems, require huge amounts of unlabeled training data. These models are generally presented as being trained only on English text but have been found to transfer surprisingly well to other languages. We investigate this phenomenon and find that common English pretraining corpora actually contain significant amounts of non-English text: even when non-English data makes up less than 1% of a corpus (well within the error rate of strong language classifiers), this amounts to hundreds of millions of foreign language tokens in large-scale datasets. We then demonstrate that even these small percentages of non-English data facilitate cross-lingual transfer for models trained on them, with target language performance strongly correlated with the amount of in-language data seen during pretraining. In light of these findings, we argue that no model is truly monolingual when pretrained at scale, which should be considered when evaluating cross-lingual transfer.
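The contamination measurement described above can be approximated with an off-the-shelf language classifier. The sketch below is a minimal illustration, not the authors' pipeline: it assumes the publicly available fastText lid.176.bin language-ID model and a hypothetical pretraining_corpus.txt file, and tallies whitespace tokens by predicted language to estimate the non-English share of a corpus.

```python
# Minimal sketch: estimating non-English contamination in a pretraining corpus
# with an off-the-shelf fastText language-ID model. Illustrative only; this is
# not the measurement pipeline used in the paper.
from collections import Counter

import fasttext  # pip install fasttext

# Assumes the pretrained language-ID model has been downloaded locally, e.g. from
# https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin
model = fasttext.load_model("lid.176.bin")


def count_language_tokens(corpus_path: str) -> Counter:
    """Tally whitespace tokens per predicted language, one line at a time."""
    counts = Counter()
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            labels, _ = model.predict(line)  # e.g. ("__label__en",)
            lang = labels[0].replace("__label__", "")
            counts[lang] += len(line.split())
    return counts


if __name__ == "__main__":
    counts = count_language_tokens("pretraining_corpus.txt")  # hypothetical path
    total = sum(counts.values())
    non_english = total - counts.get("en", 0)
    print(f"non-English token share: {non_english / total:.4%}")
```

Even a non-English share well under 1% reported by such a classifier translates into hundreds of millions of foreign language tokens once the corpus reaches hundreds of billions of tokens, which is the scale at which the paper's argument applies.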


