UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining

04/18/2023
by Hyung Won Chung, et al.

Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However, previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.
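To make the capped-uniform idea in the abstract concrete, the snippet below is an illustrative Python sketch, not the paper's reference implementation: the function name unimax_allocation, the default max_epochs cap, and the toy corpus sizes are assumptions made for the example.

```python
"""Illustrative UniMax-style budget allocation (hypothetical sketch)."""
from typing import Dict


def unimax_allocation(corpus_chars: Dict[str, float],
                      total_budget: float,
                      max_epochs: float = 4.0) -> Dict[str, float]:
    """Split `total_budget` characters as uniformly as possible across
    languages, capping each language at `max_epochs` passes over its corpus
    and redistributing any unused budget to the remaining languages."""
    allocation: Dict[str, float] = {}
    remaining_budget = total_budget
    # Visit the smallest corpora first: they are the ones that hit the cap.
    remaining = sorted(corpus_chars, key=corpus_chars.get)
    while remaining:
        lang = remaining.pop(0)
        uniform_share = remaining_budget / (len(remaining) + 1)
        cap = max_epochs * corpus_chars[lang]
        # Tail language: cap the repeats to limit overfitting.
        # Head language: take the uniform share, which stays below the cap.
        allocation[lang] = min(cap, uniform_share)
        remaining_budget -= allocation[lang]
    # Normalize the per-language character counts into sampling probabilities.
    total = sum(allocation.values())
    return {lang: chars / total for lang, chars in allocation.items()}


if __name__ == "__main__":
    # Toy example: one head language, two tail languages, 0.5T-character budget.
    probs = unimax_allocation(
        {"en": 1.0e12, "sw": 1.0e9, "yo": 1.0e8},
        total_budget=5.0e11,
    )
    print(probs)  # "yo" and "sw" are capped at 4 epochs; "en" absorbs the rest
```

Visiting languages from the smallest corpus upward means any budget a capped tail language cannot use is spread over the languages that can still absorb it, which is what yields near-uniform coverage of the head languages.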
