The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models

03/11/2021
by Go Inoue, et al.

In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.
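The pre-trained models described here were released publicly (the CAMeLBERT family), so the fine-tuning setup can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example of fine-tuning one released checkpoint on an Arabic sentence-classification task with the Hugging Face transformers library; the model ID, dataset files, label count, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' pipeline) of fine-tuning a CAMeLBERT
# checkpoint for Arabic text classification with Hugging Face transformers.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

# Assumed Hub ID for the mixed-variant model; swap in the MSA, DA, or CA
# variant depending on which is closest to the target task's language variant.
model_id = "CAMeL-Lab/bert-base-arabic-camelbert-mix"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

# Placeholder dataset: any Arabic classification data with "text"/"label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv",
                                          "validation": "dev.csv"})

def tokenize(batch):
    # Truncate to a fixed length; padding is handled dynamically by the Trainer.
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="camelbert-finetune",
    per_device_train_batch_size=32,
    num_train_epochs=3,
    learning_rate=3e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"],
                  tokenizer=tokenizer)
trainer.train()
```

In line with the paper's finding, the main design choice in such a setup is which variant-specific checkpoint to load: matching the pre-training variant to the fine-tuning data matters more than the raw amount of pre-training text.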


