TunBERT: Pretrained Contextualized Text Representation for Tunisian Dialect

11/25/2021
by Abir Messaoudi, et al.

Pretrained contextualized text representation models learn an effective representation of a natural language to make it machine understandable. Since the breakthrough of the attention mechanism and the introduction of the Transformer, a new generation of pretrained models has been proposed, achieving strong performance. Bidirectional Encoder Representations from Transformers (BERT) has become the state-of-the-art model for language understanding. Despite this success, most of the available models have been trained on Indo-European languages, and similar research for under-represented languages and dialects remains sparse. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for under-represented languages, with a specific focus on the Tunisian dialect. We evaluate our language model on a sentiment analysis task, a dialect identification task, and a reading comprehension question-answering task. We show that using noisy web-crawled data instead of structured data (Wikipedia, articles, etc.) is better suited to such a non-standardized language. Moreover, our results indicate that a relatively small web-crawled dataset leads to performance as good as that obtained with larger datasets. Finally, our best-performing TunBERT model reaches or improves on the state-of-the-art in all three downstream tasks. We release the TunBERT pretrained model and the datasets used for fine-tuning.
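For readers who want to experiment with the released checkpoint, the sketch below shows one way to load a BERT-style model for a downstream sentiment classification task with the Hugging Face Transformers library. It is a minimal illustration, assuming the checkpoint is distributed in a Transformers-compatible format; the path "path/to/tunbert", the number of labels, and the example sentence are placeholders, not details taken from the paper.

    # Minimal sketch: loading a BERT-style checkpoint for sentiment classification.
    # "path/to/tunbert" is a placeholder for the released TunBERT checkpoint,
    # assuming it is available in a Hugging Face Transformers compatible format.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("path/to/tunbert")
    model = AutoModelForSequenceClassification.from_pretrained(
        "path/to/tunbert",
        num_labels=2,  # e.g. binary sentiment: positive / negative (assumption)
    )

    # Encode an illustrative Tunisian-dialect sentence and score it.
    inputs = tokenizer("برشا باهي", return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    print(logits.softmax(dim=-1))

The same loading pattern would apply to the other two downstream tasks, swapping the head class (for example a token-level or question-answering head) to match the task.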


Related research

11/10/2019 - CamemBERT: a Tasty French Language Model
Pretrained language models are now ubiquitous in Natural Language Proces...

11/04/2020 - Indic-Transformers: An Analysis of Transformer Language Models for Indian Languages
Language models based on the Transformer architecture have achieved stat...

07/09/2020 - Advances of Transformer-Based Models for News Headline Generation
Pretrained language models based on Transformer architecture are the rea...

05/17/2019 - Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language
The paper introduces methods of adaptation of multilingual masked langua...

03/03/2023 - Will Affective Computing Emerge from Foundation Models and General AI? A First Evaluation on ChatGPT
ChatGPT has shown the potential of emerging general artificial intellige...

01/27/2021 - KoreALBERT: Pretraining a Lite BERT Model for Korean Language Understanding
A Lite BERT (ALBERT) has been introduced to scale up deep bidirectional ...

03/15/2022 - Data Contamination: From Memorization to Exploitation
Pretrained language models are typically trained on massive web-based da...
