
BERTweetFR: Domain Adaptation of Pre-Trained Language Models for French Tweets

by Yanzhu Guo, et al., École Polytechnique

We introduce BERTweetFR, the first large-scale pre-trained language model for French tweets. Our model is initialized from CamemBERT, a general-domain French language model that follows the base architecture of RoBERTa. Experiments show that BERTweetFR outperforms all previous general-domain French language models on two downstream Twitter NLP tasks: offensiveness identification and named entity recognition. The dataset used in the offensiveness detection task was created and annotated by our team, filling a gap in such annotated datasets for French. We make our model publicly available in the Transformers library with the aim of promoting future research on analytic tasks for French tweets.


Related papers:

- BERTweet: A Pre-Trained Language Model for English Tweets
- HinFlair: Pre-Trained Contextual String Embeddings for POS Tagging and Text Classification in the Hindi Language
- TimeLMs: Diachronic Language Models from Twitter
- CoNTACT: A Dutch COVID-19 Adapted BERT for Vaccine Hesitancy and Argumentation Detection
- An Empirical Survey of Unsupervised Text Representation Methods on Twitter Data
- A Continuously Growing Dataset of Sentential Paraphrases
- AlephBERT: A Hebrew Large Pre-Trained Language Model to Start-off your Hebrew NLP Application With