AraBERT: Transformer-based Model for Arabic Language Understanding
The Arabic language is morphologically rich and complex, with relatively few resources and a less explored syntax compared to English. Given these limitations, tasks such as Sentiment Analysis (SA), Named Entity Recognition (NER), and Question Answering (QA) have proven very challenging to tackle. Recently, with the surge of transformer-based models, language-specific BERT-based models have proven very effective at language understanding, provided they are pre-trained on a very large corpus. Such models were able to set new standards and achieve state-of-the-art results on most NLP tasks. In this paper, we pre-trained BERT specifically for the Arabic language, in pursuit of achieving the same success that BERT achieved for English. We then compare the performance of AraBERT with multilingual BERT provided by Google and with other state-of-the-art approaches. The results of the conducted experiments show that the newly developed AraBERT achieves state-of-the-art results on most tested tasks. The pretrained AraBERT models are publicly available, in the hope of encouraging research and applications for Arabic NLP.
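As a concrete illustration (not part of the original abstract), the sketch below shows how a publicly released Arabic BERT checkpoint might be loaded with the Hugging Face transformers library and used to obtain contextual embeddings. The model identifier "aubmindlab/bert-base-arabert" is an assumption and may differ from the name under which the authors actually published their checkpoints.

```python
# Minimal sketch: loading a pretrained Arabic BERT model with Hugging Face transformers.
# The model identifier below is assumed, not confirmed by the abstract.
from transformers import AutoTokenizer, AutoModel

model_name = "aubmindlab/bert-base-arabert"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode an Arabic sentence and run it through the model.
inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")
outputs = model(**inputs)

# Contextual token embeddings: shape (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```

In practice, such a backbone would be fine-tuned on task-specific data (e.g., SA, NER, or QA) rather than used directly, following the standard BERT fine-tuning recipe.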