Post-hoc analysis of Arabic transformer models

10/18/2022
by Ahmed Abdelali, et al.

Arabic is a Semitic language that is widely spoken and has many dialects. Given the success of pre-trained language models, many transformer models trained on Arabic and its dialects have surfaced. While these models have been evaluated extrinsically on downstream NLP tasks, no work has analyzed and compared their internal representations. We probe how linguistic information is encoded in transformer models trained on different Arabic dialects. We perform a layer and neuron analysis on the models using morphological tagging tasks for different dialects of Arabic and a dialect identification task. Our analysis yields interesting findings: i) word morphology is learned at the lower and middle layers; ii) syntactic dependencies are predominantly captured at the higher layers; iii) despite a large overlap in their vocabulary, MSA-based models fail to capture the nuances of Arabic dialects; and iv) neurons in the embedding layer are polysemous in nature, while neurons in the middle layers are exclusive to specific properties.
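
As a concrete illustration of the probing setup described above, here is a minimal sketch of a layer-wise diagnostic classifier for the dialect identification task: a simple probe is trained on representations extracted from each layer, and the layer where accuracy peaks indicates where the property is encoded. The checkpoint name (aubmindlab/bert-base-arabertv2), the toy sentences and labels, mean pooling, and the logistic-regression probe are all illustrative assumptions rather than the authors' exact pipeline; word-level properties such as morphological tags would be probed the same way, but on individual token representations instead of pooled sentences.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "aubmindlab/bert-base-arabertv2"  # assumed checkpoint; any Arabic BERT works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def layer_features(sentences, layer):
    """Mean-pool the token representations of one transformer layer."""
    feats = []
    for s in sentences:
        enc = tokenizer(s, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc).hidden_states[layer]  # shape: (1, seq_len, dim)
        feats.append(hidden[0].mean(dim=0).numpy())
    return feats

# Toy dialect-identification data (Gulf dialect vs. MSA); purely illustrative.
train_x = ["شلونك اليوم", "كيف حالك اليوم", "وين رايح", "إلى أين تذهب"]
train_y = ["GLF", "MSA", "GLF", "MSA"]
test_x  = ["شنو تسوي", "ماذا تفعل"]
test_y  = ["GLF", "MSA"]

# Layer 0 is the embedding layer; the remaining indices are transformer blocks.
for layer in range(model.config.num_hidden_layers + 1):
    probe = LogisticRegression(max_iter=1000)
    probe.fit(layer_features(train_x, layer), train_y)
    acc = probe.score(layer_features(test_x, layer), test_y)
    print(f"layer {layer:2d}: probe accuracy = {acc:.2f}")
```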

Related research

01/19/2022  Interpreting Arabic Transformer Models
Arabic is a Semitic language which is widely spoken with many dialects. ...

06/27/2022  Linguistic Correlation Analysis: Discovering Salient Neurons in Deep NLP Models
While a lot of work has been done in understanding representations learn...

04/08/2021  Low-Complexity Probing via Finding Subnetworks
The dominant approach in probing neural networks for linguistic properti...

10/06/2020  Analyzing Individual Neurons in Pre-trained Language Models
While a lot of analysis has been carried out to demonstrate linguistic knowl...

03/25/2023  Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text Diacritization
Most previous work on learning diacritization of the Arabic language ...

04/06/2020  A Systematic Analysis of Morphological Content in BERT Models for Multiple Languages
This work describes experiments which probe the hidden representations o...

10/22/2022  A Benchmark Study of Contrastive Learning for Arabic Social Meaning
Contrastive learning (CL) brought significant progress to various NLP ta...
