Pre-trained Transformer-Based Approach for Arabic Question Answering: A Comparative Study

11/10/2021
by   Kholoud Alsubhi, et al.

Question answering (QA) is one of the most challenging yet widely investigated problems in Natural Language Processing (NLP). QA systems aim to produce answers to given questions, generated from either unstructured or structured text; QA is therefore considered an important research area for evaluating text-understanding systems. A large volume of QA research has been devoted to the English language, investigating the most advanced techniques and achieving state-of-the-art results. However, research on Arabic question answering progresses at a considerably slower pace because of the scarcity of such work and the lack of large benchmark datasets. Recently, many pre-trained language models have achieved high performance on Arabic NLP problems. In this work, we evaluate state-of-the-art pre-trained transformer models for Arabic QA on four reading-comprehension datasets: Arabic-SQuAD, ARCD, AQAD, and TyDiQA-GoldP. We fine-tuned and compared the performance of the AraBERTv2-base, AraBERTv0.2-large, and AraELECTRA models. Finally, we provide an analysis to understand and interpret the low performance obtained by some of these models.
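Concretely, fine-tuning an encoder such as AraBERT for extractive QA follows the standard span-prediction recipe: the question and passage are encoded together, and the model learns to predict the start and end token positions of the answer within the passage. Below is a minimal sketch of this setup using the Hugging Face transformers and datasets libraries on the Arabic portion of TyDiQA-GoldP. The Hub model identifier, the id-prefix filter, and the hyperparameters are illustrative assumptions, not the paper's exact configuration, and AraBERT's recommended text pre-processing step is omitted for brevity.

```python
# A minimal sketch of extractive-QA fine-tuning with Hugging Face
# transformers. The Hub id below is an assumed public checkpoint for
# AraBERTv2-base; AraBERTv0.2-large or AraELECTRA would be swapped in
# the same way.
from datasets import load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "aubmindlab/bert-base-arabertv2"  # assumption, not from the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)

# TyDiQA "secondary_task" is the SQuAD-style GoldP subset; filtering by
# the "arabic" id prefix to keep only Arabic examples is an assumption.
raw = load_dataset("tydiqa", "secondary_task")
arabic = raw.filter(lambda ex: ex["id"].startswith("arabic"))

def preprocess(examples):
    """Tokenize question/context pairs and map character-level answer
    spans onto start/end token positions via the offset mapping."""
    enc = tokenizer(examples["question"], examples["context"],
                    truncation="only_second", max_length=384,
                    padding="max_length", return_offsets_mapping=True)
    starts, ends = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = enc.sequence_ids(i)
        start_tok = end_tok = 0  # fall back to [CLS] if the span is truncated
        for idx, (off, sid) in enumerate(zip(offsets, seq_ids)):
            if sid != 1:  # skip question tokens and special tokens
                continue
            if off[0] <= start_char < off[1]:
                start_tok = idx
            if off[0] < end_char <= off[1]:
                end_tok = idx
        starts.append(start_tok)
        ends.append(end_tok)
    enc["start_positions"] = starts
    enc["end_positions"] = ends
    enc.pop("offset_mapping")
    return enc

train = arabic["train"].map(preprocess, batched=True,
                            remove_columns=arabic["train"].column_names)

args = TrainingArguments(output_dir="arabert-tydiqa-qa",
                         per_device_train_batch_size=8,
                         learning_rate=3e-5,
                         num_train_epochs=2)
Trainer(model=model, args=args, train_dataset=train,
        tokenizer=tokenizer).train()
```

Pointing truncated answers at the [CLS] token is the convention inherited from the original SQuAD fine-tuning scripts; fine-tuned models on these datasets are then typically evaluated with SQuAD-style exact-match and F1 scores.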

Related research

06/12/2019 · Neural Arabic Question Answering
This paper tackles the problem of open domain factual Arabic question an...

05/12/2022 · DTW at Qur'an QA 2022: Utilising Transfer Learning with Transformers for Question Answering in a Low-resource Domain
The task of machine reading comprehension (MRC) is a useful benchmark to...

06/03/2022 · TCE at Qur'an QA 2022: Arabic Language Question Answering Over Holy Qur'an Using a Post-Processed Ensemble of BERT-based Models
In recent years, we witnessed great progress in different tasks of natur...

09/15/2021 · Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models?
There have been many efforts to try to understand what grammatical know...

05/10/2021 · ReadTwice: Reading Very Large Documents with Memories
Knowledge-intensive tasks such as question answering often require assim...

11/21/2022 · Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference
While large pre-trained language models are powerful, their predictions ...

01/23/2023 · PrimeQA: The Prime Repository for State-of-the-Art Multilingual Question Answering Research and Development
The field of Question Answering (QA) has made remarkable progress in rec...
