Tha3aroon at NSURL-2019 Task 8: Semantic Question Similarity in Arabic

12/28/2019
by Ali Fadel, et al.

In this paper, we describe our team's effort on the semantic text question similarity task of NSURL 2019. Our top-performing system uses several innovative data augmentation techniques to enlarge the training data. It then takes ELMo pre-trained contextual embeddings of the data and feeds them into an ON-LSTM network with self-attention, producing sequence representation vectors that are used to predict the relation between the question pairs. The model is ranked 1st on the public leaderboard with an F1-score of 96.499 (tied with the second-place submission) and 2nd on the private leaderboard with an F1-score of 94.848 (1.076 points behind first place).
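As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below assumes precomputed ELMo embeddings for each question, substitutes a standard bidirectional LSTM for the ON-LSTM layer, and uses an illustrative self-attention pooling and classifier head; all module names and dimensions are assumptions.

# Minimal sketch of the described pipeline; not the authors' code.
# Assumptions: 1024-dim ELMo embeddings are precomputed per token, a
# standard nn.LSTM stands in for ON-LSTM, and the attention pooling
# and classifier head are illustrative choices.
import torch
import torch.nn as nn


class QuestionPairSimilarity(nn.Module):
    def __init__(self, emb_dim=1024, hidden_dim=256):
        super().__init__()
        # Stand-in for ON-LSTM: a bidirectional LSTM over ELMo embeddings.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Self-attention scores each time step to pool the sequence.
        self.attention = nn.Linear(2 * hidden_dim, 1)
        # Classifier over the combined pair representation.
        self.classifier = nn.Sequential(
            nn.Linear(8 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def encode(self, emb):
        # emb: (batch, seq_len, emb_dim) contextual embeddings.
        states, _ = self.encoder(emb)                            # (B, T, 2H)
        weights = torch.softmax(self.attention(states), dim=1)   # (B, T, 1)
        return (weights * states).sum(dim=1)                     # (B, 2H)

    def forward(self, emb_q1, emb_q2):
        v1, v2 = self.encode(emb_q1), self.encode(emb_q2)
        # Common pair-combination features: concat, |difference|, product.
        pair = torch.cat([v1, v2, torch.abs(v1 - v2), v1 * v2], dim=-1)
        return torch.sigmoid(self.classifier(pair)).squeeze(-1)


# Example with random tensors standing in for ELMo outputs.
model = QuestionPairSimilarity()
q1 = torch.randn(4, 12, 1024)  # batch of 4 questions, 12 tokens each
q2 = torch.randn(4, 15, 1024)
print(model(q1, q2))  # similarity probabilities, shape (4,)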

