XRJL-HKUST at SemEval-2021 Task 4: WordNet-Enhanced Dual Multi-head Co-Attention for Reading Comprehension of Abstract Meaning

03/30/2021
by   Yuxin Jiang, et al.

This paper presents our submitted system for SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. Our system uses a large pre-trained language model as the encoder, with an additional dual multi-head co-attention layer to strengthen the interaction between passages and question-answer pairs, following the current state-of-the-art model DUMA. The main difference is that we stack the passage-to-question and question-to-passage attention modules instead of computing them in parallel, simulating a re-consideration process. We also add a layer normalization module to improve performance. Furthermore, to incorporate external knowledge about abstract concepts, we retrieve the definitions of candidate answers from WordNet and feed them to the model as extra inputs. Our system, called WordNet-enhanced DUal Multi-head co-Attention (WN-DUMA), achieves an accuracy of 86.67 on subtask 1 and subtask 2.
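The stacking idea described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: function names (`co_attention`, `stacked_duma`), the head count, and the mean-pooling at the end are assumptions for the sketch. The key structural point from the abstract is preserved: the passage first attends to the question-answer pair, and only then does the QA pair attend to the *updated* passage representation (the "re-consideration" step), with a residual connection and layer normalization after each stage.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(query, context, n_heads=2):
    """One multi-head co-attention pass: rows of `query` attend over `context`.

    Projections are omitted for brevity; each head simply uses a slice of
    the hidden dimension (an assumption of this sketch).
    """
    d = query.shape[-1]
    assert d % n_heads == 0
    dh = d // n_heads
    outs = []
    for h in range(n_heads):
        q = query[:, h * dh:(h + 1) * dh]
        k = context[:, h * dh:(h + 1) * dh]
        scores = q @ k.T / np.sqrt(dh)          # (len_q, len_ctx)
        outs.append(softmax(scores) @ k)        # attended context per head
    return np.concatenate(outs, axis=-1)        # (len_q, d)

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def stacked_duma(passage, qa_pair):
    """Stacked (not parallel) dual co-attention, per the abstract."""
    # Stage 1: passage attends to the question-answer pair.
    p2q = layer_norm(passage + co_attention(passage, qa_pair))
    # Stage 2 ("re-consideration"): QA pair attends to the updated passage.
    q2p = layer_norm(qa_pair + co_attention(qa_pair, p2q))
    # Pool both views into one vector for answer scoring.
    return np.concatenate([p2q.mean(0), q2p.mean(0)])
```

In the parallel DUMA formulation both attention passes read the raw encoder outputs; here the second pass consumes the output of the first, which is what "stacking" buys over parallel computation.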


Related research

- Dual Multi-head Co-attention for Multi-choice Reading Comprehension (01/26/2020)
- SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning (05/31/2021)
- Multi-task Learning with Multi-head Attention for Multi-choice Reading Comprehension (02/26/2020)
- ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning (02/25/2021)
- TIE: Topological Information Enhanced Structural Reading Comprehension on Web Pages (05/13/2022)
- IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with Special Tokens, Re-Ranking, Siamese Encoders and Back Translation (02/25/2021)
- Synonym Knowledge Enhanced Reader for Chinese Idiom Reading Comprehension (11/09/2020)
