NLP-IIS@UT at SemEval-2021 Task 4: Machine Reading Comprehension using the Long Document Transformer

05/08/2021
by   Hossein Basafa, et al.

This paper presents a technical report of our submission to Task 4 of SemEval-2021, titled Reading Comprehension of Abstract Meaning. In this task, the goal is to predict the correct answer to a question given a context. The contexts are usually very long and require a large receptive field from the model, so common contextualized language models like BERT lose representation quality and performance because of their limited input-token capacity. To tackle this problem, we used the Longformer model to better process long sequences. Furthermore, we utilized the method proposed in the Longformer benchmark on the WikiHop dataset, which improved the accuracy on our task data from 23.01 to 70.30.
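The key idea behind Longformer is replacing BERT's full self-attention, which costs O(n^2) in sequence length, with a sliding-window pattern plus a few "global" tokens (such as question tokens) that attend everywhere. The sketch below is a simplified conceptual illustration of that attention pattern, not the actual Longformer implementation; the function name and parameters are illustrative assumptions.

```python
# Conceptual sketch of Longformer's sparse attention pattern (an assumption
# for illustration, not the library's real implementation). Each token
# attends to a local window of `window` neighbors on each side; tokens in
# `global_positions` attend to, and are attended by, every position.

def longformer_attention_mask(seq_len, window, global_positions):
    """Return a seq_len x seq_len boolean matrix where mask[i][j] is True
    if token i may attend to token j."""
    global_set = set(global_positions)
    mask = [[False] * seq_len for _ in range(seq_len)]
    for i in range(seq_len):
        for j in range(seq_len):
            local = abs(i - j) <= window               # sliding-window attention
            glob = i in global_set or j in global_set  # global attention
            mask[i][j] = local or glob
    return mask

mask = longformer_attention_mask(seq_len=16, window=2, global_positions=[0])
print(sum(mask[0]))  # token 0 is global, so it sees all 16 positions
print(sum(mask[8]))  # a mid-sequence token sees its window of 5 plus token 0
```

Because each token attends to only O(window) positions (plus a constant number of global tokens), the overall cost grows linearly with sequence length instead of quadratically, which is what allows Longformer to handle contexts far beyond BERT's 512-token limit.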

Related research
01/05/2022

Multi Document Reading Comprehension

Reading Comprehension (RC) is a task of answering a question from a give...
02/25/2021

ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning

This paper presents our systems for the three Subtasks of SemEval Task4:...
05/07/2021

VAULT: VAriable Unified Long Text Representation for Machine Reading Comprehension

Existing models on Machine Reading Comprehension (MRC) require complex m...
04/15/2020

Exploring Probabilistic Soft Logic as a framework for integrating top-down and bottom-up processing of language in a task context

This technical report describes a new prototype architecture designed to...
04/04/2021

ReCAM@IITK at SemEval-2021 Task 4: BERT and ALBERT based Ensemble for Abstract Word Prediction

This paper describes our system for Task 4 of SemEval-2021: Reading Comp...
05/16/2020

Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension

In this paper, we study machine reading comprehension (MRC) on long text...
05/19/2022

Automated Scoring for Reading Comprehension via In-context BERT Tuning

Automated scoring of open-ended student responses has the potential to s...
