LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting

02/24/2021
by   Abheesht Sharma, et al.

In this article, we present our methodologies for SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. Given a fill-in-the-blank question and a corresponding context, the task is to predict the most suitable word from a list of five options. The task comprises three subtasks: Imperceptibility (subtask-I), Non-Specificity (subtask-II), and Intersection (subtask-III). We build our fill-in-the-blank (FitB) models on the encoders of transformer-based models pre-trained with the masked language modelling (MLM) objective. To model imperceptibility, we define a set of linguistic features; to model non-specificity, we leverage hypernym and hyponym information from a lexical database, and additionally experiment with augmentation and other statistical techniques. We also propose two variants, Chunk Voting and Max Context, to handle the input-length restrictions of models such as BERT. Additionally, we perform a thorough ablation study and use Integrated Gradients to explain our predictions on a few samples. Our best submissions achieve an accuracy of 75.31% on the test set for subtask-I and 65.64% on the test set for subtask-III.
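To make the FitB setup concrete, here is a minimal sketch, not the authors' released code, of scoring the five options with an MLM-pretrained encoder via Hugging Face Transformers. The `bert-base-uncased` checkpoint, the "@placeholder" blank marker, and first-wordpiece scoring are our assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pick_option(question: str, context: str, options: list) -> str:
    """Return the option the MLM scores highest at the blank.

    `question` marks the blank with "@placeholder" (our assumption of the
    data format); we swap it for the model's [MASK] token and read off the
    MLM logits at that position.
    """
    text = context + " " + question.replace("@placeholder", tokenizer.mask_token)
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    mask_positions = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    if len(mask_positions) == 0:
        # The 512-token truncation cut the blank away; this is exactly the
        # failure mode the Chunk Voting / Max Context variants address.
        raise ValueError("context too long: the blank was truncated away")
    with torch.no_grad():
        logits = model(**enc).logits[0, mask_positions[0]]  # vocab-sized vector at the mask
    # Score each option by the logit of its first wordpiece (a simplification;
    # multi-piece words could instead be scored by summed log-probabilities).
    first_pieces = [tokenizer(o, add_special_tokens=False)["input_ids"][0] for o in options]
    scores = [logits[i].item() for i in first_pieces]
    return options[scores.index(max(scores))]
```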
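The abstract does not enumerate the linguistic features used for imperceptibility, so the features below (word length, Zipf corpus frequency from the `wordfreq` package, a crude morphology count) are purely illustrative stand-ins for that kind of signal, not the paper's feature set.

```python
from wordfreq import zipf_frequency  # pip install wordfreq

def imperceptibility_features(word: str) -> dict:
    """Generic lexical features of the kind often used to flag rare,
    'imperceptible' words. Illustrative only."""
    return {
        "length": len(word),                           # rarer words tend to be longer
        "zipf_frequency": zipf_frequency(word, "en"),  # log-scale corpus frequency:
                                                       # common words score near 7,
                                                       # rare words well below 3
        "hyphen_parts": len(word.split("-")),          # crude morphological complexity
    }
```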
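For non-specificity, the lexical database is plausibly WordNet; below is a hedged sketch, using NLTK, of extracting hypernym/hyponym neighbourhoods and a generality score. The specific features the authors derive from this information may differ.

```python
# Assumes NLTK with the WordNet corpus installed:
#   import nltk; nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def generality(word: str) -> int:
    """Minimum depth of the word's synsets in the hypernym hierarchy.
    Smaller = closer to the root = more general/non-specific."""
    return min((s.min_depth() for s in wn.synsets(word)), default=999)

def lexical_neighbours(word: str):
    """Hypernym and hyponym lemmas WordNet provides for a word."""
    hypers, hypos = set(), set()
    for s in wn.synsets(word):
        hypers.update(l.name() for h in s.hypernyms() for l in h.lemmas())
        hypos.update(l.name() for h in s.hyponyms() for l in h.lemmas())
    return hypers, hypos
```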
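Chunk Voting is named but not specified in the abstract; one natural reading, sketched below with an assumed chunk size and arbitrary tie-breaking, is to split an over-long context into chunks that fit the encoder, score the options on each chunk with a scorer such as `pick_option` above, and take a majority vote. Max Context is not sketched here.

```python
from collections import Counter

def chunk_vote(question, context, options, score_fn, chunk_words=300):
    """Majority vote over fixed-size word chunks of a long context.

    `score_fn(question, chunk, options) -> option` is any per-chunk scorer,
    e.g. `pick_option` above. Chunk size and tie-breaking (first-seen wins)
    are our assumptions.
    """
    words = context.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)] or [context]
    votes = Counter(score_fn(question, chunk, options) for chunk in chunks)
    return votes.most_common(1)[0][0]
```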
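Integrated Gradients attributions of the kind mentioned can be computed with Captum; the wiring below (attributing the chosen option's logit at the mask to BERT's input embeddings, with an all-[PAD] baseline and a made-up example sentence) is our assumption of one standard setup, not the paper's exact one.

```python
import torch
from captum.attr import LayerIntegratedGradients  # pip install captum
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

text = ("Researchers argue the finding supports a capacity for "
        + tokenizer.mask_token + " in primates.")
enc = tokenizer(text, return_tensors="pt")
mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
option_id = tokenizer("abstraction", add_special_tokens=False)["input_ids"][0]

def forward_fn(input_ids, attention_mask):
    # Scalar score per example: the chosen option's logit at the blank.
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    return logits[:, mask_pos, option_id]

lig = LayerIntegratedGradients(forward_fn, model.bert.embeddings)
# Crude baseline: all [PAD] tokens; a more careful one would keep
# [CLS]/[SEP] in place.
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)
attributions = lig.attribute(
    inputs=enc["input_ids"],
    baselines=baseline,
    additional_forward_args=(enc["attention_mask"],),
    n_steps=50,
)
token_scores = attributions.sum(dim=-1).squeeze(0)  # one attribution per input token
```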


Related research

02/25/2021
ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning
This paper presents our systems for the three Subtasks of SemEval Task4:...

04/04/2021
ReCAM@IITK at SemEval-2021 Task 4: BERT and ALBERT based Ensemble for Abstract Word Prediction
This paper describes our system for Task 4 of SemEval-2021: Reading Comp...

05/31/2021
SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning
This paper introduces the SemEval-2021 shared task 4: Reading Comprehens...

10/18/2020
Towards Interpreting BERT for Reading Comprehension Based QA
BERT and its variants have achieved state-of-the-art performance in vari...

05/24/2021
Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models
Pre-trained language models have achieved human-level performance on man...

06/24/2018
Subword-augmented Embedding for Cloze Reading Comprehension
Representation learning is the foundation of machine reading comprehensi...

04/15/2020
Exploring Probabilistic Soft Logic as a framework for integrating top-down and bottom-up processing of language in a task context
This technical report describes a new prototype architecture designed to...