A Comparative Study of Word Embeddings for Reading Comprehension

03/02/2017
by Bhuwan Dhingra, et al.

Past machine learning research on Reading Comprehension tasks has focused primarily on the design of novel deep learning architectures. Here we show that seemingly minor choices regarding (1) the use of pre-trained word embeddings and (2) the representation of out-of-vocabulary tokens at test time can have a larger impact on final performance than architectural choices. We systematically explore several options for these choices and provide recommendations to researchers working in this area.
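As a concrete illustration of the two choices the abstract names, the sketch below initializes an embedding matrix from GloVe-style pre-trained vectors and represents out-of-vocabulary tokens either as zero vectors or as per-token random vectors. This is a minimal NumPy sketch under illustrative assumptions (file format, dimension, and the particular OOV strategies are not taken from the paper):

import numpy as np

EMB_DIM = 100  # illustrative; must match the chosen pre-trained vectors

def load_pretrained(path):
    """Parse a GloVe-style text file: one token per line, then its vector."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def build_embedding_matrix(vocab, pretrained, oov="random", seed=0):
    """Choice (1): initialize rows from pre-trained word embeddings.
    Choice (2): represent OOV tokens as zeros or per-token random draws."""
    rng = np.random.RandomState(seed)
    matrix = np.zeros((len(vocab), EMB_DIM), dtype=np.float32)
    for i, token in enumerate(vocab):
        if token in pretrained:
            matrix[i] = pretrained[token]
        elif oov == "random":
            # a distinct random vector per unseen token, rather than one shared UNK row
            matrix[i] = rng.normal(scale=0.1, size=EMB_DIM)
        # oov == "zero": leave the row at zeros
    return matrix

The point of the sketch is that swapping the oov argument changes how every test-time unknown token is represented, without touching the model architecture at all.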


Related research

08/20/2020: An Experimental Study of Deep Neural Network Models for Vietnamese Multiple-Choice Reading Comprehension
Machine reading comprehension (MRC) is a challenging task in natural lan...

10/15/2021: Tracing Origins: Coref-aware Machine Reading Comprehension
Machine reading comprehension is a heavily-studied research and test fie...

01/25/2021: English Machine Reading Comprehension Datasets: A Survey
This paper surveys 54 English Machine Reading Comprehension datasets, wi...

03/15/2018: HFL-RC System at SemEval-2018 Task 11: Hybrid Multi-Aspects Model for Commonsense Reading Comprehension
This paper describes the system which got the state-of-the-art results a...

08/07/2018: Effective Character-augmented Word Embedding for Machine Reading Comprehension
Machine reading comprehension is a task to model relationship between pa...

06/24/2018: Subword-augmented Embedding for Cloze Reading Comprehension
Representation learning is the foundation of machine reading comprehensi...

02/24/2021: LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting
In this article, we present our methodologies for SemEval-2021 Task-4: R...
