Neural Models for Reasoning over Multiple Mentions using Coreference

04/16/2018
by Bhuwan Dhingra, et al.

Many problems in NLP require aggregating information from multiple mentions of the same entity which may be far apart in the text. Existing Recurrent Neural Network (RNN) layers are biased towards short-term dependencies and hence not suited to such tasks. We present a recurrent layer which is instead biased towards coreferent dependencies. The layer uses coreference annotations extracted from an external system to connect entity mentions belonging to the same cluster. Incorporating this layer into a state-of-the-art reading comprehension model improves performance on three datasets -- Wikihop, LAMBADA and the bAbi AI tasks -- with large gains when training data is scarce.
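For intuition, here is a minimal, hypothetical sketch (in PyTorch) of the idea described in the abstract: a recurrent layer that, besides the usual sequential connection, also propagates the hidden state of the most recent earlier mention in the same coreference cluster. The class name CorefAwareGRU, the use of two separate GRUCell modules, and the simple averaging rule for combining the two incoming states are illustrative assumptions, not the paper's exact parameterization.

```python
# Hypothetical sketch of a coreference-aware recurrent layer.
# Assumption: coreference edges come from an external system and are given
# as, for each token, the index of its antecedent mention (or -1 if none).
import torch
import torch.nn as nn


class CorefAwareGRU(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.hidden_size = hidden_size
        self.seq_cell = nn.GRUCell(input_size, hidden_size)    # sequential edge
        self.coref_cell = nn.GRUCell(input_size, hidden_size)  # coreference edge

    def forward(self, embeddings: torch.Tensor, antecedents: list) -> torch.Tensor:
        """embeddings: (seq_len, input_size) for a single sequence.
        antecedents[t]: index of the previous mention in the same coreference
        cluster as token t, or -1 if token t has no antecedent."""
        h_prev = torch.zeros(1, self.hidden_size)
        states = []
        for t in range(embeddings.size(0)):
            x_t = embeddings[t].unsqueeze(0)
            h_seq = self.seq_cell(x_t, h_prev)          # update along the sequence
            a = antecedents[t]
            if a >= 0:
                # also propagate information along the coreference edge
                h_cor = self.coref_cell(x_t, states[a])
                h_t = 0.5 * (h_seq + h_cor)             # naive combination, for illustration
            else:
                h_t = h_seq
            states.append(h_t)
            h_prev = h_t
        return torch.cat(states, dim=0)                 # (seq_len, hidden_size)


# Toy usage: a 5-token sequence where token 4 corefers with token 1.
layer = CorefAwareGRU(input_size=8, hidden_size=16)
emb = torch.randn(5, 8)
out = layer(emb, antecedents=[-1, -1, -1, -1, 1])
print(out.shape)  # torch.Size([5, 16])
```

In this sketch the coreference edges turn the token chain into a DAG, so information from an early mention can reach a later coreferent mention in a single step rather than being carried across every intermediate token by the sequential recurrence alone.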


