Emergent Predication Structure in Hidden State Vectors of Neural Readers

11/23/2016
by Hai Wang et al.

A significant number of neural architectures for reading comprehension have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of "predication structure" in the hidden state vectors of these readers. More specifically, we provide evidence that the hidden state vectors represent atomic formulas Φ[c], where Φ is a semantic property (predicate) and c is a constant symbol (an entity identifier).
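The claim can be made concrete with a toy sketch. The following is an illustration, not the paper's method: it assumes an atomic formula Φ[c] is encoded as a hidden vector whose two halves are a predicate embedding and an entity embedding, so that Φ and c can each be recovered by nearest-neighbour lookup on the corresponding half. All names (`predicates`, `entities`, `encode`, `decode`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding size

# Hypothetical embeddings (not from the paper): one vector per
# semantic property Phi and one per anonymized entity identifier c.
predicates = {"born_in": rng.normal(size=d), "capital_of": rng.normal(size=d)}
entities = {"@ent1": rng.normal(size=d), "@ent2": rng.normal(size=d)}

def encode(predicate: str, entity: str) -> np.ndarray:
    """Represent the atomic formula Phi[c] as the concatenation
    [e_Phi ; e_c] of a predicate half and an entity half."""
    return np.concatenate([predicates[predicate], entities[entity]])

def decode(h: np.ndarray) -> tuple[str, str]:
    """Recover (Phi, c) by nearest-neighbour lookup on each half."""
    p_half, e_half = h[:d], h[d:]
    p = min(predicates, key=lambda k: np.linalg.norm(predicates[k] - p_half))
    e = min(entities, key=lambda k: np.linalg.norm(entities[k] - e_half))
    return p, e

h = encode("born_in", "@ent2")
assert decode(h) == ("born_in", "@ent2")
```

Under this reading, answering a cloze query amounts to matching the predicate half extracted from the question against the predicate halves of hidden vectors built from the passage, then reading off the entity half of the best match.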


