Question-Answering with Grammatically-Interpretable Representations

05/23/2017
by Hamid Palangi, et al.

We introduce an architecture, the Tensor Product Recurrent Network (TPRN). In our application of TPRN, internal representations learned by end-to-end optimization in a deep neural network performing a textual question-answering (QA) task can be interpreted using basic concepts from linguistic theory. No performance penalty need be paid for this increased interpretability: the proposed model performs comparably to a state-of-the-art system on the SQuAD QA task. The internal representation which is interpreted is a Tensor Product Representation: for each input word, the model selects a symbol to encode the word, and a role in which to place the symbol, and binds the two together. The selection is via soft attention. The overall interpretation is built from interpretations of the symbols, as recruited by the trained model, and interpretations of the roles as used by the model. We find support for our initial hypothesis that symbols can be interpreted as lexical-semantic word meanings, while roles can be interpreted as approximations of grammatical roles (or categories) such as subject, wh-word, determiner, etc. Fine-grained analysis reveals specific correspondences between the learned roles and parts of speech as assigned by a standard tagger (Toutanova et al. 2003), and finds several discrepancies in the model's favor. In this sense, the model learns significant aspects of grammar, after having been exposed solely to linguistically unannotated text, questions, and answers: no prior linguistic knowledge is given to the model. What is given is the means to build representations using symbols and roles, with an inductive bias favoring use of these in an approximately discrete manner.
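To make the binding mechanism concrete, the sketch below shows how a word's symbol and role could be soft-selected and bound into a Tensor Product Representation via an outer product, with per-word bindings summed into a single representation. This is a minimal illustration of the idea, not the paper's implementation: the dimensions, matrix names, random initialization, and NumPy setting are all assumptions made for clarity.

```python
import numpy as np

# Illustrative sketch of TPR binding with soft attention.
# All sizes and names here are hypothetical, chosen for readability.

rng = np.random.default_rng(0)

n_symbols, d_symbol = 50, 16   # hypothetical symbol ("filler") inventory
n_roles, d_role = 20, 8        # hypothetical role inventory

S = rng.standard_normal((n_symbols, d_symbol))  # learned symbol embeddings
R = rng.standard_normal((n_roles, d_role))      # learned role embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bind(symbol_logits, role_logits):
    """Soft-select one symbol and one role by attention, then bind
    them together with an outer (tensor) product."""
    a_s = softmax(symbol_logits)   # attention over symbols
    a_r = softmax(role_logits)     # attention over roles
    symbol = a_s @ S               # soft-selected symbol vector
    role = a_r @ R                 # soft-selected role vector
    return np.outer(symbol, role)  # (d_symbol, d_role) binding

# Stand-ins for the network's per-word outputs over a 5-word input;
# the per-word bindings are summed into one TPR for the sequence.
symbol_logits = rng.standard_normal((5, n_symbols))
role_logits = rng.standard_normal((5, n_roles))
tpr = sum(bind(s, r) for s, r in zip(symbol_logits, role_logits))
print(tpr.shape)  # (16, 8)
```

With sharply peaked attention distributions, each word contributes an approximately discrete symbol/role pair, which is the inductive bias the abstract describes; interpreting the model then amounts to inspecting which symbols and roles the trained network recruits for each word.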


Related research

08/20/2017  Learning to Paraphrase for Question Answering
07/20/2018  Question-Aware Sentence Gating Networks for Question and Answering
02/07/2017  Question Answering through Transfer Learning from Large Fine-grained Supervision Data
11/07/2019  Contextualized Sparse Representation with Rectified N-Gram Attention for Open-Domain Question Answering
09/26/2017  Tensor Product Generation Networks for Deep NLP Modeling
02/15/2018  Deep contextualized word representations
04/30/2020  Robust Question Answering Through Sub-part Alignment
