Empirical Study on Deep Learning Models for Question Answering

10/26/2015
by Yang Yu, et al.

In this paper we explore deep learning models with a memory component or an attention mechanism for the question answering (QA) task. We combine and compare three models, Neural Machine Translation, Neural Turing Machines, and Memory Networks, on a simulated QA data set. This paper is the first to apply Neural Machine Translation and Neural Turing Machines to QA tasks. Our results suggest that combining attention and memory has the potential to solve certain QA problems.
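The core mechanism shared by the models the abstract compares, attention over a set of memory slots, can be sketched minimally. The following NumPy sketch is illustrative only (the function names and the toy data are ours, not from the paper): the question embedding scores each stored sentence embedding, the scores are normalized with a softmax, and the weighted sum is returned as the retrieved context.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_over_memory(question_vec, memory_vecs):
    """Memory-Network-style attention: score each memory slot against
    the question, normalize into attention weights, and return the
    weighted sum of slots as the retrieved context vector."""
    scores = memory_vecs @ question_vec      # one score per memory slot
    weights = softmax(scores)                # attention distribution
    context = weights @ memory_vecs          # weighted sum of slots
    return context, weights

# Toy example: a memory of 3 sentence embeddings in 4 dimensions.
rng = np.random.default_rng(0)
memory = rng.normal(size=(3, 4))
question = memory[1] + 0.1 * rng.normal(size=4)  # question close to slot 1
context, weights = attend_over_memory(question, memory)
```

In the full models, the embeddings would come from learned encoders and the retrieved context would feed an answer decoder; this sketch only shows the soft-addressing step that attention and external-memory architectures have in common.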


