Weaver: Deep Co-Encoding of Questions and Documents for Machine Reading

by Martin Raison, et al.

This paper aims to improve how machines answer questions directly from text, with a focus on models that correctly answer many types of questions across diverse sources: individual documents as well as large collections of them. To that end, we introduce the Weaver model, which relates a question to a textual context by weaving layers of recurrent networks, making as few assumptions as possible about how information from the question and the context should be combined to form the answer. We show empirically on six datasets that Weaver performs well under multiple conditions: it produces solid results on the very popular SQuAD dataset (Rajpurkar et al., 2016), solves almost all bAbI tasks (Weston et al., 2015), and greatly outperforms state-of-the-art methods for open-domain question answering from text (Chen et al., 2017).
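The core idea of "weaving" can be sketched as stacking bidirectional recurrent layers over the question and context jointly, so that information flows between the two at every layer rather than only in a final matching step. The sketch below is illustrative only, not the paper's actual architecture: the function names (`rnn`, `weaver_encode`), the plain tanh recurrence, and the simple concatenation of question and context tokens are all assumptions made for brevity.

```python
import numpy as np

def rnn(x, W_in, W_h):
    """Run a simple tanh RNN over sequence x of shape (T, d); return (T, h) states."""
    h = np.zeros(W_h.shape[0])
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_in + h @ W_h)
        states.append(h)
    return np.stack(states)

def weaver_encode(question, context, layers):
    """Jointly ('weave') encode question and context.

    At each layer, a bidirectional RNN runs over the concatenated
    [question; context] token sequence, so question and context
    representations condition on each other at every depth.
    Returns per-token context representations of shape (T_ctx, 2h).
    """
    x = np.concatenate([question, context], axis=0)
    for W_in, W_h in layers:
        fwd = rnn(x, W_in, W_h)              # left-to-right pass
        bwd = rnn(x[::-1], W_in, W_h)[::-1]  # right-to-left pass
        x = np.concatenate([fwd, bwd], axis=1)  # (T, 2h) fed to next layer
    return x[len(question):]
```

For example, with embedding size `d=4` and hidden size `h=3`, a two-layer stack takes a first-layer input projection of shape `(4, 3)` and a second-layer projection of shape `(6, 3)`, since each layer emits `2h`-dimensional states.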

