Hierarchical Question Answering for Long Documents

11/06/2016 · by Eunsol Choi, et al. · Google, Tel Aviv University, University of Washington

We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving the performance of state-of-the-art models. While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. Inspired by how people first skim a document, identify relevant parts, and carefully read those parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences with a more expensive RNN for producing the answer from those sentences. We treat sentence selection as a latent variable trained jointly from the answer alone using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of Wikireading and on a new dataset, while speeding up the model by 3.5x-6.7x.




1 Introduction

Reading a document and answering questions about its content are among the hallmarks of natural language understanding.

Figure 1: Hierarchical question answering: the model first selects relevant sentences that produce a document summary (d̂) for the given query (x), and then generates an answer (y) based on the summary d̂ and the query x. The diagram shows the query (x) and document (d) feeding into latent sentence selection, which produces the document summary (d̂) consumed by the RNN answer generation module.

Recently, interest in question answering (QA) from unstructured documents has increased along with the availability of large scale datasets for reading comprehension Hermann et al. (2015); Hill et al. (2015); Rajpurkar et al. (2016); Onishi et al. (2016); Nguyen et al. (2016); Trischler et al. (2016a).

Current state-of-the-art approaches for QA over documents are based on recurrent neural networks (RNNs) that encode the document and the question to determine the answer Hermann et al. (2015); Chen et al. (2016); Kumar et al. (2016); Kadlec et al. (2016); Xiong et al. (2016). While such models have access to all the relevant information, they are slow because the model needs to be run sequentially over possibly thousands of tokens, and the computation is not parallelizable. In fact, such models usually truncate the documents and consider only a limited number of tokens Miller et al. (2016); Hewlett et al. (2016). Inspired by studies on how people answer questions by first skimming the document, identifying relevant parts, and carefully reading these parts to produce an answer Masson (1983), we propose a coarse-to-fine model for question answering.

Our model takes a hierarchical approach (see Figure 1), where first a fast model is used to select a few sentences from the document that are relevant for answering the question Yu et al. (2014); Yang et al. (2016a). Then, a slow RNN is employed to produce the final answer from the selected sentences. The RNN is run over a fixed number of tokens, regardless of the length of the document. Empirically, our model encodes the text up to 6.7 times faster than the base model, which reads the first few paragraphs, while having access to four times more tokens.

A defining characteristic of our setup is that an answer does not necessarily appear verbatim in the input (the genre of a movie can be determined even if not mentioned explicitly). Furthermore, the answer often appears multiple times in the document in spurious contexts (the year ‘2012’ can appear many times while only once in relation to the question). Thus, we treat sentence selection as a latent variable that is trained jointly with the answer generation model from the answer only, using reinforcement learning. Treating sentence selection as a latent variable has been explored in classification Yessenalina et al. (2010); Lei et al. (2016); however, to our knowledge, it has not been applied to question answering.

We find that jointly training sentence selection and answer generation is especially helpful when locating the sentence containing the answer is hard. We evaluate our model on the Wikireading dataset Hewlett et al. (2016), focusing on examples where the document is long and sentence selection is challenging, and on a new dataset called Wikisuggest that contains more natural questions gathered from a search engine.

To conclude, we present a modular framework and learning procedure for QA over long text. It captures a limited form of document structure such as sentence boundaries and deals with long documents or potentially multiple documents. Experiments show improved performance compared to the state of the art on the subset of Wikireading, comparable performance on other datasets, and a 3.5x-6.7x speed up in document encoding, while allowing access to much longer documents.

d:
  s1: The 2011 Joplin tornado was a catastrophic EF5-rated multiple-vortex tornado that struck Joplin, Missouri …
  s2: It was the third tornado to strike Joplin since May 1971. …
  s3: Overall, the tornado killed 158 people, injured some 1,150 others, and caused damages …
x: how many people died in joplin mo tornado
y: 158 people

Figure 2: A training example containing a document d, a question x and an answer y in the WikiSuggest dataset. In this example, the sentence s3 is necessary to answer the question.

2 Problem Setting

Given a training set of question-document-answer triples (x, d, y), our goal is to learn a model that produces an answer y for a question-document pair (x, d). A document d is a list of sentences s_1, …, s_n, and we assume that the answer can be produced from a small latent subset of the sentences. Figure 2 illustrates a training example in which one sentence from this subset suffices to answer the question.

3 Data

We evaluate on WikiReading, WikiReading Long, and a new dataset, WikiSuggest.

Wikireading Hewlett et al. (2016) is a QA dataset automatically generated from Wikipedia and Wikidata: given a Wikipedia page about an entity and a Wikidata property, such as profession or gender, the goal is to infer the target value based on the document. Unlike other recently released large-scale datasets Rajpurkar et al. (2016); Trischler et al. (2016a), Wikireading does not annotate answer spans, making sentence selection more challenging.

Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can usually be inferred from the first few sentences. Thus, the data is not ideal for testing a sentence selection model against a model that simply uses the first few sentences. Table 1 quantifies this intuition: we consider sentences containing the answer as a proxy for sentences that should be selected, and report how often the answer string appears in the document. Additionally, we report how frequently this proxy oracle sentence is the first sentence. We observe that in Wikireading, the answer appears verbatim in 47.1% of the examples, and in 75% of those the match is in the first sentence. Thus, the importance of modeling sentence selection is limited.

Dataset        % answer       avg # of        % match
               string exists  answer matches  first sent
Wikireading    47.1           1.22            75.1
WR-Long        50.4           2.18            31.3
Wikisuggest    100            13.95           33.6
Table 1: Statistics on string matches of the answer in the document. The third column only considers examples with an answer match. Often the answer string is missing, or appears many times while being relevant to the query only once.
Dataset        # of uniq.  # of      # of words  # of tokens
               queries     examples  / query     / doc.
Wikireading    858         16.03M    2.35        568.9
WR-Long        239         1.97M     2.14        1200.7
Wikisuggest    3.47M       3.47M     5.03        5962.2
Table 2: Data statistics.

To remedy that, we filter Wikireading to ensure a more even distribution of answers throughout the document. We prune short documents with fewer than 10 sentences, and only consider Wikidata properties for which the best model of Hewlett et al. (2016) obtains an accuracy of less than 60%. This prunes out properties such as Gender, Given Name, and Instance Of (these three relations alone account for 33% of the data). The resulting Wikireading Long dataset contains 1.97M examples, where the answer appears in the document in 50.4% of the examples, and appears in the first sentence only 31% of the time. On average, the documents in Wikireading Long contain 1.2k tokens, more than those of the SQuAD (average 122 tokens) or CNN (average 763 tokens) datasets (see Table 2). Table 1 shows that the exact answer string is often missing from the document in Wikireading. This is because Wikidata statements include properties such as Nationality, which are not explicitly mentioned but can still be inferred. A drawback of this dataset is that the queries, Wikidata properties, are not natural language questions and are limited to 858 properties.

To model more realistic language queries, we collect the Wikisuggest dataset as follows. We use the Google Suggest API to harvest natural language questions and submit them to Google Search. Whenever Google Search returns a box with a short answer from Wikipedia (Figure 3), we create an example from the question, answer, and the Wikipedia document. If the answer string is missing from the document, this often implies a spurious question-answer pair, such as (‘what time is half time in rugby’, ‘80 minutes, 40 minutes’). Thus, we pruned question-answer pairs without the exact answer string. We examined fifty examples after filtering and found that 54% were well-formed question-answer pairs where we can ground answers in the document, 20% contained answers without textual evidence in the document (the answer string exists in an irrelevant context), and 26% contained incorrect QA pairs such as the last two examples in Figure 3.

Query                                  Answer
what year did virgina became a state   1788
general manager of smackdown           Theodore Long
minnesota viking colors                purple
coco martin latest movies              maybe this time
longest railway station in asia        Gorakhpur
son from modern family                 Claire Dunphy
north dakota main religion             Christian
lands end’ brand                       Lands’ End
wdsu radio station                     WCBE
Figure 3: Example queries and answers from Wikisuggest.

4 Model

Our model has two parts (Figure 1): a fast sentence selection model (Section 4.1) that defines a distribution over sentences given the input question (x) and the document (d), and a more costly answer generation model (Section 4.3) that generates an answer given the question and a document summary d̂ (Section 4.2) that focuses on the relevant parts of the document.

4.1 Sentence Selection Model

Following recent work on sentence selection Yu et al. (2014); Yang et al. (2016b), we build a feed-forward network to define a distribution p(s | x, d) over the sentences. We consider three simple sentence representations: a bag-of-words (BoW) model, a chunking model, and a (parallelizable) convolutional model. These models are efficient at dealing with long documents, but do not fully capture the sequential nature of text.

BoW Model

Given a sentence s_i, we denote by BoW(s_i) the bag-of-words representation that averages the embeddings of the tokens in s_i. To define a distribution over the document sentences, we employ a standard attention model (e.g., Hermann et al. (2015)), where the BoW representation of the query x is concatenated to the BoW representation of each sentence s_i, and then passed through a single-layer feed-forward network:

    p(s_i | x, d) = softmax_i( v · σ( W [BoW(x) ; BoW(s_i)] ) )

where [· ; ·] indicates row-wise concatenation, σ is the network nonlinearity, and the matrix W, the vector v, and the word embeddings are learned parameters.
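The scoring step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the ReLU nonlinearity and all variable names (`emb`, `W`, `v`) are assumptions for the sketch.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def bow(token_ids, emb):
    # Bag-of-words: average the embeddings of the tokens.
    return emb[token_ids].mean(axis=0)

def sentence_scores(query_ids, sentences_ids, emb, W, v):
    # Concatenate the query BoW with each sentence BoW, pass the result
    # through a single-layer feed-forward net, and normalize with softmax.
    q = bow(query_ids, emb)
    scores = []
    for sent_ids in sentences_ids:
        h = np.concatenate([q, bow(sent_ids, emb)])
        scores.append(v @ np.maximum(W @ h, 0.0))  # ReLU hidden layer (assumed)
    return softmax(np.array(scores))
```

The output is a distribution over sentences, which the later sections use either to sample a summary (hard attention) or to blend tokens (soft attention).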

Chunked BoW Model

To get finer granularity, we split sentences into fixed-size smaller chunks (seven tokens per chunk) and score each chunk separately Miller et al. (2016). This is beneficial if questions are answered with sub-sentential units, as it allows the model to learn attention over different chunks. We split a sentence s_i into a fixed number of chunks c_{i,1}, …, c_{i,J}, generate a BoW representation for each chunk, and score it exactly as in the BoW model. We obtain a distribution over chunks, and compute sentence probabilities by marginalizing over chunks from the same sentence. Let p_c(c_{i,j} | x, d) be the distribution over chunks from all sentences; then:

    p(s_i | x, d) = Σ_j p_c(c_{i,j} | x, d)

with the same parameters as in the BoW model.
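The chunking and marginalization steps can be sketched as follows; this is an illustrative reading of the text, with hypothetical helper names, not the paper's code.

```python
import numpy as np

def chunk_tokens(token_ids, chunk_size=7):
    # Split a sentence into fixed-size chunks (seven tokens per chunk).
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]

def sentence_probs_from_chunks(chunk_probs, chunk_owner):
    # chunk_probs: distribution over all chunks from all sentences.
    # chunk_owner[j]: index of the sentence that chunk j came from.
    # Marginalize: p(sentence i) = sum of p(chunk j) over chunks of i.
    n_sents = max(chunk_owner) + 1
    p_sent = np.zeros(n_sents)
    for p, owner in zip(chunk_probs, chunk_owner):
        p_sent[owner] += p
    return p_sent
```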

Convolutional Neural Network Model

While our sentence selection model is designed to be fast, we also explore a convolutional neural network (CNN) that can compose the meaning of nearby words. A CNN is still efficient, since all filters can be computed in parallel. Following previous work Kim (2014); Kalchbrenner et al. (2014), we concatenate the embeddings of tokens in the query x and the sentence s_i, and run a convolutional layer with f filters of width w over the concatenated embeddings. This results in an f-dimensional feature vector for every span of length w, and we employ max-over-time pooling Collobert et al. (2011) to get a final representation. We then compute p(s_i | x, d) by passing this representation through a single-layer feed-forward network as in the BoW model.
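The convolution-plus-max-pooling step can be written directly with NumPy. This is a sketch under the stated assumptions (filter count f and width w are free parameters; the implementation details are not specified in the text).

```python
import numpy as np

def cnn_features(token_embs, filters):
    # token_embs: (T, d) embeddings of the concatenated query + sentence.
    # filters: (f, w, d) -- f convolution filters of width w.
    f, w, d = filters.shape
    T = token_embs.shape[0]
    # One flattened window per span of length w.
    spans = np.stack([token_embs[t:t + w].ravel() for t in range(T - w + 1)])
    feats = spans @ filters.reshape(f, w * d).T   # (T - w + 1, f) span features
    return feats.max(axis=0)                      # max-over-time pooling -> (f,)
```

The pooled f-dimensional vector then plays the role of the sentence representation that is scored by the feed-forward network, as in the BoW model.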

4.2 Document Summary

After computing attention over sentences, we create a summary that focuses on the document parts related to the question using deterministic soft attention or stochastic hard attention. Hard attention is more flexible, as it can focus on multiple sentences, while soft attention is easier to optimize and retains information from multiple sentences.

Hard Attention

We sample a sentence ŝ ~ p(s | x, d) and fix the document summary to be that sentence during training. At test time, we choose the most probable sentence. To extend the document summary to contain more information, we can sample K sentences without replacement from the document and define the summary to be the concatenation of the sampled sentences.
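A minimal sketch of hard-attention summary construction, assuming a NumPy probability vector over sentences (the function names are hypothetical):

```python
import numpy as np

def sample_summary(p_sent, k=1, rng=None):
    # Training: sample k sentence indices without replacement from
    # p(s | x, d); the summary is the concatenation of those sentences.
    rng = rng or np.random.default_rng()
    k = min(k, np.count_nonzero(p_sent))
    return rng.choice(len(p_sent), size=k, replace=False, p=p_sent)

def greedy_summary(p_sent, k=1):
    # Test time: pick the k most probable sentences instead of sampling.
    return np.argsort(p_sent)[::-1][:k]
```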

Soft Attention

In the soft attention model Bahdanau et al. (2015), we compute a weighted average of the tokens in the sentences according to p(s | x, d). More explicitly, let d̂_m be the m-th token of the document summary. Then, fixing the length of every sentence to M tokens (long sentences are truncated and short ones are padded), the blended tokens are computed as follows:

    d̂_m = Σ_i p(s_i | x, d) · w_{i,m}

where w_{i,m} is the m-th word in the i-th sentence s_i.
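The token blending above is a single weighted sum. As a sketch (hypothetical names; sentences assumed already padded/truncated to M tokens):

```python
import numpy as np

def soft_summary(p_sent, sent_embs):
    # sent_embs: (n_sentences, M, d) -- every sentence padded or truncated
    # to M tokens. The m-th summary token is the average of the m-th token
    # of every sentence, weighted by p(s_i | x, d).
    return np.einsum('i,imd->md', p_sent, sent_embs)
```

When the distribution puts all its mass on one sentence, the soft summary reduces to that sentence, which is why a low-entropy soft attention can mimic hard attention.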

As the answer generation models (Section 4.3) take a sequence of vectors as input, we average the tokens at the word level. This gives the hard attention an advantage since it samples a “real” sentence without mixing words from different sentences. Conversely, soft attention is trained more easily, and has the capacity to learn a low-entropy distribution that is similar to hard attention.

4.3 Answer Generation Model

State-of-the-art question answering models Chen et al. (2016); Seo et al. (2016) use RNNs to encode the document and question and to select the answer. We focus on a hierarchical model with fast sentence selection, and do not subscribe to a particular answer generation architecture.

Here we implemented the state-of-the-art word-level sequence-to-sequence model with placeholders described by Hewlett et al. (2016). This model can produce answers that do not appear in the sentence verbatim. It takes the query tokens and the document (or document summary) tokens as input and encodes them with a Gated Recurrent Unit (GRU; Cho et al., 2014). Then, the answer is decoded with another GRU, defining a distribution over answers. In this work, we modified the original RNN so that the word embeddings of the RNN decoder input and output are shared with the original word embeddings.

5 Learning

We consider three approaches for learning the model parameters θ: (1) a pipeline model, where we use distant supervision to train a sentence selection model independently from an answer generation model; (2) the hard attention model, optimized with the REINFORCE algorithm Williams (1992); and (3) the soft attention model, which is fully differentiable and is optimized end-to-end with backpropagation.

Distant Supervision

While we do not have explicit supervision for sentence selection, we can define a simple heuristic for labeling sentences. We define the gold sentence to be the first sentence that has a full match of the answer string, or the first sentence in the document if no full match exists. By labeling gold sentences, we can train sentence selection and answer generation independently with standard supervised learning, maximizing the log-likelihood of the gold sentence and answer given the document and query. Let y* and s* be the target answer and sentence, where s* also serves as the document summary. The objective is to maximize:

    log p(s* | x, d) + log p(y* | x, s*)

Since at test time we do not have access to the target sentence needed for answer generation, we replace it by the model prediction ŝ = argmax_s p(s | x, d).
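The gold-sentence heuristic is simple enough to state in code; a sketch directly following the text (string containment stands in for a "full match"):

```python
def gold_sentence(sentences, answer):
    # Distant supervision heuristic: the gold sentence is the first
    # sentence with a full match of the answer string, or the first
    # sentence in the document if no full match exists.
    for i, sent in enumerate(sentences):
        if answer in sent:
            return i
    return 0
```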

Reinforcement Learning

Because the target sentence is missing, we use reinforcement learning, where our action is sentence selection and our goal is to select sentences that lead to a high reward. We define the reward for selecting a sentence s as the log probability of the correct answer given that sentence, that is, R(s) = log p(y | x, s). The learning objective is then to maximize the expected reward:

    E_{s ~ p(s | x, d)} [ log p(y | x, s) ]

Following REINFORCE Williams (1992), we approximate the gradient of the objective with a sample ŝ ~ p(s | x, d):

    ∇ ≈ log p(y | x, ŝ) · ∇ log p(ŝ | x, d) + ∇ log p(y | x, ŝ)

Sampling K sentences is similar and omitted for brevity.
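The sentence-selection term of the single-sample gradient estimate can be sketched with respect to the selection logits (the answer-generation term is ordinary backpropagation and is omitted; all names here are illustrative):

```python
import numpy as np

def reinforce_grad(logits, log_p_answer, rng=None):
    # One-sample REINFORCE estimate of the gradient, w.r.t. the sentence
    # selection logits, of E_{s ~ p(s|x,d)}[ log p(y | x, s) ].
    # log_p_answer[i] is the reward R(s_i) = log p(y | x, s_i).
    rng = rng or np.random.default_rng()
    p = np.exp(logits - logits.max())
    p /= p.sum()
    s = rng.choice(len(p), p=p)      # sample a sentence
    reward = log_p_answer[s]
    grad_log_p = -p                  # d log p(s) / d logits for softmax
    grad_log_p[s] += 1.0             # ... is (one_hot(s) - p)
    return reward * grad_log_p       # R(s) * grad log p(s | x, d)
```

Because (one_hot(s) - p) sums to zero, the estimate only shifts probability mass between sentences, scaled by the reward.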

Training with REINFORCE is known to be unstable due to the high variance induced by sampling. To reduce variance, we use curriculum learning: we start training with distant supervision and gently transition to reinforcement learning, similar to DAgger Ross et al. (2011). Given an example, we define the probability of using the distant supervision objective at each step as r^e, where r is the decay rate (tuned on the development set) and e is the index of the current training epoch.
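The curriculum schedule amounts to a coin flip per example; a sketch with an assumed decay rate of 0.8 (the paper tunes this value on the development set):

```python
import numpy as np

def use_distant_supervision(epoch, decay_rate=0.8, rng=None):
    # With probability decay_rate ** epoch, train this example with the
    # distant-supervision objective; otherwise use REINFORCE. The mix
    # shifts smoothly toward reinforcement learning as epochs progress.
    rng = rng or np.random.default_rng()
    return rng.random() < decay_rate ** epoch
```

At epoch 0 the probability is 1, so training begins purely with distant supervision.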

Soft Attention

We train the soft attention model by maximizing the log likelihood of the correct answer y given the input question x and document d. Recall that the answer generation model takes as input the query x and document summary d̂, and since d̂ is an average of sentences weighted by the sentence selection probabilities, the objective is differentiable and is trained end-to-end.

6 Experiments

Experimental Setup

We used 70% of the data for training, 10% for development, and 20% for testing in all datasets. We used the first 35 sentences in each document as input to the hierarchical models, where each sentence has a maximum length of 35 tokens. Similar to Miller et al. (2016), we add the first five words in the document (typically the title) at the end of each sentence sequence for WikiSuggest. We add the sentence index as a one-hot vector to the sentence representation. We coarsely tuned and fixed most hyper-parameters for all models, and separately tuned the learning rate and gradient clipping coefficients for each model on the development set. The details are reported in the supplementary material.

Evaluation Metrics

Our main evaluation metric is answer accuracy, the proportion of questions answered correctly. For sentence selection, since we do not know which sentence contains the answer, we report approximate sentence selection accuracy by checking whether the selected sentence contains the answer string. For the soft attention model, we treat the sentence with the highest probability as the predicted sentence.
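The approximate metric can be sketched as follows; this is one plausible reading of the evaluation (examples whose document never contains the answer are skipped, since no proxy gold sentence exists for them), with hypothetical names throughout:

```python
def approx_selection_accuracy(predictions, documents, answers):
    # predictions[i]: index of the sentence the model selected.
    # documents[i]: list of sentence strings; answers[i]: answer string.
    # A prediction counts as correct if the selected sentence contains
    # the answer string; unmatched documents are excluded.
    correct, total = 0, 0
    for pred_idx, sents, ans in zip(predictions, documents, answers):
        if not any(ans in s for s in sents):
            continue  # no proxy gold sentence for this example
        total += 1
        correct += int(ans in sents[pred_idx])
    return correct / total if total else 0.0
```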

Models and Baselines

The models Pipeline, Reinforce, and SoftAttend correspond to the learning objectives in Section 5. We compare these models against the following baselines:

  • First always selects the first sentence of the document. The answer appears in the first sentence in 33% and 15% of documents in Wikisuggest and Wikireading Long, respectively.

  • Base is our re-implementation of the best model of Hewlett et al. (2016), consuming the first 300 tokens. We experimented with providing additional tokens to match the document length available to the hierarchical models, but this performed poorly. (Our numbers on Wikireading outperform previously reported numbers due to modifications in implementation and better optimization.)

  • Oracle selects the first sentence with the answer string if it exists, or otherwise the first sentence in the document.

Dataset       Learning               Accuracy
Wikireading   First                  26.7
Long          Base                   40.1
              Oracle                 43.9
              Pipeline               36.8
              SoftAttend             38.3
              Reinforce (K=1)        40.1
              Reinforce (K=2)        42.2
Wikisuggest   First                  44.0
              Base                   46.7
              Oracle                 60.0
              Pipeline               45.3
              SoftAttend             45.4
              Reinforce (K=1)        45.4
              Reinforce (K=2)        45.8
Wikireading   First                  71.0
              Hewlett et al. (2016)  71.8
              Base                   75.6
              Oracle                 74.6
              SoftAttend             71.6
              Pipeline               72.4
              Reinforce (K=1)        73.0
              Reinforce (K=2)        74.5
Table 3: Answer prediction accuracy on the test set. K is the number of sentences in the document summary.
Figure 4: Runtime for document encoding on an Intel Xeon CPU E5-1650 @3.20GHz on Wikireading at test time. The boxplot represents the throughput of Base and each line plot shows the proposed models’ speed gain over Base. Exact numbers are reported in the supplementary material.

Answer Accuracy Results

Table 3 summarizes answer accuracy on all datasets. We use the BoW encoder for sentence selection as it is the fastest. The proposed hierarchical models match or exceed the performance of Base, while reducing the number of RNN steps significantly, from 300 to 35 (or 70 for K=2), and allowing access to later parts of the document. Figure 4 reports the speed gain of our system. While throughput at training time can be improved by increasing the batch size, at test time real-life QA systems use batch size 1, where Reinforce obtains a 3.5x-6.7x speedup (for K=2 or K=1, respectively). In all settings, Reinforce was at least three times faster than the Base model.

All models outperform the First baseline, and utilizing the proxy oracle sentence (Oracle) improves performance on Wikisuggest and Wikireading Long. In Wikireading, where the proxy oracle sentence is often missing and documents are short, Base outperforms Oracle.

Jointly learning answer generation and sentence selection, Reinforce outperforms Pipeline, which relies on a noisy supervision signal for sentence selection. The improvement is larger in Wikireading Long, where the approximate supervision for sentence selection is missing in 51% of examples, compared to 22% of examples in Wikisuggest. (These numbers are lower than in Table 1 because we cropped sentences and documents, as mentioned above.)

On Wikireading Long, Reinforce outperforms all other models (excluding Oracle, which has access to gold labels at test time). On the other datasets, Base performs slightly better than the proposed models, at the cost of speed: there, the answers are concentrated in the first few sentences, and Base is advantageous for categorical questions (such as Gender), gathering bits of evidence from the whole document. Encouragingly, our system almost reaches the performance of Oracle on Wikireading, showing strong results in a limited-token setting.

Sampling an additional sentence into the document summary increased performance in all datasets, illustrating the flexibility of hard attention compared to soft attention. Additional sampling allows recovery from mistakes in Wikireading Long, where sentence selection is challenging. (Sampling more sentences helps the pipeline methods less.) Comparing hard attention to soft attention, we observe that Reinforce performed better than SoftAttend. The attention distribution learned by the soft attention model was often less peaked, generating noisier summaries. (We provide a visualization of the attention distribution for different learning methods in the supplementary material.)

Sentence Selection Results

Dataset      Learning    Model      Accuracy
Wikireading  Pipeline    CNN        70.7
Long                     BoW        69.2
                         ChunkBoW   74.6
             Reinforce   CNN        74.2
                         BoW        72.2
                         ChunkBoW   74.4
             First                  31.3
             SoftAttend  (BoW)      70.1
Wikisuggest  Pipeline    CNN        62.3
                         BoW        67.5
                         ChunkBoW   57.4
             Reinforce   CNN        64.6
                         BoW        67.3
                         ChunkBoW   59.3
             First                  42.6
             SoftAttend  (BoW)      49.9
Table 4: Approximate sentence selection accuracy on the development set for all models. We use Oracle to find a proxy gold sentence and report the proportion of times each model selects the proxy sentence.

Table 4 reports sentence selection accuracy: the proportion of times each model selects the proxy gold sentence when it is found by Oracle. In Wikireading Long, Reinforce finds the approximate gold sentence in 74.4% of the examples where the answer is in the document. In Wikisuggest, performance is at 67.5%, mostly due to noise in the data. Pipeline performs slightly better, as it is directly trained toward our noisy evaluation. However, not all sentences that contain the answer are useful for answering the question (first example in Table 5). Reinforce learned to choose sentences that are likely to generate a correct answer rather than proxy gold sentences, improving the final answer accuracy. On Wikireading Long, the more complex models (CNN and ChunkBoW) outperform the simple BoW model, while on Wikisuggest BoW performed best.

WikiReading Long (WR Long)

Error Type No evidence in doc.
(Query, Answer) (place_of_death, Saint Petersburg)
System Output Crimean Peninsula
1 11.7 Alexandrovich Friedmann ( also spelled Friedman or [Fridman] , Russian :
4 3.4 Friedmann was baptized … and lived much of his life in Saint Petersburg .
25 63.6 Friedmann died on September 16 , 1925 , at the age of 37 , from typhoid fever that
he contracted while returning from a vacation in Crimean Peninsula .
Error Type Error in sentence selection
(Query, Answer) (position_played_on_team_speciality, power forward)
System Output point guard
1 37.8 James Patrick Johnson (born February 20 , 1987) is an American professional basketball player
for the Toronto Raptors of the National Basketball Association ( NBA ).
3 22.9 Johnson was the starting power forward for the Demon Deacons of Wake Forest University

WikiSuggest (WS)

Error Type Error in answer generation
(Query, Answer) (david blaine’s mother, Patrice Maureen White)
System Output Maureen
1 14.1 David Blaine (born David Blaine White; April 4, 1973) is an American magician, illusionist
8 22.6 Blaine was born and raised in, Brooklyn , New York the son of Patrice Maureen White …
Error Type Noisy query & answer
(Query, Answer) (what are dried red grapes called, dry red wines)
System Output Chardonnay
1 2.8 Burgundy wine ( French : Bourgogne or vin de Bourgogne ) is wine made in the
2 90.8 The most famous wines produced here … are dry red wines made from Pinot noir grapes …
Correctly Predicted Examples

WR Long

(Query, Answer) (position_held, member of the National Assembly of South Africa)
1 98.4 Anchen Margaretha Dreyer (born 27 March 1952) is a South African politician, a Member of
Parliament for the opposition Democratic Alliance , and currently
(Query, Answer) (headquarters_locations, Solihull)
1 13.8 LaSer UK is a provider of credit and loyalty programmes , operating in the UK and Republic
4 82.3 The company ’s operations are in Solihull and Belfast where it employs 800 people .


(Query, Answer) (avril lavigne husband, Chad Kroeger)
1 17.6 Avril Ramona Lavigne ( /ˈævrɪl ləˈviːn/ ; French pronunciation: [avʁil laviɲ] ) ;…
23 68.4 Lavigne married Nickelback frontman , Chad Kroeger , in 2013 . Avril Ramona Lavigne was
Table 5: Example outputs from Reinforce (K=1) with the BoW sentence selection model. First column: sentence index i. Second column: attention p(s_i | x, d), in %. Last column: sentence text.
Error type                   WR Long  Wikisuggest
No evidence in doc.          29       8
Error in answer generation   13       15
Noisy query & answer         0        24
Error in sentence selection  8        3
Table 6: Manual error analysis on 50 errors from the development set for Reinforce (K=1).

Qualitative Analysis

We categorized the primary reasons for the errors in Table 6 and present an example for each error type in Table 5. All examples are from Reinforce with BoW sentence selection. The most frequent source of error in Wikireading Long was lack of evidence in the document. While the dataset does not contain false answers, the document does not always provide supporting evidence (examples of properties without clues are elevation above sea level and sister). Interestingly, the answer string can still appear in the document, as in the first example in Table 5: ‘Saint Petersburg’ appears in the document (4th sentence). Answer generation at times failed to produce the answer even when the correct sentence was selected; this was especially pronounced for long answers. For the automatically collected Wikisuggest dataset, noisy question-answer pairs were problematic, as discussed in Section 3; however, the models frequently guessed the spurious answer. We attribute the higher proxy performance in sentence selection on Wikisuggest to this noise. In manual analysis, sentence selection was harder in Wikireading Long, explaining why sampling two sentences improved performance.

In the first correct prediction (Table 5), the model generates the answer, even when it is not in the document. The second example shows when our model spots the relevant sentence without obvious clues. In the last example the model spots a sentence far from the head of the document.

7 Related Work

There has been substantial interest in datasets for reading comprehension. MCTest Richardson et al. (2013) is a smaller-scale dataset focusing on common sense reasoning; bAbi Weston et al. (2015) is a synthetic dataset that captures various aspects of reasoning; and SQuAD Rajpurkar et al. (2016); Wang et al. (2016); Xiong et al. (2016) and NewsQA Trischler et al. (2016a) are QA datasets where the answer is a span in the document. Compared to Wikireading, some of these datasets cover shorter passages (average 122 words for SQuAD). Cloze-style question answering datasets Hermann et al. (2015); Onishi et al. (2016); Hill et al. (2015) assess machine comprehension but do not pose explicit questions. The recently released MS MARCO dataset Nguyen et al. (2016) consists of query logs, web documents and crowd-sourced answers.

Answer sentence selection is studied with the TREC QA Voorhees and Tice (2000), WikiQA Yang et al. (2016b) and SelQA Jurczyk et al. (2016) datasets. Recently, neural network models Wang and Nyberg (2015); Severyn and Moschitti (2015); dos Santos et al. (2016) achieved improvements. Sultan et al. (2016) optimized answer sentence extraction and answer extraction jointly, but with gold labels for both parts. Trischler et al. (2016b) proposed a model that shares the intuition of observing inputs at multiple granularities (sentence, word), but deals with multiple choice questions. Our model treats answer sentence selection as latent and generates answer strings instead of selecting text spans.

Hierarchical models that treat sentence selection as a latent variable have been applied to text categorization Yang et al. (2016b), extractive summarization Cheng and Lapata (2016), machine translation Ba et al. (2014), and sentiment analysis Yessenalina et al. (2010); Lei et al. (2016). To the best of our knowledge, we are the first to use the hierarchical nature of a document for QA.

Finally, our work is related to the reinforcement learning literature. Hard and soft attention were examined in the context of caption generation Xu et al. (2015). Curriculum learning was investigated by Sachan and Xing (2016), who focused on the ordering of training examples, while we combine supervision signals. Reinforcement learning recently gained popularity in tasks such as co-reference resolution Clark and Manning (2016), information extraction Narasimhan et al. (2016), semantic parsing Andreas et al. (2016) and textual games Narasimhan et al. (2015); He et al. (2016).

8 Conclusion

We presented a coarse-to-fine framework for QA over long documents that quickly focuses on the relevant portions of a document. In future work we would like to deepen the use of structural clues and answer questions over multiple documents, using paragraph structure, titles, sections and more. We argue that this is necessary for developing systems that can efficiently answer the information needs of users over large quantities of text.


  • Andreas et al. (2016) Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
  • Ba et al. (2014) Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. 2014. Multiple object recognition with visual attention. Proceedings of the International Conference on Learning Representations.
  • Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. Proceedings of the International Conference on Learning Representations.
  • Chen et al. (2016) Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
  • Cheng and Lapata (2016) Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. Proceedings of the Annual Meeting of the Association for Computational Linguistics.
  • Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Clark and Manning (2016) Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Collobert et al. (2011) Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research (JMLR).
  • dos Santos et al. (2016) Cícero Nogueira dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. CoRR abs/1602.03609.
  • He et al. (2016) Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. 2016. Deep reinforcement learning with an unbounded action space. Proceedings of the Annual Meeting of the Association for Computational Linguistics.
  • Hermann et al. (2015) Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1506.03340.
  • Hewlett et al. (2016) Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. WikiReading: A novel large-scale language understanding task over Wikipedia. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
  • Hill et al. (2015) Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks principle: Reading children’s books with explicit memory representations. Proceedings of the International Conference on Learning Representations.
  • Jurczyk et al. (2016) Tomasz Jurczyk, Michael Zhai, and Jinho D. Choi. 2016. SelQA: A new benchmark for selection-based question answering. In Proceedings of the 28th International Conference on Tools with Artificial Intelligence (ICTAI’16). San Jose, CA.
  • Kadlec et al. (2016) Rudolf Kadlec, Martin Schmid, Ondřej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 908–918. http://www.aclweb.org/anthology/P16-1086.
  • Kalchbrenner et al. (2014) Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. Proceedings of the Annual Meeting of the Association for Computational Linguistics.
  • Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Kumar et al. (2016) Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of the International Conference on Machine Learning.
  • Lei et al. (2016) Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2016. Rationalizing neural predictions. Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Masson (1983) Michael EJ Masson. 1983. Conceptual processing of text during skimming and rapid sequential reading. Memory & Cognition 11(3):262–274.
  • Miller et al. (2016) Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Narasimhan et al. (2015) Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Narasimhan et al. (2016) Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquiring external evidence with reinforcement learning. Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Nguyen et al. (2016) Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Workshop in Advances in Neural Information Processing Systems.
  • Onishi et al. (2016) Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Richardson et al. (2013) Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Ross et al. (2011) Stéphane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics.
  • Sachan and Xing (2016) Mrinmaya Sachan and Eric P. Xing. 2016. Easy questions first? A case study on curriculum learning for question answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
  • Seo et al. (2016) Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Query-reduction networks for question answering. arXiv preprint arXiv:1606.04582.
  • Severyn and Moschitti (2015) Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, pages 373–382.
  • Sultan et al. (2016) Md. Arafat Sultan, Vittorio Castelli, and Radu Florian. 2016. A joint model for answer sentence ranking and answer extraction. Transactions of the Association for Computational Linguistics 4:113–125.
  • Trischler et al. (2016a) Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016a. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830.
  • Trischler et al. (2016b) Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, and Kaheer Suleman. 2016b. A parallel-hierarchical model for machine comprehension on sparse data. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
  • Voorhees and Tice (2000) Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 200–207.
  • Wang and Nyberg (2015) Di Wang and Eric Nyberg. 2015. A long short-term memory model for answer sentence selection in question answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
  • Wang et al. (2016) Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211.
  • Weston et al. (2015) Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.
  • Williams (1992) Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8(3-4):229–256.
  • Xiong et al. (2016) Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604.
  • Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. Proceedings of the International Conference on Machine Learning.
  • Yang et al. (2016a) Yi Yang, Wen-tau Yih, and Christopher Meek. 2016a. WikiQA: A challenge dataset for open-domain question answering. Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Yang et al. (2016b) Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016b. Hierarchical attention networks for document classification. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
  • Yessenalina et al. (2010) Ainur Yessenalina, Yisong Yue, and Claire Cardie. 2010. Multi-level structured models for document-level sentiment classification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1046–1056.
  • Yu et al. (2014) Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. In NIPS Deep Learning Workshop.