Quasar: Datasets for Question Answering by Search and Reading

07/12/2017 ∙ by Bhuwan Dhingra, et al. ∙ Carnegie Mellon University

We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37,000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43,000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. We also describe a retrieval system for extracting relevant sentences and documents from the corpus given a query, and include these in the release for researchers wishing to only focus on (2). We evaluate several baselines on both datasets, ranging from simple heuristics to powerful neural models, and show that these lag behind human performance by 16.4% on Quasar-S and by a larger margin on Quasar-T. The datasets are available at https://github.com/bdhingra/quasar .







1 Introduction

Factoid Question Answering (QA) aims to extract answers, from an underlying knowledge source, to information-seeking questions posed in natural language. Depending on the knowledge source available, there are two main approaches for factoid QA. Structured sources, including Knowledge Bases (KBs) such as Freebase (Bollacker et al., 2008), are easier to process automatically since the information is organized according to a fixed schema. In this case the question is parsed into a logical form in order to query against the KB. However, even the largest KBs are often incomplete (Miller et al., 2016; West et al., 2014), and hence can only answer a limited subset of all possible factoid questions.

For this reason the focus is now shifting towards unstructured sources, such as Wikipedia articles, which hold a vast quantity of information in textual form and, in principle, can be used to answer a much larger collection of questions. Extracting the correct answer from unstructured text is, however, challenging, and typical QA pipelines consist of the following two components: (1) searching for the passages relevant to the given question, and (2) reading the retrieved text in order to select a span of text which best answers the question (Chen et al., 2017; Watanabe et al., 2017).

Like most other language technologies, the current research focus for both these steps is firmly on machine learning based approaches, for which performance improves with the amount of data available. Machine reading performance, in particular, has been significantly boosted in the last few years with the introduction of large-scale reading comprehension datasets such as CNN / Daily Mail (Hermann et al., 2015) and Squad (Rajpurkar et al., 2016). State-of-the-art systems for these datasets (Dhingra et al., 2017; Seo et al., 2017) focus solely on step (2) above, in effect assuming the relevant passage of text is already known.

Question: javascript – javascript not to be confused with java is a dynamic weakly-typed language used for XXXXX as well as server-side scripting .
Answer: client-side
Context excerpts:
JavaScript is not weakly typed, it is strong typed.
JavaScript is a Client Side Scripting Language.
JavaScript was the **original** client-side web scripting language.

Question: 7-Eleven stores were temporarily converted into Kwik E-marts to promote the release of what movie?
Answer: the simpsons movie
Context excerpts:
In July 2007 , 7-Eleven redesigned some stores to look like Kwik-E-Marts in select cities to promote The Simpsons Movie .
Tie-in promotions were made with several companies , including 7-Eleven , which transformed selected stores into Kwik-E-Marts .
“ 7-Eleven Becomes Kwik-E-Mart for ‘ Simpsons Movie ’ Promotion ” .
Figure 1: Example short-document instances from Quasar-S (top) and Quasar-T (bottom)

In this paper, we introduce two new datasets for QUestion Answering by Search And Reading – Quasar. The datasets each consist of factoid question-answer pairs and a corresponding large background corpus to facilitate research into the combined problem of retrieval and comprehension. Quasar-S consists of 37,362 cloze-style questions constructed from definitions of software entities available on the popular website Stack Overflow (a question-and-answer site covering a wide range of topics in computer programming; the entity definitions were scraped from https://stackoverflow.com/tags). The answer to each question is restricted to be another software entity, from an output vocabulary of 4874 entities. Quasar-T consists of 43,013 trivia questions collected from various internet sources by a trivia enthusiast. The answers to these questions are free-form spans of text, though most are noun phrases.

While production quality QA systems may have access to the entire world wide web as a knowledge source, for Quasar we restrict our search to specific background corpora. This is necessary to avoid uninteresting solutions which directly extract answers from the sources from which the questions were constructed. For Quasar-S we construct the knowledge source by collecting the top 50 threads tagged with each entity in the dataset on the Stack Overflow website (a question along with the answers provided by other users is collectively called a thread; threads are ranked by votes from the community, and these questions are distinct from the cloze-style queries in Quasar-S). For Quasar-T we use ClueWeb09 (Callan et al., 2009), which contains about 1 billion web pages collected between January and February 2009. Figure 1 shows some examples.

Unlike existing reading comprehension tasks, the Quasar tasks go beyond the ability to only understand a given passage, and require the ability to answer questions given large corpora. Prior datasets (such as those used in (Chen et al., 2017)) are constructed by first selecting a passage and then constructing questions about that passage. This design (intentionally) ignores some of the subproblems required to answer open-domain questions from corpora, namely searching for passages that may contain candidate answers, and aggregating information/resolving conflicts between candidates from many passages. The purpose of Quasar is to allow research into these subproblems, and in particular whether the search step can benefit from integration and joint training with downstream reading systems.

Additionally, Quasar-S has the interesting feature of being a closed-domain dataset about computer programming, and successful approaches to it must develop domain-expertise and a deep understanding of the background corpus. To our knowledge it is one of the largest closed-domain QA datasets available. Quasar-T, on the other hand, consists of open-domain questions based on trivia, which refers to “bits of information, often of little importance”. Unlike previous open-domain systems which rely heavily on the redundancy of information on the web to correctly answer questions, we hypothesize that Quasar-T requires a deeper reading of documents to answer correctly.

We evaluate Quasar against human testers, as well as several baselines ranging from naïve heuristics to state-of-the-art machine readers. The best performing baselines achieve 33.6% on Quasar-S and 28.5% (F1) on Quasar-T, while human performance is 50.0% and 60.6% respectively. For the automatic systems, we see an interesting tension between searching and reading accuracies – retrieving more documents in the search phase leads to a higher coverage of answers, but makes the comprehension task more difficult. We also collect annotations on a subset of the development set questions to allow researchers to analyze the categories in which their system performs well or falls short. We plan to release these annotations along with the datasets, and our retrieved documents for each question.

2 Existing Datasets

Open-Domain QA:

Early research into open-domain QA was driven by the TREC-QA challenges organized by the National Institute of Standards and Technology (NIST) (Voorhees and Tice, 2000). Both dataset construction and evaluation were done manually, restricting the size of the dataset to only a few hundred questions. WikiQA (Yang et al., 2015) was introduced as a larger-scale dataset for the subtask of answer sentence selection; however, it does not identify spans of the actual answer within the selected sentence. More recently, Miller et al. (2016) introduced the MoviesQA dataset, where the task is to answer questions about movies from a background corpus of Wikipedia articles. MoviesQA contains a large number of questions, but many of these are similarly phrased and fall into a small number of categories; hence, existing systems already achieve high accuracy on it (Watanabe et al., 2017). MS MARCO (Nguyen et al., 2016) consists of diverse real-world queries collected from Bing search logs; however, many of them are not factual, which makes their evaluation tricky. Chen et al. (2017) study the task of Machine Reading at Scale, which combines the aspects of search and reading for open-domain QA. They show that jointly training a neural reader on several distantly supervised QA datasets leads to a performance improvement on all of them. This justifies our motivation for introducing two new datasets to add to the collection of existing ones; more data is good data.

Reading Comprehension:

Reading Comprehension (RC) aims to measure the capability of systems to “understand” a given piece of text, by posing questions over it. It is assumed that the passage containing the answer is known beforehand. Several datasets have been proposed to measure this capability. Richardson et al. (2013) used crowd-sourcing to collect MCTest, a set of short stories with multiple-choice questions over them. Significant progress, however, was enabled when Hermann et al. (2015) introduced the much larger CNN / Daily Mail datasets, consisting of several hundred thousand cloze-style questions each. Children’s Book Test (CBT) (Hill et al., 2016) and Who-Did-What (WDW) (Onishi et al., 2016) are similar cloze-style datasets. However, the automatic procedure used to construct these questions often introduces ambiguity and makes the task more difficult (Chen et al., 2016). Squad (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2016) attempt to move toward more general extractive QA by collecting, through crowd-sourcing, over 100,000 questions each whose answers are spans of text in a given passage. Squad in particular has attracted considerable interest, but recent work (Weissenborn et al., 2017) suggests that answering the questions does not require a great deal of reasoning.

Recently, Joshi et al. (2017) prepared the TriviaQA dataset, which also consists of trivia questions collected from online sources, and is similar to Quasar-T. However, the documents retrieved for TriviaQA were obtained using a commercial search engine, making it difficult for researchers to vary the retrieval step of the QA system in a controlled fashion; in contrast we use ClueWeb09, a standard corpus. We also supply a larger collection of retrieved passages, including many not containing the correct answer, to facilitate research into retrieval; perform a more extensive analysis of baselines for answering the questions; and provide additional human evaluation and annotation of the questions. In addition we present Quasar-S, a second dataset. SearchQA (Dunn et al., 2017) is another recent dataset aimed at facilitating research towards an end-to-end QA pipeline; however, this too uses a commercial search engine, and does not provide negative contexts not containing the answer, making research into the retrieval component difficult.

3 Dataset Construction

Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution. In this section, we describe how each of these fields was generated for each Quasar variant.

3.1 Question sets


The software question set was built from the definitional “excerpt” entry for each tag (entity) on Stack Overflow. For example the excerpt for the “java” tag is, “Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM).” Not every excerpt includes the tag being defined (which we will call the “head tag”), so we prepend the head tag to the front of the string to guarantee relevant results later in the pipeline. We then preprocessed the software questions by downcasing and tokenizing the string using a custom tokenizer compatible with special characters in software terms (e.g. “.net”, “c++”). Each preprocessed excerpt was then converted to a series of cloze questions using a simple heuristic: first searching the string for mentions of other entities, then replacing each mention in turn with a placeholder string (Figure 2).

Excerpt: Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM).
Preprocessed excerpt: java — java is a general-purpose object-oriented programming language designed to be used in conjunction with the java virtual-machine jvm .
Cloze questions (answer, then cloze):
Answer: java — Cloze: java — java is a general-purpose object-oriented programming language designed to be used in conjunction with the @placeholder virtual-machine jvm .
Answer: virtual-machine — Cloze: java — java is a general-purpose object-oriented programming language designed to be used in conjunction with the java @placeholder jvm .
Answer: jvm — Cloze: java — java is a general-purpose object-oriented programming language designed to be used in conjunction with the java virtual-machine @placeholder .
Figure 2: Cloze generation
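The cloze-generation heuristic can be sketched as follows. This is an illustrative simplification (entities are assumed to be single tokens after preprocessing, which matches multi-word terms like “virtual-machine” being hyphenated by the tokenizer; the function name is ours):

```python
def make_clozes(excerpt_tokens, entities):
    """Generate one cloze question per entity mention in a preprocessed excerpt.

    excerpt_tokens: tokens of the downcased, tokenized excerpt
    entities: set of known entity names (assumed single tokens here)
    Returns a list of (answer, cloze_tokens) pairs, where exactly one mention
    is replaced by "@placeholder" in each cloze.
    """
    clozes = []
    for i, tok in enumerate(excerpt_tokens):
        if tok in entities:
            cloze = list(excerpt_tokens)
            cloze[i] = "@placeholder"   # replace this mention only
            clozes.append((tok, cloze))
    return clozes
```

Applied to the preprocessed excerpt of Figure 2, each mention of a known entity yields one cloze question whose answer is the replaced entity.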

This heuristic is noisy, since the software domain often overloads existing English words (e.g. “can” may refer to a Controller Area Network bus; “swap” may refer to the temporary storage of inactive pages of memory on disk; “using” may refer to a namespacing keyword). To improve precision we scored each cloze based on the relative incidence of the term in an English corpus versus in our Stack Overflow one, and discarded all clozes scoring below a threshold. This means our dataset does not include any cloze questions for terms which are common in English (such as “can”, “swap”, and “using”, but also “image”, “service”, and “packet”). A more sophisticated entity recognition system could improve recall here.
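The filtering step can be sketched as a frequency-ratio score. The exact scoring function and threshold are not specified above, so the smoothing and the function below are illustrative assumptions:

```python
def domain_specificity(term, so_counts, en_counts, so_total, en_total, alpha=1.0):
    """Score how much more frequent a term is on Stack Overflow than in English.

    so_counts / en_counts: term -> raw count in the Stack Overflow / English corpus
    so_total / en_total: total token counts of each corpus
    alpha: add-alpha smoothing so unseen terms do not divide by zero

    Higher scores mean the term is domain-specific; clozes whose answer term
    scores below a tuned threshold are discarded.
    """
    p_so = (so_counts.get(term, 0) + alpha) / (so_total + alpha)
    p_en = (en_counts.get(term, 0) + alpha) / (en_total + alpha)
    return p_so / p_en
```

A term like “jvm” scores far higher than an overloaded English word like “can”, which is exactly the separation the threshold exploits.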


The trivia question set was built from a collection of just under 54,000 trivia questions collected by Reddit user 007craft and released in December 2015 (https://www.reddit.com/r/trivia/comments/3wzpvt/free_database_of_50000_trivia_questions/). The raw dataset was noisy, having been scraped from multiple sources with variable attention to detail in formatting, spelling, and accuracy. We filtered the raw questions to remove unparseable entries as well as any True/False or multiple-choice questions, leaving approximately 52,000 free-response questions. The questions range in difficulty, from straightforward (“Who recorded the song ‘Rocket Man’” “Elton John”) to difficult (“What was Robin Williams paid for Disney’s Aladdin in 1982” “Scale $485 day + Picasso Painting”) to debatable (“According to Earth Medicine what’s the birth totem for march” “The Falcon”; in Earth Medicine, March in fact has two birth totems, the falcon and the wolf).
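A filter of this kind can be sketched as below. The exact rules used are not specified above, so the heuristics here (answer equal to “true”/“false”, embedded lettered choices) are our assumptions:

```python
import re

def keep_question(question, answer):
    """Decide whether a raw trivia entry survives filtering (a sketch).

    Drops unparseable entries (empty fields), True/False questions, and
    questions with embedded multiple-choice options like "A) ... B) ...".
    """
    if not question or not answer:
        return False                          # unparseable entry
    if answer.strip().lower() in {"true", "false"}:
        return False                          # True/False question
    if re.search(r"\b[A-D]\)", question):
        return False                          # multiple-choice question
    return True
```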

3.2 Context Retrieval

The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.

Context documents for each query were generated in a two-phase fashion, first collecting a large pool of semirelevant text, then filling a temporary index with short or long pseudodocuments from the pool, and finally selecting a set of top-ranking pseudodocuments (100 short or 20 long) from the temporary index.

For Quasar-S, the pool of text for each question was composed of 50+ question-and-answer threads scraped from http://stackoverflow.com. Stack Overflow keeps a running tally of the top-voted questions for each tag; we used Scrapy (https://scrapy.org) to pull the top 50 question posts for each tag, along with any answer-post responses and metadata (tags, authorship, comments). From each thread we pulled all text not marked as code, and split it into sentences using the Stanford NLP sentence segmenter, truncating sentences to 2048 characters. Each sentence was marked with a thread identifier, a post identifier, and the tags for the thread. Long pseudodocuments were either the full post (in the case of question posts), or the full post and its head question (in the case of answer posts), comments included. Short pseudodocuments were individual sentences.

To build the context documents for Quasar-S, the pseudodocuments for the entire corpus were loaded into a disk-based Lucene index, each annotated with its thread ID and the tags for the thread. This index was queried for each cloze using the following Lucene syntax:


  • SHOULD(PHRASE(question text))

  • SHOULD(BOOLEAN(question text))

  • MUST(tags:$headtag)

where “question text” refers to the sequence of tokens in the cloze question, with the placeholder removed. The first SHOULD term indicates that an exact phrase match to the question text should score highly. The second SHOULD term indicates that any partial match to tokens in the question text should also score highly, roughly in proportion to the number of terms matched. The MUST term indicates that only pseudodocuments annotated with the head tag of the cloze should be considered.
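A query of this shape can be assembled as a Lucene query string. This is a sketch of the string form only (the actual pipeline may build the query through the Lucene API; the escaping and function name are our assumptions):

```python
def build_cloze_query(question_text, head_tag):
    """Build a Lucene query string for a Quasar-S cloze.

    question_text: the cloze tokens, possibly still containing "@placeholder"
    head_tag: the tag whose pseudodocuments are the only ones allowed to match

    A quoted phrase scores exact phrase matches highly (first SHOULD term);
    the bare tokens score partial matches (second SHOULD term); the "+"
    prefix makes the tags clause mandatory (MUST term).
    """
    tokens = [t for t in question_text.split() if t != "@placeholder"]
    phrase = " ".join(tokens)
    return '"{}" {} +tags:{}'.format(phrase, phrase, head_tag)
```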

The top-ranking pseudodocuments were retrieved, and the top unique ones (100 short or 20 long, as described above) were added to the context document along with their Lucene retrieval scores. Any questions returning zero results for this query were discarded.

For Quasar-T, the pool of text for each question was composed of 100 HTML documents retrieved from ClueWeb09. Each question-answer pair was converted to a #combine query in the Indri query language to comply with the ClueWeb09 batch query service, using simple regular expression substitution rules to remove (s/[.(){}<>:*‘_]+//g) or replace (s/[,?’]+/ /g) illegal characters. Any questions generating syntax errors after this step were discarded. We then extracted the plaintext from each HTML document using Jericho (http://jericho.htmlparser.net/docs/index.html). For long pseudodocuments we used the full page text, truncated to 2048 characters. For short pseudodocuments we used individual sentences as extracted by the Stanford NLP sentence segmenter, truncated to 200 characters.
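The two substitution rules translate directly into a small sanitizer (the wrapping into a #combine query is shown for illustration; the function name is ours):

```python
import re

def sanitize_for_indri(text):
    """Apply the substitution rules above so the text is legal for the
    ClueWeb09 batch query service, then wrap it in an Indri #combine query."""
    text = re.sub(r"[.(){}<>:*‘_]+", "", text)   # remove illegal characters
    text = re.sub(r"[,?’]+", " ", text)          # replace with whitespace
    return "#combine({})".format(text.strip())
```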

To build the context documents for the trivia set, the pseudodocuments from the pool were collected into an in-memory Lucene index and queried using the question text only (the answer text was not included for this step). The structure of the query was identical to the query for Quasar-S, without the head tag filter:


  • SHOULD(PHRASE(question text))

  • SHOULD(BOOLEAN(question text))

The top-ranking pseudodocuments were retrieved, and the top unique ones (100 short or 20 long, as described above) were added to the context document along with their Lucene retrieval scores. Any questions returning zero results for this query were discarded.

3.3 Candidate solutions

The list of candidate solutions provided with each record is guaranteed to contain the correct answer to the question. Quasar-S uses its closed vocabulary of 4874 tags as the candidate list for every record. Since the questions in Quasar-T are in free-response format, we constructed a separate list of candidate solutions for each question. Since most of the correct answers were noun phrases, we took each sequence of NN*-tagged tokens in the context document, as identified by the Stanford NLP Maxent POS tagger, as the candidate list for each record. If this list did not include the correct answer, it was added to the list.
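Extracting maximal runs of NN*-tagged tokens can be sketched as follows, assuming the tagger output is already available as (token, tag) pairs:

```python
def candidate_spans(tagged_tokens):
    """Extract maximal runs of NN*-tagged tokens as candidate answers.

    tagged_tokens: list of (token, pos_tag) pairs, e.g. from a POS tagger.
    Returns the candidate noun-phrase strings in order of appearance.
    """
    candidates, current = [], []
    for tok, tag in tagged_tokens:
        if tag.startswith("NN"):
            current.append(tok)          # extend the current noun run
        elif current:
            candidates.append(" ".join(current))
            current = []
    if current:                          # flush a run ending at the sentence end
        candidates.append(" ".join(current))
    return candidates
```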

3.4 Postprocessing

           Total                    Single-Token             Answer in Short          Answer in Long
           (train / val / test)     (train / val / test)     (train / val / test)     (train / val / test)
Quasar-S   31,049 / 3,174 / 3,139   –                        30,198 / 3,084 / 3,044   30,417 / 3,099 / 3,064
Quasar-T   37,012 / 3,000 / 3,000   18,726 / 1,507 / 1,508   25,465 / 2,068 / 2,043   26,318 / 2,129 / 2,102
Table 1: Dataset statistics. Single-Token refers to the questions whose answer is a single token (for Quasar-S all answers come from a fixed vocabulary). Answer in Short (Long) indicates whether the answer is present in the retrieved short (long) pseudodocuments.

Once context documents had been built, we extracted the subset of questions where the answer string, excluded from the query for the two-phase search, was nonetheless present in the context document. This subset allows us to evaluate the performance of the reading system independently from the search system, while the full set allows us to evaluate the performance of Quasar as a whole. We also split the full set into training, validation and test sets. The final size of each data subset after all discards is listed in Table 1.

4 Evaluation

4.1 Metrics

Evaluation is straightforward on Quasar-S since each answer comes from a fixed output vocabulary of entities, and we report the average accuracy of predictions as the evaluation metric. For Quasar-T, the answers may be free-form spans of text, and the same answer may be expressed in different terms, which makes evaluation difficult. Here we adopt the two metrics from Rajpurkar et al. (2016) and Joshi et al. (2017). In preprocessing, we remove punctuation, whitespace, and definite and indefinite articles from the strings. Exact match then measures whether the two strings, after preprocessing, are equal. For F1 match we first construct a bag of tokens for each string, preprocessing each token, and measure the F1 score of the overlap between the two bags of tokens. These metrics are far from perfect for Quasar-T; for example, our human testers were penalized for entering “0” as the answer instead of “zero”. However, a comparison between systems may still be meaningful.
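The two metrics can be sketched in the usual SQuAD style. The normalization below approximates the preprocessing described above; the exact regexes used may differ:

```python
import re
from collections import Counter

def normalize(s):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = re.sub(r"[^\w\s]", "", s)            # remove punctuation
    s = re.sub(r"\b(a|an|the)\b", " ", s)    # remove articles
    return " ".join(s.split())

def exact_match(pred, gold):
    """Exact match: normalized strings are identical."""
    return normalize(pred) == normalize(gold)

def f1_match(pred, gold):
    """Token-bag F1 between the normalized prediction and gold answer."""
    p, g = normalize(pred).split(), normalize(gold).split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

Under these definitions “The Simpsons Movie” and “simpsons movie.” count as an exact match, while a partial answer earns partial F1 credit.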

4.2 Human Evaluation

To put the difficulty of the introduced datasets into perspective, we evaluated human performance on answering the questions. For each dataset, we recruited one domain expert (a developer with several years of programming experience for Quasar-S, and an avid trivia enthusiast for Quasar-T) and several non-experts. Each volunteer was presented with a set of questions randomly selected from the development set and asked to answer them via an online app. The experts were evaluated in a “closed-book” setting, i.e. they did not have access to any external resources. The non-experts were evaluated in an “open-book” setting, where they had access to a search engine over the short pseudodocuments extracted for each dataset (as described in Section 3.2). We decided to use short pseudodocuments for this exercise to reduce the burden of reading on the volunteers, though we note that the long pseudodocuments have greater coverage of answers.

(a) Quasar-S relations
(b) Quasar-T genres
(c) Quasar-T answer categories
Figure 3: Distribution of manual annotations for Quasar. Description of the Quasar-S annotations is in Appendix A.

We also asked the volunteers to provide annotations to categorize the type of each question they were asked, and a label for whether the question was ambiguous. For Quasar-S the annotators were asked to mark the relation between the head entity (from whose definition the cloze was constructed) and the answer entity. For Quasar-T the annotators were asked to mark the genre of the question (e.g., Arts & Literature; multiple genres per question were allowed) and the entity type of the answer (e.g., Person). When multiple annotators marked the same question differently, we took the majority vote when possible and discarded ties. For Quasar-S we collected relation annotations for a subset of the questions, discarding those with conflicting ties. For Quasar-T we collected annotations for a subset of the questions, marking some as ambiguous; among the rest, genre annotations were kept (a question could carry multiple genres) and entity-type annotations with conflicts were discarded. Figure 3 shows the distribution of these annotations.

4.3 Baseline Systems

We evaluate several baselines on Quasar, ranging from simple heuristics to deep neural networks. Some predict a single token / entity as the answer, while others predict a span of tokens.

4.3.1 Heuristic Models


MF-i (Maximum Frequency) counts the number of occurrences of each candidate answer in the retrieved context and returns the one with maximum frequency. MF-e is the same as MF-i except it excludes the candidates present in the query. WD (Word Distance) measures the sum of distances from a candidate to other non-stopword tokens in the passage which are also present in the query. For the cloze-style Quasar-S the distances are measured by first aligning the query placeholder to the candidate in the passage, and then measuring the offsets between other tokens in the query and their mentions in the passage. The maximum distance for any token is capped at a specified threshold, which is tuned on the validation set.
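The MF baselines can be sketched compactly; candidates are treated as single tokens here for simplicity (the function and parameter names are ours):

```python
from collections import Counter

def mf_baseline(candidates, context_tokens, query_tokens, exclude_query=False):
    """MF-i / MF-e: return the candidate occurring most often in the context.

    exclude_query=True gives MF-e, which skips candidates that already
    appear in the query; exclude_query=False gives MF-i.
    """
    counts = Counter(context_tokens)
    query = set(query_tokens)
    best, best_count = None, -1
    for c in candidates:
        if exclude_query and c in query:
            continue                      # MF-e: ignore candidates in the query
        if counts[c] > best_count:
            best, best_count = c, counts[c]
    return best
```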


For Quasar-T we also test the Sliding Window (SW) and Sliding Window + Distance (SW+D) baselines proposed by Richardson et al. (2013). The scores were computed for the list of candidate solutions described in Section 3.3.

4.3.2 Language Models

For Quasar-S, since the answers come from a fixed vocabulary of entities, we test language-model baselines which predict the most likely entity to appear in a given context. We train three n-gram baselines (n = 3, 4, 5) using the SRILM toolkit (Stolcke et al., 2002) on the entire corpus of all Stack Overflow posts. The output predictions are restricted to the output vocabulary of entities.

We also train a bidirectional Recurrent Neural Network (RNN) language model (based on GRU units). This model encodes both the left and right context of an entity using forward and backward GRUs, and then concatenates the final states from both to predict the entity through a softmax layer. Training is performed on the entire corpus of Stack Overflow posts, with the loss computed only over mentions of entities in the output vocabulary. This approach benefits from looking at both sides of the cloze in a query to predict the entity, as compared to the single-sided n-gram baselines.
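The core of this model is simple to sketch: encode each side of the gap with a recurrent network and score entities from the concatenated final states. The sketch below uses a vanilla tanh RNN instead of GRU units to stay short, and randomly initialized weights stand in for trained parameters:

```python
import numpy as np

def rnn_encode(embeddings, W_x, W_h):
    """Run a simple tanh RNN over a sequence; return the final hidden state.
    (The actual model uses GRU units; a vanilla RNN keeps the sketch short.)"""
    h = np.zeros(W_h.shape[0])
    for x in embeddings:
        h = np.tanh(W_x @ x + W_h @ h)
    return h

def cloze_logits(left_ctx, right_ctx, params):
    """Score every entity in the output vocabulary for one cloze position.

    left_ctx: embedding vectors left of the gap, in order (forward RNN)
    right_ctx: embedding vectors right of the gap, reversed (backward RNN)
    params: weight matrices (trained in practice; random here)
    """
    h_fwd = rnn_encode(left_ctx, params["Wx_f"], params["Wh_f"])
    h_bwd = rnn_encode(right_ctx, params["Wx_b"], params["Wh_b"])
    h = np.concatenate([h_fwd, h_bwd])   # both sides of the cloze
    return params["W_out"] @ h           # one logit per entity

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()
```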

4.3.3 Reading Comprehension Models

Reading comprehension models are trained to extract the answer from a given passage. We test two recent architectures on Quasar using publicly available code from the authors (https://github.com/bdhingra/ga-reader and https://github.com/allenai/bi-att-flow).

GA (Single-Token):

The GA Reader (Dhingra et al., 2017) is a multi-layer neural network which extracts a single token from the passage to answer a given query. At the time of writing it had state-of-the-art performance on several cloze-style datasets for QA. For Quasar-S we train and test GA on all instances for which the correct answer is found within the retrieved context. For Quasar-T we train and test GA on all instances where the answer is in the context and is a single token.

BiDAF (Multi-Token):

The BiDAF model (Seo et al., 2017) is also a multi-layer neural network which predicts a span of text from the passage as the answer to a given query. At the time of writing it had state-of-the-art performance among published models on the Squad dataset. For Quasar-T we train and test BiDAF on all instances where the answer is in the retrieved context.

4.4 Results

Figure 4: Variation of Search, Read and Overall accuracies as the number of context documents is varied.

Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is varied. Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more documents is not sufficient – finding the few most relevant ones will allow the reader to work best.
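The decomposition described above is multiplicative, which is why search and reading trade off against each other as context size grows:

```python
def overall_accuracy(search_acc, reading_acc):
    """Overall accuracy is the product of finding the answer in the retrieved
    context (search) and extracting it when it is present (reading)."""
    return search_acc * reading_acc

# E.g., GA on Quasar-S (test, first block of Table 2): search 0.65 and
# reading 0.483 multiply to roughly the 0.316 overall accuracy reported.
```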

Method            Optimal   Search Acc     Reading Acc    Overall Acc
                  Context   val    test    val    test    val    test
Human performance
Expert (CB)       –         –      –       –      –       0.468  –
Non-Expert (OB)   –         –      –       –      –       0.500  –
Language models
3-gram            –         –      –       –      –       0.148  0.153
4-gram            –         –      –       –      –       0.161  0.171
5-gram            –         –      –       –      –       0.165  0.174
BiRNN             –         –      –       –      –       0.345  0.336
Short documents
WD                10        0.40   0.43    0.250  0.249   0.100  0.107
MF-e              60        0.64   0.64    0.209  0.212   0.134  0.136
MF-i              90        0.67   0.68    0.237  0.234   0.159  0.159
GA                70        0.65   0.65    0.486  0.483   0.315  0.316
Long documents
WD                10        0.66   0.66    0.124  0.142   0.082  0.093
MF-e              15        0.69   0.69    0.185  0.197   0.128  0.136
MF-i              15        0.69   0.69    0.230  0.231   0.159  0.159
GA                15        0.67   0.67    0.474  0.479   0.318  0.321
Table 2: Performance comparison on Quasar-S. CB: Closed-Book, OB: Open-Book. BiRNN and GA are neural models. Optimal context is the number of documents used for answer extraction, tuned to maximize overall accuracy on the validation set. Human and language-model rows report overall accuracy only.
Method            Optimal   Search Acc    Reading Acc (exact)   Reading Acc (f1)   Overall (exact)   Overall (f1)
                  Context   val    test   val     test          val     test       val     test      val     test
Human performance
Expert (CB)       –         –      –      –       –             –       –          0.547   –         0.604   –
Non-Expert (OB)   –         –      –      –       –             –       –          0.515   –         0.606   –
Short documents
MF-i              10        0.35   0.34   0.053   0.044         0.053   0.044      0.019   0.015     0.019   0.015
WD                20        0.40   0.39   0.104   0.082         0.104   0.082      0.042   0.032     0.042   0.032
SW+D              20        0.64   0.63   0.112   0.113         0.157   0.155      0.072   0.071     0.101   0.097
SW                10        0.56   0.53   0.216   0.205         0.299   0.271      0.120   0.109     0.159   0.144
MF-e              70        0.45   0.45   0.372   0.342         0.372   0.342      0.167   0.153     0.167   0.153
GA                70        0.44   0.44   0.580   0.600         0.580   0.600      0.256   0.264     0.256   0.264
BiDAF**           10        0.57   0.54   0.454   0.476         0.509   0.524      0.257   0.259     0.289   0.285
Long documents
WD                20        0.43   0.44   0.084   0.067         0.084   0.067      0.037   0.029     0.037   0.029
SW                20        0.74   0.73   0.041   0.034         0.056   0.050      0.030   0.025     0.041   0.037
SW+D              5         0.58   0.58   0.064   0.055         0.094   0.088      0.037   0.032     0.054   0.051
MF-i              20        0.44   0.45   0.185   0.187         0.185   0.187      0.082   0.084     0.082   0.084
MF-e              20        0.43   0.44   0.273   0.286         0.273   0.286      0.119   0.126     0.119   0.126
BiDAF**           1         0.47   0.468  0.370   0.395         0.425   0.445      0.17    0.185     0.199   0.208
GA**              10        0.44   0.44   0.551   0.556         0.551   0.556      0.245   0.244     0.245   0.244
Table 3: Performance comparison on Quasar-T. CB: Closed-Book, OB: Open-Book. GA and BiDAF are neural models. Optimal context is the number of documents used for answer extraction, tuned to maximize overall accuracy on the validation set. **We were unable to run BiDAF with more than 10 short documents / 1 long document, and GA with more than 10 long documents, due to memory errors.

In Tables 2 and 3 we compare all baselines when the context size is tuned to maximize the overall accuracy on the validation set. (The Search Accuracy for different baselines may differ despite the same number of retrieved context documents, due to different preprocessing requirements; for example, the SW baselines allow multiple tokens as an answer, whereas the WD and MF baselines do not.) For Quasar-S the best performing baseline is the BiRNN language model, which achieves 33.6% test accuracy. The GA model achieves 48.3% accuracy on the set of instances for which the answer is in context; however, a search accuracy of only 65% means its overall performance is lower. This can improve with better retrieval. For Quasar-T, both neural models significantly outperform the heuristic models, with BiDAF obtaining the highest test F1 of 28.5%.

The best performing baselines, however, lag behind human performance on both Quasar-S and Quasar-T, indicating strong potential for improvement. Interestingly, we observe that non-experts are able to match or beat the performance of experts when given access to the background corpus to search for answers. We also emphasize that human performance is limited by either the knowledge of the experts or the usefulness of the search engine for non-experts; it should not be viewed as an upper bound for automatic systems, which can potentially use the entire background corpus. Further analysis of the human and baseline performance in each category of annotated questions is provided in Appendix B.

5 Conclusion

We have presented the Quasar datasets for promoting research into two related tasks for QA: searching a large corpus of text for relevant passages, and reading the passages to extract answers. We have also described baseline systems for the two tasks which perform reasonably but lag behind human performance. While search performance improves as we retrieve more context, reading performance typically goes down. Hence, future work, in addition to improving these components individually, should also focus on joint approaches that optimize the two for end-task performance. The datasets, including the documents retrieved by our system and the human annotations, are available at https://github.com/bdhingra/quasar.
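The search/reading trade-off described above is resolved in our experiments by tuning the context size on validation data. A minimal sketch of that tuning loop, with a purely illustrative toy validation curve (the numbers below are hypothetical, not taken from the tables):

```python
def tune_context_size(candidate_sizes, overall_accuracy_fn):
    """Pick the number of retrieved documents that maximizes overall
    validation accuracy, as done for the 'Context' column of Tables 2-3."""
    return max(candidate_sizes, key=overall_accuracy_fn)

# Toy curve: search accuracy rises with more context while reading
# accuracy falls, so overall accuracy peaks at an intermediate size.
toy_curve = {1: 0.17, 5: 0.21, 10: 0.245, 20: 0.23, 50: 0.20}
best = tune_context_size(sorted(toy_curve), toy_curve.get)
print(best)  # 10
```

A joint approach would instead backpropagate the end-task signal into both the retriever and the reader, rather than treating context size as a fixed hyperparameter.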


Acknowledgments

This work was funded by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google.


  • Bollacker et al. (2008) Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. ACM, pages 1247–1250.
  • Callan et al. (2009) Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. 2009. ClueWeb09 data set.
  • Chen et al. (2016) Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. ACL.
  • Chen et al. (2017) Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Association for Computational Linguistics (ACL).
  • Dhingra et al. (2017) Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. 2017. Gated-attention readers for text comprehension. ACL.
  • Dunn et al. (2017) Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179.
  • Hermann et al. (2015) Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693–1701.
  • Hill et al. (2016) Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks principle: Reading children's books with explicit memory representations. ICLR.
  • Joshi et al. (2017) Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. ACL.
  • Miller et al. (2016) Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. EMNLP.
  • Nguyen et al. (2016) Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. NIPS.
  • Onishi et al. (2016) Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. EMNLP.
  • Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. EMNLP.
  • Richardson et al. (2013) Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 3, page 4.
  • Seo et al. (2017) Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. ICLR.
  • Stolcke et al. (2002) Andreas Stolcke et al. 2002. SRILM: An extensible language modeling toolkit. In Interspeech, volume 2002.
  • Trischler et al. (2016) Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830.
  • Voorhees and Tice (2000) Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, pages 200–207.
  • Watanabe et al. (2017) Yusuke Watanabe, Bhuwan Dhingra, and Ruslan Salakhutdinov. 2017. Question answering from unstructured text by retrieval and comprehension. arXiv preprint arXiv:1703.08885.
  • Weissenborn et al. (2017) Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. FastQA: A simple and efficient neural architecture for question answering. arXiv preprint arXiv:1703.04816.
  • West et al. (2014) Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge base completion via search-based question answering. In Proceedings of the 23rd International Conference on World Wide Web. ACM, pages 515–526.
  • Yang et al. (2015) Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In EMNLP, pages 2013–2018.

Appendix A Quasar-S Relation Definitions

Relation         Description
is-a             head is a type of answer
component-of     head is a component of answer
has-component    answer is a component of head
developed-with   head was developed using the answer
extends          head is a plugin or library providing additional functionality to the larger entity answer
runs-on          answer is an operating system, platform, or framework on which head runs
synonym          head and answer are the same entity
used-for         head is a software / framework used for some functionality related to answer
Table 4: Description of the annotated relations between the head entity, from whose definition the cloze is constructed, and the answer entity which fills in the cloze. These are the same as the descriptions shown to the annotators.
Figure 5: Performance comparison of humans and the best performing baseline across the categories annotated for the development set. (a) Quasar-S relations; (b) Quasar-T genres; (c) Quasar-T answer categories.

Table 4 lists the definitions of all the annotated relations for Quasar-S.

Appendix B Performance Analysis

Figure 5 shows a comparison of the human performance with the best performing baseline for each category of annotated questions. We see consistent differences between the two, except in the following cases. For Quasar-S, the BiRNN performs comparably to humans in the developed-with and runs-on categories, but much worse in the has-component and is-a categories. For Quasar-T, BiDAF performs comparably to humans in the sports category, but much worse in history & religion and language, as well as when the answer type is a number or date/time.