Reading comprehension (RC) is concerned with reading a piece of text and answering questions about it Richardson et al. (2013); Berant et al. (2014); Hermann et al. (2015); Rajpurkar et al. (2016). Its appeal stems both from the clear applications it enables and from the fact that it allows probing many aspects of language understanding simply by posing questions about a text document. Indeed, this has led to the creation of a large number of RC datasets in recent years.
While each RC dataset has a different focus, there is still substantial overlap in the abilities required to answer questions across these datasets. Nevertheless, there has been relatively little work Min et al. (2017); Chung et al. (2018); Sun et al. (2018) that explores the relations between the different datasets, including whether a model trained on one dataset generalizes to another. This research gap is highlighted by the increasing interest in developing and evaluating the generalization of language understanding models to new setups Yogatama et al. (2019); Liu et al. (2019).
In this work, we conduct a thorough empirical analysis of generalization and transfer across 10 RC benchmarks. We train models on one or more source RC datasets, and then evaluate their performance on a target test set, either without any additional target training examples (generalization) or with additional target examples (transfer). We experiment with DocQA Clark and Gardner (2018), a standard and popular RC model, as well as a model based on BERT Devlin et al. (2019), which provides powerful contextual representations.
Our generalization analysis confirms that current models over-fit to the particular training set and generalize poorly even to similar datasets. We also find that BERT representations substantially improve generalization. However, the contribution of BERT is much more pronounced on Wikipedia (which BERT was trained on) and newswire than on documents taken from web snippets, where it is quite moderate.
We also analyze the main causes for poor generalization: (a) differences in the language of the text document, (b) differences in the language of the question, and (c) the type of language phenomenon that the dataset explores. We show how generalization is related to these factors (Figure 1) and that performance drops as more of these factors accumulate.
Our transfer experiments show that pre-training on one or more source RC datasets substantially improves performance when fine-tuning on a target dataset. An interesting question is whether such pre-training improves performance even in the presence of powerful language representations from BERT. We find the answer is a conclusive yes, as we obtain consistent improvements in our BERT-based RC model.
We find that training on multiple source RC datasets is effective for both generalization and transfer. In fact, training on multiple datasets leads to the same performance as training on the target dataset alone, but with roughly three times fewer examples. Moreover, we find that when using the high-capacity BERT-large, one can train a single model on multiple RC datasets and obtain close to or better than state-of-the-art performance on all of them, without fine-tuning to a particular dataset.
Armed with the above insights, we train a large RC model on multiple RC datasets, termed MultiQA. Our model leads to new state-of-the-art results on five datasets, suggesting that in many language understanding tasks the size of the dataset is the main bottleneck, rather than the model itself.
Last, we have developed infrastructure (on top of AllenNLP Gardner et al. (2018)) in which experimenting with multiple models on multiple RC datasets, mixing datasets, and performing fine-tuning are trivial. It is also simple to extend the infrastructure to new datasets and new setups (abstractive RC, multi-choice, etc.). We will open-source our infrastructure, which will help researchers evaluate models on a large number of datasets and gain insight into the strengths and shortcomings of their methods. We hope this will accelerate progress in language understanding.
To conclude, we perform a thorough investigation of generalization and transfer in reading comprehension over 10 RC datasets. Our findings are:
An analysis of generalization on two RC models, illustrating the factors that influence generalization between datasets.
Pre-training on a RC dataset and fine-tuning on a target dataset substantially improves performance even in the presence of contextualized word representations (BERT).
Pre-training on multiple RC datasets improves transfer and generalization and can reduce the cost of example annotation.
A new model, MultiQA, that improves state-of-the-art performance on five datasets.
Infrastructure for easily performing experiments on multiple RC datasets.
We describe the 10 datasets used for our investigation. Each dataset provides question-context-answer triples for training, and a model maps an unseen question-context pair to an answer. For simplicity, we focus on the single-turn extractive setting, where the answer is a span in the context. Thus, we do not evaluate abstractive Nguyen et al. (2016) or conversational datasets Choi et al. (2018); Reddy et al. (2018).
We broadly distinguish large datasets that include more than 75K examples, from small datasets that contain less than 75K examples. In §4, we will fix the size of the large datasets to control for size effects, and always train on exactly 75K examples per dataset.
We now briefly describe the datasets and provide a summary of their characteristics in Table 1. The table shows the original size of each dataset, the source of the context, how questions were generated, and whether the dataset was specifically designed to probe multi-hop reasoning.
The large datasets used are:
SQuAD Rajpurkar et al. (2016): Crowdsourcing workers were shown Wikipedia paragraphs and were asked to author questions about their content. Questions mostly require soft matching of the language in the question to a local context in the text.
NewsQA Trischler et al. (2017): Crowdsourcing workers were shown a CNN article (longer than SQuAD) and were asked to author questions about its content.
SearchQA Dunn et al. (2017): Trivia questions were taken from the Jeopardy! TV show, and contexts are web snippets retrieved from the Google search engine for those questions, with an average of 50 snippets per question.
TriviaQA Joshi et al. (2017): Trivia questions were crawled from the web. In one variant of TriviaQA (termed TQA-W), Wikipedia pages related to the questions are provided for each question. In another, web snippets and documents from Bing search engine are given. For the latter variant, we use only the web snippets in this work (and term this TQA-U). In addition, we replace Bing web snippets with Google web snippets (and term this TQA-G).
HotpotQA Yang et al. (2018): Crowdsourcing workers were shown pairs of related Wikipedia paragraphs and asked to author questions that require multi-hop reasoning over the paragraphs. There are two versions of HotpotQA: the first where the context includes the two gold paragraphs and eight “distractor” paragraphs, and a second, where 10 paragraphs retrieved by an information retrieval (IR) system are given. Here, we use the latter version.
The small datasets are:
CWQ Talmor and Berant (2018c): Crowdsourcing workers were shown compositional formal queries against Freebase and were asked to re-phrase them in natural language. Thus, questions require multi-hop reasoning. The original work assumed models contain an IR component, but the authors also provided default web snippets, which we use here. We used the re-partitioned version 1.1 Talmor and Berant (2018a).
WikiHop Welbl et al. (2017): Questions are entity-relation pairs from Freebase, and are not phrased in natural language. Multiple Wikipedia paragraphs are given as context, and the dataset was constructed such that multi-hop reasoning is needed for answering the question.
ComQA Abujabal et al. (2018): Questions are real user questions from the WikiAnswers community QA platform. No contexts are provided, and thus we augment the questions with web snippets retrieved from the Google search engine.
DROP Dua et al. (2019): Contexts are Wikipedia paragraphs and questions are authored by crowdsourcing workers. This dataset focuses on quantitative reasoning. Because most questions are not extractive, we only use the 33,573 extractive examples in the dataset (but evaluate on the entire development set).
We carry out our empirical investigation using two models. The first is DocQA Clark and Gardner (2018), and the second is based on BERT Devlin et al. (2019), which we term BertQA. We now describe the pre-processing of the datasets and provide a brief description of the models. We emphasize that in all our experiments we use exactly the same training procedure for all datasets, with minimal hyper-parameter tuning.
Examples in all datasets contain a question, text documents, and an answer. To generate an extractive example we: (a) Split: we define a maximum length $L$ and split every paragraph whose length exceeds $L$ into chunks, using a few manual rules. (b) Sort: we sort all chunks (paragraphs whose length is at most $L$, or split paragraphs) by cosine similarity to the question in tf-idf space, as proposed by Clark and Gardner (2018). (c) Merge: we go over the sorted list of chunks and greedily merge them into segments of the largest possible length that is at most $L$, so that the RC model is exposed to as much context as possible. The final context is the merged list of chunks. (d) Mark: we take the gold answer and mark all spans in the context that match it.
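The split-sort-merge-mark pipeline can be sketched as follows. This is a minimal stdlib-only illustration: the maximum chunk length, the whitespace tokenization, the hard token-level split (standing in for the paper's manual splitting rules), and the tf-idf implementation are all our own simplifying assumptions, not the exact pre-processing code.

```python
import math
import re
from collections import Counter

MAX_LEN = 400  # hypothetical maximum chunk length (tokens); the paper's value may differ

def tfidf_cosine(question, chunks):
    """Cosine similarity between the question and each chunk in tf-idf space."""
    docs = [re.findall(r"\w+", t.lower()) for t in [question] + chunks]
    df = Counter(w for d in docs for w in set(d))  # document frequency per word
    n = len(docs)
    def vec(d):
        tf = Counter(d)
        return {w: tf[w] * math.log(n / df[w]) for w in tf}
    q = vec(docs[0])
    qn = math.sqrt(sum(x * x for x in q.values()))
    sims = []
    for d in docs[1:]:
        v = vec(d)
        dot = sum(q[w] * v[w] for w in q if w in v)
        norm = qn * math.sqrt(sum(x * x for x in v.values()))
        sims.append(dot / norm if norm else 0.0)
    return sims

def build_context(question, paragraphs, answer, max_len=MAX_LEN):
    # (a) Split: break over-long paragraphs into chunks of at most max_len tokens.
    chunks = []
    for p in paragraphs:
        toks = p.split()
        chunks += [" ".join(toks[i:i + max_len]) for i in range(0, len(toks), max_len)]
    # (b) Sort: order chunks by tf-idf cosine similarity to the question.
    ranked = [c for _, c in sorted(zip(tfidf_cosine(question, chunks), chunks),
                                   key=lambda x: -x[0])]
    # (c) Merge: greedily pack the sorted chunks into segments of at most max_len tokens.
    merged, cur = [], []
    for c in ranked:
        t = c.split()
        if cur and len(cur) + len(t) > max_len:
            merged.append(" ".join(cur))
            cur = []
        cur += t
    if cur:
        merged.append(" ".join(cur))
    # (d) Mark: record every span matching the gold answer, as (segment, start, end).
    spans = [(i, m.start(), m.end())
             for i, seg in enumerate(merged)
             for m in re.finditer(re.escape(answer), seg)]
    return merged, spans
```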
DocQA Clark and Gardner (2018): A widely-used RC model, based on BiDAF Seo et al. (2016), that encodes the question and document with bidirectional RNNs, performs attention between the question and document, and adds self-attention on the document side.
We run DocQA on each chunk, where the input is a sequence of tokens (up to the maximum chunk length) represented as GloVe embeddings Pennington et al. (2014). The output is a distribution over the start and end positions of the predicted span, and we output the span with the highest probability across all chunks. At training time, DocQA uses a shared-norm objective that normalizes the probability distribution over spans from all chunks. We define the gold span to be the first occurrence of the gold answer in the context.
BertQA Devlin et al. (2019): For each chunk, we apply the standard implementation, where the input is a sequence of wordpiece tokens composed of the question and chunk separated by special tokens: [CLS] <question> [SEP] <chunk> [SEP]. A linear layer with a softmax over the top-layer representations outputs a distribution over start and end span positions.
We train over each chunk separately, back-propagating into BERT’s parameters. We maximize the log-likelihood of the first occurrence of the gold answer in each chunk that contains the gold answer. At test time, we output the span with the maximal logit across all chunks.
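The test-time selection above (output the span with the maximal logit across all chunks) can be sketched as a small scoring function. The `max_answer_len` cap is a common heuristic we assume here; it is not stated in the text.

```python
def best_span(chunk_logits, max_answer_len=30):
    """Return (chunk_index, start, end) of the highest-scoring span across chunks.

    chunk_logits: one (start_logits, end_logits) pair per chunk, where each
    element is a list of per-token logits. A span's score is the sum of its
    start and end logits, mirroring maximal-logit selection.
    """
    best_score, best = float("-inf"), None
    for ci, (starts, ends) in enumerate(chunk_logits):
        for i, s in enumerate(starts):
            # Only consider spans that end at or after their start, up to a length cap.
            for j in range(i, min(i + max_answer_len, len(ends))):
                if s + ends[j] > best_score:
                    best_score, best = s + ends[j], (ci, i, j)
    return best
```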
4 Controlled Experiments
We now present controlled experiments aiming to explore generalization and transfer of models trained on a set of RC datasets to a new target.
4.1 Do models generalize to unseen datasets?
We first examine generalization – whether models trained on one dataset generalize to examples from a new distribution. While different datasets differ substantially, there is overlap between them in terms of: (i) the language of the question, (ii) the language of the context, and (iii) the type of linguistic phenomena the dataset aims to probe. Our goal is to answer (a) do models over-fit to a particular dataset? How much does performance drop when generalizing to a new dataset? (b) Which datasets generalize better to which datasets? What properties determine generalization?
We train DocQA and BertQA (we use BERT-base) on six large datasets (for TriviaQA we use both TQA-G and TQA-W), taking 75K examples from each dataset to control for size. We also create Multi-75K, which contains 15K examples from each of the five large datasets (using TQA-G only for TriviaQA), resulting in another dataset of 75K examples. We evaluate performance on all datasets that the model was not trained on.
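Constructing a mixed dataset such as Multi-75K amounts to sampling an equal share from each source and shuffling; a minimal sketch, where the function name, seed, and example format are our own choices.

```python
import random

def build_mixed_dataset(datasets, total=75_000, seed=13):
    """Sample an equal share of examples from each source dataset and shuffle.

    datasets: mapping from dataset name to a list of example dicts.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    share = total // len(datasets)
    mixed = []
    for name, examples in datasets.items():
        for ex in rng.sample(examples, min(share, len(examples))):
            mixed.append({"dataset": name, **ex})  # tag each example with its source
    rng.shuffle(mixed)
    return mixed
```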
Table 2 shows exact match (EM) performance (does the predicted span exactly match the gold span) on the development set. The row Self corresponds to training and testing on the target itself, and is provided for reference (For DROP, we train on questions where the answer is a span in the context, but evaluate on the entire development set). The top part shows DocQA, while the bottom BertQA.
At a high-level we observe three trends. First, models generalize poorly in this zero-shot setup: comparing Self to the best zero-shot number shows a performance reduction of 31.5% on average. This confirms the finding that models over-fit to the particular dataset. Second, BertQA substantially improves generalization compared to DocQA
owing to the power of large-scale unsupervised learning – performance improves by 21.2% on average. Last, Multi-75K performs almost as well as the best source dataset, reducing performance by only 3.7% on average. Hence, training on multiple datasets results in robust generalization. We further investigate training on multiple datasets in §4.2 and §5.
Taking a closer look, the pair SearchQA and TQA-G exhibits the smallest performance drop, since both use trivia questions and web snippets. SQuAD and NewsQA also generalize well (especially with BertQA), probably because they contain questions on a single document, focusing on predicate-argument structure. While HotpotQA and WikiHop both examine multi-hop reasoning over Wikipedia, performance dramatically drops from HotpotQA to WikiHop. This is due to the difference in the language of the questions (WikiHop questions are synthetic). The best generalization to DROP is from HotpotQA, since both require multi-hop reasoning. Performance on DROP is overall low, showing that our models struggle with quantitative reasoning.
For the small datasets, ComQA, CQ, and CWQ, generalization is best with TQA-G, as the contexts in these datasets are web snippets. For CQ, whose training set has 1,300 examples, zero-shot performance is even higher than Self.
Interestingly, BertQA improves performance substantially compared to DocQA on NewsQA, SQuAD, TQA-W and WikiHop, but only moderately on HotpotQA, SearchQA, and TQA-G. This hints that BERT is effective when the context is similar to (or even part of) its training corpus, but degrades over web snippets. This is most evident when comparing TQA-G to TQA-W, as the only difference between them is the type of context.
To view the global structure of the datasets, we visualize them with the force-directed placement algorithm Fruchterman and Reingold (1991). The input is a set of nodes (datasets), and a set of undirected edges representing springs in a mechanical system pulling nodes towards one another. Edges specify the pulling force, and a physical simulation places the nodes in a final minimal energy state in 2D-space.
Let $e_{s \to t}$ be the performance when training BertQA on dataset $s$ and evaluating on dataset $t$, and let $e_{t \to t}$ be the performance when training and evaluating on $t$. The force between an unordered pair of datasets $\{s, t\}$ is the average of the normalized scores $e_{s \to t}/e_{t \to t}$ and $e_{t \to s}/e_{s \to s}$ when we train and evaluate in both directions, and $e_{s \to t}/e_{t \to t}$ if we train on $s$ and evaluate on $t$ only.
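As a sketch, the force for a pair of datasets can be computed directly from the cross-evaluation scores. The exact normalization here (each available direction contributes a self-normalized score, and the force averages them) is our reading of the definition, so treat it as an assumption.

```python
def force(perf, self_perf, s, t):
    """Spring force between datasets s and t for the force-directed layout.

    perf[(a, b)]: score when training on a and evaluating on b (the key is
    absent if that direction was not run); self_perf[a]: score when training
    and evaluating on a. Each available direction contributes a normalized
    score, and the force is their average.
    """
    scores = []
    if (s, t) in perf:
        scores.append(perf[(s, t)] / self_perf[t])
    if (t, s) in perf:
        scores.append(perf[(t, s)] / self_perf[s])
    return sum(scores) / len(scores) if scores else 0.0
```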
Figure 1 shows this visualization, where we observe that datasets cluster naturally according to shape and color. Focusing on the context, datasets with web snippets are clustered (triangles), while datasets that use Wikipedia are also near one another (circles). Considering the question language, TQA-G, SearchQA, and TQA-U are very close (blue triangles), as all contain trivia questions over web snippets. DROP, HotpotQA, NewsQA and SQuAD generate questions with crowd workers, and all are at the top of the figure. WikiHop
uses synthetic questions that prevent generalization, and is far from other datasets – however this gap will be closed during transfer learning (§4.2). DROP is far from all datasets because it requires quantitative reasoning that is missing from other datasets. However, it is relatively close to HotpotQA and WikiHop, which target multi-hop reasoning. DROP is also close to SQuAD, as both have similar contexts and question language, but the linguistic phenomena they target differ.
Does generalization improve with more data?
So far we trained on datasets with 75K examples. To examine generalization as the training set size increases, we evaluate performance as the number of examples from the five large datasets grows. Table 3 shows that generalization improves by 26% on average when increasing the number of examples from 37K to 375K.
4.2 Does pre-training improve results on small datasets?
We now consider transfer learning, assuming access to a small number of examples (15K) from a target dataset. We pre-train a model on a source dataset, and then fine-tune on the target. In all models, pre-training and fine-tuning are identical and performed until no improvement is seen on the development set (early stopping). Our goal is to analyze whether pre-training improves performance compared to training on the target alone. This is particularly interesting with BertQA, as BERT already contains substantial knowledge that might render pre-training unnecessary.
How to choose the dataset to pre-train on?
Table 4 shows exact match (EM) on the development set of all datasets (rows are the trained datasets and columns the evaluated datasets). Pre-training on a source RC dataset and transferring to the target improves performance by 21% on average for DocQA (improving on 8 out of 11 datasets), and by 7% on average for BertQA (improving on 10 out of 11 datasets). Thus, pre-training on a related RC dataset helps even given representations from a model like BertQA.
Second, Multi-75K obtains good performance in almost all setups. Performance of Multi-75K is 3% lower than the best source RC dataset on average for DocQA, and 0.3% lower for BertQA. Hence, one can pre-train a single model on a mixed dataset, rather than choose the best source dataset for every target.
Third, in 4 datasets (ComQA, DROP, HotpotQA, WikiHop) the best source dataset uses web snippets in DocQA, but Wikipedia in BertQA. This strengthens our finding that BertQA performs better given Wikipedia text.
Last, we see a dramatic improvement in performance compared to §4.1. This highlights that current models over-fit to the data they are trained on, and that small amounts of data from the target distribution can overcome this generalization gap. This is clearest for WikiHop, where synthetic questions preclude generalization, but fine-tuning improves performance from 12.6 EM to 50.5 EM. Thus, the low performance was not due to a modeling issue, but rather to a mismatch in the question language.
An interesting question is whether performance in the generalization setup is predictive of performance in the transfer setup. Average performance across target datasets in Table 4, when choosing the best source dataset from Table 4, is 39.3 (DocQA) and 43.8 (BertQA). Average performance across datasets in Table 4, when choosing the best source dataset from Table 2, is 38.9 (DocQA) and 43.5 (BertQA). Thus, one can select a dataset to pre-train on based on generalization performance and suffer a minimal hit in accuracy, without fine-tuning on each dataset. However, training on Multi-75K also yields good results without selecting a source dataset at all.
How much target data is needed?
We saw that with 15K training examples from the target dataset, pre-training improves performance. We now ask whether this effect persists given a larger training set. To examine this, we measure (Figure 2) the performance on each of the large datasets when pre-training on its nearest dataset (according to the force defined in §4.1), for both DocQA (top row) and BertQA (bottom row). The orange curve corresponds to training on the target dataset only, while the blue curve describes pre-training on 75K examples from a source dataset and then fine-tuning on an increasing number of examples from the target dataset.
In 5 out of 10 curves, pre-training improves performance even given access to all 75K examples from the target dataset. In the other 5, using only the target dataset is better after 30-50K examples. To estimate the savings in annotation costs through pre-training, we measure how many examples are needed, when doing pre-training, to reach 95% of the performance obtained when training on all examples from the target dataset. We find that with pre-training we only need 49% of the examples to reach 95% performance, compared to 86% without pre-training.
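The annotation-savings estimate above can be read directly off a learning curve; a minimal sketch, assuming the curve is given as (training-set size, score) points sorted by size.

```python
def examples_for_fraction(curve, frac=0.95):
    """Smallest training-set size on a learning curve that reaches `frac`
    of the final (largest-data) performance.

    curve: list of (num_examples, score) points sorted by num_examples.
    """
    target = frac * curve[-1][1]  # e.g., 95% of the full-data score
    for num_examples, score in curve:
        if score >= target:
            return num_examples
    return curve[-1][0]
```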
To further explore pre-training on multiple datasets, we plot a curve (green) for BertQA, where at each point we train on a fixed number of examples from all five large datasets (no fine-tuning). We observe that more data from multiple datasets improves performance in almost all cases. In this case, we reach 95% of the final performance using 30% of the examples only. We will use this observation further in §5 to reach new state-of-the-art performance on several datasets.
4.3 Does context augmentation improve performance?
For TriviaQA we have, for all questions, contexts from three different sources: Wikipedia (TQA-W), Bing web snippets (TQA-U), and Google web snippets (TQA-G). Thus, we can explore whether combining the three datasets improves performance. Moreover, because questions are identical across the datasets, we can isolate the effect of the context language on generalization.
Table 5 shows the results. In the first 3 rows we train on 75K examples from each dataset, and in the last we train on the combined 225K examples. First, we observe that context augmentation substantially improves performance (especially for TQA-G and TQA-W). Second, generalization is sensitive to the context type: performance substantially drops when training on one context type and evaluating on another (e.g., 48.4 EM for TQA-G).
Dataset | BERT-large Dev. (EM / tok. F1) | MultiQA Dev. (EM / tok. F1) | MultiQA Test (EM / tok. F1) | SOTA (EM / tok. F1)
TQA-U | 56.8 / 62.6 | 58.4 / 64.3 | - / - | 52.0 / 61.7
HotpotQA | 27.9 / 37.7 | 30.6 / 40.3 | 30.7 / 40.2 | 37.1 / 48.9
Results for datasets where the official evaluation metric is EM and token F1. The CWQ evaluation script provides only the EM metric. We did not find a public evaluation script for the hidden test set of TQA-U.
We now present MultiQA, a BERT-based model, trained on multiple RC datasets, that obtains new state-of-the-art results on several datasets.
Does training on multiple datasets improve BertQA?
MultiQA trains BertQA on the Multi-375K dataset presented above, which contains 75K examples from each of the 5 large datasets, but uses BERT-large rather than BERT-base. For small target datasets, we fine-tune the model on these datasets, since they were not observed when training on Multi-375K. For large datasets, we do not fine-tune: we found that fine-tuning on datasets that are already part of Multi-375K does not improve performance (we assume this is due to the high capacity of BERT-large), and thus we use one model for all the large datasets. Since we train on Multi-375K, our model does not use all examples from the original datasets that contain more than 75K examples.
We use the official evaluation script for any dataset that provides one, and the SQuAD evaluation script for all other datasets. Table 6 shows results for datasets where the evaluation metric is EM or token F1 (the harmonic mean of precision and recall over tokens in the predicted vs. gold span). Table 7 shows results for datasets where the evaluation metric is average recall/precision/F1 between the list of predicted answers and the list of gold answers.
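For concreteness, the EM and token-F1 metrics can be computed as in the standard SQuAD evaluation; this sketch mirrors the official script's normalization (lowercasing, stripping punctuation and articles), though the exact per-dataset scripts used in this work may differ in details.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    """1.0 if the normalized prediction equals the normalized gold answer."""
    return float(normalize(pred) == normalize(gold))

def token_f1(pred, gold):
    """Harmonic mean of token-level precision and recall after normalization."""
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())  # shared tokens (with multiplicity)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```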
We compare MultiQA to BERT-large, a model that does not train on Multi-375K, but only fine-tunes BERT-large on the target dataset. We also show the state-of-the-art (SOTA) result for all datasets for reference. (State-of-the-art results were found in Tay et al. (2018) for NewsQA, in Lin et al. (2018) for SearchQA, in Das et al. (2019) for TQA-U, in Talmor and Berant (2018b) for CWQ, in Ding et al. (2019) for HotpotQA, in Abujabal et al. (2018) for ComQA, and in Bao et al. (2016) for CQ.)
MultiQA improves state-of-the-art performance on five datasets, although it does not even train on all examples in the large datasets. (We compare only to models for which we found a publication. For TQA-U, Figure 4 in Clark and Gardner (2018) shows roughly 67 F1 on the development set, but no exact number. For CQ we compare against SOTA achieved on the web-snippets context; on the Freebase context, SOTA is 42.8 F1 Luo et al. (2018).) MultiQA improves performance compared to BERT-large in all cases. This improvement is especially noticeable on small datasets such as ComQA, CWQ, and CQ. Moreover, on NewsQA, MultiQA surpasses human performance as measured by the creators of the dataset (46.5 EM, 69.4 F1; Trischler et al. (2017)), improving upon previous state-of-the-art by a large margin.
To conclude, MultiQA is able to improve state-of-the-art performance on multiple datasets. Our results suggest that in many NLU tasks the size of the dataset is the main bottleneck rather than the model itself.
Does training on multiple datasets improve resiliency against adversarial attacks?
Finally, we evaluated MultiQA on adversarial SQuAD Jia and Liang (2017), where a misleading sentence is appended to each context (AddSent variant). MultiQA obtained 66.7 EM and 73.1 F1, outperforming BERT-large (60.4 EM, 66.3 F1) by a significant margin, and also substantially improving state-of-the-art results (56.0 EM, 61.3 F1, Hu et al. (2018), and 52.1 EM, 62.7 F1, Wang et al. (2018)).
6 Related Work
Prior work has shown that RC performance can be improved by training on a large dataset and transferring to a smaller one, but at a small scale Min et al. (2017); Chung et al. (2018). Sun et al. (2018) have recently shown this in a larger experiment for multiple-choice questions, where they first fine-tuned BERT on RACE Lai et al. (2017) and then fine-tuned on several smaller datasets.
Interest in learning general-purpose representations for natural language through unsupervised, multi-task, and transfer learning has been sky-rocketing lately Peters et al. (2018); Radford et al. (2018); McCann et al. (2018); Chronopoulou et al. (2019); Phang et al. (2018); Wang et al. (2019). In parallel to our work, studies that focus on generalization have appeared on preprint servers, empirically studying generalization to multiple tasks Yogatama et al. (2019); Liu et al. (2019). Our work is part of this research thread on generalization in natural language understanding, focusing on reading comprehension, which we view as an important and broad language understanding task.
In this work we performed a thorough empirical investigation of generalization and transfer over 10 RC datasets. We characterized the factors affecting generalization and obtained several state-of-the-art results by training on 375K examples from 5 RC datasets. We open source our infrastructure for easily performing experiments on multiple RC datasets, for the benefit of the community.
We highlight several practical take-aways:
Pre-training on multiple source RC datasets consistently improves performance on a target RC dataset, even in the presence of BERT representations. It also leads to a substantial reduction in the number of training examples needed to reach a fixed performance.
Training the high-capacity BERT-large representations over multiple RC datasets leads to good performance on all of the trained datasets without having to fine-tune on each dataset separately.
BERT representations improve generalization, but their effect is moderate when the source of the context is web snippets compared to Wikipedia and newswire.
Performance over an RC dataset can be improved by retrieving web snippets for all questions and adding them as examples (context augmentation).
We thank the anonymous reviewers for their constructive feedback. This work was completed in partial fulfillment for the PhD degree of Alon Talmor. This research was partially supported by The Israel Science Foundation grant 942/16, The Blavatnik Computer Science Research Fund and The Yandex Initiative for Machine Learning.
- Abujabal et al. (2018) A. Abujabal, R. S. Roy, M. Yahya, and G. Weikum. 2018. ComQA: A community-sourced dataset for complex factoid question answering with paraphrase clusters. arXiv preprint arXiv:1809.09528.
- Bao et al. (2016) J. Bao, N. Duan, Z. Yan, M. Zhou, and T. Zhao. 2016. Constraint-based question answering with knowledge graph. In International Conference on Computational Linguistics (COLING).
- Berant et al. (2014) J. Berant, V. Srikumar, P. Chen, A. V. Linden, B. Harding, B. Huang, P. Clark, and C. D. Manning. 2014. Modeling biological processes for reading comprehension. In Empirical Methods in Natural Language Processing (EMNLP).
- Bollacker et al. (2008) K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In International Conference on Management of Data (SIGMOD), pages 1247–1250.
- Choi et al. (2018) E. Choi, H. He, M. Iyyer, M. Yatskar, W. Yih, Y. Choi, P. Liang, and L. Zettlemoyer. 2018. QuAC: Question answering in context. In Empirical Methods in Natural Language Processing (EMNLP).
- Chronopoulou et al. (2019) A. Chronopoulou, C. Baziotis, and A. Potamianos. 2019. An embarrassingly simple approach for transfer learning from pretrained language models. arXiv preprint arXiv:1902.10547.
- Chung et al. (2018) Y. Chung, H. Lee, and J. Glass. 2018. Supervised and unsupervised transfer learning for question answering. In North American Association for Computational Linguistics (NAACL).
- Clark and Gardner (2018) C. Clark and M. Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Association for Computational Linguistics (ACL).
- Das et al. (2019) R. Das, S. Dhuliawala, M. Zaheer, and A. McCallum. 2019. Multi-step retriever-reader interaction for scalable open-domain question answering. In International Conference on Learning Representations (ICLR).
- Devlin et al. (2019) J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Association for Computational Linguistics (NAACL).
- Ding et al. (2019) M. Ding, C. Zhou, Q. Chen, H. Yang, and J. Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In Association for Computational Linguistics (ACL).
- Dua et al. (2019) D. Dua, Y. Wang, P. Dasigi, G. Stanovsky, S. Singh, and M. Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Association for Computational Linguistics (NAACL).
- Dunn et al. (2017) M. Dunn, L. Sagun, M. Higgins, U. Guney, V. Cirik, and K. Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. arXiv preprint.
- Fruchterman and Reingold (1991) T. M. Fruchterman and E. M. Reingold. 1991. Graph drawing by force-directed placement. Software: Practice and experience, 21(11):1129–1164.
- Gardner et al. (2018) M. Gardner, J. Grus, M. Neumann, O. Tafjord, P. Dasigi, N. Liu, M. Peters, M. Schmitz, and L. Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640.
- Hermann et al. (2015) K. M. Hermann, T. Kočiský, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NeurIPS).
- Hu et al. (2018) M. Hu, Y. Peng, F. Wei, Z. Huang, D. Li, N. Yang, and M. Zhou. 2018. Attention-guided answer distillation for machine reading comprehension. In Empirical Methods in Natural Language Processing (EMNLP).
- Jia and Liang (2017) R. Jia and P. Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Empirical Methods in Natural Language Processing (EMNLP).
- Joshi et al. (2017) M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Association for Computational Linguistics (ACL).
- Lai et al. (2017) G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683.
- Lin et al. (2018) Y. Lin, H. Ji, Z. Liu, and M. Sun. 2018. Denoising distantly supervised open-domain question answering. In Association for Computational Linguistics (ACL), volume 1, pages 1736–1745.
- Liu et al. (2019) X. Liu, P. He, W. Chen, and J. Gao. 2019. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504.
- Luo et al. (2018) K. Luo, F. Lin, X. Luo, and K. Q. Zhu. 2018. Knowledge base question answering via encoding of complex query graphs. In Empirical Methods in Natural Language Processing (EMNLP).
- McCann et al. (2018) B. McCann, N. S. Keskar, C. Xiong, and R. Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.
- Min et al. (2017) S. Min, M. Seo, and H. Hajishirzi. 2017. Question answering through transfer learning from large fine-grained supervision data. In Association for Computational Linguistics (ACL).
- Nguyen et al. (2016) T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Workshop on Cognitive Computing at NIPS.
- Pennington et al. (2014) J. Pennington, R. Socher, and C. D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
- Peters et al. (2018) M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. 2018. Deep contextualized word representations. In North American Association for Computational Linguistics (NAACL).
- Phang et al. (2018) J. Phang, T. Fevry, and S. R. Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.
- Radford et al. (2018) A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI.
- Rajpurkar et al. (2016) P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP).
- Reddy et al. (2018) S. Reddy, D. Chen, and C. D. Manning. 2018. CoQA: A conversational question answering challenge. arXiv preprint arXiv:1808.07042.
- Richardson et al. (2013) M. Richardson, C. J. Burges, and E. Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP), pages 193–203.
- Seo et al. (2016) M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv.
- Sun et al. (2018) K. Sun, D. Yu, D. Yu, and C. Cardie. 2018. Improving machine reading comprehension with general reading strategies. arXiv preprint arXiv:1810.13441.
- Talmor and Berant (2018a) A. Talmor and J. Berant. 2018a. Repartitioning of the complexwebquestions dataset. arXiv preprint arXiv:1807.09623.
- Talmor and Berant (2018c) A. Talmor and J. Berant. 2018c. The web as knowledge-base for answering complex questions. In North American Association for Computational Linguistics (NAACL).
- Talmor et al. (2017) A. Talmor, M. Geva, and J. Berant. 2017. Evaluating semantic parsing against a simple web-based question answering model. In *SEM.
- Tay et al. (2018) Y. Tay, L. Tuan, S. Hui, and J. Su. 2018. Densely connected attention propagation for reading comprehension. In Advances in Neural Information Processing Systems (NeurIPS).
- Trischler et al. (2017) A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman. 2017. NewsQA: A machine comprehension dataset. In Workshop on Representation Learning for NLP.
- Wang et al. (2019) A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations (ICLR).
- Wang et al. (2018) W. Wang, M. Yan, and C. Wu. 2018. Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering. In Association for Computational Linguistics (ACL).
- Welbl et al. (2017) J. Welbl, P. Stenetorp, and S. Riedel. 2017. Constructing datasets for multi-hop reading comprehension across documents. arXiv preprint arXiv:1710.06481.
- Yang et al. (2018) Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Empirical Methods in Natural Language Processing (EMNLP).
- Yogatama et al. (2019) D. Yogatama, C. de Masson d’Autume, J. Connor, T. Kocisky, M. Chrzanowski, L. Kong, A. Lazaridou, W. Ling, L. Yu, C. Dyer, et al. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373.