Machine Reading Comprehension: A Literature Review

06/30/2019 ∙ by Xin Zhang, et al. ∙ Peking University

Machine reading comprehension aims to teach machines to understand a text like a human and is a challenging new direction in Artificial Intelligence. This article summarizes recent advances in MRC, mainly focusing on two aspects (i.e., corpora and techniques). The specific characteristics of various MRC corpora are listed and compared, and the main ideas of some typical MRC techniques are also described.


1 Introduction

Over the past decades, there has been a growing interest in making machines understand human languages, and recently great progress has been made in machine reading comprehension (MRC). In one view, the recent tasks titled MRC can also be seen as extended tasks of question answering (QA).

As early as 1965, Simmons had summarized a dozen QA systems proposed over the preceding five years in his review [simmons1964answering]. The survey by Hirschman and Gaizauskas [hirschman2001natural] classifies those QA models into three categories, namely natural language front ends to databases, dialogue interactive advisory systems, and question answering and story comprehension. QA systems in the first category, like the BASEBALL [green1961baseball] and LUNAR [woods1973progress] systems, usually transform a natural language question into a query against a structured database based on linguistic knowledge. Although they performed fairly well on certain tasks, they suffered from the constraints of the narrow domain of the database. As for the dialogue interactive advisory systems, including SHRDLU [winograd1972understanding] and GUS [bobrow1977gus], early models also used a database as their knowledge source. Problems like ellipsis and anaphora in the conversation, which those systems struggled to deal with, remain a challenge even for today's models. The last category can be seen as the origin of modern MRC tasks. Wendy Lehnert [lehnert1977conceptual] first proposed that QA systems should consider both the story and the question, and answer the question after necessary interpretation and inference. Lehnert also designed a system called QUALM [lehnert1977conceptual] according to her theory.

The past decade has witnessed huge development in the MRC field, including a surge in the number of corpora and great progress in techniques.

As for MRC corpora, plenty of datasets in different domains and styles have been released in recent years. In 2013, MCTest [richardson2013mctest] was released as a multiple-choice reading comprehension dataset, which was of high quality but too small to train neural models. In 2015, CNN/Daily Mail [ref_cnn] and CBT [ref_cbt] were released. These two datasets were generated automatically from different domains and were much larger than previous datasets. In 2016, SQuAD [ref_squad] appeared as the first large-scale dataset with questions and answers written by humans, and many techniques have been proposed along with the competition on this dataset. In the same year, MS MARCO [nguyen2016ms] was released with an emphasis on narrative answers. Subsequently, NewsQA [ref_newsqa] and NarrativeQA [kovcisky2018narrativeqa] were constructed in similar paradigms to SQuAD and MS MARCO respectively, and both were crowdsourced in the expectation of high quality. Various datasets sourced from different domains sprang up in the following two years, including RACE [lai2017large], CLOTH [xie2017cloth] and ARC [clark2018arc], which were collected from exams, TriviaQA [ref_triviaqa], which was based on trivia questions, and MCScript [ostermann2018mcscript], which primarily focused on scripts. Released in 2018, WikiHop [ref_wikihop] aimed at examining systems' ability of multi-hop reasoning, and CoQA [reddy2018coqa] was proposed to test the conversational ability of models.

The appearance of the large-scale datasets above makes training an end-to-end neural MRC model possible. While competing on the leaderboards, many models and techniques were developed in an attempt to conquer a certain dataset. From word representations and attention mechanisms to high-level architectures, neural models evolved rapidly and even surpassed human performance on some tasks.

In this article, we aim to make an extensive review of recent datasets and techniques for MRC. In Section 2, we categorize the MRC datasets into three types and describe them briefly. In Section 3, we introduce the traditional non-neural methods, neural network based models and attention mechanisms which have been used in MRC tasks. Finally, Section 4 concludes our review.

2 MRC Corpora

The fast development of the MRC field is driven by various large and realistic datasets released in recent years. Each dataset is usually composed of documents and questions for testing document understanding ability. The answers to the raised questions can be obtained by seeking within the documents or selecting from preset options. Here, according to the format of the answers, we classify the datasets into three types, namely datasets with extractive answers, with descriptive answers and with multiple-choice answers, and introduce them respectively in the following subsections. In parallel to this survey, new datasets [hotpotqa; drop; googlenaturalquestions] are steadily coming out with more diverse task formulations, testing more complicated understanding and reasoning abilities.

2.1 Datasets With Extractive Answers

To test a system's reading comprehension ability, this kind of dataset, which originates from Cloze-style [ref_cloze] questions, first provides the system with a large number of documents or passages, and then feeds it questions whose answers are segments of the corresponding passages. A good system should select a correct text span from a given context. Such comprehension tests are appealing because they are objectively gradable and may measure a range of important abilities, from basic understanding to complex inference [ref_richardson].

Whether sourced from crowdworkers or generated automatically from different corpora, these datasets all use a text span in the document as the answer to the proposed question. Many of those released in recent years are large enough for training strong neural models. These datasets include SQuAD, CNN/Daily Mail, CBT, NewsQA, TriviaQA and WIKIHOP, which are described briefly below.

SQuAD

One of the most famous datasets of this kind is the Stanford Question Answering Dataset (SQuAD) [ref_squad]. SQuAD v1.0 (https://stanford-qa.com) consists of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text (or span) from the corresponding reading passage. SQuAD v1.0 contains 107,785 question-answer pairs from 536 articles, which is much larger than previous manually labeled RC datasets. We quote some example question-answer pairs in Fig. 1, where each answer is a span of the document.

 

In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail… Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Short, intense periods of rain in scattered locations are called “showers”.

Q: What causes precipitation to fall?
A: gravity
Q: What is another main form of precipitation besides drizzle, rain, snow, sleet and hail?
A: graupel
Q: Where do water droplets collide with ice crystals to form precipitation?
A: within a cloud

 

Figure 1: Question-answer pairs for a sample passage in SQuAD [ref_squad].

In SQuAD v1.0 [ref_squad], the answers belong to different categories, as shown in Table 1. Common noun phrases make up 31.8% of the whole data, proper noun phrases (consisting of person, location and other entities) make up 32.6%, and the remaining third consists of dates, numbers, adjective phrases, verb phrases, clauses and so on. This indicates that the answers of SQuAD v1.0 display reasonable diversity. As for the reasoning skills needed to answer the questions, by manually annotating some examples the authors show that all examples have at least some lexical or syntactic divergence between the question and the answer in the passage.

Answer type Percentage Example
Date 8.9% 19 October 1512
Other Numeric 10.9% 12
Person 12.9% Thomas Coke
Location 4.4% Germany
Other Entity 15.3% ABC Sports
Common Noun Phrase 31.8% property damage
Adjective Phrase 3.9% second-largest
Verb Phrase 5.5% returned to Earth
Clause 3.7% to avoid trivialization
Other 2.7% quietly
Table 1: Answer type distribution in SQuAD [ref_squad]
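Because the answers are extractive spans, systems on SQuAD are conventionally scored with exact match (EM) and token-overlap F1 against the reference span. The following is a minimal sketch of these two metrics, assuming simple whitespace tokenization (the official evaluation script additionally normalizes punctuation and articles):

```python
from collections import Counter

def normalize(text):
    # lowercase and split on whitespace; the official script also strips
    # punctuation and articles, which we skip here for brevity
    return text.lower().split()

def exact_match(prediction, reference):
    # 1 if the normalized prediction equals the normalized reference span
    return normalize(prediction) == normalize(reference)

def f1_score(prediction, reference):
    # token-level F1: harmonic mean of precision and recall over the
    # multiset of overlapping tokens
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For instance, predicting "a cloud" against the reference "within a cloud" fails exact match but still earns an F1 of 0.8, which is why F1 is reported alongside EM.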

Later, SQuAD v2.0 [ref_squad_2] was released with an emphasis on unanswerable questions. This new version of SQuAD adds over 50,000 unanswerable questions which were created adversarially by crowdworkers based on the original ones. In order to challenge existing models, which tend to make unreliable guesses on questions whose answers are not stated in the context, the newly added questions are highly similar to the corresponding context and have plausible (but incorrect) answers in the context. We quote some examples in Fig. 2. The unanswerable questions in SQuAD v2.0 are posed by humans, and exhibit much more diversity and fidelity than those in other automatically constructed datasets [ref_addsent; ref_zero_shot]. In such cases, simple heuristics based on overlap [ref_overlap] or entity type recognition [ref_type_recog] are not able to distinguish answerable from unanswerable questions.

Article: Endangered Species Act

Paragraph: “ …Other legislation followed, including the Migratory Bird Conservation Act of 1929, a 1937 treaty prohibiting the hunting of right and gray whales, and the Bald Eagle Protection Act of 1940. These later laws had a low cost to society—the species were relatively rare—and little opposition was raised.”

Question 1: “Which laws faced significant opposition?”

Plausible Answer: later laws

Question 2: “What was the name of the 1937 treaty?”

Plausible Answer: Bald Eagle Protection Act

Figure 2: Unanswerable question examples with plausible (but incorrect) answers [ref_squad_2]

CNN/Daily Mail

The CNN and Daily Mail dataset [ref_cnn], released by Google DeepMind and the University of Oxford in 2015, is the first large-scale reading comprehension dataset constructed from natural language materials. Unlike most related work, which uses templates or syntactic/semantic rules to extract document-query-answer triples, this work collects 93k articles from CNN (www.cnn.com) and 220k articles from the Daily Mail (www.dailymail.co.uk) as the source text. Since each article comes with a number of bullet points summarizing it, these bullet points are converted into document-query-answer triples with Cloze-style [ref_cloze] questions.

To exclusively examine a system's reading comprehension ability rather than its use of world knowledge or co-occurrence statistics, the triples are further modified to construct an anonymized version. That is, each entity is replaced with an abstract entity marker, which is not easily predicted using world knowledge or an n-gram language model. An example data point and its anonymized version are shown in Table 2.
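The anonymization step can be sketched as follows. This is an illustrative simplification: the entity list and marker numbering here are hypothetical, and the original pipeline additionally relies on coreference resolution and shuffles the markers per example so that models cannot memorize entity identities across documents:

```python
import re

def anonymize(text, entities):
    # map each entity string to an abstract marker such as ent0, ent1, ...
    markers = {ent: "ent{}".format(i) for i, ent in enumerate(entities)}
    # replace longer entity strings first, so that a full name like
    # "Jeremy Clarkson" is matched before any shorter substring of it
    for ent in sorted(entities, key=len, reverse=True):
        text = re.sub(re.escape(ent), markers[ent], text)
    return text, markers
```

Applied to a snippet of the Table 2 example, `anonymize("Jeremy Clarkson was dropped by the BBC.", ["Jeremy Clarkson", "BBC"])` yields "ent0 was dropped by the ent1.", mirroring the anonymised column of the table.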

Some basic corpus statistics of CNN and Daily Mail are shown in Table 3. We also quote, in Table 4, the percentages of right answers appearing among the top N most frequent entities in a given document, which illustrates the difficulty of the questions to some extent.

Context (original): The BBC producer allegedly struck by Jeremy Clarkson will not press charges against the “Top Gear” host, his lawyer said Friday. Clarkson, who hosted one of the most-watched television shows in the world, was dropped by the BBC Wednesday after an internal investigation by the British broadcaster found he had subjected producer Oisin Tymon “to an unprovoked physical and verbal attack.” …
Context (anonymised): the ent381 producer allegedly struck by ent212 will not press charges against the “ ent153 ” host , his lawyer said friday . ent212 , who hosted one of the most - watched television shows in the world , was dropped by the ent381 wednesday after an internal investigation by the ent180 broadcaster found he had subjected producer ent193 “ to an unprovoked physical and verbal attack . ” …
Query (original): Producer X will not press charges against Jeremy Clarkson, his lawyer says.
Query (anonymised): producer X will not press charges against ent212 , his lawyer says.
Answer (original): Oisin Tymon
Answer (anonymised): ent193
Table 2: An example data point quoted from [ref_cnn]
                  CNN                        Daily Mail
                  train    valid    test     train    valid    test
# months          95       1        1        56       1        1
# documents       90,266   1,220    1,093    196,961  12,148   10,397
# queries         380,298  3,924    3,198    879,450  64,835   53,182
Max # entities    527      187      396      371      232      245
Avg # entities    26.4     26.5     24.5     26.5     25.5     26.0
Avg # tokens      762      763      716      813      774      780
Vocab size        118,497                    208,045
Table 3: Corpus statistics of CNN and Daily Mail [ref_cnn]
Top N    CNN     Daily Mail
1        30.5    25.6
2        47.7    42.4
3        58.1    53.7
5        70.6    68.1
10       85.1    85.5
Table 4: Cumulative percentage of correct answers contained in the top N most frequent entities in a given document, quoted from [ref_cnn].

CBT

The Children’s Book Test [ref_cbt] is part of the bAbI project of Facebook AI Research (https://research.fb.com/downloads/babi/), which aims at researching automatic text understanding and reasoning. Children’s books are chosen because they ensure a clear narrative structure, which aids this task. The children’s stories used in CBT come from books freely available from Project Gutenberg (https://www.gutenberg.org). Questions are formed by enumerating 21 consecutive sentences from book chapters, of which the first 20 sentences serve as context and the last one serves as query after removing one word. 10 candidates are selected from words appearing in either context or query. An example question is given in Fig. 3 and the dataset size is shown in Table 5.

In CBT, four distinct types of words, namely Named Entities, (Common) Nouns, Verbs and Prepositions (based on output from the POS tagger and named entity recognizer in the Stanford Core NLP Toolkit [ref_SCNLP]), are removed respectively to form 4 classes of questions. For each class of questions, the nine wrong candidates are selected randomly from words which have the same type as the answer in the corresponding context and query.
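The construction procedure above can be sketched as follows; the `type_words` set standing in for the POS-tagger output is a hypothetical input, and the blanking convention is illustrative:

```python
import random

def make_cbt_question(sentences, type_words, rng=None):
    """Build one CBT-style cloze item: of 21 consecutive sentences, the
    first 20 are context and the 21st becomes the query after removing
    one word of the target type (e.g. a common noun)."""
    rng = rng or random.Random(0)
    assert len(sentences) == 21
    context, last = sentences[:20], sentences[20].split()
    # blank out the first word of the target type in the last sentence
    answer = next(w for w in last if w in type_words)
    query = " ".join("XXXXX" if w == answer else w for w in last)
    # nine distractors of the same type, drawn from context and query
    pool = {w for s in sentences for w in s.split()} & set(type_words)
    distractors = rng.sample(sorted(pool - {answer}), 9)
    candidates = sorted(distractors + [answer])
    return context, query, candidates, answer
```

The ten returned candidates contain the removed word plus nine same-type distractors, matching the question format described above.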

Compared to human performance on this dataset, state-of-the-art models like Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) [hochreiter1997long] performed much worse when predicting nouns or named entities, whereas they did a great job predicting prepositions and verbs. This can probably be explained by the fact that these models are based almost exclusively on local context. In contrast, Memory Networks [ref_mem_net] can exploit a wider context and outperform the conventional models when predicting nouns or named entities. Thus, in comparison with CNN/Daily Mail, this corpus encourages the use of world knowledge and focuses less on paraphrasing parts of a context.

Figure 3: A CBT example quoted from [ref_cbt]
Training Validation Test
Number of books 98 5 5
Number of questions (context+query) 669,343 8,000 10,000
Average words in contexts 465 435 445
Average words in queries 31 27 29
Distinct candidates 37,242 5,485 7,108
Vocabulary size 53,628
Table 5: Corpus statistics of CBT [ref_cbt]

NewsQA

Based on 12,744 news articles from CNN (www.cnn.com), the NewsQA [ref_newsqa] dataset contains 119,633 question-answer pairs generated by crowdworkers. Similar to SQuAD [ref_squad], the answer to each question is a text span of arbitrary length in the corresponding article (a null span is also allowed). CNN articles were chosen as source material because, in the authors’ view, machine comprehension systems are particularly suited to high-volume, rapidly changing information sources like news [ref_newsqa]. The major differences from CNN/Daily Mail are that the answers in NewsQA are not necessarily entities, and therefore no anonymization procedure is applied in its generation.

The statistics of answer types in NewsQA are shown in Table 6. As can be seen in the table, a variety of answer types is ensured. Furthermore, the authors sampled 1,000 examples from NewsQA and SQuAD respectively and analyzed the reasoning skills needed to answer the questions. The results indicate that, compared to SQuAD, a larger proportion of questions in NewsQA require high-level reasoning skills such as Inference and Synthesis. While simple skills like word matching and paraphrasing can solve most questions in both datasets, NewsQA tends to require more complex reasoning than SQuAD. The detailed comparison is given in Table 7.

Answer type Example Proportion (%)
Date/Time March 12, 2008 2.9
Numeric 24.3 million 9.8
Person Ludwig van Beethoven 14.8
Location Torrance, California 7.8
Other Entity Pew Hispanic Center 5.8
Common Noun Phr. federal prosecutors 22.2
Adjective Phr. 5-hour 1.9
Verb Phr. suffered minor damage 1.4
Clause Phr. trampling on human rights 18.3
Prepositional Phr. in the attack 3.8
Other nearly half 11.2
Table 6: Answer type distribution in NewsQA [ref_newsqa]
Word Matching (NewsQA: 32.7%, SQuAD: 39.8%)
Q: When were the findings published?
S: Both sets of research findings were published Thursday
Paraphrasing (NewsQA: 27.0%, SQuAD: 34.3%)
Q: Who is the struggle between in Rwanda?
S: The struggle pits ethnic Tutsis, supported by Rwanda, against ethnic Hutu, backed by Congo.
Inference (NewsQA: 13.2%, SQuAD: 8.6%)
Q: Who drew inspiration from presidents?
S: Rudy Ruiz says the lives of US presidents can make them positive role models for students.
Synthesis (NewsQA: 20.7%, SQuAD: 11.9%)
Q: Where is Brittanee Drexel from?
S: The mother of a 17-year-old Rochester, New York high school student … says she did not give her daughter permission to go on the trip. Brittanee Marie Drexel’s mom says…
Ambiguous/Insufficient (NewsQA: 6.4%, SQuAD: 5.4%)
Q: Whose mother is moving to the White House?
S: … Barack Obama’s mother-in-law, Marian Robinson, will join the Obamas at the family’s private quarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned]
Table 7: Reasoning skills used in NewsQA and SQuAD and their corresponding proportions [ref_newsqa]

TriviaQA

Instead of relying on crowdworkers to create question-answer pairs from selected passages as in NewsQA and SQuAD, the over 650K TriviaQA [ref_triviaqa] question-answer-evidence triples are generated through an automatic procedure. First, a large number of question-answer pairs are gathered and filtered from 14 trivia and quiz-league websites. Then the evidence documents for each question-answer pair are collected from either web search results or Wikipedia articles. Finally, a clean, noise-free, human-annotated subset of 1,975 triples from TriviaQA is provided; a triple example is shown in Fig. 4.

The basic statistics of TriviaQA are given in Table 8. By sampling 200 examples from the dataset and annotating them manually, the authors found that Wikipedia titles (including person, organization, location, and miscellaneous) constitute over 90% of all answers, while the remaining small percentage mainly belong to the Numerical and Free Text types. The average number of entities per question and the percentages of certain types of questions are shown in Table 9.

Question: The Dodecanese Campaign of WWII that was an attempt by the Allied forces to capture islands in the Aegean Sea was the inspiration for which acclaimed 1961 commando film?
Answer: The Guns of Navarone
Excerpt: The Dodecanese Campaign of World War II was an attempt by Allied forces to capture the Italian-held Dodecanese islands in the Aegean Sea following the surrender of Italy in September 1943, and use them as bases against the German-controlled Balkans. The failed campaign, and in particular the Battle of Leros, inspired the 1957 novel The Guns of Navarone and the successful 1961 movie of the same name.
Question: American Callan Pinckney’s eponymously named system became a best-selling (1980s-2000s) book/video franchise in what genre?
Answer: Fitness
Excerpt: Callan Pinckney was an American fitness professional. She achieved unprecedented success with her Callanetics exercises. Her 9 books all became international best-sellers and the video series that followed went on to sell over 6 million copies. Pinckney’s first video release ”Callanetics: 10 Years Younger In 10 Hours” outsold every other fitness video in the US.
Figure 4: Example question-answer-evidence triples in TriviaQA, quoted from [ref_triviaqa]
Total number of QA pairs 95,956
Number of unique answers 40,478
Number of evidence documents 662,659
Avg. question length (word) 14
Avg. document length (word) 2,895
Table 8: Corpus statistics of TriviaQA [ref_triviaqa].
Property Example annotation Statistics
Avg. entities/question Which politician won the Nobel Peace Prize in 2009? 1.77 per question
Fine grained answer type What fragrant essential oil is obtained from Damask Rose? 73.5% of questions
Coarse grained answer type Who won the Nobel Peace Prize in 2009? 15.5% of questions
Time frame What was photographed for the first time in October 1959 34% of questions
Comparisons What is the appropriate name of the largest type of frog? 9% of questions
Table 9: Properties of questions on 200 sampled examples. The boldfaced words indicate the presence of the corresponding properties.

WIKIHOP

WIKIHOP [ref_wikihop] was released in 2018 for the purpose of evaluating a system's ability of multi-hop reasoning across multiple documents. In most existing datasets, the information needed to answer a question is usually contained in only one sentence, which makes current MRC models focus on simple skills like locating, matching or aligning information between query and support text. For example, in SQuAD, the sentence with the highest lexical similarity to the question contains the answer about 80% of the time [ref_wad], and a simple binary word-in-query indicator feature boosted the relative accuracy of a baseline model by 27.9% [ref_weis]. To move beyond this, the authors define a novel MRC task in which a model needs to combine evidence from different documents to answer a question. A WIKIHOP sample which displays this characteristic is shown in Fig. 5.

To construct WIKIHOP, the authors collect (s, r, o) triples, with subject entity s, relation r, and object entity o, from WIKIDATA [ref_wikidata]. Then Wikipedia articles associated with the entities are added as candidate evidence documents. Removing the answer from a triple turns it into a query, that is, q = (s, r, ?) with answer a = o. To reach the goal of multi-hop reasoning, bipartite graphs are constructed to aid corpus construction. As shown in Fig. 6, vertices on the two sides correspond respectively to the entities and the documents from the Knowledge Base, and an edge denotes that an entity appears in the corresponding document. For a given (q, a) pair, the answer candidates and support documents are identified by traversing the bipartite graph using breadth-first search; the documents visited become the support documents.
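The traversal can be sketched as a standard BFS over two adjacency maps, here the hypothetical `entity_docs` (entity to documents mentioning it) and `doc_entities` (document to entities it mentions); this is an illustrative simplification of the construction described in the paper:

```python
from collections import deque

def gather_support(query_entity, entity_docs, doc_entities, max_hops=3):
    """Breadth-first traversal of the bipartite entity-document graph,
    alternating entity -> documents mentioning it -> entities in those
    documents; the visited documents become the support set."""
    support, seen_entities = [], {query_entity}
    frontier = deque([(query_entity, 0)])
    while frontier:
        entity, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for doc in entity_docs.get(entity, []):
            if doc not in support:
                support.append(doc)
                # enqueue the new entities this document mentions
                for nxt in doc_entities.get(doc, []):
                    if nxt not in seen_entities:
                        seen_entities.add(nxt)
                        frontier.append((nxt, hops + 1))
    return support
```

On a toy graph built from the Fig. 5 example, starting from the entity "Hanging Gardens" the traversal first collects its own article, then the articles of the bridging entities "Mumbai" and "Arabian Sea", which is exactly the multi-document evidence chain the task requires.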

Another dataset, MEDHOP, is constructed in the same way as WIKIHOP, with a focus on the medical domain. Some basic statistics of WIKIHOP and MEDHOP are shown in Table 10 and Table 11. Table 12 lists the proportions of different types of sampled answers, which indicates that to perform well on WIKIHOP, a system needs to be good at multi-step reasoning.

The Hanging Gardens, in [Mumbai], also known as Pherozeshah Mehta Gardens, are terraced gardens … They provide sunset views over the [Arabian Sea]

[Mumbai] (also known as Bombay, the official name until 1995) is the capital city of the Indian state of Maharashtra. It is the most populous city in India

The [Arabian Sea] is a region of the northern Indian Ocean bounded on the north by Pakistan and Iran, on the west by northeastern Somalia and the Arabian Peninsula, and on the east by India

Question: (Hanging gardens of Mumbai, country, ?)
Options: {Iran, India, Pakistan, Somalia, …}

Figure 5: A sample of WIKIHOP quoted from [ref_wikihop], which displays the necessity of multi-hop reasoning across several documents.
Figure 6: A bipartite graph given in [ref_wikihop] connecting entities and documents mentioning them. Bold edges are those traversed for the first fact in the small KB on the right; yellow highlighting indicates the support documents and the answer candidates. Check and cross marks indicate correct and false candidates.
Train Dev Test Total
WIKIHOP 43,738 5,129 2,451 51,318
MEDHOP 1,620 342 546 2,508
Table 10: Dataset sizes of WIKIHOP and MEDHOP [ref_wikihop].
min max avg median
# cand. – WH 2 79 19.8 14
# docs. – WH 3 63 13.7 11
# tok/doc – WH 4 2,046 100.4 91
# cand. – MH 2 9 8.9 9
# docs. – MH 5 64 36.4 29
# tok/doc – MH 5 458 253.9 264
Table 11: Corpus statistics of WIKIHOP and MEDHOP [ref_wikihop]. WH: WikiHop; MH: MedHop.
Unique multi-step answer. 36%
Likely multi-step unique answer. 9%
Multiple plausible answers. 15%
Ambiguity due to hypernymy. 11%
Only single document required. 9%
Answer does not follow. 12%
Wikidata/Wikipedia discrepancy. 8%
Table 12: Qualitative analysis of sampled answers of WIKIHOP [ref_wikihop]

2.2 Datasets With Descriptive Answers

Instead of text spans or entities extracted from candidate documents, descriptive answers are whole, stand-alone sentences, which exhibit more fluency and integrity. Moreover, many real-world questions cannot be answered simply by a text span or an entity, and humans prefer answers presented together with their supporting evidence and examples. In light of these reasons, several descriptive answer datasets have been released in recent years. Below we introduce two of them in detail, namely MS MARCO and NarrativeQA.

MS MARCO

MS MARCO (Microsoft MAchine Reading COmprehension) is a large dataset released by Microsoft in 2016 [nguyen2016ms]. This dataset aims to address questions and documents from the real world. Sourced from real anonymized queries issued through Bing (www.bing.com) or Cortana (https://www.microsoft.com/en-us/cortana) and the corresponding results from the Bing search engine, MS MARCO closely reproduces real-world QA situations. For each question in the dataset, a crowdworker is asked to answer it in the form of a complete sentence using passages provided by Bing. Unanswerable questions are also kept in the dataset, to encourage systems to judge whether a question is answerable given scanty or conflicting materials. The first version of MS MARCO, released in 2016, has about 100k questions, and the latest version, V2.1, released in 2018, has over 1,000k questions. Both are available at http://www.msmarco.org.

The dataset composition of MS MARCO is shown in Table 13, and the distribution of different question types is shown in Table 14. From this table, we can see that not all queries contain interrogatives, because they come from real users. We can also see that the interrogative ”What” is contained in 34.96% of the queries, and description questions account for the major question type. In general, the interrogative distribution in the questions shows reasonable diversity.

Field Description
Query A question query issued to Bing.
Passages Top 10 passages from Web documents as retrieved by Bing. The passages are presented in ranked order to human editors. The passage that the editor uses to compose the answer is annotated as is_selected: 1.
Document URLs URLs of the top ranked documents for the question from Bing. The passages are extracted from these documents.
Answer(s) Answers composed by human editors for the question, automatically extracted passages and their corresponding documents.
Well Formed Answer(s) Well-formed answer rewritten by human editors, and the original answer.
Segment QA classification. E.g., tallest mountain in south america belongs to the ENTITY segment because the answer is an entity (Aconcagua).
Table 13: The MS MARCO dataset composition [nguyen2016ms].
Question segment Percentage of question
Question types
YesNo 7.46%
What 34.96%
How 16.8%
Where 3.46%
When 2.71%
Why 1.67%
Who 3.33%
Which 1.79%
Other 27.83%
Question classification
Description 53.12%
Numeric 26.12%
Entity 8.81%
Location 6.17%
Person 5.78%
Table 14: Distribution of different question types in MS MARCO [nguyen2016ms]

NarrativeQA

NarrativeQA [kovcisky2018narrativeqa] is another dataset with descriptive answers, released by DeepMind and the University of Oxford in 2017. NarrativeQA is specifically designed to examine how well a system can capture the underlying narrative elements to answer questions which cannot be answered by simple pattern matching or global salience. From the example question-answer pair shown in Fig. 7, we can see that relatively high-level abstraction and reasoning are required to answer the question.

The stories used in NarrativeQA consist of books from Project Gutenberg (http://www.gutenberg.org/) and movie scripts from related websites (mainly http://www.imsdb.com/, and also http://www.dailyscript.com/ and http://www.awesomefilm.com/). Each story, together with its plot summary, is provided to crowdworkers to create question-answer pairs. Because the crowdworkers never see the full text, they are less likely to create questions and answers based solely on localized context. The answers can be full sentences, which makes them more natural than bare factual spans [kovcisky2018narrativeqa].

Title: Ghostbusters II
Question: How is Oscar related to Dana?
Answer: her son
Summary snippet: …Peter’s former girlfriend Dana Barrett has had a son, Oscar…
Story snippet:
DANA (setting the wheel brakes on the buggy) Thank you, Frank. I’ll get the hang of this eventually. She continues digging in her purse while Frank leans over the buggy and makes funny faces at the baby, OSCAR, a very cute nine-month old boy. FRANK (to the baby) Hiya, Oscar. What do you say, slugger? FRANK (to Dana) That’s a good-looking kid you got there, Ms. Barrett.
Figure 7: An example question-answer pair of NarrativeQA given in [kovcisky2018narrativeqa]

Some basic statistics are shown in Table 15, and the distributions of question first tokens and question categories are shown in Table 16 and Table 17. According to the original paper, less than 30% of the answers appear as text segments of the stories, which reduces the possibility of a system answering questions with simple skills as before.

train valid test
# documents 1,102 115 355
… books 548 58 177
… movie scripts 554 57 178
# question–answer pairs 32,747 3,461 10,557
Avg. #tok. in summaries 659 638 654
Max #tok. in summaries 1,161 1,189 1,148
Avg. #tok. in stories 62,528 62,743 57,780
Max #tok. in stories 430,061 418,265 404,641
Avg. #tok. in questions 9.83 9.69 9.85
Avg. #tok. in answers 4.73 4.60 4.72
Table 15: NarrativeQA dataset statistics [kovcisky2018narrativeqa]
First token      Frequency
What             38.04%
Who              23.37%
Why              9.78%
How              8.85%
Where            7.53%
Which            2.21%
How many/much    1.80%
When             1.67%
In               1.19%
OTHER            5.57%
Table 16: Frequency of the first token of the question in the training set of NarrativeQA [kovcisky2018narrativeqa].
Category      Frequency
Person        30.54%
Description   24.50%
Location      9.73%
Why/reason    9.40%
How/method    8.05%
Event         4.36%
Entity        4.03%
Object        3.36%
Numeric       3.02%
Duration      1.68%
Relation      1.34%
Table 17: Question categories on a sample of 300 questions from the validation set of NarrativeQA [kovcisky2018narrativeqa].

a: ⟨Rouge-L, Bleu-1⟩, on the Q&A + Natural Language Generation task.

b: ⟨Bleu-1, Bleu-4, Meteor, Rouge-L⟩.

Dataset            Release    Type        Domain          Question source                     Unanswerable?   Human performance                        SOTA
SQuAD (v1.1)       2016       extractive  Wikipedia       crowd-sourced                       no              EM 82.3 / F1 91.2                        EM 87.4 / F1 93.2
SQuAD (v2.0)       2018       extractive  Wikipedia       crowd-sourced                       yes             EM 86.8 / F1 89.5                        EM 85.1 / F1 87.6
CNN & Daily Mail   2015       extractive  news            automatic                           no              -                                        EM 76.9 / F1 79.6
CBT                2015       extractive  books           automatic                           no              NE 81.6 / CN 81.6 / VB 82.8 / PR 70.8    NE 89.1 / CN 93.3
NewsQA             2017       extractive  news            crowd-sourced                       yes             EM 46.5 / F1 74.9                        EM 42.8 / F1 56.1
TriviaQA           2017       extractive  trivia          natural                             no              wiki-dom 79.7 / web-dom 75.4             wiki-dom 67.3 / web-dom 68.7
WIKIHOP            2018       extractive  Wikipedia       automatic                           no              85.0                                     71.2
MS MARCO           2018 (v2)  narrative   search engine   query: natural, answer: automatic   yes             63.21 / 53.03 (a)                        49.61 / 50.13 (a)
NarrativeQA        2017       narrative   scripts         crowd-sourced                       no              44.43 / 19.65 / 24.14 / 57.02 (b)        44.35 / 27.61 / 21.80 / 44.69 (b)
Table 18: Basic information of all extractive and narrative datasets.

Notes for Table 19: lengths are counted in words unless otherwise specified; some statistics were counted by ourselves, and unless specified the other statistics come from the corresponding original papers; '?' denotes that the corresponding data is unavailable.

Units and sources: (a) Wikipedia articles, ⟨train-dev-test⟩; (b) months of news, ⟨train-dev-test⟩; (c) number of books, ⟨train-dev-test⟩; (d) news articles; (e) full web documents; (f) stories, ⟨train-dev-test⟩; (g) paragraphs, ⟨train-dev-test⟩; (h) passages; (i) result from chen2016thorough; (j) anonymised version, where the answer is an entity marker.

Dataset        Raw documents     # documents                    Avg. doc. length     # queries               Avg. query length   Avg. answer length
SQuAD (v1.1)   442-48-46 (a)     18896-2067-? (g)               116.6-122.8-?        87599-10570-9533        10.1-10.2-?         3.16-2.91-?
SQuAD (v2.0)   442-35-28 (a)     19035-1204-? (g)               116.6-126.6-?        130319-11873-8862       9.89-10.02-?        3.16-3.06-?
CNN            95-1-1 (b)        90226-1220-1093 (d)            762-763-716          380298-3924-3198        12.5 (i)            1 (j)
Daily Mail     56-1-1 (b)        196961-12148-10397 (d)         813-774-780          879450-64835-53182      14.3 (i)            1 (j)
CBT            98-5-5 (c)        669343-8000-10000 (g)          465-435-445          669343-8000-10000       31-27-29            1
NewsQA         12,744 (d)        12,744 (d)                     616                  119,633                 6.77                4.13
TriviaQA       -                 662,659 (g)                    2,895                95,956                  14                  1.68
WIKIHOP        -                 598103-74741-? (g)             85.42-85.01-?        43,738-5,129-2,451      3.42-3.42-?         1.79-1.73-?
MS MARCO       3,563,535 (e)     8069749-1008985-1008943 (h)    56.49-53.04-53.05    808731-101093-101092    6.37-6.41-6.40      9.21-9.65-?
NarrativeQA    1,102-115-355 (f) 1102-115-355 (f)               62528-62743-57780    32747-3461-10557        9.83-9.69-9.85      4.73-4.60-4.72
Table 19: Statistics information of all extractive and narrative datasets. Three dash-separated values denote ⟨train-dev-test⟩.
                       RACE                 CLOTH                    MCTest             MCScript           ARC                CoQA
Release date           2017                 2017                     2013               2018               2018               2018
Type                   multiple choice      multiple choice          multiple choice    multiple choice    multiple choice    multiple choice
Domain                 exam                 exam                     fiction stories    script scenarios   science            wide (a)
Question source        natural              natural                  crowd-sourced      crowd-sourced      natural            crowd-sourced
Human performance      95.4 / 94.2 (b)      85.9 / 89.7 / 84.5 (c)   97.7 / 96.9 (d)    98.2               -                  89.4 / 87.4 (e)
SOTA                   73.4 / 68.1 (b)      0.860 / 0.887 / 0.850    81.7 / 82.0 (d)    84.84              44.62              87.5 / 85.3 (e)
Raw document           -                    -                        -                  110 scenarios      14M sentences (f)  -
Document number        25,137-1,389-1,407   5,513-805-813            160 / 500 stories  1,470-219-430      -                  8,399 passages
Avg. document length   321.9                313.16                   204 / 212          196                -                  271
Query number           87,866-4,887-4,934   76,850-11,067-11,516     640 / 2,000        9,731-1,411-2,797  3,370-869-3,548    127k
Avg. query length      10                   -                        8.0 / 7.7          7.8                20.4               5.5
Avg. answer length     5.3                  1                        3.4 / 3.4          3.6                4.1                2.7
Footnotes: (a) children's stories, literature, mid/high school exams, news, Wikipedia, science, Reddit; (b) RACE-M / RACE-H; (c) total / middle / high; (d) MC160 / MC500; (e) in-domain / out-of-domain; (f) science-related sentences. CoQA additionally contains unanswerable questions, and MCScript specifically tests common sense.
Table 20: Basic information and statistics of all Multiple-choice datasets.

2.3 Multiple-choice

For datasets with descriptive answers, it is relatively difficult to evaluate system performance precisely and objectively. In contrast, multiple-choice questions, which have long been used to test students' reading comprehension ability, can be graded objectively. Generally, this kind of question can examine a broad range of reasoning skills over a given passage, including simple pattern recognition, clausal inference and multiple-sentence reasoning. In light of this, many datasets in this format have been released; they are listed as follows.

MCTest

MCTest richardson2013mctest, a high-quality dataset consisting of 500 stories and 2000 questions about fictional stories, was released in 2013 by Microsoft in the same multiple-choice format as RACE. Targeted at seven-year-old children, the passages and questions used in MCTest are quite easy and understandable, which reduces the world-knowledge requirement. Since the stories are fictional, many answers can only be found in the story itself. The main drawback of MCTest is that its size is too small to train a well-performing model. A sample of MCTest is shown in Fig.8.

James the Turtle was always getting in trouble. Sometimes he’d reach into the freezer and empty out all the food. Other times he’d sled on the deck and get a splinter. His aunt Jane tried as hard as she could to keep him out of trouble, but he was sneaky and got into lots of trouble behind her back. One day, James thought he would go into town and see what kind of trouble he could get into. He went to the grocery store and pulled all the pudding off the shelves and ate two jars. Then he walked to the fast food restaurant and ordered 15 bags of fries. He didn’t pay, and instead headed home. His aunt was waiting for him in his room. She told James that she loved him, but he would have to start acting like a well-behaved turtle. After about a month, and after getting into lots of trouble, James finally made up his mind to be a better turtle.

(1) What is the name of the trouble making turtle?

(A) Fries (B) Pudding (C) James (D) Jane

(2) What did James pull off of the shelves in the grocery store?

(A) pudding (B) fries (C) food (D) splinters

(3) Where did James go after he went to the grocery store?

(A) his deck (B) his freezer (C) a fast food restaurant (D) his room

(4) What did James do after he ordered the fries?

(A) went to the grocery store (B) went home without paying

(C) ate them (D) made up his mind to be a better turtle

Figure 8: A sample of MCTest given in paperrichardson2013mctest

RACE

RACElai2017large contains 27,933 passages and 97,687 questions that are collected from English exams for middle and high school Chinese students. Considering that those passages and questions are specifically designed by English teachers and experts to evaluate reading comprehension ability of students, this dataset is promising in developing and testing MRC systems.

Because the questions were created with high quality by human experts, there is little noise in RACE. Moreover, the passages in RACE cover a wide range of topics, which overcomes the topic-bias problem that commonly exists in other datasets (such as news articles for CNN/Daily Mail ref_cnn and Wikipedia articles for SQuAD ref_squad).

A sample of RACE is shown in Table 21. The dataset first provides students/systems with a passage to read, then presents several questions, each with 4 candidate answers. Words in the questions and candidate answers may not appear in the passage, so simple context-matching techniques do not help as much as in other datasets. Analysis in the paper lai2017large shows that reasoning skills are indispensable for answering most questions of RACE correctly.

RACE is divided into two subsets, namely RACE-M and RACE-H, collected from middle school and high school exams respectively. Some basic statistics of RACE are given in Table 22 and Table 23. The distribution of the reasoning types required to answer the questions is illustrated in Table 24, showing that over half of the questions in RACE require reasoning skills.

Passage: In a small village in England about 150 years ago, a mail coach was standing on the street. It didn’t come to that village often. People had to pay a lot to get a letter. The person who sent the letter didn’t have to pay the postage, while the receiver had to. “Here’s a letter for Miss Alice Brown,” said the mailman. “I’m Alice Brown,” a girl of about 18 said in a low voice. Alice looked at the envelope for a minute, and then handed it back to the mailman. “I’m sorry I can’t take it, I don’t have enough money to pay it”, she said. A gentleman standing around were very sorry for her. Then he came up and paid the postage for her. When the gentleman gave the letter to her, she said with a smile, “Thank you very much, This letter is from Tom. I’m going to marry him. He went to London to look for work. I’ve waited a long time for this letter, but now I don’t need it, there is nothing in it.” “Really? How do you know that?” the gentleman said in surprise. “He told me that he would put some signs on the envelope. Look, sir, this cross in the corner means that he is well and this circle means he has found work. That’s good news.” The gentleman was Sir Rowland Hill. He didn’t forgot Alice and her letter. “The postage to be paid by the receiver has to be changed,” he said to himself and had a good plan. “The postage has to be much lower, what about a penny? And the person who sends the letter pays the postage. He has to buy a stamp and put it on the envelope.” he said. The government accepted his plan. Then the first stamp was put out in 1840. It was called the “Penny Black”. It had a picture of the Queen on it.
Questions:
1) The first postage stamp was made _. A. in England B. in America C. by Alice D. in 1910
2) The girl handed the letter back to the mailman because _. A. she didn’t know whose letter it was B. she had no money to pay the postage C. she received the letter but she didn’t want to open it D. she had already known what was written in the letter
3) We can know from Alice’s words that _. A. Tom had told her what the signs meant before leaving B. Alice was clever and could guess the meaning of the signs C. Alice had put the signs on the envelope herself D. Tom had put the signs as Alice had told him to
4) The idea of using stamps was thought of by _. A. the government B. Sir Rowland Hill C. Alice Brown D. Tom
5) From the passage we know the high postage made _. A. people never send each other letters B. lovers almost lose every touch with each other C. people try their best to avoid paying it D. receivers refuse to pay the coming letters
Answers: ADABC

Table 21: A sample of RACE quoted from lai2017large .
Dataset RACE-M RACE-H RACE
Subset Train Dev Test Train Dev Test Train Dev Test All
# passages 6,409 368 362 18,728 1,021 1,045 25,137 1,389 1,407 27,933
# questions 25,421 1,436 1,436 62,445 3,451 3,498 87,866 4,887 4,934 97,687
Table 22: The basic statistics of the training, development and test sets of RACE-M,RACE-H and RACElai2017large
Dataset RACE-M RACE-H RACE
Passage Len 231.1 353.1 321.9
Question Len 9.0 10.4 10.0
Option Len 3.9 5.8 5.3
Vocab size 32,811 125,120 136,629
Table 23: Statistics of RACE where Len denotes length and Vocab denotes Vocabulary lai2017large .
Dataset RACE-M RACE-H RACE CNN SQUAD NEWSQA
Word Matching 29.4% 11.3% 15.8% 13.0% 39.8%* 32.7%*
Paraphrasing 14.8% 20.6% 19.2% 41.0% 34.3%* 27.0%*
Single-Sentence Reasoning 31.3% 34.1% 33.4% 19.0% 8.6%* 13.2%*
Multi-Sentence Reasoning 22.6% 26.9% 25.8% 2.0% 11.9%* 20.7%*
Ambiguous/Insufficient 1.8% 7.1% 5.8% 25.0% 5.4%* 6.4%*
Table 24: Distribution of reasoning type in RACElai2017large and other datasets. * denotes quoting ref_newsqa based on 1000 samples per dataset, and quoting chen2016thorough .

CLOTH

CLOTH (CLOze test by TeacHers) xie2017cloth was constructed in the format of cloze questions. Like RACE, it is composed of English exams for Chinese middle school and high school students. One example is shown in Table 25. In CLOTH, the missing blanks in the questions were carefully designed by teachers to test different aspects of language knowledge. The candidate answers usually have subtle differences, making the questions difficult to answer even for humans. Similar to RACE, CLOTH is divided into two parts: CLOTH-M for middle school and CLOTH-H for high school. Some basic statistics of this corpus are shown in Table 26.

Through experiments on CLOTH, the authors came to the conclusion that the performance gap between humans and systems mainly results from the ability to use long-term context xie2017cloth, i.e., multiple-sentence reasoning.

Passage: Nancy had just got a job as a secretary in a company. Monday was the first day she went to work, so she was very _1_ and arrived early. She _2_ the door open and found nobody there. ”I am the _3_ to arrive.” She thought and came to her desk. She was surprised to find a bunch of _4_ on it. They were fresh. She _5_ them and they were sweet. She looked around for a _6_ to put them in. ”Somebody has sent me flowers the very first day!” she thought _7_ . ” But who could it be?” she began to _8_ . The day passed quickly and Nancy did everything with _9_ interest. For the following days of the _10_ , the first thing Nancy did was to change water for the flowers and then set about her work.

Then came another Monday. _11_ she came near her desk she was overjoyed to see a(n) _12_ bunch of flowers there. She quickly put them in the vase, _13_ the old ones. The same thing happened again the next Monday. Nancy began to think of ways to find out the _14_ . On Tuesday afternoon, she was sent to hand in a plan to the _15_ . She waited for his directives at his secretary’s _16_ . She happened to see on the desk a half-opened notebook, which _17_ : ”In order to keep the secretaries in high spirits, the company has decided that every Monday morning a bunch of fresh flowers should be put on each secretary’s desk.” Later, she was told that their general manager was a business management psychologist.

Questions:

1. A. depressed B. encouraged C. excited D. surprised
2. A. turned B. pushed C. knocked D. forced
3. A. last B. second C. third D. first
4. A. keys B. grapes C. flowers D. bananas
5. A. smelled B. ate C. took D. held
6. A. vase B. room C. glass D. bottle
7. A. angrily B. quietly C. strangely D. happily
8. A. seek B. wonder C. work D. ask
9. A. low B. little C. great D. general
10. A. month B. period C. year D. week
11. A. Unless B. When C. Since D. Before
12. A. old B. red C. blue D. new
13. A. covering B. demanding C. replacing D. forbidding
14. A. sender B. receiver C. secretary D. waiter
15. A. assistant B. colleague C. employee D. manager
16. A. notebook B. desk C. office D. house
17. A. said B. written C. printed D. signed
Table 25: A Sample passage of CLOTH xie2017cloth . Bold faces highlight the correct answers. There is only one best answer among four candidates, although several candidates may seem correct.
Dataset CLOTH-M CLOTH-H CLOTH
Train Dev Test Train Dev Test Train Dev Test
# passages 2,341 355 335 3,172 450 478 5,513 805 813
# questions 22,056 3,273 3,198 54,794 7,794 8,318 76,850 11,067 11,516
Vocab. size 15,096 32,212 37,235
Avg. # sentence 16.26 18.92 17.79
Avg. # words 242.88 365.1 313.16
Table 26: The statistics of the training, development and test sets of CLOTH and two subsets from paperxie2017cloth .

MCScript

MCScript ostermann2018mcscript focuses on questions that require reasoning with commonsense knowledge. Released in March 2018, this dataset provides stories describing people's daily activities, in which ambiguity and implicitness can easily be resolved with common sense, and employs crowdworkers to generate the questions. The correct answer to a question may not appear in the given text, as shown in the examples in Fig.9. The dataset consists of about 2.1K texts and 14K questions. According to statistical analysis, 27.4% of the questions in MCScript require commonsense knowledge to answer, so the dataset can genuinely examine a system's commonsense inference ability. All questions in the dataset are answerable. The distribution of question types in MCScript is shown in Fig.10.

T I wanted to plant a tree. I went to the home and garden store and picked a nice oak. Afterwards, I planted it in my garden.
Q1 What was used to dig the hole?
a. a shovel b. his bare hands
Q2 When did he plant the tree?
a. after watering it b. after taking it home
Figure 9: Example questions of MCScriptostermann2018mcscript .
Figure 10: Distribution of question types in MCScriptostermann2018mcscript .

ARC

ARC (AI2 Reasoning Challenge) clark2018arc makes use of standardized tests, whose questions are objectively gradable and vary in difficulty, which makes them a Grand Challenge for AI clark2018think clark2016my. ARC consists of about 7.8K questions.

The authors of ARC also designed two baselines, namely a retrieval-based algorithm and a word co-occurrence algorithm. The Challenge Set, a subset of ARC containing about 2.6K questions, was created by gathering questions that are answered incorrectly by both of these baselines. The Easy Set is composed of the remaining 5.2K questions. Several state-of-the-art models have been tested on the Challenge Set, but none of them is able to significantly outperform a random baseline clark2018arc, which reflects the difficulty of the Challenge Set. Two example questions from the Challenge Set are as follows:

Which property of a mineral can be determined just by looking at it? (A) luster [correct] (B) mass (C) weight (D) hardness

A student riding a bicycle observes that it moves faster on a smooth road than on a rough road. This happens because the smooth road has (A) less gravity (B) more gravity (C) less friction [correct] (D) more friction

For example, the first question is difficult in that the ground truth, “Luster can be determined by looking at something”, only appears as a stand-alone sentence in the Web text. However, the incorrect candidate “hardness” has a strong correlation with “mineral” in the text.

The ARC corpus, a scientific text corpus which contains 14M science-related sentences and mentions 95% of the knowledge related to the Challenge Set questions according to a sample analysis clark2018arc, is released along with the ARC question set. The use of the corpus is optional. Some statistics of ARC are shown in Table 27, Table 28 and Table 29.

        Challenge   Easy    Total
Train   1119        2251    3370
Dev     299         570     869
Test    1172        2376    3548
TOTAL   2590        5197    7787
Table 27: Number of questions in ARC clark2018arc
Grade   Challenge % (# qns)   Easy % (# qns)
3       3.6 (94 qns)          3.4 (176 qns)
4 9 (233) 11.4 (591)
5 19.5 (506) 21.2 (1101)
6 3.2 (84) 3.4 (179)
7 14.4 (372) 10.7 (557)
8 41.4 (1072) 41.2 (2139)
9 8.8 (229) 8.7 (454)
Table 28: Grade-level distribution of ARC questions clark2018arc
Property (min / average / max)   Challenge         Easy
Question (# words)               2 / 22.3 / 128    3 / 19.4 / 118
Question (# sentences)           1 / 1.8 / 11      1 / 1.6 / 9
Answer option (# words)          1 / 4.9 / 39      1 / 3.7 / 26
# answer options                 3 / 4.0 / 5       3 / 4.0 / 5
Table 29: Properties of the ARC Dataset in clark2018arc

CoQA

CoQA (Conversational Question Answering) reddy2018coqa is a conversational-style dataset which consists of 127k questions sourced from 8k conversations in 7 different domains. Answers are given in free form. The motivation of CoQA is that humans usually obtain information in daily life by asking questions in conversation, so it is desirable for a machine to be capable of answering such questions. CoQA first provides models with a text passage to understand, and then presents a series of questions that appear in a conversation. One example is given in Fig.11.

The key challenge of CoQA is that a system must handle the conversation history properly to tackle problems such as coreference resolution. Among the 7 domains from which the passages are collected, 2 are used for out-of-domain evaluation and 5 for in-domain evaluation. The distribution of domains is shown in Table 30. Statistics on some linguistic phenomena are given in Table 31. Coreference and pragmatics are unique and challenging linguistic phenomena that do not appear in other datasets.

Jessica went to sit in her rocking chair. Today was her birthday and she was turning 80. Her granddaughter Annie was coming over in the afternoon and Jessica was very excited to see her. Her daughter Melanie and Melanie’s husband Josh were coming as well. Jessica had
Q: Who had a birthday?
A: Jessica
R: Jessica went to sit in her rocking chair. Today was her birthday and she was turning 80.
Q: How old would she be?
A: 80
R: she was turning 80
Q: Did she plan to have any visitors?
A: Yes
R: Her granddaughter Annie was coming over
Q: How many?
A: Three
R: Her granddaughter Annie was coming over in the afternoon and Jessica was very excited to see her. Her daughter Melanie and Melanie’s husband Josh were coming as well.
Q: Who?
A: Annie, Melanie and Josh
R: Her granddaughter Annie was coming over in the afternoon and Jessica was very excited to see her. Her daughter Melanie and Melanie’s husband Josh were coming as well.
Figure 11: A conversation example from the CoQAreddy2018coqa . Each turn contains a question (Q), an answer (A) and a rationale (R) that supports the answer.
Domain           #Passages   #Q/A pairs   Passage length   #Turns per passage
Children’s Sto. 750 10.5k 211 14.0
Literature 1,815 25.5k 284 15.6
Mid/High Sch. 1,911 28.6k 306 15.0
News 1,902 28.7k 268 15.1
Wikipedia 1,821 28.0k 245 15.4
Out of domain
Science 100 1.5k 251 15.3
Reddit 100 1.7k 361 16.6
Total 8,399 127k 271 15.2
Table 30: Distribution of domains in CoQA in reddy2018coqa .
Phenomenon Example Percentage
Relationship between a question and its passage
Lexical match Q: Who had to rescue her? 29.8%
A: the coast guard
R: Outen was rescued by the coast guard
Paraphrasing Q: Did the wild dog approach? 43.0%
A: Yes
R: he drew cautiously closer
Pragmatics Q: Is Joey a male or female? 27.2%
A: Male
R: it looked like a stick man so she kept him. She named her new noodle friend Joey
Relationship between a question and its conversation history
No coref. Q: What is IFL? 30.5%
Explicit coref. Q: Who had Bashti forgotten? 49.7%
A: the puppy
Q: What was his name?
Implicit coref. Q: When will Sirisena be sworn in? 19.8%
A: 6 p.m local time
Q: Where?
Table 31: Linguistic phenomena in CoQA questions given by paperreddy2018coqa .

3 MRC Techniques

In this section, we will introduce different techniques employed in MRC.

3.1 Non-Neural Method

Before neural networks came into fashion, many MRC systems were developed based on various non-neural techniques, which now mostly serve as baselines for comparison. Next, we introduce several of these techniques: TF-IDF, sliding window, logistic regression and boosting methods.

TF-IDF

The TF-IDF (term frequency-inverse document frequency) technique is widely used in the Information Retrieval area and later found a place in MRC tasks. As validated before clark2016combining, retrieval-based models can serve as a strong baseline when candidate answers are provided. This kind of baseline is widely used in multi-document datasets such as WIKIHOP ref_wikihop. By exploiting only the lexical similarity between a given document and the concatenation of a candidate answer with the query, the algorithm predicts the candidate with the highest similarity score over all documents. Because inter-document information is ignored by TF-IDF, this baseline cannot measure how much a question relies on cross-document reasoning.
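A minimal standard-library sketch of such a retrieval baseline, assuming whitespace tokenization and a smoothed IDF (the details differ from any particular published baseline):

```python
# Sketch of a TF-IDF retrieval baseline for MRC with candidate answers.
# Tokenization, weighting and the toy data below are illustrative assumptions.
import math
from collections import Counter

def tfidf_vec(tokens, idf):
    tf = Counter(tokens)
    return {w: c * idf.get(w, 0.0) for w, c in tf.items()}

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def tfidf_baseline(documents, query, candidates):
    """Score each candidate by the best cosine similarity between
    TF-IDF(query + candidate) and TF-IDF(document) over all documents."""
    docs = [d.lower().split() for d in documents]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}   # smoothed IDF
    doc_vecs = [tfidf_vec(d, idf) for d in docs]
    def score(cand):
        qv = tfidf_vec((query + " " + cand).lower().split(), idf)
        return max(cosine(qv, dv) for dv in doc_vecs)
    return max(candidates, key=score)

docs = ["the capital of france is paris",
        "berlin is the capital of germany"]
print(tfidf_baseline(docs, "what is the capital of france",
                     ["paris", "berlin"]))  # → paris
```

Note how the query-plus-candidate vector for "paris" matches the first document almost exactly, while "berlin" matches neither document well, which is exactly the lexical-correlation signal the baseline relies on.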

Sliding Window

The sliding window algorithm was constructed as a baseline for the MCTest dataset richardson2013mctest. It predicts an answer based on simple lexical information within a sliding window. Inspired by TF-IDF, the algorithm weights each word by its inverse word count, and maximizes the weighted bag-of-words overlap between the answer (together with the question) and the sliding window over the given passage.
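A minimal standard-library sketch in the spirit of this baseline (the tokenization, window size and example data are simplifying assumptions, not the exact MCTest implementation):

```python
# Sliding-window scorer: slide a window of |Q ∪ A| words over the passage
# and sum inverse-count weights of the question/answer words it covers.
import math
from collections import Counter

def sliding_window_score(passage_tokens, question_tokens, answer_tokens):
    counts = Counter(passage_tokens)
    ic = lambda w: math.log(1.0 + 1.0 / counts[w]) if counts[w] else 0.0
    target = set(question_tokens) | set(answer_tokens)
    size = len(target)
    best = 0.0
    for i in range(max(1, len(passage_tokens) - size + 1)):
        window = passage_tokens[i:i + size]
        best = max(best, sum(ic(w) for w in window if w in target))
    return best

def predict(passage, question, candidates):
    p = passage.lower().split()
    q = question.lower().split()
    return max(candidates,
               key=lambda a: sliding_window_score(p, q, a.lower().split()))

passage = "james likes pudding and james ate pudding today"
print(predict(passage, "what did james eat today",
              ["fries", "pudding"]))  # → pudding
```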

Logistic Regression

This baseline method was proposed in SQuAD ref_squad. It extracts a large number of features from the candidate spans, including lengths, bigram frequencies, word frequencies, span POS tags, lexical features, dependency tree path features, etc., and predicts whether a text span is the final answer based on these features.
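A toy illustration of the idea with a hand-rolled logistic regression over a few hypothetical span features (the actual SQuAD baseline uses a far richer, lexicalized feature set and a ranking formulation):

```python
# Hypothetical span features + a tiny logistic regression trained by SGD.
# The feature set and training data here are invented for illustration.
import math

def span_features(span, question, passage):
    s = span.lower().split()
    q = set(question.lower().split())
    return [1.0,                                       # bias term
            float(len(s)),                             # span length
            float(sum(w in q for w in s)),             # overlap with question
            float(passage.lower().count(span.lower()))]  # frequency in passage

def train_logreg(X, y, lr=0.5, epochs=200):
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(w, xi))))
            w = [wj + lr * (yi - p) * xj for wj, xj in zip(w, xi)]
    return w

def classify(w, x):
    return int(sum(a * b for a, b in zip(w, x)) > 0.0)

examples = [  # (span, question, passage, is_answer)
    ("paris", "what is the capital of france", "paris is the capital of france", 1),
    ("the capital", "what is the capital of france", "paris is the capital of france", 0),
    ("berlin", "what is the capital of germany", "berlin is the capital of germany", 1),
    ("of germany", "what is the capital of germany", "berlin is the capital of germany", 0),
]
X = [span_features(s, q, p) for s, q, p, _ in examples]
y = [label for *_, label in examples]
w = train_logreg(X, y)
```

At inference time, every candidate span of a passage would be featurized the same way and the highest-scoring span returned as the answer.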

Boosting method

This model was proposed as a conventional feature-based baseline for the CNN/Daily Mail dataset chen2016thorough. Since the task can be seen as a ranking problem, i.e., making the predicted answer score highest among all candidates, the authors adopt the implementation of LambdaMART wu2010adapting in the RankLib package (https://sourceforge.net/p/lemur/wiki/RankLib/), a highly successful ranking algorithm that uses forests of boosted decision trees. Through feature engineering, 8 feature templates (the details can be found in the paper) are chosen to form a feature vector representing each candidate, and a weight vector is learnt so that the correct answer is ranked highest.

3.2 Neural-Based Method

With the popularity of neural networks, end-to-end models have produced promising results on many MRC tasks. These models do not require the complex manually-devised features that traditional approaches relied on, and perform much better. Next we introduce several end-to-end models, mainly in chronological order.

Match-LSTM+Pointer Network

As the first end-to-end neural architecture wang2016machine proposed for SQuAD, this model combines the match-LSTM wang2015learning, which is used to obtain a query-aware representation of the passage, and the Pointer Network 2015arXiv150603134V, which constructs an answer such that every token within it comes from the input text. An overall picture of the model architecture is given in Fig.12.

Match-LSTM was originally designed for predicting textual entailment. In that task, a premise and a hypothesis are given, and the match-LSTM encodes the hypothesis in a premise-aware way. For every token in the hypothesis, the model uses a soft-attention mechanism, which will be discussed later in Sect.3.3, to obtain a weighted vector representation of the premise. This weighted vector is concatenated with the vector representation of the corresponding token, and both are fed into an LSTM, namely the match-LSTM. In this paper, the authors replace the premise and hypothesis with the query and passage to obtain a query-aware representation of the given passage. Two preprocessing LSTMs are employed to encode the query and the passage respectively, and a bidirectional match-LSTM is then employed to obtain the query-aware representation of the passage.

After obtaining the query-aware representation of the passage, a Pointer Network (Ptr-Net) is employed to generate answers by selecting tokens from the input passage. At each inference step, Ptr-Net uses a soft-attention mechanism to obtain a probability distribution over the input sequence, and selects the token with the largest probability as the output symbol. Two different strategies are proposed for constructing the answer.

The sequence model assumes that the words of the answer can appear at any positions in the passage, and that the length of the answer is not fixed. To let the model stop generating tokens once the whole answer has been produced, a special symbol is placed at the end of the passage; predicting this symbol terminates answer generation.

The boundary model works differently from the sequence model in that it only predicts the start index and the end index; in other words, it is based on the assumption that the answer appears as a continuous segment of the passage. Test results show an advantage of the boundary model over the sequence model.

Figure 12: Overview of the two models in wang2016machine
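The boundary model's extraction step can be sketched as a simple search over spans, given per-token start and end probabilities (the probability values below are invented for illustration):

```python
def boundary_decode(start_probs, end_probs, max_len=15):
    """Pick the span (s, e) with s <= e that maximizes
    start_probs[s] * end_probs[e], capping the span length at max_len."""
    best, best_score = (0, 0), -1.0
    n = len(start_probs)
    for s in range(n):
        for e in range(s, min(n, s + max_len)):
            score = start_probs[s] * end_probs[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

start = [0.1, 0.6, 0.2, 0.1]   # P(token is answer start)
end   = [0.1, 0.1, 0.7, 0.1]   # P(token is answer end)
print(boundary_decode(start, end))  # → (1, 2)
```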

Bi-Directional Attention Flow

Proposed by seo2016bidirectional , the Bi-Directional Attention Flow has two key features at the context encoding stage. First, this model takes different levels of granularity as input, including character-level, word-level and contextualized embeddings. Second, it uses bi-directional attention flow, namely a passage-to-query attention and a query-to-passage attention, to get a query-aware passage representation. The detailed description is given as follows.

As shown in Fig.13, the BiDAF model has six layers. The Character Embedding Layer and the Word Embedding Layer map each word into vector space based respectively on character-level CNNs kim2014convolutional and the pre-trained GloVe embeddings pennington2014glove. The concatenation of these two word embeddings is passed to a two-layer Highway Network srivastava2015highway, whose output is fed to a bi-directional LSTM in the Contextual Embedding Layer to refine the word embeddings with context information. These first three layers are applied to both the query and the passage.

The Attention Flow Layer is where information from the query and the passage is mixed and interacts. Instead of summarizing the passage and the query into a fixed-size vector as most attention mechanisms do, this layer lets raw information, including the attention vectors and the embeddings from previous layers, flow into the subsequent layer, which reduces information loss. Attention is computed in two directions: from passage to query and from query to passage. The details of the Attention Flow Layer will be given in Sect.3.3.

The Modeling Layer takes the query-aware representations of the context words and uses two bi-directional LSTMs to capture the interactions among passage words conditioned on the query. The final Output Layer is task-specific and gives the prediction of the answer.

Figure 13: Overview of BiDAF architecture given in seo2016bidirectional .
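The two attention directions can be sketched in NumPy as follows; note that BiDAF's trainable similarity function alpha(h, u) = w^T[h; u; h*u] is replaced here by a plain dot product, which is a simplifying assumption of this sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_attention(H, U):
    """Bi-directional attention sketch.
    H: (T, d) passage vectors, U: (J, d) query vectors."""
    S = H @ U.T                                  # (T, J) similarity matrix
    a = softmax(S, axis=1)                       # passage-to-query weights
    U_tilde = a @ U                              # (T, d) attended query per token
    b = softmax(S.max(axis=1))                   # query-to-passage weights, (T,)
    h_tilde = np.tile(b @ H, (H.shape[0], 1))    # (T, d) attended passage, tiled
    # G: query-aware passage representation passed to the modeling layer
    return np.concatenate([H, U_tilde, H * U_tilde, H * h_tilde], axis=1)

G = bidaf_attention(np.random.randn(5, 4), np.random.randn(3, 4))
print(G.shape)  # → (5, 16)
```

The concatenation at the end is what the paper calls "letting raw information flow": the original embeddings H are kept alongside the attended vectors rather than being collapsed into a single summary.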

Gated Attention

The Gated-Attention Reader dhingra2016gated targets multi-hop reasoning for answering cloze-style questions over documents. Its attention mechanism employs a multiplicative interaction between the query representation and the hidden states of the document. The multi-hop architecture of the model imitates the multi-step reasoning of humans in reading comprehension.

The overview of the model is given in Fig.14. The model reads the document and the query iteratively through K layers. In the k-th layer, the model first uses a bidirectional Gated Recurrent Unit (Bi-GRU) cho2014learning to transform the document embeddings passed from the previous layer into contextual representations, and a layer-specific query representation is obtained by encoding the query with another Bi-GRU.

Both representations are then fed to a Gated Attention module, whose result is passed to the next layer. For each document token, the Gated Attention module uses soft attention to obtain a token-specific representation of the query, and the new embedding of the token is computed by applying an element-wise multiplication between the token's representation and this attended query representation.

At the last stage, the decoder employs a softmax layer over the inner products between the query representation and the outputs of the last layer to obtain the probability distribution over the predicted answers.

Figure 14: Gated Attention architecture given in dhingra2016gated .
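A single gated-attention hop can be sketched as follows (the GRU encoders are omitted, and dot-product soft attention over query tokens is assumed):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention(D, Q):
    """One Gated-Attention hop (sketch).
    D: (n, d) document token states, Q: (m, d) query token states.
    Each document token is gated by its attended query representation."""
    X = np.empty_like(D)
    for i, d_i in enumerate(D):
        alpha = softmax(Q @ d_i)   # soft attention over query tokens
        q_tilde = alpha @ Q        # token-specific query representation
        X[i] = d_i * q_tilde       # multiplicative (element-wise) gating
    return X
```

Stacking K such hops, each preceded by a fresh Bi-GRU pass, yields the multi-hop behaviour described above.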

DCN

The Dynamic Coattention Network (DCN) xiong2016dynamic introduces a coattention mechanism to combine co-dependent representations of the query and the document, and dynamic iteration to avoid being trapped in local maxima corresponding to incorrect answers, a weakness of previous single-pass models. A dynamic pointer decoder takes the output of the coattention encoder and generates the final predictions. The detailed procedure is given as follows.

Let denote the sequence of embeddings of words in query and for those in document. The the details of DCN are as follows.

In the Document and Question encoder, the vector representations of the document and the query are fed into LSTM respectively, and the hidden states at each step are combined to form the encoding matrix and . Sentinel vector and merity2016pointer is appended to the encoding matrix to enable the model to map some unrelated words that exclusively appear in either the query or the document to this void vector. To allow for some variation between the document encoding space and the query encoding space, a non-linear projection is applied to . The final representations of the document and the query are and .

The Coattention encoder takes in the two encodings and outputs the coattention encoding matrix, which is the input to the Dynamic pointing decoder. The details of the Coattention encoder will be discussed in Sec.3.3.

The overview of the Dynamic pointing decoder is given in Fig.15. To enable the model to recover from local maxima, the Highway Maxout Network (HMN) is proposed to predict the start point and the end point iteratively. The HMN is composed of Highway Networkssrivastava2015highway , characterized by skip connections that pass gradients effectively through deep networks, and Maxout Networksgoodfellow2013maxout , a learnable activation function with strong empirical performance.

During the iteration, the hidden state of the decoder is updated according to Eq.1, whose inputs are the coattention representations of the start and end words predicted in the (i-1)-th iteration. Given the hidden state and these representations, the probability of the t-th word being the start or the end point is calculated by Eq.2, and the word with the maximum probability is selected as the prediction at the current step.

The architecture of the HMN is given in Fig.16. The mathematical description of the HMN is as follows, where r is a non-linear projection of the current state.

Figure 15: Architecture of the Dynamic Decoder from xiong2016dynamic . Blue denotes the variables and functions related to estimating the start position, whereas red denotes those related to estimating the end position.

Figure 16: Architecture of Highway Maxout Network given in xiong2016dynamic .

FastQA

FastQAWeissenbornWS17 achieved competitive performance with a simple architecture, calling into question the necessity of ever more complex QA systems. Unlike many systems that employ a complex interaction layer to capture the interaction between the query and the context, FastQA only makes use of simple word-level features. The overview of the FastQA architecture is given in Fig.17.

The binary word-in-question (wiq) feature indicates whether a token in the passage appears in the corresponding query.

The weighted word-in-question feature, defined below, takes the term frequency and the similarity between query and context into account.

The concatenation of these two features and the original representation of each word is fed into a Bi-LSTM to get the final hidden states. The Answer Layer is a simple 2-layer feed-forward network combined with beam search.
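As a minimal illustration, the binary word-in-question feature can be computed as below. Lower-casing is an assumption made here for illustration, and the paper's weighted variant (based on soft alignment and term frequency) is not shown.

```python
def wiq_binary(passage_tokens, question_tokens):
    """Binary word-in-question feature (a sketch of FastQA's wiq feature).

    Returns one 0/1 flag per passage token: 1 if the token also occurs
    in the question, else 0.
    """
    question_vocab = {t.lower() for t in question_tokens}
    return [1 if t.lower() in question_vocab else 0 for t in passage_tokens]
```

Despite its simplicity, this feature tells the context encoder which passage words overlap with the question before any interaction layer is applied.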

Figure 17: Overview of FastQA architecture from WeissenbornWS17 .

R-Net

R-NETwang2017gated was proposed in 2017 by MSRA and achieved state-of-the-art results on SQuAD and MS-MARCO. An overview of its architecture is shown in Fig.18.

Given the word-level and character-level embeddings, R-NET first employs a bidirectional GRUcho2014learning to encode the questions and passages. Then it uses a gated attention-based recurrent network to fuse information from the question into the passage. A self-matching layer is then used to refine the passage representation. The output layer is based on pointer networks, similar to that in match-LSTM, to predict the boundary of the answer. The initial hidden vector of the pointer network is computed by attention-pooling over the final passage representations.

The gated attention-based recurrent network adds another gate to the normal attention-based recurrent network. This gate weights the passage information according to its relevance to the question. Inspired by rocktaschel2015reasoning , the sentence-pair representations are obtained as follows:

where the added gate modulates the original representations of the passage and the question.

To exploit information from the whole passage for each token, self-matching attention is applied to get the final representation of the passage. The details of self-matching attention are given in Sec.3.3.

The Output Layer uses pointer networksvinyals2015pointer to predict the start and end positions of the answer. The initial hidden vector for the pointer network is an attention-pooling over the question representation. The objective function is the sum of the negative log probabilities of the ground-truth start and end positions under the predicted distributions.

Figure 18: Overview of the R-NET architecture from wang2017gated .

ReasoNet

Unlike previous models, which have a fixed number of turns during reading or reasoning regardless of the complexity of queries and passages, the ReasoNetshen2017reasonet makes use of reinforcement learning to dynamically determine the reading and reasoning depth. The intuition of this work comes from the observation that the difficulty of questions can vary greatly within the same datasetchen2016thorough , and the fact that humans usually revisit important parts of the passage and the question to answer better. An overview of the ReasoNet structure is given in Fig.19.

The external memory M usually consists of the word embeddings encoded by a Bi-RNN. The internal state is updated from an attention vector computed over the memory. The termination gate determines, via a binary variable, when to stop updating the state and predict the answer. In this way, ReasoNet can mimic the human inference process, exploit the passages, and answer the questions better.

Figure 19: Overview of ReasoNet structure from shen2017reasonet .

QANet

Most of the models above are primarily based on RNNs with attention, and are therefore often slow in both training and inference due to the sequential nature of RNNs. To make machine comprehension fast, QANetyu2018qanet was proposed without any RNNs in its architecture. An overview of the QANet structure is given in Fig.20.

The key difference between QANet and previous models is that QANet only uses convolution and self-attention in its embedding and modeling encoders, discarding the commonly used RNNs. The depthwise separable convolutionschollet2017xception kaiser2017depthwise capture the local structure of the text, and the multi-head (self-)attention mechanismvaswani2017attention models global interactions within the whole passage. A query-to-context attention similar to that in DCNxiong2016dynamic is applied afterwards.

QANet achieved state-of-the-art accuracy while obtaining up to a 13x speedup in training and up to a 9x speedup in inference, compared to its RNN counterpartsyu2018qanet .

Figure 20: Overview of the QANet architecture (left), which has several Encoder Blocks. All Encoder Blocks are the same except that the number of convolutional layers for each block (right) varies. From yu2018qanet .

3.3 Attention

Attention mechanisms have shown great power in selecting important information, and in aligning and capturing similarity between different parts of the input. Next we introduce several representative attention mechanisms, roughly in chronological order.

Hard Attention

was proposed for the image captioning task in xu2015show as "stochastic hard attention". Let a set of feature vectors captured by a CNN be given, each corresponding to a part of the image. When deciding which one of these features to feed to the decoder LSTM to generate the caption, a one-hot indicator variable is defined, set to 1 if the i-th feature vector is the one used to extract visual features at the current step t. The input of the decoder LSTM is then:

The paper assigns the indicator a multinoulli distribution parametrized by the attention weights and views it as a random variable:

where the parametrization is computed by a multilayer perceptron. After defining the objective function as below:

and approximating its gradient by a Monte Carlo method, the final learning rule for the model is:

where the two coefficients are hyperparameters set by cross-validation.

Although hard attention is tricky and troublesome to train, once trained well it can outperform soft attention, thanks to the sharp focus on memory it provides (Shankar2018SurprisinglyEH xu2015show ShankarS17 ).

Soft Attention

Here we first introduce the basic form of soft attention in the neural machine translation task; then we discuss its variants in other tasks such as natural language inference (NLI) and MRC.

Unlike hard attention, soft attention calculates a weight distribution over all the input representations and uses their weighted sum as the input to the decoder. For example, in BahdanauCB14 , given the Encoder's output sequence, each output is assigned a weight indicating to what extent it is related to the current output token. The input to the decoder is then:

The weights are calculated and learned through a feedforward neural network.
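A minimal sketch of this weighted-sum step follows; note that the learned feed-forward scorer of BahdanauCB14 is replaced here by a plain dot product for brevity, so this illustrates only the data flow.

```python
import numpy as np

def soft_attention(H, s):
    """Basic soft attention (sketch; dot-product scorer instead of the
    learned feed-forward alignment network).

    H: (T, h) encoder outputs h_1..h_T.
    s: (h,)   current decoder state used as the query.
    Returns the context vector: a weighted sum of the encoder outputs.
    """
    e = H @ s                               # alignment scores e_j
    a = np.exp(e - e.max())
    a /= a.sum()                            # softmax -> weights a_j
    return a @ H                            # context c = sum_j a_j * h_j
```

Because every encoder output contributes according to a continuous weight, the whole computation is differentiable end to end.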

In the NLI task, the input has two components, namely a premise and a hypothesis, and attention is used to exploit the interaction between these two parts. Take the match-LSTMwang2015learning as an example: denote the resulting hidden states of the Encoder LSTM separately for the premise and the hypothesis. When predicting the label of the hypothesis, an attention-weighted combination of the hidden states of the premise is computed through a match-LSTM:

where the attention vector is as stated above, the remaining parameters are to be learned, and the hidden state of the match-LSTM at each position is concatenated with the corresponding hypothesis state for predicting the result.

In the MRC task, we can regard the question as a premise and the passage as a hypothesis, as is done in the Match-LSTM+Pointer Network model. By applying the attention mechanism, we obtain additional query information for each token in the passage, which improves model performance.

Compared to hard attention, soft attention has the advantage of being differentiable, and is thus easy to train and fast in both training and inference.

Bi-directional Attention

was proposed in BiDAF. Compared to the attention described above, it considers attention in two directions: Query-to-context (Q2C) attention and Context-to-query (C2Q) attention. Take BiDAF as an example: given the concatenated outputs of the LSTMs in the Contextual Embedding Layer for the context and the query, the similarity matrix is computed:

where the weight vector is trainable and element-wise multiplication is used. We can then compute the C2Q attention weights and the attended query vectors by:

Similarly, the Q2C attention weights and the attended context vectors are:

Finally, the two attention vectors above are combined with the original contextual embeddings through a vector fusing function, and the result serves as the basis for further modeling and prediction.

Bi-directional Attention adds more information through the Q2C attention part compared to the normal attention mechanism. However, as shown in the ablation study of seo2016bidirectional , the attention in this direction is less useful than the standard C2Q attention (on the SQuAD dev set). The reason is that the query is usually short, so the added Q2C information is relatively small compared to that of the C2Q direction.
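The two attention directions can be sketched as below. BiDAF's trainable similarity scorer w^T[h; u; h∘u] is simplified to a plain dot product here, so this is an illustration of the data flow rather than the exact model.

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def bidaf_attention(H, U):
    """Bi-directional attention flow (sketch with a dot-product scorer).

    H: (T, d) contextual embeddings of the context.
    U: (J, d) contextual embeddings of the query.
    Returns G: (T, 4d), the fused query-aware context representation.
    """
    S = H @ U.T                                  # (T, J) similarity matrix
    A = softmax(S, axis=1)                       # C2Q weights over query words
    U_tilde = A @ U                              # (T, d) attended query vectors
    b = softmax(S.max(axis=1))                   # (T,) Q2C weights over context words
    h_tilde = b @ H                              # (d,) attended context vector
    H_tilde = np.tile(h_tilde, (H.shape[0], 1))  # broadcast across all T steps
    return np.concatenate([H, U_tilde, H * U_tilde, H * H_tilde], axis=1)
```

The asymmetry is visible in the code: C2Q produces one attended query vector per context position, while Q2C collapses to a single context vector tiled across positions, which is one reason its contribution is smaller.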

Coattention

was proposed in xiong2016dynamic . The architecture of the coattention encoder in DCN is shown in Fig.21.

In the Coattention encoder, the affinity matrix is calculated and normalized row-wise and column-wise to obtain two attention weight matrices: one across the document for each word of the query, and one across the query for each word of the document. Then the attention contexts for the question are computed and concatenated with the query encoding to obtain the final document representation. In the last step, this concatenation is fed to a bidirectional LSTM:

The result serves as the foundation for predicting the answer; its hidden states form the coattention encoding matrix.

Similar to Bi-directional Attention, the coattention mechanism utilizes attention information in two directions, though in a different way. It successively computes the attention contexts for the question and the document, and fuses them to get a co-dependent representation of the document.
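The two-direction computation can be sketched as follows, using the notation of the DCN paper (affinity matrix L, attention maps A^Q and A^D). The sentinel vectors and the trailing Bi-LSTM fusion step are omitted, so this is only a sketch of the core coattention algebra.

```python
import numpy as np

def softmax(x, axis):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def coattention(D, Q):
    """Core of the DCN coattention encoder (sketch).

    D: (d, n) document encoding, one column per document word.
    Q: (d, m) query encoding, one column per query word.
    Returns C_D: (2d, n), the co-dependent document representation.
    """
    L = D.T @ Q                                   # (n, m) affinity matrix
    A_Q = softmax(L, axis=0)                      # attention over document per query word
    A_D = softmax(L.T, axis=0)                    # attention over query per document word
    C_Q = D @ A_Q                                 # (d, m) document attention contexts
    C_D = np.concatenate([Q, C_Q], axis=0) @ A_D  # (2d, n) co-dependent representation
    return C_D
```

Note how C_Q (a document summary per query word) is re-attended through A_D, so the final document representation already contains second-order query-document interactions.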

Figure 21: Architecture of co-attention encoder from xiong2016dynamic .

Self-matching Attention

was proposed in the R-NET introduced above. Much useful information exists in the passage context that cannot be captured by a traditional LSTM (which mainly exploits information in a word's surrounding window), so self-matching attention was proposed to address this problem. It collects evidence for each token from the whole passage and its associated question information, and the result is the final passage representation:

here the attended input refers to an attention-pooling vector of the whole passage:

and the gate is as defined in Sec.3.2.

Uniquely, self-matching attention captures long-distance information from the passage itself. This helps R-NET deal with problems like coreference.

3.4 Pre-trained word representations

How to efficiently represent words as vectors, which serve as the basis of most modern MRC systems, is a problem that has long concerned researchers. Previously, one-hot representations and N-gram models were popular; however, these simple techniques met their limits in many tasks. To address this problem, many techniques have been proposed. We introduce them below in chronological order.

word2vec

Moving beyond the feedforward neural net language model (NNLM)bengio2003neural and the recurrent neural net language model (RNNLM), mikolov2013efficient proposed two novel models to learn distributed representations of words, namely the Continuous Bag-of-Words Model (CBOW) and the Continuous Skip-gram Model. The architectures of these two models are given in Fig.22.

The CBOW model uses several history words and future words as input and maximizes the probability of correctly predicting the current word. By contrast, the skip-gram model uses the current word as input and tries to predict words within a certain range before and after it. The resulting word vectors of both models achieved state-of-the-art performance on several tests.
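As a small illustration of the skip-gram setup, the (center, context) training pairs can be generated as below. This sketches only the data preparation step, not the training of the projection matrices.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for the skip-gram model.

    For each position, every token within `window` positions before or
    after the center becomes a context word to predict.
    """
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs
```

CBOW training data is the mirror image: the same window yields (context words, center) examples, with the context words averaged at the input layer.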

Figure 22: Architectures of CBOW model and Skip-gram model from mikolov2013efficient .

GloVe

The word2vec method belongs to the local context window methods, which capture fine-grained semantic and syntactic regularities of words efficiently. However, they cannot exploit global statistical information, unlike latent semantic analysis (LSA)deerwester1990indexing , which belongs to the global matrix factorization methods. GloVepennington2014glove combines the advantages of these two families of methods.

GloVe takes the co-occurrence probabilities of words into consideration and uses ratios of probabilities to reflect the relations between words. For example, if we denote the probability that word k appears in the context of word i as P(k|i), then the ratio P(k|i)/P(k|j) reveals how k correlates with i relative to j. An example is given in Fig.23. The GloVe model is formulated according to this observation:

where the arguments are word vectors, and the exact functional form varies according to different constraints.

Probability and Ratio   | k = solid | k = gas  | k = water | k = fashion
P(k|ice)                | 1.9e-4    | 6.6e-5   | 3.0e-3    | 1.7e-5
P(k|steam)              | 2.2e-5    | 7.8e-4   | 2.2e-3    | 1.8e-5
P(k|ice)/P(k|steam)     | 8.9       | 8.5e-2   | 1.36      | 0.96
Figure 23: from pennington2014glove . A ratio much greater than 1 means word k correlates well with ice, and a ratio much less than 1 means word k correlates well with steam.

ELMo

One disadvantage of the word vectors generated by the above methods is that they are static, and thus independent of the linguistic context in which they are applied. This may lead to poor performance in the case of polysemy. In light of this, ELMopeters2018deep was proposed to address this problem.

ELMo's model employs a bi-LSTMhochreiter1997long with character convolutions on the input. It then jointly maximizes the log likelihood in the forward and backward directions and records the internal states. Finally, a task-specific linear combination of those internal states is used to obtain the ELMo representation. In this way, ELMo captures context-dependent aspects of word meaning as well as syntactic information for each token. When fine-tuned on domain-specific data, the model usually performs better.
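The mixing step can be sketched as follows, assuming the biLM layer states are already computed; `gamma` and the weights `s` correspond to the task-specific scalar and softmax-normalized layer weights in the paper.

```python
import numpy as np

def elmo_combine(layer_states, s, gamma=1.0):
    """Task-specific combination of biLM layer states (sketch of the
    ELMo mixing step; the biLM itself is assumed precomputed).

    layer_states: (L, T, h) hidden states of L biLM layers over T tokens.
    s:            (L,) raw layer weights, softmax-normalized here.
    gamma:        task-specific scalar.
    Returns (T, h): one ELMo vector per token.
    """
    w = np.exp(s - np.max(s))
    w = w / w.sum()                                   # softmax over layers
    return gamma * np.tensordot(w, layer_states, axes=1)
```

Because the weights are learned per downstream task, different tasks can emphasize lower (more syntactic) or higher (more semantic) biLM layers.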

GPT

Compared to ELMo, GPTradford2018improving uses a variant of the Transformervaswani2017attention instead of an LSTM to better capture long-term linguistic structure. An overview of this work is given in Fig.24. Given a corpus, a standard language model with a multi-layer Transformer decoderliu2018generating is used:

where the quantities involved are the context window size, the context vectors of tokens, the number of layers, the token embedding matrix, and the position embedding matrix. All the parameters are trained using stochastic gradient descentrobbins1985stochastic , and the final transformer block's activation serves as the output representation.

A supervised fine-tuning stage can then be applied to different downstream tasks. For some tasks like text classification, only a linear output layer is needed to predict the label:

More recently, its successor GPT-2 was released, a scaled-up version of GPT with far more parameters. GPT-2 has 1.5 billion parameters and is claimed to achieve state-of-the-art performance on many language modeling benchmarks. However, its full model had not been released by the time this paper was written.

Figure 24: Graph from radford2018improving . Left: the transformer architecture and training objectives used in this work. Right: input transformations for fine-tuning on different tasks. All structured inputs are converted into token sequences to be processed by GPT, followed by a linear+softmax layer.

BERT

As shown in Fig.25, both the ELMo and GPT models use only unidirectional language models to learn token representations. BERTdevlin2018bert points out that this restriction severely limits the power of the pre-trained representations. To address this problem, two new prediction tasks are proposed to pre-train BERT bidirectionally, namely the "masked language model" and "Next Sentence Prediction".

Inspired by the Clozeref_cloze task, the "masked language model" predicts the ids of randomly masked tokens based on their context in the input. In other words, both the left and the right context are taken into consideration when computing representations. To capture sentence-level information and relationships, a binarized "Next Sentence Prediction" task predicts whether one sentence is the next sentence of another.
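A much-simplified sketch of the masked-LM corruption step follows. The real recipe selects 15% of tokens and then masks 80% of them, keeps 10% unchanged, and replaces 10% with random tokens; here every selected token is simply masked, for brevity.

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", p=0.15, seed=0):
    """Simplified BERT masked-LM corruption (sketch).

    Each token is independently selected with probability p and replaced
    by mask_token; labels hold the original token at masked positions
    and None elsewhere, so the loss is computed only on masked slots.
    """
    rng = random.Random(seed)
    masked, labels = [], []
    for t in tokens:
        if rng.random() < p:
            masked.append(mask_token)
            labels.append(t)
        else:
            masked.append(t)
            labels.append(None)
    return masked, labels
```

Training the model to recover the labels forces it to use both left and right context, which is exactly what the unidirectional objectives of GPT and ELMo cannot do.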

WordPiece embeddingswu2016google are used in the input layer along with Segment Embeddings and Position Embeddings. The input embedding is the sum of these three embeddings, as shown in Fig.26. The main architecture of BERT is a multi-layer bidirectional Transformer encoder almost identical to the original onevaswani2017attention .

Similar to GPT, when fine-tuning on downstream tasks, only an additional output layer with a minimal number of parameters is needed, as shown in Fig.27. BERT advanced state-of-the-art results on 11 NLP tasks.

A comparison of the sizes of BERT and GPT is given in Table 32.

Figure 25: Model Architectures of BERT, GPT and ELMo Quoted from devlin2018bert
Figure 26: BERT Input Representation devlin2018bert .
Figure 27: Task specific models overview from paperdevlin2018bert .
Model      | Parameters | Layers | Hidden size
GPT        | 117M       | 12     | 768
BERT-Base  | 110M       | 12     | 768
BERT-Large | 340M       | 24     | 1024
GPT-2      | 1542M      | 48     | 1600
Table 32: Hyperparameter comparison among 4 similar models. Layers denotes the number of transformer blocks.

4 Conclusion

In this paper, we summarized recent advances in the MRC field. In Section 1, we briefly introduced the history of MRC tasks and some early MRC systems. In Section 2, we introduced recent datasets in three categories: SQuAD, CNN/Daily Mail, CBT, NewsQA, TriviaQA and CLOTH in the extractive format; MS MARCO and NarrativeQA in the narrative format; and WIKIHOP, MCTest, RACE, MCScript and ARC in the multiple-choice format. CoQA, a novel dataset focusing on conversational questions, is also included.

In Section 3, we first went through several non-neural methods, including Sliding Window, logistic regression, TF-IDF and boosted methods, and then, more importantly, the neural-based models such as mLSTM+Ptr, DCN, GA, BiDAF, FastQA, R-NET, ReasoNet and QANet. Afterwards we discussed and compared two important components of these models, namely pre-training technology and attention mechanisms, in detail. We covered word2vec, GloVe, ELMo, GPT & GPT-2 and BERT in Section 3.4, and hard attention, soft attention, bi-directional attention, coattention and self-matching attention in Section 3.3.

Altogether, we reviewed the major progress made in the MRC field in recent years. However, the MRC direction is developing very fast, and it is difficult to include all newly proposed MRC work in this survey. We hope this review will ease reference to recent MRC advances and encourage more researchers to work in the MRC field.

References