Over the past decades, there has been growing interest in enabling machines to understand human language, and recently great progress has been made in machine reading comprehension (MRC). In one view, the recent tasks labeled MRC can be seen as extensions of question answering (QA).
As early as 1965, Simmons had already summarized a dozen QA systems proposed over the preceding five years in his review simmons1964answering . The survey by Hirschman and Gaizauskas hirschman2001natural classifies QA models into three categories: natural language front ends to databases, dialogue interactive advisory systems, and question answering and story comprehension systems. QA systems in the first category, like the BASEBALL green1961baseball and LUNAR woods1973progress systems, usually transform a natural language question into a query against a structured database using linguistic knowledge. Although they performed fairly well on certain tasks, they suffered from the narrow domain of the underlying database. As for the dialogue interactive advisory systems, including SHRDLU winograd1972understanding and GUS bobrow1977gus , early models also used a database as their knowledge source. Problems like ellipsis and anaphora in conversation, which those systems struggled to handle, remain a challenge even for today's models. The last category can be seen as the origin of modern MRC tasks. Wendy Lehnert lehnert1977conceptual first proposed that QA systems should consider both the story and the question, and answer only after necessary interpretation and inference; she also designed a system called QUALM lehnert1977conceptual according to this theory.
The past decade has witnessed huge development in the MRC field, including a soaring number of corpora and great progress in techniques.
As for MRC corpora, plenty of datasets in different domains and styles have been released in recent years. In 2013, MCTest richardson2013mctest was released as a multiple-choice reading comprehension dataset of high quality, but too small to train neural models. In 2015, CNN/Daily Mail ref_cnn and CBT ref_cbt were released; these two datasets were generated automatically from different domains and were much larger than previous datasets. In 2016, SQuAD ref_squad appeared as the first large-scale dataset with questions and answers written by humans, and many techniques were proposed in the course of the competition on this dataset. In the same year, MS MARCO nguyen2016ms was released with an emphasis on narrative answers. Subsequently, NewsQA ref_newsqa and NarrativeQA kovcisky2018narrativeqa were constructed in similar paradigms to SQuAD and MS MARCO respectively, both crowdsourced with the expectation of high quality. Various datasets sourced from different domains then sprang up over the following two years, including RACE lai2017large , CLOTH xie2017cloth and ARC clark2018arc , which were collected from exams; TriviaQA ref_triviaqa , which was based on trivia; and MCScript ostermann2018mcscript , which primarily focused on scripts. Released in 2018, WikiHop ref_wikihop aimed at examining systems' ability of multi-hop reasoning, and CoQA reddy2018coqa was proposed to test the conversational ability of models.
The appearance of the large-scale datasets above makes training end-to-end neural MRC models possible. While competing on the leaderboards, many models and techniques were developed in attempts to conquer particular datasets. From word representations and attention mechanisms to high-level architectures, neural models have evolved rapidly and even surpassed human performance on some tasks.
In Section 3, we introduce the traditional non-neural methods, neural network based models, and attention mechanisms that have been used in MRC tasks. Finally, Section 4 concludes our review.
2 MRC Corpus
The fast development of the MRC field is driven by the various large and realistic datasets released in recent years. Each dataset is usually composed of documents and questions for testing document understanding ability. The answers to the raised questions can be obtained either by extracting them from the documents or by selecting among preset options. Here, according to the format of the answers, we classify the datasets into three types, namely datasets with extractive answers, with descriptive answers, and with multiple-choice answers, and introduce them respectively in the following subsections. In parallel to this survey, new datasets hotpotqa ; drop ; googlenaturalquestions are steadily coming out with more diverse task formulations, testing more complicated understanding and reasoning abilities.
2.1 Datasets With Extractive Answers
To test a system's reading comprehension ability, this kind of dataset, which originates from Cloze-style questions ref_cloze , first provides the system with a large number of documents or passages, and then feeds it questions whose answers are segments of the corresponding passages. A good system should select the correct text span from the given context. Such comprehension tests are appealing because they are objectively gradable and may measure a range of important abilities, from basic understanding to complex inference ref_richardson .
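The extractive setting can be made concrete with a toy baseline (our own sketch, not any published system): score every candidate span of a passage by the word overlap between the question and the words surrounding the span, and return the best-scoring span. The function name, window size and span length limit below are illustrative assumptions.

```python
from collections import Counter

def best_span(passage, question, max_len=10):
    """Score each candidate span by the overlap between the question
    words and the words surrounding the span, return the best span."""
    p_tokens = passage.lower().split()
    q_counts = Counter(question.lower().split())
    best, best_score = (0, 1), -1.0
    for i in range(len(p_tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(p_tokens) + 1)):
            # context window: a few words before and after the span
            window = p_tokens[max(0, i - 5):i] + p_tokens[j:j + 5]
            score = sum(q_counts[w] for w in window)
            if score > best_score:
                best, best_score = (i, j), score
    return " ".join(p_tokens[best[0]:best[1]])
```

Such overlap baselines already land near the answer sentence on many examples, which is why dataset authors emphasize questions with lexical divergence from the passage.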
Whether sourced from crowdworkers or generated automatically from different corpora, these datasets all use a text span in the document as the answer to the proposed question. Many of those released in recent years are large enough for training strong neural models. These datasets include SQuAD, CNN/Daily Mail, CBT, NewsQA, TriviaQA and WIKIHOP, which are described briefly below.
One of the most famous datasets of this kind is the Stanford Question Answering Dataset (SQuAD) ref_squad . The Stanford Question Answering Dataset v1.0 (SQuAD v1.0, https://stanford-qa.com) consists of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text (or span) from the corresponding reading passage. SQuAD v1.0 contains 107,785 question-answer pairs from 536 articles, which is much larger than previous manually labeled RC datasets. We quote some example question-answer pairs in Fig. 1, where each answer is a span of the document.
In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail… Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Short, intense periods of rain in scattered locations are called “showers”.
|Q1: What causes precipitation to fall?|
|Q2: What is another main form of precipitation besides drizzle, rain, snow, sleet and hail?|
|Q3: Where do water droplets collide with ice crystals to form precipitation?|
|A3: within a cloud|
In SQuAD v1.0 ref_squad , the answers belong to different categories, as shown in Table 1. Common noun phrases make up 31.8% of the whole data, proper noun phrases (consisting of person, location and other entities) make up 32.6%, and the remaining third consists of dates, numbers, adjective phrases, verb phrases, clauses and so on. This indicates that the answers in SQuAD v1.0 display reasonable diversity. As for the reasoning skills required to answer the questions, the authors manually annotated a sample of examples and found that all of them exhibit at least some lexical or syntactic divergence between the question and the answer sentence in the passage.
|Answer type||Proportion||Example|
|Date||8.9%||19 October 1512|
|Other Entity||15.3%||ABC Sports|
|Common Noun Phrase||31.8%||property damage|
|Verb Phrase||5.5%||returned to Earth|
|Clause||3.7%||to avoid trivialization|
Later, SQuAD v2.0 ref_squad_2 was released with an emphasis on unanswerable questions. This new version of SQuAD adds over 50,000 unanswerable questions, created adversarially by crowdworkers based on the original ones. In order to challenge existing models, which tend to make unreliable guesses on questions whose answers are not stated in the context, the newly added questions are highly similar to the corresponding context and have plausible (but incorrect) answers in it. We quote some examples in Fig. 2. The unanswerable questions in SQuAD v2.0 are posed by humans, and exhibit much more diversity and fidelity than those in other automatically constructed datasets ref_addsent ; ref_zero_shot . In such cases, simple heuristics based on word overlap ref_overlap or entity type recognition ref_type_recog are not able to distinguish answerable from unanswerable questions.
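To make concrete why such heuristics fail, consider a toy word-overlap answerability check (our own sketch of the general idea, not the cited methods): it flags a question as answerable whenever enough of its content words occur in the context, so an adversarial unanswerable question that reuses the context's wording is misclassified.

```python
def seems_answerable(question, context, threshold=0.5):
    """Toy heuristic: treat a question as answerable if at least
    `threshold` of its longer words also occur in the context.
    Adversarially written unanswerable questions share most of
    their wording with the context, so this check passes anyway."""
    q_words = {w for w in question.lower().split() if len(w) > 3}
    c_words = set(context.lower().split())
    if not q_words:
        return False
    return len(q_words & c_words) / len(q_words) >= threshold

context = "Normandy is a region in France named after the Norsemen"
# An unanswerable question deliberately phrased with context words:
print(seems_answerable("Which region in France is named after the Romans", context))
# → True: the adversarial question is misclassified as answerable
```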
The CNN and Daily Mail dataset ref_cnn , released by Google DeepMind and the University of Oxford in 2015, is the first large-scale reading comprehension dataset constructed from natural language materials. Unlike most related work, which uses templates or syntactic/semantic rules to extract document-query-answer triples, this work collects 93k articles from CNN (www.cnn.com) and 220k articles from the Daily Mail (www.dailymail.co.uk) as the source text. Since each article comes with a number of bullet points summarizing it, these bullet points are converted into document-query-answer triples with Cloze-style questions ref_cloze .
To exclusively examine a system's reading comprehension ability, rather than its use of world knowledge or co-occurrence statistics, further modifications are applied to those triples to construct an anonymized version. That is, each entity is replaced by an abstract entity marker, which cannot easily be predicted from world knowledge or an n-gram language model. An example data point and its anonymized version are shown in Table 2.
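The anonymization step can be sketched as follows (a simplification under our own assumptions: entities are passed in as plain strings, whereas the original pipeline resolved them with coreference and entity-resolution tools; the function name is illustrative):

```python
import random

def anonymize(text, entities, seed=0):
    """Replace each entity string with an abstract marker entNNN.
    Markers are assigned in a random order per document so that a
    model cannot memorize which marker denotes which real entity."""
    rng = random.Random(seed)
    ids = rng.sample(range(100, 1000), len(entities))
    mapping = {}
    # Replace longer entity strings first so substrings don't clash.
    for ent, n in sorted(zip(entities, ids), key=lambda p: -len(p[0])):
        marker = "ent%d" % n
        mapping[ent] = marker
        text = text.replace(ent, marker)
    return text, mapping

doc = "Jeremy Clarkson was dropped by the BBC after the BBC investigated."
anon, mapping = anonymize(doc, ["Jeremy Clarkson", "BBC"])
```

Because every occurrence of an entity maps to the same marker within a document, the coreference structure of the text is preserved while the real-world identity is hidden.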
Some basic corpus statistics of CNN and Daily Mail are shown in Table 4. We also quote the percentages of correct answers appearing among the top N most frequent entities in a given document, as in Table 4, which illustrates the difficulty of the questions to some extent.
|Original Version||Anonymised Version|
|The BBC producer allegedly struck by Jeremy Clarkson will not press charges against the “Top Gear” host, his lawyer said Friday. Clarkson, who hosted one of the most-watched television shows in the world, was dropped by the BBC Wednesday after an internal investigation by the British broadcaster found he had subjected producer Oisin Tymon “to an unprovoked physical and verbal attack.” …||the ent381 producer allegedly struck by ent212 will not press charges against the “ ent153 ” host , his lawyer said friday . ent212 , who hosted one of the most - watched television shows in the world , was dropped by the ent381 wednesday after an internal investigation by the ent180 broadcaster found he had subjected producer ent193 “ to an unprovoked physical and verbal attack . ” …|
|Producer X will not press charges against Jeremy Clarkson, his lawyer says.||producer X will not press charges against ent212 , his lawyer says.|
The Children's Book Test ref_cbt is part of the bAbI project of Facebook AI Research (https://research.fb.com/downloads/babi/), which aims at automatic text understanding and reasoning. Children's books are chosen because they have a clear narrative structure, which aids this task. The stories used in CBT come from books freely available from Project Gutenberg (https://www.gutenberg.org). Questions are formed by enumerating 21 consecutive sentences from book chapters: the first 20 sentences serve as the context, and the last one, with one word removed, serves as the query. 10 candidate answers are selected from words appearing in either the context or the query. An example question is given in Fig. 3 and the dataset size is shown in Table 5.
In CBT, four distinct types of words (Named Entities, (Common) Nouns, Verbs and Prepositions, based on the output of the POS tagger and named-entity recognizer in the Stanford CoreNLP toolkit ref_SCNLP ) are removed respectively to form four classes of questions. For each class, the nine wrong candidates are selected randomly from words in the corresponding context and query that have the same type as the answer.
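The construction procedure above can be sketched roughly as follows (a simplification: words of the target type are supplied as a set, whereas the authors derived them with POS tagging and NER; the function name and the XXXXX placeholder are our own illustrative choices):

```python
import random

def make_cbt_question(sentences, answer_type_words, seed=0):
    """Turn 21 consecutive sentences into a CBT-style question:
    sentences[0:20] form the context; one word of the target type is
    removed from sentence 21 to form the query; the 10 candidates are
    the answer plus 9 same-type words drawn from context and query."""
    assert len(sentences) == 21
    context, last = sentences[:20], sentences[20]
    rng = random.Random(seed)
    words = last.split()
    # pick a removable word of the target type from the last sentence
    answer = next(w for w in words if w in answer_type_words)
    query = " ".join("XXXXX" if w == answer else w for w in words)
    pool = [w for s in sentences for w in s.split()
            if w in answer_type_words and w != answer]
    distractors = rng.sample(sorted(set(pool)), 9)
    candidates = sorted(distractors + [answer])
    return context, query, candidates, answer
```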
On CBT, conventional language models performed much worse when predicting nouns or named entities, whereas they did a great job predicting prepositions and verbs. This can probably be explained by the fact that such models rely almost exclusively on local context. In contrast, Memory Networks ref_mem_net can exploit a wider context and outperform the conventional models when predicting nouns or named entities. Thus, in comparison with CNN/Daily Mail, this corpus encourages the use of world knowledge and wider context, and focuses less on paraphrasing parts of a context.
| ||Train||Valid||Test|
|Number of books||98||5||5|
|Number of questions (context+query)||669,343||8,000||10,000|
|Average words in contexts||465||435||445|
|Average words in queries||31||27||29|
Based on 12,744 news articles from CNN (www.cnn.com), the NewsQA ref_newsqa dataset contains 119,633 question-answer pairs generated by crowdworkers. Similar to SQuAD ref_squad , the answer to each question is a text span of arbitrary length in the corresponding article (a null span is also allowed). CNN articles are chosen as source material because, in the authors' view, machine comprehension systems are particularly suited to high-volume, rapidly changing information sources like news ref_newsqa . The major differences from CNN/Daily Mail are that the answers in NewsQA are not necessarily entities, and therefore no anonymization procedure is applied in its generation.
The statistics of answer types in NewsQA are shown in Table 6; as can be seen, a variety of answer types is ensured. Furthermore, the authors sampled 1,000 examples from NewsQA and SQuAD respectively and analyzed the reasoning skills needed to answer the questions. The results indicate that, compared to SQuAD, a larger proportion of questions in NewsQA require high-level reasoning skills, including inference and synthesis. While simple skills like word matching and paraphrasing can solve most questions in both datasets, NewsQA tends to require more complex reasoning than SQuAD. The detailed comparison is given in Table 7.
|Answer type||Example||Proportion (%)|
|Date/Time||March 12, 2008||2.9|
|Person||Ludwig van Beethoven||14.8|
|Other Entity||Pew Hispanic Center||5.8|
|Common Noun Phr.||federal prosecutors||22.2|
|Verb Phr.||suffered minor damage||1.4|
|Clause Phr.||trampling on human rights||18.3|
|Prepositional Phr.||in the attack||3.8|
Q: When were the findings published?
S: Both sets of research findings were published Thursday…
Q: Who is the struggle between in Rwanda?
S: The struggle pits ethnic Tutsis, supported by Rwanda, against ethnic Hutu, backed by Congo.
Q: Who drew inspiration from presidents?
S: Rudy Ruiz says the lives of US presidents can make them positive role models for students.
Q: Where is Brittanee Drexel from?
S: The mother of a 17-year-old Rochester, New York high school student … says she did not give her daughter permission to go on the trip. Brittanee Marie Drexel’s mom says…
Q: Whose mother is moving to the White House?
S: … Barack Obama’s mother-in-law, Marian Robinson, will join the Obamas at the family’s private quarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned]
Instead of relying on crowdworkers to create question-answer pairs from selected passages, as in NewsQA and SQuAD, the over 650K question-answer-evidence triples of TriviaQA ref_triviaqa are generated through an automatic procedure. First, a large number of question-answer pairs are gathered from 14 trivia and quiz-league websites and filtered. Then, evidence documents for each question-answer pair are collected from either web search results or Wikipedia articles. Finally, a clean, noise-free, human-annotated subset of 1,975 triples from TriviaQA is provided; an example triple is shown in Fig. 4.
The basic statistics of TriviaQA are given in Table 8. By sampling 200 examples from the dataset and annotating them manually, the authors found that Wikipedia titles (including person, organization, location, and miscellaneous) constitute over 90% of all answers, while the small remaining percentage mainly belongs to the Numerical and Free Text types. The average number of entities per question and the percentages of certain question types are shown in Table 9.
|Question: The Dodecanese Campaign of WWII that was an attempt by the Allied forces to capture islands in the Aegean Sea was the inspiration for which acclaimed 1961 commando film?|
|Answer: The Guns of Navarone|
|Excerpt: The Dodecanese Campaign of World War II was an attempt by Allied forces to capture the Italian-held Dodecanese islands in the Aegean Sea following the surrender of Italy in September 1943, and use them as bases against the German-controlled Balkans. The failed campaign, and in particular the Battle of Leros, inspired the 1957 novel The Guns of Navarone and the successful 1961 movie of the same name.|
|Question: American Callan Pinckney’s eponymously named system became a best-selling (1980s-2000s) book/video franchise in what genre?|
|Excerpt: Callan Pinckney was an American fitness professional. She achieved unprecedented success with her Callanetics exercises. Her 9 books all became international best-sellers and the video series that followed went on to sell over 6 million copies. Pinckney’s first video release ”Callanetics: 10 Years Younger In 10 Hours” outsold every other fitness video in the US.|
|Total number of QA pairs||95,956|
|Number of unique answers||40,478|
|Number of evidence documents||662,659|
|Avg. question length (word)||14|
|Avg. document length (word)||2,895|
|Avg. entities/question||Which politician won the Nobel Peace Prize in 2009?||1.77 per question|
|Fine grained answer type||What fragrant essential oil is obtained from Damask Rose?||73.5% of questions|
|Coarse grained answer type||Who won the Nobel Peace Prize in 2009?||15.5% of questions|
|Time frame||What was photographed for the first time in October 1959?||34% of questions|
|Comparisons||What is the appropriate name of the largest type of frog?||9% of questions|
WIKIHOP ref_wikihop was released in 2018 for the purpose of evaluating a system's ability to perform multi-hop reasoning across multiple documents. In most existing datasets, the information needed to answer a question is usually contained in a single sentence, which makes current MRC models focus on simple reasoning skills like locating, matching or aligning information between the query and the support text. For example, in SQuAD, the sentence with the highest lexical similarity to the question contains the answer about 80% of the time ref_wad , and a simple binary word-in-query indicator feature boosted the relative accuracy of a baseline model by 27.9% ref_weis . To move beyond this, the authors define a novel MRC task in which a model needs to combine evidence from different documents to answer a question. A sample from WIKIHOP displaying such characteristics is shown in Fig. 5.
To construct WIKIHOP, the authors collect (s, r, o) triples, with subject entity s, relation r, and object entity o, from WIKIDATA ref_wikidata . Then Wikipedia articles associated with the entities are added as candidate evidence documents. A triple becomes a query q after removing the answer a from it, that is, q = (s, r, ?) and a = o. To reach the goal of multi-hop reasoning, a bipartite graph is constructed to aid corpus construction. As shown in Fig. 6, vertices on the two sides respectively correspond to the entities and the documents of the knowledge base, and an edge denotes that an entity appears in the corresponding document. For a given (q, a) pair, the answer candidates and support documents are identified by traversing the bipartite graph using breadth-first search; the documents visited become the support documents.
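The traversal can be sketched as a breadth-first search alternating between entity and document vertices (our own minimal version; the function and variable names are illustrative, and the real construction additionally filters candidates by type):

```python
from collections import deque

def gather_support(start_entity, entity_to_docs, doc_to_entities, max_hops=2):
    """BFS over the entity-document bipartite graph: starting from the
    query's subject entity, alternately expand the documents mentioning
    an entity and the entities mentioned in a document, up to max_hops
    document layers. The visited documents form the support set."""
    support = []
    seen_ents = {start_entity}
    frontier = deque([(start_entity, 0)])
    while frontier:
        ent, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for doc in entity_to_docs.get(ent, []):
            if doc in support:
                continue
            support.append(doc)
            for nxt in doc_to_entities.get(doc, []):
                if nxt not in seen_ents:
                    seen_ents.add(nxt)
                    frontier.append((nxt, hops + 1))
    return support
```

With two hops, a document reachable only through an entity mentioned in another document is still included, which is exactly what forces multi-hop reasoning.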
Another dataset, MEDHOP, is constructed in the same way as WIKIHOP, with a focus on the medicine domain. Some basic statistics of WIKIHOP and MEDHOP are shown in Table 10 and Table 11. Table 12 lists the proportions of different types of answer samples, which indicates that to perform well on WIKIHOP, a system needs to be good at multi-step reasoning.
| ||Min||Max||Avg.||Median|
|# cand. – WH||2||79||19.8||14|
|# docs. – WH||3||63||13.7||11|
|# tok/doc – WH||4||2,046||100.4||91|
|# cand. – MH||2||9||8.9||9|
|# docs. – MH||5||64||36.4||29|
|# tok/doc – MH||5||458||253.9||264|
|Unique multi-step answer.||36%|
|Likely multi-step unique answer.||9%|
|Multiple plausible answers.||15%|
|Ambiguity due to hypernymy.||11%|
|Only single document required.||9%|
|Answer does not follow.||12%|
2.2 Datasets With Descriptive Answers
Instead of text spans or entities extracted from candidate documents, descriptive answers are whole, stand-alone sentences, which exhibit more fluency and integrity. In addition, in the real world many questions cannot be answered simply by a text span or an entity. Moreover, humans prefer answers presented together with supporting evidence and examples. For these reasons, several datasets with descriptive answers have been released in recent years. Below we introduce two of them in detail, namely MS MARCO and NarrativeQA.
MS MARCO (Microsoft MAchine Reading COmprehension) is a large dataset released by Microsoft in 2016 nguyen2016ms . This dataset aims to address questions and documents from the real world. Sourced from real anonymized queries issued through Bing (www.bing.com) or Cortana (https://www.microsoft.com/en-us/cortana), along with the corresponding results from the Bing search engine, MS MARCO closely reproduces real-world QA situations. For each question in the dataset, a crowdworker is asked to answer it in the form of a complete sentence using passages provided by Bing. Unanswerable questions are also kept in the dataset, to encourage systems to judge whether a question is answerable given scanty or conflicting materials. The first version of MS MARCO, released in 2016, has about 100k questions, and the latest version, V2.1, released in 2018, has over 1,000k questions. Both are available at http://www.msmarco.org.
The composition of MS MARCO is shown in Table 13, and the distribution of different question types in Table 14. From the latter we can see that not all queries contain interrogatives, because they come from real users. The interrogative "What" is contained in 34.96% of the queries, and description questions account for the major question type. Overall, the interrogative distribution shows reasonable diversity.
|Query||A question query issued to Bing.|
|Passages||Top 10 passages from Web documents as retrieved by Bing. The passages are presented in ranked order to human editors. The passage that the editor uses to compose the answer is annotated as is_selected: 1.|
|Document URLs||URLs of the top ranked documents for the question from Bing. The passages are extracted from these documents.|
|Answer(s)||Answers composed by human editors for the question, automatically extracted passages and their corresponding documents.|
|Well Formed Answer(s)||Well-formed answer rewritten by human editors, and the original answer.|
|Segment||QA classification. E.g., tallest mountain in south america belongs to the ENTITY segment because the answer is an entity (Aconcagua).|
|Question segment||Percentage of questions|
NarrativeQA kovcisky2018narrativeqa is another dataset with descriptive answers, released by DeepMind and the University of Oxford in 2017. NarrativeQA is specifically designed to examine how well a system can capture the underlying narrative elements to answer questions that cannot be answered by simple pattern recognition or global salience. From the example question-answer pair shown in Fig. 7, we can see that relatively high-level abstraction or reasoning is required to answer the question.
The stories used in NarrativeQA consist of books from Project Gutenberg (http://www.gutenberg.org/) and movie scripts from related websites (mainly http://www.imsdb.com/, and also http://www.dailyscript.com/ and http://www.awesomefilm.com/). Each story, together with its plot summary, is provided to crowdworkers to create question-answer pairs. Because the crowdworkers never see the full text, they are less likely to create questions and answers based solely on localized context. The answers can be full sentences, which goes beyond simply extracting factual information kovcisky2018narrativeqa .
|Title: Ghostbusters II|
|Question: How is Oscar related to Dana?|
|Answer: her son|
|Summary snippet: …Peter’s former girlfriend Dana Barrett has had a son, Oscar…|
|DANA (setting the wheel brakes on the buggy) Thank you, Frank. I’ll get the hang of this eventually. She continues digging in her purse while Frank leans over the buggy and makes funny faces at the baby, OSCAR, a very cute nine-month old boy. FRANK (to the baby) Hiya, Oscar. What do you say, slugger? FRANK (to Dana) That’s a good-looking kid you got there, Ms. Barrett.|
Some basic statistics are shown in Table 17, and the distributions of different types of questions and answers are shown in Table 17 and Table 17. According to the original paper, less than 30% of answers appear verbatim as text segments of the stories, which reduces a system's chance of answering questions with the simple skills used before.
|… movie scripts||554||57||178|
|# question–answer pairs||32,747||3,461||10,557|
|Avg. #tok. in summaries||659||638||654|
|Max #tok. in summaries||1,161||1,189||1,148|
|Avg. #tok. in stories||62,528||62,743||57,780|
|Max #tok. in stories||430,061||418,265||404,641|
|Avg. #tok. in questions||9.83||9.69||9.85|
|Avg. #tok. in answers||4.73||4.60||4.72|
2.3 Datasets With Multiple-Choice Answers

With datasets of descriptive answers, it is relatively difficult to evaluate system performance precisely and objectively. By contrast, multiple-choice questions, which have long been used to test students' reading comprehension, can be objectively graded. Generally, this kind of question can extensively examine one's reasoning skills over a given passage, including simple pattern recognition, clausal inference and multiple-sentence reasoning. In light of this, many datasets in this format have been released; they are listed as follows.
MCTest richardson2013mctest , a high-quality dataset consisting of 500 fictional stories and 2,000 questions, was released in 2013 by Microsoft with the same multiple-choice format as RACE. Targeted at 7-year-old children, the passages and questions used in MCTest are quite easy and understandable, which reduces the required world knowledge. Since the stories are fictional, many answers can only be found in the story itself. The main drawback of MCTest is that its size is too small to train a well-performing model. A sample of MCTest is shown in Fig. 8.
RACE lai2017large contains 27,933 passages and 97,687 questions collected from English exams for Chinese middle and high school students. Considering that the passages and questions are specifically designed by English teachers and experts to evaluate students' reading comprehension, this dataset is promising for developing and testing MRC systems.
Because the questions are created with high quality by human experts, there is little noise in RACE. Moreover, the passages in RACE cover a wide range of topics, overcoming the topic-bias problem that commonly exists in other datasets (such as news articles for CNN/Daily Mail ref_cnn and Wikipedia articles for SQuAD ref_squad ).
A sample of RACE is shown in Table 21. The dataset first provides students/systems with a passage to read, then presents several questions, each with four candidate answers. Words in the questions and candidate answers may not appear in the passage, so simple context-matching techniques do not help as much as in other datasets. Analysis in the paper lai2017large shows that reasoning skills are indispensable for answering most RACE questions correctly.
RACE is divided into two subsets, namely RACE-M and RACE-H, for middle school and high school respectively. Some basic statistics of RACE are given in Table 22 and Table 23. The distribution of reasoning types required to answer the questions is illustrated in Table 24, showing that over half of the questions in RACE require reasoning skills.
CLOTH (CLOze test by TeacHers) xie2017cloth was constructed in the format of cloze questions. It is also composed of English tests for Chinese middle and high school students; one example is shown in Table 25. In CLOTH, the blanks in the questions were carefully designed by teachers to test different aspects of language knowledge. The candidate answers usually have subtle differences, making the questions difficult to answer even for humans. Similar to RACE, CLOTH is divided into two parts: CLOTH-M for middle school and CLOTH-H for high school. Some basic statistics of this corpus are shown in Table 26.
Through experiments on CLOTH, the authors concluded that the performance gap between humans and systems mainly results from the ability to use long-term context xie2017cloth , i.e., multiple-sentence reasoning.
Passage: Nancy had just got a job as a secretary in a company. Monday was the first day she went to work, so she was very _1_ and arrived early. She _2_ the door open and found nobody there. ”I am the _3_ to arrive.” She thought and came to her desk. She was surprised to find a bunch of _4_ on it. They were fresh. She _5_ them and they were sweet. She looked around for a _6_ to put them in. ”Somebody has sent me flowers the very first day!” she thought _7_ . ” But who could it be?” she began to _8_ . The day passed quickly and Nancy did everything with _9_ interest. For the following days of the _10_ , the first thing Nancy did was to change water for the followers and then set about her work.
Then came another Monday. _11_ she came near her desk she was overjoyed to see a(n) _12_ bunch of flowers there. She quickly put them in the vase, _13_ the old ones. The same thing happened again the next Monday. Nancy began to think of ways to find out the _14_ . On Tuesday afternoon, she was sent to hand in a plan to the _15_ . She waited for his directives at his secretary’s _16_ . She happened to see on the desk a half-opened notebook, which _17_ : ”In order to keep the secretaries in high spirits, the company has decided that every Monday morning a bunch of fresh flowers should be put on each secretary’s desk.” Later, she was told that their general manager was a business management psychologist.
1. A. depressed  B. encouraged  C. excited  D. surprised
2. A. turned  B. pushed  C. knocked  D. forced
3. A. last  B. second  C. third  D. first
4. A. keys  B. grapes  C. flowers  D. bananas
5. A. smelled  B. ate  C. took  D. held
6. A. vase  B. room  C. glass  D. bottle
7. A. angrily  B. quietly  C. strangely  D. happily
8. A. seek  B. wonder  C. work  D. ask
9. A. low  B. little  C. great  D. general
10. A. month  B. period  C. year  D. week
11. A. Unless  B. When  C. Since  D. Before
12. A. old  B. red  C. blue  D. new
13. A. covering  B. demanding  C. replacing  D. forbidding
14. A. sender  B. receiver  C. secretary  D. waiter
15. A. assistant  B. colleague  C. employee  D. manager
16. A. notebook  B. desk  C. office  D. house
17. A. said  B. written  C. printed  D. signed
| Avg. # sentences | 16.26 | 18.92 | 17.79 |
| Avg. # words | 242.88 | 365.1 | 313.16 |
MCScriptostermann2018mcscript focuses on questions that require reasoning with commonsense knowledge. Released in March 2018, this dataset provides stories describing people's daily activities, in which ambiguity and implicitness can easily be resolved with commonsense, along with questions generated by crowdworkers. The correct answers may not appear in the given text, as shown in the examples in Fig.9. It consists of about 2.1K texts and 14K questions. According to statistical analysis, 27.4% of all the questions in MCScript require commonsense knowledge to answer, so the dataset can genuinely test a system's commonsense inference ability. All questions in the dataset are answerable. The distribution of question types in MCScript is shown in Fig.10.
ARC (AI2 Reasoning Challenge)clark2018arc makes use of standardized tests, whose questions are objectively gradable and vary in difficulty, making it a Grand Challenge for AI clark2018think clark2016my . ARC consists of about 7.8K questions.
The authors of ARC also design two baselines, namely a retrieval-based algorithm and a word co-occurrence algorithm. The Challenge Set, a subset of ARC containing about 2.6K questions, is created by gathering questions that are answered incorrectly by both baselines. The Easy Set is composed of the remaining 5.2K questions. Several state-of-the-art models have been tested on the Challenge Set, but none of them significantly outperforms a random baselineclark2018arc , which reflects the difficulty of the Challenge Set. Two example questions from the Challenge Set are as follows:
Which property of a mineral can be determined just by looking at it? (A) luster [correct] (B) mass (C) weight (D) hardness
A student riding a bicycle observes that it moves faster on a smooth road than on a rough road. This happens because the smooth road has (A) less gravity (B) more gravity (C) less friction [correct] (D) more friction
For example, the first question is difficult in that the ground-truth knowledge, "Luster can be determined by looking at something", appears only as a stand-alone sentence in Web text, whereas the incorrect candidate "hardness" has a strong correlation with "mineral" in the text.
The ARC corpus, a scientific text corpus which contains 14M science-related sentences and mentions 95% of the knowledge relevant to the Challenge Set questions according to a sample analysis clark2018arc , is released along with the ARC question set. The use of the corpus is optional. Some statistics of ARC are shown in Table 27 and Table 28.
| Grade | Challenge % (# qns) | Easy % (# qns) |
| 3 | 3.6 (94 qns) | 3.4 (176 qns) |

| | Challenge (min / average / max) | Easy (min / average / max) |
| Question (# words) | 2 / 22.3 / 128 | 3 / 19.4 / 118 |
| Question (# sentences) | 1 / 1.8 / 11 | 1 / 1.6 / 9 |
| Answer option (# words) | 1 / 4.9 / 39 | 1 / 3.7 / 26 |
| # answer options | 3 / 4.0 / 5 | 3 / 4.0 / 5 |
CoQA (Conversational Question Answering)reddy2018coqa is a conversational-style dataset consisting of 126K questions sourced from 8K conversations over 7 different domains. The answers are free-form. The motivation of CoQA is that in daily life humans usually gather information by asking questions in conversation, so it is desirable for a machine to be capable of answering such questions. CoQA first provides models with a text passage to understand, and then presents a series of questions as they appear in a conversation. One example is given in Fig.11.
The key challenge of CoQA is that a system must handle the conversation history properly to tackle problems such as coreference resolution. Among the 7 domains from which the passages are collected, 2 are used for out-of-domain evaluation and 5 for in-domain evaluation. The distribution of domains is shown in Table 30, and statistics on some linguistic phenomena are given in Table 31. Coreference and pragmatics are challenging linguistic phenomena that do not appear in other datasets.
Passage (excerpt): Jessica went to sit in her rocking chair. Today was her birthday and she was turning 80. Her granddaughter Annie was coming over in the afternoon and Jessica was very excited to see her. Her daughter Melanie and Melanie's husband Josh were coming as well. Jessica had
Q: Who had a birthday?
R: Jessica went to sit in her rocking chair. Today was her birthday and she was turning 80.
Q: How old would she be?
R: she was turning 80
Q: Did she plan to have any visitors?
R: Her granddaughter Annie was coming over
Q: How many?
A: Annie, Melanie and Josh
R: Her granddaughter Annie was coming over in the afternoon and Jessica was very excited to see her. Her daughter Melanie and Melanie's husband Josh were coming as well.
Relationship between a question and its passage:
- Lexical match (29.8%). Q: Who had to rescue her? A: the coast guard R: Outen was rescued by the coast guard
- Paraphrasing (43.0%). Q: Did the wild dog approach? R: he drew cautiously closer
- Pragmatics (27.2%). Q: Is Joey a male or female? R: it looked like a stick man so she kept him. She named her new noodle friend Joey

Relationship between a question and its conversation history:
- No coref. (30.5%). Q: What is IFL?
- Explicit coref. (49.7%). Q: Who had Bashti forgotten? A: the puppy Q: What was his name?
- Implicit coref. (19.8%). Q: When will Sirisena be sworn in? A: 6 p.m local time
3 MRC Techniques
In this section, we will introduce different techniques employed in MRC.
3.1 Non-Neural Method
Before neural networks came into fashion, many MRC systems were built on various non-neural techniques, which now mostly serve as baselines for comparison. Next, we introduce several of these techniques: TF-IDF, the sliding window algorithm, logistic regression and a boosted method.
The TF-IDF (term frequency-inverse document frequency) technique is widely used in the Information Retrieval area and later found a place in MRC tasks. As validated beforeclark2016combining , when candidate answers are presented, retrieval-based models can serve as a strong baseline. This kind of baseline is widely used on multi-document datasets such as WIKIHOPref_wikihop . By solely exploiting the lexical correlation between the concatenation of a candidate answer and the query on one side and a given document on the other, this kind of algorithm predicts the candidate with the highest similarity score over all documents. Because inter-document information is ignored by TF-IDF, this baseline cannot measure how much a question relies on cross-document reasoning.
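As an illustration, this retrieval baseline can be sketched in a few lines. The tokenization, the exact TF-IDF weighting and the function names here are simplifications of what actual systems use:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Document frequency of each term across the corpus
    df = Counter(t for d in docs for t in set(d.split()))
    n = len(docs)
    # TF-IDF vector for each document, stored as a sparse dict
    return [{t: c * math.log(n / df[t]) for t, c in Counter(d.split()).items()}
            for d in docs]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_candidates(question, candidates, documents):
    # Score each candidate by the best TF-IDF similarity between the
    # concatenation "question + candidate" and any single document
    corpus = documents + [question + " " + c for c in candidates]
    vecs = tfidf_vectors(corpus)
    doc_vecs, cand_vecs = vecs[:len(documents)], vecs[len(documents):]
    scores = [max(cosine(cv, dv) for dv in doc_vecs) for cv in cand_vecs]
    return candidates[scores.index(max(scores))]
```

Because each candidate is scored against one document at a time, the score never aggregates evidence across documents, which is exactly why this baseline cannot measure cross-document reasoning.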
The sliding window algorithm was constructed as a baseline for the MCTest datasetrichardson2013mctest . It predicts an answer based on simple lexical information within a sliding window. Inspired by TF-IDF, this algorithm uses the inverse word count as the weight of each word, and maximizes the bag-of-words similarity between the answer and the sliding window in the given passage.
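A minimal sketch of this baseline follows; the function names and tokenization are our own, and the original also offers a variant that subtracts a word-distance term, omitted here:

```python
import math
from collections import Counter

def sliding_window_score(passage_tokens, target_tokens):
    # Inverse word-count weighting, as in the MCTest baseline:
    # rarer passage words contribute more to the overlap score
    counts = Counter(passage_tokens)
    ic = {w: math.log(1 + 1 / counts[w]) for w in counts}
    window = len(target_tokens)
    target = set(target_tokens)
    best = 0.0
    for i in range(max(1, len(passage_tokens) - window + 1)):
        score = sum(ic[w] for w in passage_tokens[i:i + window] if w in target)
        best = max(best, score)
    return best

def answer(passage, question, candidates):
    # Score each candidate by sliding the window over the passage
    tokens = passage.lower().split()
    scores = [sliding_window_score(tokens, (question + " " + c).lower().split())
              for c in candidates]
    return candidates[scores.index(max(scores))]
```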
This baseline method, a logistic regression over hand-crafted features, is proposed in SQuADref_squad . It extracts a large amount of features from the candidates, including lengths, bigram frequencies, word frequencies, span POS tags, lexical features, dependency tree path features, etc., and predicts whether a text span is the final answer based on this information.
This model is proposed as a conventional feature-based baseline for the CNN/Daily Mail dataset chen2016thorough . Since the task can be seen as a ranking problem, i.e. making the predicted answer score highest among all candidates, the authors turn to the implementation of LambdaMART wu2010adapting in the RankLib package (https://sourceforge.net/p/lemur/wiki/RankLib/), a highly successful ranking algorithm using forests of boosted decision trees. Through feature engineering, 8 feature templates (the details can be found in the paper) are chosen to form a feature vector representing each candidate, and a weight vector is learned so that the correct answer is ranked highest.
3.2 Neural-Based Method
With the popularity of neural networks, end-to-end models have produced promising results on many MRC tasks. These models do not require the complex manually-devised features that traditional approaches relied on, and perform much better. Next we introduce several end-to-end models, mainly in chronological order.
As the first end-to-end neural architecturewang2016machine proposed for SQuAD, this model combines the match-LSTMwang2015learning , which is used to get a query-aware representation of passage, and the Pointer Network2015arXiv150603134V , which aims to construct an answer so that every token within it comes from the input text. An overall picture of the model architecture is given in Fig.12.
Match-LSTM was originally designed for predicting textual entailment. In that task, a premise and a hypothesis are given, and the match-LSTM encodes the hypothesis in a premise-aware way. For every token in the hypothesis, the model uses a soft-attention mechanism, which will be discussed later in Sect.3.3, to get a weighted vector representation of the premise. This weighted vector is concatenated with the vector representation of the corresponding token, and both are fed into an LSTM, namely the match-LSTM. For MRC, the authors replace the premise and hypothesis with the query and passage to get a query-aware representation of the given passage. Two preprocessing LSTMs are employed to encode the query and the passage respectively, and a bidirectional match-LSTM is employed to obtain the query-aware representation of the passage.
After getting the query-aware representation of the passage, a Pointer Network (Ptr-Net) is employed to generate answers by selecting tokens from the input passage. At each inference step, Ptr-Net uses the soft-attention mechanism to get a probability distribution over the input sequence, and selects the token with the highest probability as the output symbol. Two different strategies are proposed for constructing the answer.
The sequence model assumes that every word in the answer can appear at any position in the passage, and that the length of the answer is not fixed. In order to let the model stop generating tokens after producing the whole answer, a special symbol is placed at the end of the passage; predicting this symbol terminates answer generation.
The boundary model works differently from the sequence model in that it only predicts the start index and the end index; in other words, it is based on the assumption that the answer appears as a continuous segment of the passage. Test results show an advantage of the boundary model over the sequence model.
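The boundary model's decoding step can be sketched as a search over spans. Here p_start and p_end stand for the two probability distributions produced by the Pointer Network, and the span-length cap is our own simplification:

```python
def best_span(p_start, p_end, max_len=15):
    # Boundary-model decoding: choose (s, e) with s <= e maximizing
    # p_start[s] * p_end[e], i.e. the most probable contiguous span.
    best, best_score = (0, 0), -1.0
    for s in range(len(p_start)):
        for e in range(s, min(s + max_len, len(p_end))):
            score = p_start[s] * p_end[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best
```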
Bi-Directional Attention Flow
Proposed by seo2016bidirectional , the Bi-Directional Attention Flow has two key features at the context encoding stage. First, this model takes different levels of granularity as input, including character-level, word-level and contextualized embeddings. Second, it uses bi-directional attention flow, namely a passage-to-query attention and a query-to-passage attention, to get a query-aware passage representation. The detailed description is given as follows.
As is shown in Fig.13, the BiDAF model has six layers. The Character Embedding Layer and the Word Embedding Layer map each word into the vector space based respectively on character-level CNNskim2014convolutional and pre-trained GloVe embeddingspennington2014glove . The concatenation of these two word embeddings is passed to a two-layer Highway Networksrivastava2015highway , whose output is provided to a bi-directional LSTM in the Contextual Embedding Layer to refine the word embeddings using context information. These first three layers are applied to both the query and the passage.
The Attention Flow Layer is where the information from the query and the passage is mixed. Instead of summarizing the passage and the query into a fixed vector as most attention mechanisms do, this layer lets raw information, including the attention vectors and the embeddings from previous layers, flow to the subsequent layer, which reduces information loss. The attention is computed in two directions, from passage to query and from query to passage. Details of the Attention Flow Layer will be given in Sect.3.3.
The Modeling Layer takes in the query-aware representation of the context words and uses two bi-directional LSTMs to capture the interactions among the passage words conditioned on the query. The final Output Layer is task-specific and gives the prediction of the answer.
The Gated-Attention Readerdhingra2016gated aims to realize multi-hop reasoning for answering cloze-style questions over documents. A multiplicative interaction between the query and the hidden states of the document is employed in its attention mechanism, and the multi-hop architecture of the model imitates the multi-step reasoning of humans in reading comprehension.
The overview of the model is given in Fig.14. The model reads the document and the query iteratively through K layers. In the k-th layer, the model first uses a bidirectional Gated Recurrent Unit (Bi-GRU)cho2014learning to transform X^{(k-1)}, the document embeddings passed from the last layer, into D^{(k)}. A layer-specific query representation Q^{(k)} is obtained with another Bi-GRU. Then both D^{(k)} and Q^{(k)} are fed into a Gated Attention module, whose result X^{(k)} is passed to the next layer.
For each token d_i in D^{(k)}, the Gated Attention module uses soft attention to get a token-specific representation of the query, \tilde{q}_i = Q^{(k)} \alpha_i with \alpha_i = \mathrm{softmax}(Q^{(k)\top} d_i). The new embedding of this token, x_i = d_i \odot \tilde{q}_i, is then obtained by element-wise multiplication of d_i and \tilde{q}_i.
At the last stage, the decoder applies a softmax to the inner products between the query representation and the outputs of the last layer to get the probability distribution over predicted answers.
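The Gated-Attention module of one layer can be sketched as follows, assuming the Bi-GRU outputs are already computed; the shapes and names are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention(D, Q):
    # D: (n_doc, h) document token states; Q: (n_q, h) query token states.
    # For each document token, attend over the query, then gate the token
    # by element-wise multiplication with its query summary.
    out = np.zeros_like(D)
    for i, d in enumerate(D):
        alpha = softmax(Q @ d)          # attention weights over query tokens
        q_i = alpha @ Q                 # token-specific query summary
        out[i] = d * q_i                # multiplicative gate
    return out
```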
Dynamic Coattention Networks (DCN)xiong2016dynamic introduces a coattention mechanism to combine co-dependent representations of the query and the document, and dynamic iteration to avoid being trapped in local maxima corresponding to incorrect answers, as previous single-pass models were. The dynamic pointing decoder takes in the output of the coattention encoder and generates the final predictions. Detailed procedures are given as follows.
Let (x^Q_1, \dots, x^Q_n) denote the sequence of embeddings of words in the query and (x^D_1, \dots, x^D_m) those in the document. The details of DCN are as follows.
In the document and question encoder, the vector representations of the document and the query are fed into an LSTM respectively, and the hidden states at each step are collected to form the encoding matrices D' and Q'. Sentinel vectors d_\varnothing and q_\varnothing merity2016pointer are appended to the encoding matrices so that the model can map unrelated words, appearing exclusively in either the query or the document, to this void vector. To allow for some variation between the document encoding space and the query encoding space, a non-linear projection is applied to the query encoding. The final representations of the document and the query are D and Q = \tanh(W^{(Q)} Q' + b^{(Q)}).
The coattention encoder takes in D and Q and outputs the coattention encoding matrix U, which is the input to the dynamic pointing decoder. The details of the coattention encoder will be discussed in Sec.3.3.
The overview of the dynamic pointing decoder is given in Fig.15. To enable the model to recover from local maxima, the Highway Maxout Network (HMN) is proposed to predict the start point and the end point iteratively. The HMN is composed of Highway Networkssrivastava2015highway , characterized by skip connections that pass gradients effectively through deep networks, and Maxout Networksgoodfellow2013maxout , a learnable activation function with strong empirical performance.
During the iteration, the hidden state of the decoder is updated according to Eq.1:

h_i = \mathrm{LSTM}(h_{i-1}, [u_{s_{i-1}}; u_{e_{i-1}}])   (Eq.1)

where u_{s_{i-1}} and u_{e_{i-1}} are the coattention representations of the start and end words predicted in the (i-1)-th iteration. Given h_i, u_{s_{i-1}} and u_{e_{i-1}}, the probability of the t-th word being the start or the end point is calculated by Eq.2:

\alpha_t = \mathrm{HMN}_{start}(u_t, h_i, u_{s_{i-1}}, u_{e_{i-1}}), \quad \beta_t = \mathrm{HMN}_{end}(u_t, h_i, u_{s_{i-1}}, u_{e_{i-1}})   (Eq.2)

and the word with the maximum probability is selected as the prediction at the current step.
The architecture of HMN is given in Fig.16. The mathematical description of HMN is as follows:

\mathrm{HMN}(u_t, h_i, u_{s_{i-1}}, u_{e_{i-1}}) = \max(W^{(3)} [m^{(1)}_t; m^{(2)}_t] + b^{(3)})
r = \tanh(W^{(D)} [h_i; u_{s_{i-1}}; u_{e_{i-1}}])
m^{(1)}_t = \max(W^{(1)} [u_t; r] + b^{(1)})
m^{(2)}_t = \max(W^{(2)} m^{(1)}_t + b^{(2)})

where r is a non-linear projection of the current state, and each max is taken over the pooling dimension of the corresponding tensor.
FastQAWeissenbornWS17 achieves competitive performance with a simple architecture, which calls into question the necessity of the ever-growing complexity of QA systems. Unlike many systems that employ a complex interaction layer to capture the interaction between the query and the context, FastQA only makes use of two computable word-level features. An overview of the FastQA architecture is given in Fig.17.
The binary word-in-question (wiq^b) feature indicates whether a token in the passage appears in the corresponding query.
The weighted word-in-question (wiq^w) feature takes both the term frequency and the embedding similarity between the query and the context into account, so that context tokens similar to rare query tokens receive higher weight.
The concatenation of these two features with the original representation of each word is fed into a Bi-LSTM to get the final hidden states. The answer layer is composed of a simple 2-layer feed-forward network along with beam search.
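A rough sketch of the two features follows; the binary feature matches the description above, while the weighted variant here is a simplified stand-in for FastQA's actual similarity-based definition:

```python
import numpy as np

def wiq_features(context_tokens, question_tokens, ctx_emb, q_emb):
    # Binary word-in-question: 1 if the context token occurs in the question.
    wiq_b = np.array([float(t in set(question_tokens)) for t in context_tokens])
    # Weighted feature (simplified): embedding similarity between each
    # context token and each question token, softmax-normalized over the
    # context, then summed over the question tokens.
    sim = ctx_emb @ q_emb.T                       # (n_ctx, n_q)
    sim = np.exp(sim - sim.max(axis=0, keepdims=True))
    sim = sim / sim.sum(axis=0, keepdims=True)
    wiq_w = sim.sum(axis=1)
    return wiq_b, wiq_w
```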
Given the word-level and character-level embeddings, R-NET first employs a bi-directional GRUcho2014learning to encode the questions and passages. It then uses a gated attention-based recurrent network to fuse information from the question and the passage. A self-matching layer is subsequently applied to refine the passage representation. The output layer is based on pointer networks, similar to those in match-LSTM, to predict the boundary of the answer. The initial hidden vector of the pointer network is computed by attention-pooling over the question representation.
The gated attention-based recurrent network adds another gate to the normal attention-based recurrent network. This gate weights the passage information according to the question. Inspired by rocktaschel2015reasoning , the sentence-pair representations are obtained as follows:

c_t = \mathrm{att}(u^Q, [u^P_t; v^P_{t-1}]), \quad g_t = \mathrm{sigmoid}(W_g [u^P_t; c_t]), \quad v^P_t = \mathrm{RNN}(v^P_{t-1}, g_t \odot [u^P_t; c_t]),

where g_t is the added gate, and u^P and u^Q are the original representations of the passage and the question.
To exploit information from the whole passage for each token, self-matching attention is applied to get the final representation of the passage, h^P. The details of self-matching attention are given in Sec.3.3.
The output layer uses pointer networksvinyals2015pointer to predict the start and end positions of the answer. The initial hidden vector of the pointer network is an attention-pooling over the question representation u^Q. The objective function is the sum of the negative log probabilities of the ground-truth start and end positions under the predicted distributions.
Unlike previous models, which perform a fixed number of turns of reading or reasoning regardless of the complexity of queries and passages, the ReasoNetshen2017reasonet makes use of reinforcement learning to dynamically determine the reading and reasoning depth. The intuition comes from the observation that the difficulty of questions can vary a lot within the same datasetchen2016thorough , and from the fact that humans usually revisit important parts of the passage and the question in order to answer better. An overview of the ReasoNet structure is given in Fig.19.
The external memory M is usually the word embeddings encoded by a Bi-RNN. The internal state s_t is updated according to s_{t+1} = \mathrm{RNN}(s_t, x_{t+1}), where x_{t+1} is the attention vector: x_{t+1} = f_{att}(s_t, M). The termination gate determines when to stop updating the states above and predict the answer, according to a binary variable t_t sampled from p(\cdot \mid f_{tg}(s_t)). In this way, ReasoNet can mimic the human inference process, exploiting the passage to answer the question better.
Most of the models above are primarily based on RNNs with attention, and are therefore often slow in both training and inference due to the sequential nature of RNNs. To make machine comprehension fast, QAnetyu2018qanet is proposed without any RNNs in its architecture. An overview of the QAnet structure is given in Fig.20.
The key difference between QAnet and previous models is that QAnet uses only convolution and self-attention in its embedding and modeling encoders, discarding the commonly used RNNs. Depthwise separable convolutionschollet2017xception kaiser2017depthwise capture the local structure of the text, while the multi-head (self-)attention mechanismvaswani2017attention models global interactions within the whole passage. A query-to-context attention similar to that in DCNxiong2016dynamic is applied afterwards.
QAnet achieved state-of-the-art accuracy while obtaining up to a 13x speedup in training and a 9x speedup in inference, compared to RNN counterpartsyu2018qanet .
3.3 Attention Mechanism
Attention mechanisms have shown great power in selecting important information and in aligning and capturing similarity between different parts of the input. Next we introduce several representative attention mechanisms, primarily in chronological order.
Hard attention was proposed for the image captioning task in xu2015show as "stochastic hard attention". Let a = \{a_1, \dots, a_L\} denote the feature vectors captured by a CNN, each corresponding to a part of the image. When deciding which of all the features to feed to the decoder LSTM to generate the caption, a one-hot variable s_t is defined: the indicator s_{t,i} is set to 1 if the i-th vector of a is the one used to extract visual features at the current step t. If we denote the input of the decoder LSTM as \hat{z}_t, then

\hat{z}_t = \sum_i s_{t,i} a_i.

The paper assigns a multinoulli distribution parametrized by \{\alpha_{t,i}\} and views s_t as a random variable:

p(s_{t,i} = 1 \mid s_{j<t}, a) = \alpha_{t,i}, \quad \alpha_{t,i} = \mathrm{softmax}_i(e_{t,i}), \quad e_{t,i} = f_{att}(a_i, h_{t-1}),

where f_{att} is a multilayer perceptron. After defining the objective function L_s as a variational lower bound on the marginal log-likelihood:

L_s = \sum_s p(s \mid a) \log p(y \mid s, a) \le \log p(y \mid a),

and approximating its gradient by a Monte Carlo method, the final learning rule for the model is:

\frac{\partial L_s}{\partial W} \approx \frac{1}{N} \sum_{n=1}^{N} \Big[ \frac{\partial \log p(y \mid \tilde{s}^n, a)}{\partial W} + \lambda_r \big(\log p(y \mid \tilde{s}^n, a) - b\big) \frac{\partial \log p(\tilde{s}^n \mid a)}{\partial W} + \lambda_e \frac{\partial H[\tilde{s}^n]}{\partial W} \Big],

where \lambda_r and \lambda_e are two hyperparameters set by cross-validation, b is a moving-average baseline and H[\tilde{s}] is the entropy of the sampled attention.
Here we will first introduce the basic form of soft attention in Neural Machine translation task, then we will talk about its variants in other tasks like natural language inference(NLI) and MRC.
Unlike hard attention, soft attention calculates a weight distribution over all the input representations, and uses their weighted sum as the input to the decoder. For example, in BahdanauCB14 , let (h_1, \dots, h_T) denote the encoder's output sequence, and \alpha_{t,i} denote the weight of each h_i (which indicates to what extent h_i is related to the current output token y_t). Then the input to the decoder is the context vector c_t:

c_t = \sum_{i=1}^{T} \alpha_{t,i} h_i.

The weights \alpha_{t,i} are calculated and learned through a feedforward neural network a:

e_{t,i} = a(s_{t-1}, h_i), \quad \alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{k=1}^{T} \exp(e_{t,k})},

where s_{t-1} is the previous decoder hidden state.
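The scoring and weighted-sum steps of soft attention can be sketched with NumPy; the additive scoring form and the parameter shapes here are illustrative:

```python
import numpy as np

def soft_attention(H, s, W_h, W_s, v):
    # H: (n, h) encoder states; s: (d,) current decoder state.
    # Additive (Bahdanau-style) scoring followed by a softmax, then a
    # weighted sum of encoder states as the context vector.
    scores = np.tanh(H @ W_h + s @ W_s) @ v       # (n,) alignment scores
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                   # attention weights
    context = alpha @ H                           # weighted sum of states
    return context, alpha
```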
In the NLI task, the input has two components, namely a premise and a hypothesis, and attention is used to exploit the interaction between these two parts. Taking the match-LSTMwang2015learning as an example, we denote h^s_j and h^t_k as the resulting hidden states of the encoder LSTMs for the premise and the hypothesis respectively. When predicting the label of the hypothesis, an attention-weighted combination of the hidden states of the premise is computed for each hypothesis position k:

a_k = \sum_j \alpha_{kj} h^s_j,

where a_k is the attention vector stated above and the weights \alpha_{kj} are produced by a learned scoring function. Then a_k is concatenated with h^t_k and fed into the match-LSTM, whose hidden state at position k is h^m_k = \mathrm{LSTM}(h^m_{k-1}, [a_k; h^t_k]); the final state is used for predicting the result.
In the MRC task, we can regard the question as the premise and the passage as the hypothesis, as is done in the Match-LSTM+Pointer Network model. By applying the attention mechanism, we obtain additional query information for each token in the passage, which improves model performance.
Compared to hard attention, soft attention has the advantage of being differentiable, and is thus easy to train and fast in both training and inference.
Bi-directional attention was proposed in BiDAF. Compared to the attention mechanisms described above, it considers attention in two directions, namely Query-to-context (Q2C) attention and Context-to-query (C2Q) attention. Taking BiDAF as an example, given H and U, the concatenated outputs of the LSTMs in the Contextual Embedding Layer for the context and the query respectively, the similarity matrix S is computed:

S_{tj} = w_{(S)}^{\top} [h_t; u_j; h_t \circ u_j],

where w_{(S)} is a trainable parameter vector and \circ is element-wise multiplication. Then the C2Q attention weights a_t and the attended query vectors \tilde{u}_t are computed by:

a_t = \mathrm{softmax}(S_{t:}), \quad \tilde{u}_t = \sum_j a_{tj} u_j.

Similarly, the Q2C attention weights b and the attended context vector \tilde{h} are:

b = \mathrm{softmax}(\max_{col}(S)), \quad \tilde{h} = \sum_t b_t h_t,

where \tilde{h} is tiled T times across the context dimension.
Finally, the two attention vectors above are combined with the original contextual embeddings through a vector fusing function, and the result serves as the base for subsequent modeling and prediction.
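The two attention directions and the fusion step can be sketched as follows; for brevity the similarity here is a plain dot product, whereas BiDAF's trainable similarity also uses the concatenation [h; u; h∘u]:

```python
import numpy as np

def bidaf_attention(H, U):
    # H: (T, d) context states; U: (J, d) query states.
    S = H @ U.T                                   # (T, J) similarity matrix
    # Context-to-query: each context word attends over the query words
    a = np.exp(S - S.max(axis=1, keepdims=True))
    a = a / a.sum(axis=1, keepdims=True)
    U_tilde = a @ U                               # (T, d) attended query
    # Query-to-context: attend over context words via max-over-query scores
    m = S.max(axis=1)
    b = np.exp(m - m.max())
    b = b / b.sum()
    h_tilde = np.tile(b @ H, (H.shape[0], 1))     # (T, d) tiled context vector
    # Fuse: [H; U~; H*U~; H*h~], as in BiDAF's merge function
    return np.concatenate([H, U_tilde, H * U_tilde, H * h_tilde], axis=1)
```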
The bi-directional attention adds more information through its Q2C part compared to normal attention mechanisms. However, as shown in the ablation study of seo2016bidirectional , the attention in this direction is less useful than the standard C2Q attention (on the SQuAD dev set). The reason is that the query is usually short, so the added Q2C information is relatively small compared to that of the C2Q direction.
In the coattention encoder, the affinity matrix L = D^{\top} Q is calculated and normalized row-wise and column-wise to obtain A^Q, the attention weights across the document for each word of the query, and A^D, the attention weights across the query for each word of the document. Then the attention contexts for the question, C^Q = D A^Q, are computed and concatenated with Q; attending again with A^D yields the final document representation C^D = [Q; C^Q] A^D. At the last step, [D; C^D] is fed to a bidirectional LSTM:

u_t = \mathrm{Bi\text{-}LSTM}(u_{t-1}, u_{t+1}, [d_t; c^D_t]).

The result serves as the foundation for predicting the answer: the hidden states form the coattention encoding matrix U = [u_1, \dots, u_m].
Similarly to Bi-directional Attention, the coattention mechanism utilizes attention information in two directions, while in a different way. It successively computes the attention contexts for the question and the document, and fuses them to get a co-dependent representation of document.
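A NumPy sketch of the coattention computation described above, with the sentinel vectors and the final Bi-LSTM omitted:

```python
import numpy as np

def coattention(D, Q):
    # D: (m, h) document encoding; Q: (n, h) question encoding.
    L = D @ Q.T                                   # (m, n) affinity matrix
    A_Q = np.exp(L - L.max(axis=0, keepdims=True))
    A_Q = A_Q / A_Q.sum(axis=0, keepdims=True)    # attn over document, per query word
    A_D = np.exp(L - L.max(axis=1, keepdims=True))
    A_D = A_D / A_D.sum(axis=1, keepdims=True)    # attn over query, per document word
    C_Q = A_Q.T @ D                               # (n, h) document summaries for Q
    C_D = A_D @ np.concatenate([Q, C_Q], axis=1)  # (m, 2h) co-dependent representation
    # [D; C_D] would be fed to a Bi-LSTM in the full model
    return np.concatenate([D, C_D], axis=1)       # (m, 3h)
```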
Self-matching attention is proposed in R-NET, introduced before. Much useful information exists in the broader passage context that cannot be captured by a traditional LSTM, which mainly exploits information within each word's surrounding window; self-matching attention is proposed to address this problem. It collects evidence for each token from the whole passage together with its question-aware information, and the result is the final passage representation:

h^P_t = \mathrm{BiRNN}(h^P_{t-1}, [v^P_t; c_t]),

where c_t is an attention-pooling vector of the whole passage:

s^t_j = v^{\top} \tanh(W^P_v v^P_j + W^{\tilde{P}}_v v^P_t), \quad a^t_i = \mathrm{softmax}_i(s^t_i), \quad c_t = \sum_i a^t_i v^P_i,

and the gate defined in Sec.3.2 is applied to the input [v^P_t; c_t].
Uniquely, Self-matching Attention captures long-distance information from the passage itself. This helps R-NET in dealing with problems like coreference.
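The core of self-matching attention reduces to letting the passage attend over itself; this sketch uses a dot-product score and omits R-NET's gate and subsequent RNN:

```python
import numpy as np

def self_matching(V):
    # V: (m, h) question-aware passage representation.
    # Each token attends over the whole passage (itself included),
    # collecting long-distance evidence.
    S = V @ V.T                                   # (m, m) similarity
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)          # row-wise softmax
    C = A @ V                                     # attention-pooled passage
    return np.concatenate([V, C], axis=1)         # [v_t; c_t] per token
```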
3.4 Pre-trained word representations
How to efficiently represent words as vectors, which serve as the basis of most modern MRC systems, is a problem that has long concerned researchers. Previously, one-hot representations and N-gram models were popular; however, those simple techniques met their limits in many tasks. To address this problem, many technologies have been proposed. We introduce them in chronological order.
The word2vec work proposed two novel models for learning distributed representations of words, namely the Continuous Bag-of-Words Model (CBOW) and the Continuous Skip-gram Model. The architectures of these two models are given in Fig.22.
The CBOW model uses several history words and future words as input and maximizes the probability of correctly predicting the current word. By contrast, the skip-gram model uses the current word as input and tries to predict words within a certain range before and after it. The resulting word vectors of both models achieved state-of-the-art performance on several tests.
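The difference between the two objectives is easiest to see from the (input, target) training pairs they generate; a small sketch:

```python
def skipgram_pairs(tokens, window=2):
    # Skip-gram: (center word, context word) for every context word
    # within `window` positions of the center word.
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def cbow_pairs(tokens, window=2):
    # CBOW: (list of context words, center word to predict)
    return [([tokens[j]
              for j in range(max(0, i - window), min(len(tokens), i + window + 1))
              if j != i],
             tokens[i])
            for i in range(len(tokens))]
```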
The word2vec method belongs to the family of local context window methods, which capture fine-grained semantic and syntactic regularities of words efficiently. However, they cannot exploit global statistical information, unlike latent semantic analysis (LSA)deerwester1990indexing , which belongs to the family of global matrix factorization methods. GloVepennington2014glove combines the advantages of these two families of methods.
GloVe takes the co-occurrence probabilities of words into consideration, and uses ratios of probabilities to reflect the relations between words. If we denote the probability that word j appears in the context of word i as P_{ij} = P(j \mid i), then the ratio P_{ik} / P_{jk} can tell the correlation between certain words. An example is given in Fig.23. Based on this observation, the GloVe model takes the form

F((w_i - w_j)^{\top} \tilde{w}_k) = \frac{P_{ik}}{P_{jk}},

where w_i and w_j are word vectors and \tilde{w}_k is a context word vector; the exact form of F varies according to different constraints.
One disadvantage of the word vectors generated by the above methods is that they are static, and thus independent of the linguistic context in which they are applied. This may lead to poor performance when it comes to polysemy. In light of this, ELMopeters2018deep was proposed to address the problem.
ELMo’s model employs a bi-LSTMhochreiter1997long with character convolutions on the input.
Then it jointly maximizes the log likelihood of the forward and backward directions and record the internal states.
Finally, a task-specific linear combination of those internal states is used to obtain the ELMo representation. In this way, ELMo can capture context-dependent aspects of word meaning as well as syntactic information for each token. If fine-tuned on domain-specific data, the model usually performs better.
Compared to ELMo, GPTradford2018improving uses a variant of the Transformervaswani2017attention instead of an LSTM to better capture long-term linguistic structure. The overview of this work is given in Fig.24. Given a corpus, a standard language modeling objective is maximized with a multi-layer Transformer decoderliu2018generating :

h_0 = U W_e + W_p, \quad h_l = \mathrm{transformer\_block}(h_{l-1}) \;\; \forall l \in [1, n], \quad P(u) = \mathrm{softmax}(h_n W_e^{\top}),

where k is the context window size, U = (u_{-k}, \dots, u_{-1}) is the context vector of tokens, n is the number of layers, W_e is the token embedding matrix, and W_p is the position embedding matrix. All the parameters are trained using stochastic gradient descentrobbins1985stochastic . The final transformer block's activation is denoted as h^m_l.
A supervised fine-tuning step can then be applied to different downstream tasks. For some tasks like text classification, only a linear output layer with parameters W_y is needed to predict the label y:

P(y \mid x^1, \dots, x^m) = \mathrm{softmax}(h^m_l W_y).
More recently, its successor GPT-2 was released, which scales GPT up to a much larger volume. GPT-2 has 1.5 billion parameters and is claimed to achieve state-of-the-art performance on many language modeling benchmarks. However, its full model had not been released by the time this paper was written.
As shown in Fig.25, both the ELMo and GPT models use only unidirectional language models to learn the representation of tokens. BERTdevlin2018bert points out that this restriction severely limits the power of the pre-trained representations. To address this problem, two new prediction tasks are proposed to pre-train BERT in both directions, namely the "masked language model" and "Next Sentence Prediction".
Inspired by the Clozeref_cloze task, the "masked language model" predicts the ids of randomly masked tokens based on their context in the input. In other words, both the left and the right context are taken into consideration when computing representations. To capture sentence-level information and relationships, a binarized "Next Sentence Prediction" task predicts whether a sentence B is the actual next sentence of a sentence A.
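The masking procedure can be sketched as follows; the 15%/80%/10%/10% proportions follow the BERT paper, while the function itself is our own illustration:

```python
import random

def mask_tokens(tokens, mask_prob=0.15, vocab=None, seed=0):
    # BERT-style masking sketch: select ~15% of positions as prediction
    # targets; of those, 80% become [MASK], 10% a random vocabulary word,
    # and 10% are left unchanged.
    rng = random.Random(seed)
    vocab = vocab or list(tokens)
    out, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok                     # the id the model must predict
            r = rng.random()
            if r < 0.8:
                out[i] = "[MASK]"
            elif r < 0.9:
                out[i] = rng.choice(vocab)      # random replacement
            # else: keep the original token unchanged
    return out, labels
```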
WordPiece embeddingswu2016google are used in the input layer along with segment embeddings and position embeddings. The input embedding is the sum of these three embeddings, as shown in Fig.26. The main architecture of BERT is a multi-layer bidirectional Transformer encoder, almost identical to the original onevaswani2017attention .
Similar to GPT, when fine-tuning on downstream tasks, only an additional output layer with a minimal number of parameters is needed, as shown in Fig.27. BERT advanced the state-of-the-art results on 11 NLP tasks.
A comparison of size of BERT and GPT is given in Table 32.
In this paper, we have summarized recent advances in the MRC field. In Section 1, we briefly introduced the history of MRC tasks and some early MRC systems. In Section 2, we introduced recent datasets in three categories: SQuAD, CNN/Daily Mail, CBT, NewsQA, TriviaQA and CLOTH in the extractive format; MS MARCO and NarrativeQA in the narrative format; and WIKIHOP, MCTest, RACE, MCScript and ARC in the multiple-choice format. CoQA, a novel dataset focusing on conversational questions, is also included.
In Section 3, we first went through several non-neural methods, including the sliding window, logistic regression, TF-IDF and boosted methods, and then, more importantly, neural models such as mLSTM+Ptr, DCN, GA, BiDAF, FastQA, R-NET, ReasoNet and QANet. Afterwards, we discussed and compared in detail two important components of these models, namely pre-training techniques and attention mechanisms. We covered Word2Vec, GloVe, ELMo, GPT & GPT2 and BERT in Section 3.4, and hard attention, soft attention, bi-directional attention, coattention and self-attention mechanisms in Section 3.3.
Altogether, we reviewed the major progress that has been made in the MRC field in recent years. However, the field is developing very fast, and it is difficult to include all newly proposed MRC work in one survey. We hope this review will ease reference to recent MRC advances and encourage more researchers to work on MRC.
- (1) Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473 (2014)
- (2) Bengio, Y., Ducharme, R., Vincent, P., Jauvin, C.: A neural probabilistic language model. Journal of Machine Learning Research 3(Feb), 1137–1155 (2003)
- (3) Bobrow, D.G., Kaplan, R.M., Kay, M., Norman, D.A., Thompson, H., Winograd, T.: Gus, a frame-driven dialog system. Artificial intelligence 8(2), 155–173 (1977)
- (4) Chen, D., Bolton, J., Manning, C.D.: A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858 (2016)
- (5) Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
- (6) Chollet, F.: Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357 (2017)
- (7) Clark, P., Cowhey, I., Etzioni, O., Khot, T., Sabharwal, A., Schoenick, C., Tafjord, O.: Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457 (2018)
- (9) Clark, P., Etzioni, O.: My computer is an honor student—but how intelligent is it? standardized tests as a measure of ai. AI Magazine 37(1), 5–12 (2016)
- (10) Clark, P., Etzioni, O., Khot, T., Sabharwal, A., Tafjord, O., Turney, P.D., Khashabi, D.: Combining retrieval, statistics, and inference to answer elementary science questions. In: AAAI, pp. 2580–2586 (2016)
- (11) Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., Harshman, R.: Indexing by latent semantic analysis. Journal of the American society for information science 41(6), 391–407 (1990)
- (12) Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
- (13) Dhingra, B., Liu, H., Yang, Z., Cohen, W.W., Salakhutdinov, R.: Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549 (2016)
- (14) Goodfellow, I.J., Warde-Farley, D., Mirza, M., Courville, A., Bengio, Y.: Maxout networks. arXiv preprint arXiv:1302.4389 (2013)
- (15) Green Jr, B.F., Wolf, A.K., Chomsky, C., Laughery, K.: Baseball: an automatic question-answerer. In: Papers presented at the May 9-11, 1961, western joint IRE-AIEE-ACM computer conference, pp. 219–224. ACM (1961)
- (16) Hermann, K.M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., Blunsom, P.: Teaching machines to read and comprehend. In: Advances in Neural Information Processing Systems, pp. 1693–1701 (2015)
- (17) Hill, F., Bordes, A., Chopra, S., Weston, J.: The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301 (2015)
- (18) Hirschman, L., Gaizauskas, R.: Natural language question answering: the view from here. natural language engineering 7(4), 275–300 (2001)
- (19) Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation 9(8), 1735–1780 (1997)
- (20) Jia, R., Liang, P.: Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328 (2017)
- (21) Joshi, M., Choi, E., Weld, D.S., Zettlemoyer, L.: Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551 (2017)
- (22) Kaiser, L., Gomez, A.N., Chollet, F.: Depthwise separable convolutions for neural machine translation. arXiv preprint arXiv:1706.03059 (2017)
- (23) Kim, Y.: Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 (2014)
- (24) Kočiskỳ, T., Schwarz, J., Blunsom, P., Dyer, C., Hermann, K.M., Melis, G., Grefenstette, E.: The narrativeqa reading comprehension challenge. Transactions of the Association of Computational Linguistics 6, 317–328 (2018)
- (25) Lai, G., Xie, Q., Liu, H., Yang, Y., Hovy, E.: Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683 (2017)
- (26) Lehnert, W.G.: A conceptual theory of question answering. In: Proceedings of the 5th international joint conference on Artificial intelligence-Volume 1, pp. 158–164. Morgan Kaufmann Publishers Inc. (1977)
- (27) Levy, O., Seo, M., Choi, E., Zettlemoyer, L.: Zero-shot relation extraction via reading comprehension. arXiv preprint arXiv:1706.04115 (2017)
- (28) Liu, P.J., Saleh, M., Pot, E., Goodrich, B., Sepassi, R., Kaiser, L., Shazeer, N.: Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198 (2018)
- (29) Manning, C., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S., McClosky, D.: The stanford corenlp natural language processing toolkit. In: Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pp. 55–60 (2014)
- (30) Merity, S., Xiong, C., Bradbury, J., Socher, R.: Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843 (2016)
- (31) Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
- (32) Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R., Deng, L.: Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268 (2016)
- (33) Ostermann, S., Modi, A., Roth, M., Thater, S., Pinkal, M.: Mcscript: A novel dataset for assessing machine comprehension using script knowledge. arXiv preprint arXiv:1803.05223 (2018)
- (34) Pennington, J., Socher, R., Manning, C.: Glove: Global vectors for word representation. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532–1543 (2014)
- (35) Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., Zettlemoyer, L.: Deep contextualized word representations. arXiv preprint arXiv:1802.05365 (2018)
- (36) Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding with unsupervised learning. Tech. rep., OpenAI (2018)
- (37) Rajpurkar, P., Jia, R., Liang, P.: Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822 (2018)
- (38) Rajpurkar, P., Zhang, J., Lopyrev, K., Liang, P.: Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 (2016)
- (39) Reddy, S., Chen, D., Manning, C.D.: Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042 (2018)
- (40) Richardson, M., Burges, C.J., Renshaw, E.: Mctest: A challenge dataset for the open-domain machine comprehension of text. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 193–203 (2013)
- (42) Robbins, H., Monro, S.: A stochastic approximation method. In: Herbert Robbins Selected Papers, pp. 102–109. Springer (1985)
- (43) Rocktäschel, T., Grefenstette, E., Hermann, K.M., Kočiskỳ, T., Blunsom, P.: Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664 (2015)
- (44) Seo, M., Kembhavi, A., Farhadi, A., Hajishirzi, H.: Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 (2016)
- (45) Shankar, S., Garg, S., Sarawagi, S.: Surprisingly easy hard-attention for sequence to sequence learning. In: EMNLP (2018)
- (46) Shankar, S., Sarawagi, S.: Label organized memory augmented neural network. CoRR abs/1707.01461 (2017)
- (47) Shen, Y., Huang, P.S., Gao, J., Chen, W.: Reasonet: Learning to stop reading in machine comprehension. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1047–1055. ACM (2017)
- (48) Simmons, R.F.: Answering English questions by computer: a survey. Tech. rep., System Development Corp., Santa Monica, CA (1964)
- (49) Srivastava, R.K., Greff, K., Schmidhuber, J.: Highway networks. arXiv preprint arXiv:1505.00387 (2015)
- (50) Taylor, W.L.: “cloze procedure”: A new tool for measuring readability. Journalism Bulletin 30(4), 415–433 (1953)
- (51) Trischler, A., Wang, T., Yuan, X., Harris, J., Sordoni, A., Bachman, P., Suleman, K.: Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830 (2016)
- (52) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)
- (54) Vinyals, O., Fortunato, M., Jaitly, N.: Pointer networks. In: Advances in Neural Information Processing Systems, pp. 2692–2700 (2015)
- (55) Vrandečić, D.: Wikidata: A new platform for collaborative data collection. In: Proceedings of the 21st International Conference on World Wide Web, pp. 1063–1064. ACM (2012)
- (56) Wadhwa, S., Embar, V., Grabmair, M., Nyberg, E.: Towards inference-oriented reading comprehension: Parallelqa. arXiv preprint arXiv:1805.03830 (2018)
- (57) Wang, S., Jiang, J.: Learning natural language inference with lstm. arXiv preprint arXiv:1512.08849 (2015)
- (58) Wang, S., Jiang, J.: Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905 (2016)
- (59) Wang, W., Yang, N., Wei, F., Chang, B., Zhou, M.: Gated self-matching networks for reading comprehension and question answering. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, pp. 189–198 (2017)
- (60) Weissenborn, D., Wiese, G., Seiffe, L.: Fastqa: A simple and efficient neural architecture for question answering. CoRR abs/1703.04816 (2017)
- (61) Weissenborn, D., Wiese, G., Seiffe, L.: Making neural qa as simple as possible but not simpler. arXiv preprint arXiv:1703.04816 (2017)
- (63) Welbl, J., Stenetorp, P., Riedel, S.: Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association of Computational Linguistics 6, 287–302 (2018)
- (64) Weston, J., Chopra, S., Bordes, A.: Memory networks. CoRR abs/1410.3916 (2014)
- (65) Winograd, T.: Understanding natural language. Cognitive psychology 3(1), 1–191 (1972)
- (66) Woods, W.A.: Progress in natural language understanding: an application to lunar geology. In: Proceedings of the June 4-8, 1973, national computer conference and exposition, pp. 441–450. ACM (1973)
- (67) Wu, Q., Burges, C.J., Svore, K.M., Gao, J.: Adapting boosting for information retrieval measures. Information Retrieval 13(3), 254–270 (2010)
- (68) Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al.: Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 (2016)
- (69) Xie, Q., Lai, G., Dai, Z., Hovy, E.: Large-scale cloze test dataset designed by teachers. arXiv preprint arXiv:1711.03225 (2017)
- (70) Xiong, C., Zhong, V., Socher, R.: Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 (2016)
- (71) Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: International conference on machine learning, pp. 2048–2057 (2015)
- (72) Yih, W.t., Chang, M.W., Meek, C., Pastusiak, A.: Question answering using enhanced lexical semantic models. In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, pp. 1744–1753 (2013)
- (73) Yu, A.W., Dohan, D., Luong, M.T., Zhao, R., Chen, K., Norouzi, M., Le, Q.V.: Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541 (2018)