Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction

05/21/2019
by   Kosuke Nishida, et al.

Question answering (QA) using textual sources for tasks such as reading comprehension (RC) has attracted much attention recently. This study focuses on the task of explainable multi-hop QA, which requires the system to return the answer with evidence sentences by reasoning over and gathering disjoint pieces of the reference texts. For the evidence extraction of explainable multi-hop QA, the existing method extracts evidence sentences by evaluating the importance of each sentence independently. In this study, we propose the Query Focused Extractor (QFE) model and introduce multi-task learning of a QA model for answer selection and QFE for evidence extraction. QFE sequentially extracts evidence sentences by using an RNN with an attention mechanism on the question sentence, inspired by extractive summarization models. This enables QFE to consider the dependency among the evidence sentences and to cover the important information in the question sentence. Experimental results show that QFE with a simple RC baseline model achieves a state-of-the-art evidence extraction score on HotpotQA. Although designed for RC, QFE also achieves a state-of-the-art evidence extraction score on FEVER, which is a recognizing textual entailment task on a large textual database.


1 Introduction

Reading comprehension (RC) is a task that uses textual sources to answer any question. It has seen significant progress since the publication of numerous datasets such as SQuAD Rajpurkar et al. (2016). To achieve the goal of RC, systems must be able to reason over disjoint pieces of information in the reference texts. Recently, multi-hop question answering (QA) datasets focusing on this capability, such as QAngaroo Welbl et al. (2018) and HotpotQA Yang et al. (2018), have been released.

Multi-hop QA faces two challenges. The first is the difficulty of reasoning. It is difficult for the system to find the disjoint pieces of information as evidence and reason using the multiple pieces of such evidence. The second challenge is interpretability. The evidence used to reason is not necessarily located close to the answer, so it is difficult for users to verify the answer.

Yang et al. (2018) released HotpotQA, an explainable multi-hop QA dataset, as shown in Figure 1. HotpotQA provides the evidence sentences of the answer for supervised learning. Evidence extraction in multi-hop QA is more difficult than in other QA problems because the question itself may not provide a clue for finding the evidence sentences. As shown in Figure 1, the system finds an evidence sentence (Evidence 2) by relying on another evidence sentence (Evidence 1). The capability to explicitly extract evidence is an advance towards meeting the above two challenges.

Figure 1: Concept of explainable multi-hop QA. Given a question and multiple textual sources, the system extracts evidence sentences from the sources and returns the answer and the evidence.

Here, we propose a Query Focused Extractor (QFE) that is based on a summarization model. We regard the evidence extraction of the explainable multi-hop QA as a query-focused summarization task. Query-focused summarization is the task of summarizing the source document with regard to the given query. QFE sequentially extracts the evidence sentences by using an RNN with an attention mechanism on the question sentence, while the existing method extracts each evidence sentence independently. This query-aware recurrent structure enables QFE to consider the dependency among the evidence sentences and cover the important information in the question sentence. Our overall model uses multi-task learning with a QA model for answer selection and QFE for evidence extraction. The multi-task learning with QFE is general in the sense that it can be combined with any QA model.

Moreover, we find that the recognizing textual entailment (RTE) task on a large textual database, FEVER Thorne et al. (2018), can be regarded as an explainable multi-hop QA task. We confirm that QFE effectively extracts the evidence both on HotpotQA for RC and on FEVER for RTE.

Our main contributions are as follows.

  • We propose QFE for explainable multi-hop QA. We use the multi-task learning of the QA model for answer selection and QFE for evidence extraction.

  • QFE adaptively determines the number of evidence sentences by considering the dependency among the evidence sentences and the coverage of the question.

  • QFE achieves state-of-the-art performance on both HotpotQA and FEVER in terms of the evidence extraction score and comparable performance to competitive models in terms of the answer selection score. QFE is the first model that outperformed the baseline on HotpotQA.

2 Task Definition

Here, we re-define explainable multi-hop QA so that it includes the RC and the RTE tasks.

Def. 1.

Explainable Multi-hop QA

Input:

Context (multiple texts), Query (text)

Output:

Answer Type (label), Answer String (text), Evidence (multiple texts)

The Context is regarded as one connected text in the model. If the connected text is too long (e.g., over 2,000 words), it is truncated. The Query is the question (or claim) to be answered. The model answers with an answer type or an answer string. The Answer Type is selected from a set of answer candidates, such as 'Yes'; the candidates depend on the task setting. The Answer String exists only if the answer candidates are not sufficient to answer the Query, and it is a short span in the Context. The Evidence consists of sentences in the Context that are required to answer the Query.

For RC, we tackle HotpotQA. In HotpotQA, the answer candidates are 'Yes', 'No', and 'Span'. The answer string exists if and only if the answer type is 'Span'. The Context consists of ten Wikipedia paragraphs. The evidence consists of two or more sentences in the Context.

For RTE, we tackle FEVER. In FEVER, the answer candidates are 'Supports', 'Refutes', and 'Not Enough Info'. The answer string does not exist. The Context is the Wikipedia database. The evidence consists of sentences in the Context.
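
To make the shared interface of Def. 1 concrete, here is a minimal sketch (not the authors' code) of the input/output structure that both the HotpotQA and FEVER instantiations share; all field names are illustrative.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class MultiHopQAExample:
        # Input
        context: List[str]   # Context: multiple texts (paragraphs or retrieved pages)
        query: str           # Query: a question (HotpotQA) or a claim (FEVER)
        # Output
        answer_type: str     # e.g. 'Yes'/'No'/'Span' or 'Supports'/'Refutes'/'Not Enough Info'
        answer_string: Optional[str] = None                 # only for HotpotQA 'Span' answers
        evidence: List[int] = field(default_factory=list)   # sentence ids of the evidence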

3 Proposed Method

This section first explains the overall model architecture, which contains our model as a module, and then the details of our QFE.

Figure 2: Overall model architecture. The answer layer is the version for the RC task.

3.1 Model Architecture

Except for the evidence layer, our model is the same as the baseline Clark and Gardner (2018) used in HotpotQA Yang et al. (2018). Figure 2 shows the model architecture. The inputs of the model are the context and the query. The model has the following layers.

The Word Embedding Layer encodes the context and the query as sequences of word vectors. A word vector is the concatenation of a pre-trained word embedding and a character-based embedding obtained using a CNN Kim (2014). The outputs are the word-vector sequences of the context and the query, whose lengths are the numbers of words in the context and in the query, respectively.

The Context Layer encodes the context and query word vectors as contextual vectors by using a bi-directional RNN (Bi-RNN), so each contextual vector has twice the output size of a uni-directional RNN.

The Matching Layer encodes the context, conditioned on the query, as matching vectors by using bi-directional attention Seo et al. (2017), a Bi-RNN, and self-attention Wang et al. (2017).

The Evidence Layer first re-encodes the matching vectors with a Bi-RNN. Given the indices of the first and last words of the j-th sentence in the context, the vector of the j-th sentence is computed from the word-level vectors between these two indices (a sketch follows below). This yields the sentence-level context vectors, one per sentence in the context.

QFE, described later, receives the sentence-level context vectors and the contextual query vectors. At each extraction step, QFE outputs the probability distribution over the sentences of being the next evidence sentence; we refer to this distribution as Equation (1).

Then, the evidence layer concatenates each word-level vector with the sentence-level vector of the sentence that contains the word.
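
The following PyTorch-style sketch illustrates the two operations described above. The paper's exact sentence-composition function is not reproduced in this text, so max pooling over each sentence span is assumed here.

    import torch

    def sentence_vectors(word_vecs, spans):
        # word_vecs: (T, d) word-level vectors from the evidence-layer Bi-RNN.
        # spans: list of (first, last) word indices of each sentence, inclusive.
        # Returns (S, d) sentence-level context vectors; max pooling is an assumption.
        return torch.stack([word_vecs[s:e + 1].max(dim=0).values for s, e in spans])

    def concat_word_and_sentence(word_vecs, sent_vecs, spans):
        # Concatenate each word-level vector with the vector of its containing sentence.
        sent_of_word = torch.empty(word_vecs.size(0), dtype=torch.long)
        for j, (s, e) in enumerate(spans):
            sent_of_word[s:e + 1] = j
        return torch.cat([word_vecs, sent_vecs[sent_of_word]], dim=-1)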

The Answer Layer predicts the answer type and the answer string from the concatenated word- and sentence-level vectors. The layer has stacked Bi-RNNs. The output of each Bi-RNN is mapped to a probability distribution by a fully connected layer and the softmax function.

For RC, the layer has three stacked Bi-RNNs, whose outputs indicate the start of the answer string, the end of the answer string, and the answer type, respectively. For RTE, the layer has one Bi-RNN, whose output indicates the answer type.

Loss Function:

Our model uses multi-task learning with a loss function that combines the answer loss and the evidence loss. The answer loss is the sum of the cross-entropy losses for all probability distributions obtained by the answer layer. The evidence loss is defined in subsection 3.3.
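
As a sketch of the multi-task objective (the relative weighting of the two losses is not stated in this text, so an unweighted sum is assumed), the RC version could look like this:

    import torch.nn.functional as F

    def multitask_loss(start_logits, end_logits, type_logits,
                       gold_start, gold_end, gold_type, evidence_loss):
        # Answer loss: sum of the cross-entropy losses of the answer-layer
        # distributions (answer start, answer end, answer type for the RC task).
        answer_loss = (F.cross_entropy(start_logits, gold_start)
                       + F.cross_entropy(end_logits, gold_end)
                       + F.cross_entropy(type_logits, gold_type))
        # Evidence loss: computed by QFE as described in subsection 3.3.
        return answer_loss + evidence_loss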

3.2 Query Focused Extractor

Figure 3: Overview of Query Focused Extractor at step t. The RNN state is the current summarization vector; the glimpse vector is the query vector that takes the current summarization into account; the extracted sentence updates the RNN state.

Query Focused Extractor (QFE) is shown as the red box in Figure 2. QFE is an extension of the extractive summarization model of Chen and Bansal (2018), which was not designed for query-focused settings. Chen and Bansal used an attention mechanism to extract sentences from the source document such that the summary covers the important information in the source document. To focus on the query, QFE instead extracts sentences from the context with attention on the query, such that the evidence covers the important information with respect to the query. Figure 3 shows an overview of QFE.

The inputs of QFE are the sentence-level context vectors and the contextual query vectors. We define one timestep as the operation of extracting one sentence. QFE updates the state of the RNN (the dark blue box in Figure 3) with the vector of the sentence extracted at the current step; the set of sentences extracted up to the current step forms the partial evidence.

At each step, QFE extracts a sentence according to a probability distribution over the not-yet-extracted sentences (the light blue box), computed from each sentence vector together with the current RNN state and the glimpse vector. QFE then selects the sentence with the highest probability.

The glimpse vector Vinyals et al. (2016) (the green box) is a query vector that takes the importance at the current step into account; it is obtained by attending over the contextual query vectors with the current RNN state. The initial state of the RNN is the vector obtained via a fully connected layer and max pooling from the input vectors. All of these parameters are trainable.
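
The equations of QFE do not survive in this text, so the following PyTorch sketch should be read as one plausible implementation of the described step: a GRU state update with the previously extracted sentence, a glimpse attention over the query vectors, and a scoring of the remaining sentences. The parameterization (additive attention, projection sizes) is an assumption.

    import torch
    import torch.nn as nn

    class QFEStep(nn.Module):
        # One extraction step of QFE (sketch). X: (S, d) sentence vectors
        # (assumed to include a trainable EOE row), Y: (T_Q, d) contextual query vectors.
        def __init__(self, d):
            super().__init__()
            self.rnn = nn.GRUCell(d, d)
            self.w_g = nn.Linear(d, d)   # query-side projection for the glimpse
            self.u_g = nn.Linear(d, d)   # state-side projection for the glimpse
            self.v_g = nn.Linear(d, 1)
            self.w_p = nn.Linear(3 * d, d)
            self.v_p = nn.Linear(d, 1)

        def forward(self, z, x_prev, X, Y, extracted):
            # 1) Update the RNN state with the vector of the previously extracted sentence.
            z = self.rnn(x_prev.unsqueeze(0), z.unsqueeze(0)).squeeze(0)
            # 2) Glimpse: attend over the query vectors Y with the current state z.
            a = self.v_g(torch.tanh(self.w_g(Y) + self.u_g(z))).squeeze(-1)
            alpha = torch.softmax(a, dim=0)
            g = (alpha.unsqueeze(-1) * Y).sum(dim=0)
            # 3) Score every sentence against the state and the glimpse vector.
            feats = torch.cat([X, g.expand_as(X), z.expand_as(X)], dim=-1)
            scores = self.v_p(torch.tanh(self.w_p(feats))).squeeze(-1)
            if extracted:  # already-extracted sentences cannot be extracted again
                mask = torch.zeros_like(scores, dtype=torch.bool)
                mask[list(extracted)] = True
                scores = scores.masked_fill(mask, float('-inf'))
            probs = torch.softmax(scores, dim=0)
            return z, alpha, probs

Since X is assumed to include the EOE row of subsection 3.3, the returned distribution also covers the termination action.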

3.3 Training Phase

In the training phase, we use teacher forcing to construct the loss function. The evidence loss is the negative log likelihood of the gold evidence, regularized by a coverage mechanism See et al. (2017); a sketch of this loss is given below.

The max operation in the first term picks, at each step, the gold sentence with the highest predicted probability. This means that QFE extracts the gold sentences in the predicted order of importance. On the other hand, the gold evidence has no ground-truth extraction order, so the loss function ignores the order of the evidence sentences. The coverage vector accumulates the attention distributions of the previous steps.
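
Since the loss equation itself does not survive in this text, the following LaTeX sketch gives one form that is consistent with the description (teacher forcing, a max over the not-yet-extracted gold sentences, and a See-et-al.-style coverage penalty). The symbols E (gold evidence), \hat{E}^{t-1} (extracted set), \alpha^t (glimpse attention), c^t (coverage), and the weight \lambda are our own notation, not necessarily the paper's.

    L_E = - \sum_{t} \log \Big( \max_{i \in E \setminus \hat{E}^{t-1}} \Pr\big(i \mid \hat{E}^{t-1}\big) \Big)
          + \lambda \sum_{t} \sum_{i} \min\big(c_i^{t}, \alpha_i^{t}\big),
    \qquad
    c^{t} = \sum_{k=1}^{t-1} \alpha^{k}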

In order to learn the terminal condition of the extraction, QFE adds a dummy sentence, called the EOE sentence, to the sentence set. When the EOE sentence is extracted, QFE terminates the extraction. The EOE sentence vector is a trainable parameter of the model, so it is independent of the samples. We train the model to extract the EOE sentence after all of the evidence.

3.4 Test Phase

In the test phase, QFE terminates the extraction when it reaches the EOE sentence. The predicted evidence is the set of sentences extracted before the EOE sentence. QFE searches for the extraction sequence with a beam search algorithm.
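
A decoding sketch follows; step_probs is a hypothetical wrapper that returns the next-step extraction probabilities (over the sentences plus the EOE sentence, indexed by eoe) given the already-extracted prefix, and sequences are scored by their accumulated log probability, which is an assumption.

    import heapq
    import math

    def beam_search_evidence(step_probs, num_sentences, eoe, beam_size=5, max_steps=8):
        # Expand sentence-id sequences until the EOE index is produced.
        beams = [(0.0, [])]                  # (negative log probability, extracted prefix)
        finished = []
        for _ in range(max_steps):
            candidates = []
            for nll, prefix in beams:
                probs = step_probs(prefix)   # floats over num_sentences + 1 candidates
                for i in range(num_sentences + 1):
                    if i in prefix:
                        continue
                    cand = (nll - math.log(probs[i] + 1e-12), prefix + [i])
                    (finished if i == eoe else candidates).append(cand)
            if not candidates:
                break
            beams = heapq.nsmallest(beam_size, candidates)  # keep the most probable prefixes
        best = min(finished + beams)         # lowest accumulated negative log probability
        return [i for i in best[1] if i != eoe]

With beam_size=1 this reduces to the greedy, step-by-step extraction described in subsection 3.2.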

4 Experiments on RC

4.1 HotpotQA Dataset

In HotpotQA, the query is created by crowd workers, on the condition that answering it requires reasoning over two paragraphs in Wikipedia. The answer type candidates are 'Yes', 'No', and 'Span'. The answer string, if it exists, is a span in the two paragraphs. The context is ten paragraphs, and its content has two settings. In the distractor setting, the context consists of the two gold paragraphs used to create the query and eight paragraphs retrieved from Wikipedia by using TF-IDF with the query. Table 1 shows the statistics of the distractor setting. In the fullwiki setting, all ten paragraphs of the context are retrieved paragraphs. Hence, the context may not include the two gold paragraphs, and in that case, the answer and the evidence cannot be extracted. Therefore, even an oracle model cannot achieve 100% accuracy. HotpotQA does not provide training data specific to the fullwiki setting; the training data for the fullwiki setting is the same as for the distractor setting.

Context Query Evidence
# paragraphs # words # words # sentences
Ave. 10.0 1162.0 17.8 2.4
Max 10 3079 59 8
Median 10 1142 17 2
Min 2 60 7 2
Table 1: Statistics of HotpotQA (the development set in the distractor setting).

4.2 Experimental Setup

Comparison models

Our baseline model is the same as the baseline in Yang et al. (2018) except as follows. Whereas we use the distribution of Equation (1), they compute the probability that each sentence is evidence independently, with trainable parameters. Their evidence loss is the sum of binary cross-entropy losses on whether each sentence is evidence or not. In the test phase, the sentences with probabilities higher than a threshold are selected; we set the threshold to 0.4 because it gave the highest F1 score on the development set. The remaining parts of the implementations of our model and the baseline are the same. The details are in Appendix A.1.
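
A sketch of this baseline extraction rule (with scorer standing in for whatever per-sentence scoring module is used, which this text does not specify):

    import torch

    def baseline_extract(sent_vecs, scorer, threshold=0.4):
        # Each sentence is scored independently; training uses binary cross-entropy,
        # and at test time every sentence above the threshold is selected.
        probs = torch.sigmoid(scorer(sent_vecs)).squeeze(-1)   # (num_sentences,)
        return (probs > threshold).nonzero(as_tuple=True)[0].tolist()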

We also compared our model with DFGN + BERT Xiao et al. (2019), Cognitive Graph Ding et al. (2019), GRN, and BERT Plus, which were unpublished at the time of submission (4 March 2019).

Evaluation metrics

We evaluated the predictions of the answer type, the answer string, and the evidence by using the official metrics of HotpotQA. Exact match (EM) and partial match (F1) were used to evaluate both the answer and the evidence. For the answer, the score was measured by the classification accuracy of the answer type; only when the answer type was 'Span' was the score also measured by the word-level matching of the answer string. For the evidence, the partial match was evaluated by sentence ids, so word-level partial matches were not considered. For metrics covering both the answer and the evidence, we used Joint EM and Joint F1 Yang et al. (2018).
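
For reference, the joint metrics combine the answer and evidence scores roughly as follows (a sketch of our understanding of Yang et al. (2018), not the official evaluation script):

    def joint_metrics(ans_em, ans_p, ans_r, evi_em, evi_p, evi_r):
        # Joint EM requires both the answer and the evidence to match exactly;
        # joint precision/recall are the products of the answer and evidence values.
        joint_em = ans_em * evi_em
        joint_p, joint_r = ans_p * evi_p, ans_r * evi_r
        joint_f1 = 2 * joint_p * joint_r / (joint_p + joint_r) if (joint_p + joint_r) > 0 else 0.0
        return joint_em, joint_f1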

4.3 Results

Does our model achieve state-of-the-art performance?

Answer Evidence Joint
EM F1 EM F1 EM F1
Baseline 45.6 59.0 20.3 64.5 10.8 40.2
BERT Plus 56.0 69.9 42.3 80.6 26.9 58.1
DFGN + BERT 55.2 68.5 49.9 81.1 31.9 58.2
GRN 52.9 66.7 52.4 84.1 31.8 58.5
QFE 53.9 68.1 57.8 84.5 34.6 59.6
Table 2: Performance of the models on the HotpotQA distractor setting leaderboard (https://hotpotqa.github.io/, 4 March 2019). The models except for the baseline were unpublished at the time of submission of this paper. Our model was submitted on 21 November 2018, three months before the other submissions.
Answer Evidence Joint
EM F1 EM F1 EM F1
Baseline 24.0 32.9 3.86 37.7 1.85 16.2
GRN 27.3 36.5 12.2 48.8 7.40 23.6
Cognitive Graph 37.1 48.9 22.8 57.8 12.4 34.9
QFE 28.7 38.1 14.2 44.4 8.69 23.1
Table 3: Performance of the models on the HotpotQA fullwiki setting leaderboard (https://hotpotqa.github.io/, 4 March 2019). The models except for the baseline were unpublished at the time of submission of this paper. Our model was submitted on 25 November 2018, three months before the other submissions.

Table 2 shows that, in the distractor setting, QFE performed the best in terms of the evidence extraction score among all compared models. It also achieved comparable performance in terms of the answer selection score and therefore achieved state-of-the-art performance on the joint EM and F1 metrics, which are the main metrics of the dataset. QFE outperformed the baseline model in all metrics. Although our model does not use any pre-trained language model such as BERT Devlin et al. (2019) for encoding, it outperformed the methods that used BERT, such as DFGN + BERT and BERT Plus. In particular, the improvement in the evidence EM score was +37.5 points against the baseline and +5.4 points against GRN.

In the fullwiki setting, Table 3 shows that QFE outperformed the baseline in all metrics. Among the models unpublished at submission time, Cognitive Graph Ding et al. (2019) outperformed our model. There is a dataset shift problem Quionero-Candela et al. (2009) in HotpotQA: the distribution of the number of gold evidence sentences and the answerability differs between the training (i.e., distractor) and test (i.e., fullwiki) phases. In the fullwiki setting, a question may have fewer than two gold evidence sentences or may even be unanswerable. Our current QA and QFE models do not address this dataset shift problem; our future work will deal with it.

Does QFE contribute to the performance?

Answer Evidence Joint
EM F1 EM F1 EM F1
Yang et al. (2018) 44.4 58.3 22.0 66.7 11.6 40.9
our implementation 52.7 67.3 38.0 78.4 21.9 54.9
+ top 2 extraction 52.7 67.3 48.0 77.8 27.6 54.4
QFE 53.7 68.7 58.8 84.7 35.4 60.6
without glimpse 53.1 67.9 58.4 84.3 34.8 59.6
pipeline model 46.9 63.6
Table 4: Performance of our models and the baseline models on the development set in the distractor setting. The score differences between the original and our implementation of Yang et al. (2018) are due to hyperparameters; the main change is increasing the RNN width from 50 to 150.

Table 4 shows the results of the ablation study.

QFE performed the best among the compared models. Although the only difference between our overall model and the baseline is the evidence extraction model, the answer scores also improved. QFE also outperformed the variant that uses the RNN extraction without the glimpse.

QFE defines the terminal condition as reaching the EOE sentence, which we call adaptive termination. We confirmed that the adaptive termination of QFE contributed to its performance. We compared QFE with a baseline that extracts the two sentences with the highest scores, since the most frequent number of evidence sentences is two. QFE outperformed this baseline.

Our model uses the results of evidence extraction as a guide for selecting the answer, but it is not a pipeline of evidence extraction and answer selection. Therefore, we also evaluated a pipeline model that selects the answer string only from the extracted evidence sentences, where the outputs of the answer layer corresponding to non-evidence sentences are masked according to the predicted evidence. Although almost all answer strings in the dataset lie within the gold evidence sentences, the pipeline model performed poorly. We consider that evidence extraction helps the QA model to learn, but its accuracy is not high enough to improve the answer layer through a pipeline.

What are the characteristics of our evidence extraction?

Precision Recall Correlation
baseline 79.0 82.4 0.259
QFE 88.4 83.2 0.375
Table 5: Performance of our model and the baseline in evidence extraction on the development set in the distractor setting. The correlation is the Kendall tau correlation of the number of predicted evidence sentences and that of gold evidence.
Figure 4: Number of predicted evidence sentences minus the number of gold evidence sentences.

Table 5 shows the evidence extraction performance in the distractor setting. Our model improves both precision and recall, and the improvement in precision is larger.

Figure 4 reveals the reason for the high EM and precision scores: QFE rarely extracts too much evidence. That is, it predicts the number of evidence sentences more accurately than the baseline does. Table 5 also shows that the correlation between the number of predicted and the number of gold evidence sentences is higher for our model than for the baseline.

We consider that the sequential extraction and the adaptive termination help to prevent over-extraction. In contrast, the baseline evaluates each sentence independently, so the baseline often extracts too much evidence.

What questions in HotpotQA are difficult for QFE?

Answer Evidence
# Evi # sample EM F1 Num EM P R F1
all 100 53.7 68.7 2.22 58.8 88.4 83.2 84.7
2 67.4 54.8 69.6 2.09 76.9 88.4 91.1 89.4
3 24.0 52.5 68.4 2.43 26.0 89.3 71.8 78.7
4 7.25 52.5 66.9 2.61 14.0 90.7 59.4 70.4
5 1.08 42.5 57.0 2.65 2.50 92.1 49.5 63.1
Table 6: Performance of our model in terms of the number of gold evidence sentences on the development set in the distractor setting. # sample, Num, P and R mean the proportion in the dataset, number of predicted evidence sentences, precision, and recall, respectively.
Answer Evidence Joint
EM F1 EM F1 EM F1
all 53.7 68.7 58.8 84.7 35.4 60.6
comparison 54.1 60.7 71.2 88.8 42.0 55.6
bridge 53.6 70.7 55.7 83.7 33.8 61.8
Table 7: Performance of our model for each reasoning type on the development set in the distractor setting.
Query: Which band has more members, Kitchens of Distinction or Royal Blood?
Answer: Kitchens of Distinction

Extracted 1 (probability 96.9%): Kitchens of Distinction … are an English three-person alternative rock band …
Extracted 2: Royal Blood are an English rock duo formed in Brighton in 2013.
Extracted 3: EOE sentence
Not extracted: In September 2012, … members … as Kitchens of Distinction.
Not extracted: Royal Blood is the eponymous debut studio album by British rock duo Royal Blood.

Table 8: Outputs of QFE. The sentences are listed in the order in which they were extracted; the number in parentheses is the extraction score of the sentence at its step.

We analyzed the difficulty of the questions for QFE from the perspective of the number of evidence sentences and reasoning type; the results are in Table 6 and Table 7.

First, we classified the questions by the number of gold evidence sentences. Table 6 shows the model performance for each number. The answer scores were low for questions answered with five evidence sentences, which indicates that questions requiring much evidence are difficult. However, the five-evidence questions amount to only 80 samples, so this observation needs to be confirmed with more analysis. QFE performed well when the number of gold evidence sentences was two. Even though QFE was relatively conservative about extracting many evidence sentences, it was able to extract more than two sentences adaptively.

Second, we consider the reasoning types in Table 7. HotpotQA has two reasoning types: entity bridge and entity comparison. Entity bridge means that the question mentions one entity and the article about that entity contains another entity required for the answer. Entity comparison means that the question compares two entities.

Table 7 shows that QFE works on both reasoning types. We consider that the difference between the results is due to the characteristics of the dataset. The answer F1 was relatively low for the comparison questions, because all yes/no questions belong to the comparison type and partial matches do not occur for yes/no answers. The evidence EM was relatively high for the comparison questions. One reason is that 77.1% of the comparison questions have exactly two evidence sentences; this proportion is larger than that of the bridge questions, 64.9%. From another perspective, a comparison question itself contains the clues (i.e., two entities) required to gather all evidence sentences, while a bridge question itself provides only part of the clues and requires multi-hop reasoning, i.e., finding an evidence sentence from another evidence sentence. Therefore, evidence extraction is more difficult for bridge questions than for comparison questions.

Qualitative Analysis.

Table 8 shows an example of the behavior of QFE. Here, the system must compare the number of members of Kitchens of Distinction with that of Royal Blood. The system extracted the two sentences describing the numbers of members, and then extracted the EOE sentence.

We should note the two sentences that were not extracted. The first includes 'members' and 'Kitchens of Distinction', which appear in the query; however, this sentence does not mention the number of members of Kitchens of Distinction. The second also shows that Royal Blood is a duo; however, our model preferred Royal Blood (the band name) to Royal Blood (the album name) as the subject of the sentence.

Other examples are shown in Appendix A.2.

5 Experiments on RTE

5.1 FEVER Dataset

In FEVER, the query is created by crowd workers. Annotators are given a randomly sampled sentence from Wikipedia and a corresponding dictionary. The keys of the dictionary are the entities that have a hyperlink from the given sentence, and the values are their descriptions; each description is the first sentence of the entity's Wikipedia page. Using only the information in the sentence and the dictionary, annotators create a claim, which serves as the query. The answer type candidates are 'Supports', 'Refutes', and 'Not Enough Info (NEI)'. The proportion of samples with more than one evidence sentence is 27.3% among the samples whose label is not 'NEI'. The context is the Wikipedia database, shared among all samples. Table 9 shows the statistics.

Context Query Evidence
# pages # words # sentences
Ave. 5416537 9.60 1.13
Max — 39 52
Median — 9 1
Min — 3 0
Table 9: Statistics of FEVER (the development set). The number of pages is the size of the Wikipedia database, which is shared among all samples.

5.2 Experimental Setup

Because the context is large, we used the NSMN document retriever Nie et al. (2019) and gave only the top-five paragraphs to our model. Similar to NSMN, in order to capture semantic and numeric relationships, we used 30-dimensional WordNet features and five-dimensional number embeddings. The WordNet features are binary indicators of the existence of hypernymy/antonymy words in the input. The number embedding is a real-valued embedding assigned to each unique number.
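
For illustration only (the exact 30-dimensional NSMN feature set is not specified in this text), binary WordNet indicators of the kind described could be computed as follows:

    from nltk.corpus import wordnet as wn

    def wordnet_flags(context_token, query_tokens):
        # Whether any query token is a hypernym or an antonym of the context token.
        synsets = wn.synsets(context_token)
        hypernyms = {h for s in synsets for h in s.closure(lambda x: x.hypernyms())}
        antonyms = {a.name() for s in synsets for l in s.lemmas() for a in l.antonyms()}
        has_hypernym = any(q_syn in hypernyms for q in query_tokens for q_syn in wn.synsets(q))
        has_antonym = any(q in antonyms for q in query_tokens)
        return [float(has_hypernym), float(has_antonym)]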

Because the number of training samples is imbalanced across answer types, randomly selected samples were duplicated in order to equalize the numbers. Our model used an ensemble of 11 randomly initialized models. For evidence extraction, we used the union of the predicted evidence of the individual models. If the model predicts 'Supports' or 'Refutes', it extracts at least one sentence. Details of the implementation are in Appendix A.1.

We evaluated the predicted answer type and the predicted evidence by using the official FEVER metrics. The answer type was evaluated in terms of label accuracy. The evidence was evaluated in terms of precision, recall, and F1, measured by sentence id. The FEVER score was used as a metric accounting for both the label and the evidence: the FEVER score of a sample is 1 if the predicted evidence includes all gold evidence and the answer is correct. That is, the FEVER score emphasizes the recall of evidence extraction over its precision.
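
A simplified per-sample sketch of the FEVER score as described above (the official scorer additionally handles multiple alternative gold evidence sets and caps the number of predicted sentences):

    def fever_score(pred_label, gold_label, pred_evidence, gold_evidence):
        # 1 only when the label is correct and the predicted evidence covers all gold evidence.
        if pred_label != gold_label:
            return 0
        return int(set(gold_evidence).issubset(set(pred_evidence)))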

5.3 Results

Evidence Answer FEVER
F1 Acc. Score
Nie et al. (2019) 53.0 68.2 64.2
Yoneda et al. (2018) 35.0 67.6 62.5
who 37.4 72.1 66.6
Kudo 36.8 70.6 65.7
avonamila 60.3 71.4 65.3
hz66pasa 71.4 33.3 22.0
aschern 70.4 69.3 60.9
QFE 77.7 69.3 61.8
Table 10: Performance of the models on the FEVER leaderboard (https://competitions.codalab.org/competitions/18814, 4 March 2019). The top two rows are the models submitted during the FEVER Shared Task that have higher FEVER scores than ours. The middle three rows are the top-three FEVER-score models submitted after the Shared Task. The bottom three rows, including ours, are the top-three F1 models submitted after the Shared Task. None of the models submitted after the Shared Task has paper information.
Precision Recall F1
Nie et al. (2019) 42.3 70.9 53.0
Yoneda et al. (2018) 22.2 82.8 35.0
Hanselowski et al. (2018) 23.6 85.2 37.0
Malon (2018) 92.2 50.0 64.9
QFE ensemble (test) 79.1 76.3 77.7
QFE single (dev) 90.8 64.9 76.6
QFE ensemble (dev) 83.9 78.1 81.0
Table 11: Performance of evidence extraction. The top five rows are evaluated on the test set. The comparison of our models is on the development set. The models submitted after the Shared Task have no information about precision or recall.

Does our multi-task learning approach achieve state-of-the-art performance?

Table 10 shows that QFE achieved state-of-the-art performance in terms of evidence F1 and comparable performance in terms of label accuracy to the competitive models. The FEVER score of our model is lower than those of some other models because the FEVER score emphasizes recall. However, the relative importance of precision and recall depends on the use case; QFE is suited to situations where concise output is preferred.

What are the characteristics of our evidence extraction?

Table 11 shows that our model achieved high performance on all metrics of evidence extraction. On the test set, it ranked 2nd in precision, 3rd in recall, and 1st in F1. On the development set, QFE extracted evidence with higher precision than recall, the same tendency as in the RC evaluation. The single model has a larger gap between precision and recall; the ensemble improves recall and F1.

Examples are shown in Appendix A.2.

6 Related Work

6.1 Reading Comprehension

RC is performed by matching the context and the query Seo et al. (2017). Many RC datasets referring to multiple texts have been published, such as MS MARCO Nguyen et al. (2016) and TriviaQA Joshi et al. (2017). For such datasets, the document retrieval model is combined with the context-query matching model Chen et al. (2017a); Wang et al. (2018a, b); Nishida et al. (2018).

Some techniques have been proposed for understanding multiple texts. Clark and Gardner (2018) used simple methods, such as connecting texts. Choi et al. (2017); Zhong et al. (2019) proposed a combination of coarse reading and fine reading. However, Sugawara et al. (2018) indicated that most questions in RC require reasoning from just one sentence including the answer. The proportion of such questions is more than 63.2 % in TriviaQA and 86.2 % in MS MARCO.

This observation is one of the motivations behind multi-hop QA. HotpotQA Yang et al. (2018) is a task including supervised evidence extraction. QAngaroo Welbl et al. (2018) is a task created by using Wikipedia entity links. The difference between QAngaroo and our focus is two-fold: (1) QAngaroo does not have supervised evidence and (2) the questions in QAngaroo are inherently limited because the dataset is constructed using a knowledge base. MultiRC Khashabi et al. (2018) is also an explainable multi-hop QA dataset that provides gold evidence sentences. However, it is difficult to compare the performance of the evidence extraction with other studies because its evaluation script and leaderboard do not report the evidence extraction score.

Because annotation of the evidence sentence is costly, unsupervised learning of the evidence extraction is another important issue.

Wang et al. (2019) tackled unsupervised learning for explainable multi-hop QA, but their model is restricted to the multiple-choice setting.

6.2 Recognizing Textual Entailment

RTE Bowman et al. (2015); Williams et al. (2018) is performed by sentence matching Rocktäschel et al. (2016); Chen et al. (2017b).

FEVER Thorne et al. (2018) targets verification and fact checking as RTE over a large database. FEVER requires three subtasks: document retrieval, evidence extraction, and answer prediction. In previous work, these subtasks were performed with pipelined models Nie et al. (2019); Yoneda et al. (2018). In contrast, our approach performs evidence extraction and answer prediction simultaneously by regarding FEVER as an explainable multi-hop QA task.

6.3 Summarization

A typical approach to sentence-level extractive summarization has an encoder-decoder architecture Cheng and Lapata (2016); Nallapati et al. (2017); Narayan et al. (2018). Sentence-level extractive summarization is also used for content selection in abstractive summarization (Chen and Bansal, 2018). The model extracts sentences in order of importance and edits them. We have extended this model so that it can be used for evidence extraction because we consider that the evidence must be extracted in order of importance rather than the original order, which the conventional models use.

7 Conclusion

We consider that the main contributions of our study are (1) the QFE model, which is based on a summarization model, for explainable multi-hop QA, (2) the modeling of the dependency among the evidence sentences and the coverage of the question, owing to the use of the summarization model, and (3) state-of-the-art performance in evidence extraction in both the RC and RTE tasks.

Regarding RC, we confirmed that the architecture with QFE, which is a simple replacement of the baseline, achieved state-of-the-art performance in the task setting. The ablation study showed that the replacement of the evidence extraction model with QFE improves performance. Our adaptive termination contributes to the exact matching and the precision score of the evidence extraction. The difficulty of the questions for QFE depends on the number of the required evidence sentences. This study is the first to base its experimental discussion on HotpotQA.

Regarding RTE, we confirmed that, compared with competing models, the architecture with QFE has a higher evidence extraction score and comparable label prediction score. This study is the first to show a joint approach for RC and FEVER.

References

Appendix A Supplemental Material

A.1 Details of the Implementation

We implemented our model in PyTorch and trained it on four Nvidia Tesla P100 GPUs. The RNN was a gated recurrent unit (GRU) (Cho et al., 2014). The optimizer was Adam Kingma and Ba (2014). The word-based word embeddings were fixed 300-dimensional GloVe vectors Pennington et al. (2014). The character-based word embeddings were obtained using trainable eight-dimensional character embeddings, a 100-dimensional CNN, and max pooling. Table 12 shows the other hyperparameters.

In FEVER, if the model predicts 'Supports' or 'Refutes', it extracts at least one sentence by removing the EOE sentence from the extraction candidates at the first step.

A.2 Samples of QFE Outputs

This section describes some examples of QFE outputs. Table 13 shows examples on HotpotQA, and Table 14 shows examples on FEVER. Note that QFE does not necessarily extract the sentence with the highest probability score at every step, because QFE determines the evidence by using a beam search.

Three or four correct evidence sentences are extracted in the first and second examples in Table 13. The third example is a typical mistake of QFE; QFE extracts too few evidence sentences. In the fourth example, QFE extracts too many evidence sentences. The fifth and sixth questions are typical yes/no questions in HotpotQA. However, like other QA models, our model makes mistakes in answering such easy questions.

One or two evidence sentences are extracted correctly in the first, second, and third examples in Table 14. In FEVER, most claims requiring two evidence sentences can be verified from either of the two correct evidence sentences, as in the second example; however, some claims require both evidence sentences, as in the third example. The fourth example is a typical mistake of QFE: it extracts too few evidence sentences. In the fifth and sixth examples, the gold label is 'Not Enough Info'. QFE unfortunately extracts evidence when the QA model predicts another label.

HotpotQA FEVER
size of word vectors: 400 435
width of RNN: 150 150
dropout keep ratio 0.8 0.8
batch size 72 96
learning rate 0.001 0.001
beam size 5 5
Table 12: Hyper Parameters.
Query: What plant has about 40 species native to Asia, Manglietia or Abronia?
Gold answer: Manglietia. Predicted answer: Manglietia.
Extracted 1 (85.4%): Abronia … is a genus of about 20 species of ….
Extracted 2: Manglietia is a genus of flowering plants in the family Magnoliaceae.
Extracted 3: There are about 40 species native to Asia.
Extracted 4: EOE sentence

Query: Ricky Martin’s concert tour in 1999 featured an American heavy metal band formed in what year?
Gold answer: 1991. Predicted answer: 1991.
Extracted 1 (100.0%): Formed on October 12, 1991, the group was founded by vocalist/guitarist Robb Flynn and bassist Adam Duce.
Extracted 2: Other bands that were featured included Machine Head, Slipknot, and Amen.
Extracted 3: Machine Head is an American heavy metal band from Oakland, California.
Extracted 4: Livin La Vida Loco … by Ricky Martin, was a concert tour in 1999.
Extracted 5: EOE sentence

Query: Where is the singer of ”B Boy” raised?
Gold answer: Philadelphia. Predicted answer: Philadelphia.
Extracted 1 (100.0%): Raised in Philadelphia, he embarked ….
Extracted 2: ”B Boy” is a song by American hip hop recording artist Meek Mill.
Extracted 3: EOE sentence
Not extracted: Robert Rihmeek Williams … known by his stage name, Meek Mill, ….

Query: Which comic series involves characters such as Nick Fury and Baron von Strucker?
Gold answer: Marvel. Predicted answer: Sgt. Fury.
Extracted 1 (70.7%): Andrea von Strucker … characters appearing in American comic books published by Marvel Comics.
Extracted 2: It is the first series to feature Nick Fury Jr. as its main character.
Extracted 3: Nick Fury is a 2017 ongoing comic book series published by Marvel Comics.
Extracted 4: EOE sentence
Not extracted: Nick Fury: … the Marvel Comics character Nick Fury.

Query: Are both ”Cooking Light” and ”Vibe” magazines?
Gold answer: yes. Predicted answer: yes.
Extracted 1 (89.0%): Cooking Light is an American monthly food and lifestyle magazine founded in 1987.
Extracted 2: Vibe is an American music and entertainment magazine founded by producer Quincy Jones.
Extracted 3: EOE sentence

Query: Are Robert Philibosian and David Ignatius both politicians?
Gold answer: no. Predicted answer: yes.
Extracted 1 (100.0%): Robert Harry Philibosian (born 1940) is an American politician.
Extracted 2: David R. Ignatius (May 26, 1950), is an American journalist and novelist.
Extracted 3: EOE sentence

Table 13: Outputs of QFE on HotpotQA. The sentences are listed in the order in which they were extracted; the number in parentheses is the extraction score of the sentence at its step.
Claim: Fox 2000 Pictures released the film Soul Food. Gold label: Supports. Predicted label: Supports.
Extracted 1 (98.0%): Soul Food is a 1997 American comedy-drama film … and released by Fox 2000 Pictures.
Extracted 2: EOE sentence

Claim: Terry Crews was a football player. Gold label: Supports. Predicted label: Supports.
Extracted 1 (96.0%): Terry Alan Crews … is an American actor , artist , and former American football player.
Extracted 2: In football , Crews played as ….
Extracted 3: EOE sentence

Claim: Jack Falahee is an actor and he is unknown. Gold label: Refutes. Predicted label: Refutes.
Extracted 1 (95.1%): Jack Ryan Falahee (born February 20 , 1989) is an American actor.
Extracted 2: He is known for his role as Connor Walsh on ….
Extracted 3: EOE sentence

Claim: Same Old Love is disassociated from Selena Gomez. Gold label: Refutes. Predicted label: Refutes.
Extracted 1 (75.0%): “Same Old Love” is a song by American singer Selena Gomez ….
Extracted 2: EOE sentence
Not extracted: Gomez promoted “Same Old Love” ….

Claim: Annette Badland was in the 2015 NBA Finals. Gold label: Not Enough Info. Predicted label: Not Enough Info.
Extracted 1 (98.6%): EOE sentence

Claim: Billboard Dad is a genre of music. Gold label: Not Enough Info. Predicted label: Refutes.
Extracted 1 (98.4%): Billboard Dad (film) is a 1998 American direct-to-video comedy film ….
Extracted 2: EOE sentence

Table 14: Outputs of QFE (single model) on FEVER. The sentences are listed in the order in which they were extracted; the number in parentheses is the extraction score of the sentence at its step.