Multilingual Extractive Reading Comprehension by Runtime Machine Translation

09/10/2018 ∙ by Akari Asai, et al. ∙ Salesforce ∙ The University of Tokyo

Existing end-to-end neural network models for extractive Reading Comprehension (RC) have benefited from large amounts of hand-annotated training data. However, such datasets are usually available only in English, which prevents building extractive RC models for other languages of interest. In this paper, we introduce the first extractive RC systems for non-English languages that use no language-specific RC training data, relying instead on an English RC model and an attention-based Neural Machine Translation (NMT) model. To train the NMT model for specific language directions, we take advantage of constantly growing web resources to automatically construct parallel corpora, rather than assuming the availability of high-quality parallel corpora in the target domain. Our method first translates a paragraph-question pair into English so that the English extractive RC model can output its answer. The attention mechanism of the NMT model is then used to align the answer directly in the target text of interest. Experimental results in two non-English languages, Japanese and French, show that our method significantly outperforms a back-translation baseline built on a state-of-the-art production machine translation system. Moreover, our ablation studies suggest that adding a small number of manually translated questions, alongside the automatically created corpus, can further improve the performance of extractive RC systems for non-English languages.


1 Introduction

Figure 1: Overview of our method. $\alpha_{ij}$ are the attention weights (attention distribution) in the NMT model. $(i^P, j^P)$ and $(i^L, j^L)$ are the answer spans in the pivot language $P$ (e.g. English) and the target language $L$, respectively.

Extractive Reading Comprehension (RC), in which a model identifies the answer to a given question from a document context by “extracting” the correct answer, has a variety of downstream applications such as search, automated FAQs, and dialogue systems. Recent years have seen rapid progress in the development of RC models Seo et al. (2017); Wang et al. (2017); Xiong et al. (2018); Yu et al. (2018); Hu et al. (2018) due to the availability of large-scale annotated corpora (Hermann et al., 2015; Rajpurkar et al., 2016; Joshi et al., 2017). However, these large-scale annotated datasets are often exclusive to English. Consequently, progress in RC has been largely limited to English.

To alleviate the scarcity of training data in non-English languages, previous work creates a new large-scale dataset for a language of interest He et al. (2017) or combines a medium-scale dataset in the language with an existing dataset translated from English Lee et al. (2018). These efforts in data creation are often costly, and must be repeated for each new language of interest. In addition, they do not leverage existing resources in English RC, such as the wealth of large-scale datasets and state-of-the-art models.

In this paper, we propose multilingual extractive RC by runtime Machine Translation (MT), a new method for building RC systems for languages without RC training data. Our method combines an RC model with a Neural Machine Translation (NMT) model. Given a language of interest $L$ with no RC data and a pivot language $P$ with large-scale RC training data, we first translate a document context and question from the language $L$ to the language $P$ using an attentive NMT model. Next, we obtain an answer from the RC model in language $P$. Finally, we recover the answer in the context in language $L$ using soft-alignments from the NMT model.

To our knowledge, our work is the first method that requires no RC training data in the target language to build an RC model for the language of interest. In contrast to existing work on RC in non-English languages, our method leverages existing work in English RC. More importantly, our method requires no additional annotation effort to acquire RC data in the target language.

To demonstrate the effectiveness of our method, we focus on SQuAD, one of the most widely used large-scale English datasets for extractive RC, and create SQuAD test data in Japanese and French. On Japanese and French SQuAD, our method significantly outperforms a back-translation baseline that first translates from the target language $L$ to the pivot language $P$, produces an answer in language $P$, and back-translates the answer into the language $L$. Our method achieves this result despite using much smaller translation data than the state-of-the-art MT system used in the back-translation baseline.

Analysis of our experiments shows that the ability to correctly translate questions is crucial for the end task of RC. In particular, oversampling a small set of high quality question translations in training the NMT model results in significant accuracy gains in RC. Moreover, our error analysis shows that under-translation and paraphrasing in translation significantly degrade the downstream RC accuracy, although they do not have large effects on BLEU scores. We make our code and the collected Japanese and French SQuAD datasets available at https://github.com/AkariAsai/extractive_rc_by_runtime_mt.

2 Extractive RC by Runtime MT

Given a language of interest $L$ with no RC training data and a pivot language $P$ with a copious amount of RC training data, our method leverages an attentive NMT model that translates from language $L$ into language $P$ and an RC model in language $P$. For a document context and question in the language $L$, we first translate the context and question to the pivot language $P$ using the attentive NMT model. Next, we obtain an answer in language $P$ using the RC model. Finally, we recover the answer in $L$ using soft-alignments from the attentive NMT model. Figure 1 provides an illustration of our method. Here, we assume that we have a bilingual corpus for $L$ and $P$ with which to train the NMT model, and an extractive RC dataset in the language $P$ with which to train the RC model.
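The three steps above can be sketched as a small pipeline. This is a minimal illustration, not the authors' implementation: `translate`, `rc_model`, and `align` are hypothetical stand-ins for the attentive NMT model, the English RC model, and the attention-based aligner described below.

```python
# Minimal sketch of the runtime-MT pipeline (hypothetical component interfaces).
def run_pipeline(context_l, question_l, translate, rc_model, align):
    """Answer a question posed in language L with a pivot-language (P) RC model.

    translate(tokens) -> (pivot_tokens, attn), where attn[i][j] is the weight
        of source word i when emitting pivot word j
    rc_model(context_p, question_p) -> (start, end), answer span indices in P
    align(attn, start, end) -> (start, end), answer span indices back in L
    """
    context_p, attn = translate(context_l)            # step 1: translate L -> P
    question_p, _ = translate(question_l)
    start_p, end_p = rc_model(context_p, question_p)  # step 2: extractive RC in P
    start_l, end_l = align(attn, start_p, end_p)      # step 3: align answer to L
    return context_l[start_l:end_l + 1]
```

The key design choice is that the answer is never generated in language $L$; it is always a span of the original context, recovered through the attention weights.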

2.1 Translation to Pivot Language

To translate the context and question from the target language $L$ into the pivot language $P$, one possible approach is to use a web service or a software package for MT (e.g. Google Translate, https://translate.google.com/) as a black-box MT system (Hartrumpf et al., 2009; Esplà-Gomis et al., 2012; Á. García Cumbreras et al., 2006). However, this approach does not give us access to the internal intermediate information that is potentially useful for bridging the MT and RC systems.

To overcome these limitations, we instead train an attention-based NMT model (Luong et al., 2015) as a white-box MT system. Our attention-based NMT implementation uses a bidirectional recurrent neural network (RNN) encoder and a unidirectional RNN decoder with bilinear attention. Given an input sentence of length $n$, we denote the hidden state of the encoder corresponding to the $i$-th word as $h_i \in \mathbb{R}^{d_h}$, where $d_h$ is the size of the encoder hidden state. Similarly, we denote the hidden state of the decoder while generating the $j$-th output word as $s_j \in \mathbb{R}^{d_s}$, where $d_s$ is the size of the decoder hidden state. We use bilinear attention, which computes the attention score $\alpha_{ij}$ between the $j$-th output word and the $i$-th input word:

$\alpha_{ij} = \dfrac{\exp(s_j^\top W h_i)}{\sum_{k=1}^{n} \exp(s_j^\top W h_k)}$    (1)

Here, $W \in \mathbb{R}^{d_s \times d_h}$ is a parameter matrix. The attention score $\alpha_{ij}$ estimates how informative the $i$-th input word is when predicting the $j$-th target word.
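A minimal NumPy sketch of this bilinear attention, computing all weights at once. The variable names follow the notation above; this is an illustrative reimplementation under our reconstruction of the formula, not the authors' code.

```python
import numpy as np

def bilinear_attention(S, H, W):
    """Compute attention weights alpha (Eq. 1).

    S: (m, d_s) decoder hidden states s_j
    H: (n, d_h) encoder hidden states h_i
    W: (d_s, d_h) parameter matrix
    Returns alpha with shape (m, n); alpha[j, i] is the weight of source
    word i when predicting target word j, normalized over source positions.
    """
    scores = S @ W @ H.T                                 # scores[j, i] = s_j^T W h_i
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)
```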

Let $f_{\text{MT}}$ denote our MT model, which translates a sequence in $L$ to a sequence in $P$. We translate the context $c^L$ and question $q^L$ in $L$ to the corresponding context $c^P$ and question $q^P$ in $P$:

$c^P = f_{\text{MT}}(c^L)$    (2)
$q^P = f_{\text{MT}}(q^L)$    (3)

2.2 Extractive RC in Pivot Language

Having translated the question and context from the original language $L$ to the pivot language $P$, we now apply an RC model trained in the language $P$. In this work, we use a variant of the Bidirectional Attention Flow (BiDAF) model (Clark and Gardner, 2017; Seo et al., 2017). Similar models have proven successful on a variety of extractive RC tasks (Seo et al., 2017; Wang et al., 2017; Xiong et al., 2018; Yu et al., 2018; Hu et al., 2018).

To identify an answer, the RC model outputs two distributions corresponding to the start and end locations of the answer span in the context. We denote these by $p^{\text{start}}_i$ and $p^{\text{end}}_i$, the probabilities of the $i$-th word being the start and the end of the answer span, respectively. We choose the span whose start position $i^P$ and end position $j^P$ yield the largest joint probability, subject to the end position not preceding the start position (i.e. $j^P \geq i^P$):

$(i^P, j^P) = \operatorname*{argmax}_{i \leq j} \; p^{\text{start}}_i \, p^{\text{end}}_j$    (4)
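Equation (4) can be computed in linear time by scanning end positions while tracking the best start seen so far. A small illustrative sketch, not the authors' implementation:

```python
def best_span(p_start, p_end):
    """Return (i, j) with j >= i maximizing p_start[i] * p_end[j] (Eq. 4)."""
    best_start = 0                 # best start position among indices <= current j
    best_pair, best_prob = (0, 0), -1.0
    for j in range(len(p_end)):
        if p_start[j] > p_start[best_start]:
            best_start = j
        prob = p_start[best_start] * p_end[j]
        if prob > best_prob:
            best_pair, best_prob = (best_start, j), prob
    return best_pair
```

For example, with `p_start = [0.1, 0.6, 0.3]` and `p_end = [0.2, 0.1, 0.7]`, the best constrained span is `(1, 2)`.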

2.3 Answer Alignment to the Target Language

Having produced the answer in the pivot language $P$ using the RC model, how do we find the corresponding answer in the language $L$? One approach is to back-translate the answer using another $P$-to-$L$ MT system. However, we find that directly translating the answer from the pivot language $P$ tends to yield irrelevant answers in the language $L$, because the model lacks grounding from the context and question in $L$.

We instead propose a method to align the start and end positions of the answer in the language $P$ with a span of the context in the language $L$ using the attention weights $\alpha_{ij}$ from equation (1). As shown by Bahdanau et al. (2015) and Luong et al. (2015), NMT attention weights provide an estimate of how informative the $i$-th source word is in predicting the $j$-th target word. Consequently, we recover the answer in the original language $L$ by aligning each $j$-th word in the pivot language answer with its corresponding word in the language $L$ context. For a position $j$ in the pivot language context $c^P$, we choose $g(j)$, the corresponding position in the target language context $c^L$, as follows.

$g(j) = \operatorname*{argmax}_{i} \; \alpha_{ij}$    (5)

Given an answer in the pivot language demarcated by positions $(i^P, j^P)$ in the pivot language context $c^P$, we recover the corresponding target language answer by choosing the largest aligned span. Let $(i^L, j^L)$ denote the positions of the answer in the target language context $c^L$. We compute them as

$i^L = \min_{i^P \leq k \leq j^P} g(k)$    (6)
$j^L = \max_{i^P \leq k \leq j^P} g(k)$    (7)
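This alignment step amounts to an argmax over attention columns followed by taking the extremes of the aligned positions. A plain-Python sketch under our reconstruction of the equations, assuming `alpha` is indexed as `alpha[i][j]` for source position i and pivot position j:

```python
def align_answer(alpha, start_p, end_p):
    """Map a pivot-language answer span back to the source context."""
    n_src = len(alpha)

    def g(j):
        # Source position most attended to when emitting pivot word j.
        return max(range(n_src), key=lambda i: alpha[i][j])

    aligned = [g(k) for k in range(start_p, end_p + 1)]
    return min(aligned), max(aligned)  # the largest aligned span
```

Because the output is a pair of positions in the original context, the recovered answer is guaranteed to be a sub-phrase of the target-language paragraph.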

3 Japanese and French SQuAD

To evaluate the effectiveness of our method, we create SQuAD test datasets in Japanese and French. Because the test set of SQuAD is not publicly available, we instead create parallel examples from the SQuAD development dataset. These parallel data are used solely for evaluation; that is, no training data in the language $L$ is used by the RC model.

The SQuAD development set contains 2,067 paragraphs over 48 articles, for a total of 10,570 paragraph-question pairs. We extract the first paragraph of each article and its corresponding questions, resulting in 327 paragraph-question pairs over 48 articles. The paragraphs and questions are then manually translated into each target language by bilingual workers on Amazon Mechanical Turk (https://www.mturk.com/). These translations are subsequently verified by bilingual experts, who also extract the corresponding answers in the translated paragraphs, making sure that each answer in the target language retains the same meaning and context as the answer in English.

4 Experiments in MT

We now describe our results in MT, where we train the attentive NMT model that translates the question and context from the target language $L$ to the pivot language $P$. For the experimental setup, please see Section C of the Appendix.

4.1 L-to-P Bilingual Corpus for SQuAD

One idea for training the NMT model is to use an existing parallel corpus. However, in our preliminary experiments, we find that even a state-of-the-art Japanese-to-English NMT model trained on the ASPEC corpus (Nakazawa et al., 2016), an established English-Japanese corpus, performs poorly on Japanese SQuAD. This is because the training domain (scientific articles) differs considerably from the inference domain (Wikipedia). The vocabulary and writing style of ASPEC are biased toward scientific fields and tend to be abstruse, while the Wikipedia-based SQuAD dataset covers domains ranging from musical celebrities to abstract concepts (Rajpurkar et al., 2016), with a generally simple writing style.

To address this domain mismatch, we construct new Wikipedia-based bilingual corpora by running a sentence aligner (hunalign, https://github.com/danielvarga/hunalign) on the Japanese and French Wikipedia articles and their English counterparts. We then select the 1,002,000 best-aligned sentence pairs and split them into a training dataset of 1,000,000 pairs and a development dataset of 2,000 pairs. More details can be found in Section A of the Appendix. This data is used to train our NMT model.

4.2 Translations of Question Sentences

Preliminary experiments on MT show that our NMT models tend to fail at translating question sentences due to data imbalance. This is for the reasons noted by Zoph et al. (2016): NMT requires a copious amount of data to generalize and learns poorly from low-count events.

We observe that question sentences make up only 0.1% of the Wikipedia-based bilingual corpus. Moreover, most of these sentences are movie titles, captions, and quotes, which differ from SQuAD-style question sentences. We therefore introduce the following two approaches to address the problem of low-quality question translations.

Adding Manually Translated Questions

We randomly sample 200 questions in English from the SQuAD training set, manually translate them into the language of interest $L$, and add these translated questions to the bilingual corpus. More details can be found in Section B of the supplementary material.

Oversampling

Chu et al. (2017) and Johnson et al. (2017) show that in a domain adaptation setting, NMT performance on a low-resource domain can be improved by oversampling a small corpus in the target domain. We adapt this idea by oversampling the above-mentioned manually translated questions to emphasize these high-quality translations during training. We first duplicate the manually translated question sentences $k$ times, then mix the duplicated questions with the bilingual corpus.
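As a sketch, the corpus mixing can be as simple as the following. This is illustrative only: the duplication factor `k` is a hyperparameter (the source does not fix its value here), and the shuffle seed is an arbitrary choice.

```python
import random

def build_training_corpus(wiki_pairs, question_pairs, k, seed=0):
    """Mix k duplicated copies of the manually translated question pairs into
    the large Wikipedia-aligned corpus, then shuffle for training."""
    corpus = list(wiki_pairs) + list(question_pairs) * k
    random.Random(seed).shuffle(corpus)
    return corpus
```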

4.3 Results

                    Ja-En             Fr-En
Translation method  Wiki    Question  Wiki    Question
Our NMT             23.95   22.75     45.64   40.47
Google Translate    24.09   37.98     41.08   50.91
Table 1: MT BLEU scores of {Japanese, French}-to-English translation on the bilingual Wikipedia development dataset (Wiki) and on SQuAD question sentences (Question).

Our NMT model achieves 23.95 BLEU on Japanese-to-English and 45.64 on French-to-English, as shown in the “Wiki” columns of Table 1. Table 1 also shows BLEU scores of translations produced by Google Translate (the translations were obtained at https://translate.google.com in August 2018). The competitive results attained by our NMT model relative to Google Translate, a state-of-the-art MT system, indicate that our proposed technique of automatically collecting parallel corpora is effective.

As shown in the “Question” columns of Table 1, the BLEU scores of our NMT models on question sentences are significantly lower than those of Google Translate. This is not surprising; Google Translate is trained on internal corpora that are three or four orders of magnitude larger than our training corpora (Johnson et al., 2017). Because question translations are crucial for our task, one promising research direction is to further improve the translation accuracy of question sentences.

4.4 Ablation Study

                             Ja-En             Fr-En
Translation method           Wiki    Question  Wiki    Question
Our NMT                      23.95   22.75     45.64   40.47
 w/o beam search             20.78   23.06     41.93   36.21
 w/o question oversampling   20.76   16.94     42.05   35.03
 w/o questions               20.36   10.68     41.37   22.75
Table 2: MT ablation study on {Japanese, French}-to-English translation, showing development set BLEU scores. The ablations are: 1) removing beam search; 2) using manually translated questions without oversampling them; 3) not using manually translated questions.

In Table 2, we report an ablation study of our NMT models. As is standard practice, beam search improves the BLEU scores, except for the Japanese-to-English question translations. More importantly, the small amount of manually translated questions is crucial: without these question translations (the “w/o questions” row), the question BLEU scores are almost halved. Our oversampling technique further improves the translation of question sentences.

Although our BLEU scores are competitive with the Google Translate results on the Wikipedia sentence translations, the BLEU scores on the Japanese-to-English dataset are much lower than those on the French-to-English dataset. To reduce the gap, we experimented with jointly using the external parallel corpus ASPEC for training our Japanese-to-English model; the “Wiki” BLEU score slightly improved from 23.95 to 24.47, but the “Question” BLEU score dropped from 22.75 to 21.80. These results suggest that jointly using a high-quality external corpus does not always lead to a major improvement in translation performance. Moreover, they once again indicate the importance of question translation pairs, which rarely occur in existing MT corpora. Because we did not observe significant improvements, we do not rely on the external corpus in our main experiments.

5 Experiments in RC

                                     Japanese        French
Method                               F1     EM       F1     EM
Our method                           52.19  37.00    61.88  40.67
Back-translation (Google Translate)  42.60  24.77    44.02  23.54
Table 3: RC results of our method and the baseline on Japanese and French SQuAD. The BiDAF model trained on the original English SQuAD dataset achieves an F1 score of 77.1 and an EM score of 67.2.

We now describe our results on the Japanese and French SQuAD tasks, using the best NMT model from Section 4.3. For experimental details, please see Section E of the Appendix. On the SQuAD v1.1 English development dataset, our BiDAF model achieves an F1 score of 77.1 and an EM score of 67.2, while our BiDAF + Self Attention + ELMo model achieves an F1 score of 83.2 and an EM score of 74.7. To compare our method to the baseline, we use the BiDAF + Self Attention + ELMo model (Clark and Gardner, 2017; Peters et al., 2018) as our RC model.

5.1 Baseline

As this is the first work to build an extractive RC system for a new language with no training data in the target language, there is no directly comparable approach. Previous work in Multilingual Question Answering (MLQA) is not easily applicable to multilingual RC due to differences in formulation as mentioned in Section 6.

Instead, we use a simple baseline that employs a production translator (Google Translate) for both $L$-to-$P$ and $P$-to-$L$ translation. First, we translate the target language context and question using the $L$-to-$P$ production translator. Next, the translated question and context in the pivot language $P$ are given to the RC model to identify the answer in the pivot language $P$. Finally, we use the $P$-to-$L$ production translator to back-translate the predicted answer from the pivot language $P$ to the target language $L$. We refer to this baseline as the “back-translation system”.
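The baseline can be sketched as follows; `translate_lp`, `translate_pl`, and `rc_model` are hypothetical stand-ins for the two production translation directions and the pivot-language RC model:

```python
def back_translation_baseline(context_l, question_l,
                              translate_lp, translate_pl, rc_model):
    """Translate L -> P, answer in P, then back-translate the answer P -> L."""
    context_p = translate_lp(context_l)
    question_p = translate_lp(question_l)
    answer_p = rc_model(context_p, question_p)  # answer string in the pivot language
    return translate_pl(answer_p)               # may drift from the original context
```

Unlike our method, the final step translates the short answer in isolation, so nothing constrains the output to be a sub-phrase of the original context.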

5.2 Results

Table 3 compares the performance of our method against the back-translation baseline on the Japanese and French SQuAD tasks. Our method achieves an F1 score of 52.19 and an EM score of 37.00 on Japanese SQuAD, and an F1 score of 61.88 and an EM score of 40.67 on French SQuAD, outperforming the back-translation baseline by 9.59 F1 and 12.23 EM on Japanese SQuAD, and by 17.86 F1 and 17.13 EM on French SQuAD. The results on two very different languages suggest that our method is potentially applicable to a variety of languages. In addition, we note that we outperform the baseline even though the latter uses Google Translate, which performs considerably better than our NMT model in terms of BLEU scores, as shown in Table 1. These results underline the importance of using soft-alignments from a white-box attentive NMT model, as opposed to a more performant black-box translation system that obtains higher BLEU scores.

5.3 Ablation Study

                             Japanese        French
Method                       F1     EM       F1     EM
Our method                   52.19  37.00    61.88  40.67
 w/o self attention + ELMo   50.08  35.47    57.56  37.61
 w/o beam search             50.59  34.55    55.14  36.69
 w/o question oversampling   33.97  20.48    49.28  29.66
 w/o questions               25.20  14.63    41.63  26.60
Table 4: RC ablation results of our proposed method on the Japanese and French SQuAD tasks. The four ablations are: 1) replacing the RC model (Clark and Gardner, 2017) with the base BiDAF model (Seo et al., 2017); 2) removing beam search; 3) removing the oversampling of manually translated questions; 4) removing manually translated questions.

Table 4 shows the ablation study of our method, which consists of our best Japanese-to-English and French-to-English NMT model and the BiDAF + Self Attention + ELMo model.

Similar to our findings in the MT ablation study, adding manually translated questions and oversampling them are critical to the end task of RC. On both the Japanese and French SQuAD datasets, we observe that the NMT model trained only on the Wikipedia bilingual corpus (i.e. without manually translated questions) translates question sentences poorly: performance deteriorates by 26.99 F1 and 22.37 EM on Japanese, and by 20.25 F1 and 14.07 EM on French. Question oversampling also significantly improves performance, especially on Japanese, where it increases F1 by 16.62 and EM by 14.47. We find that our NMT models tend to translate question sentences as declarative sentences when trained only on the Wikipedia bilingual corpora or without oversampling of the manually translated questions. For instance, the question “テスラは何年に亡くなったか。(In what year did Tesla die?)” is incorrectly translated as “tesla died in a year .”. This results in a distribution mismatch for the RC model, which has been trained on questions rather than declarative sentences.

Using a more competitive NMT or RC model is helpful, and combining both gives a notable improvement. On the French SQuAD dataset, the self attention layer and ELMo in the extractive RC model improve F1 by 4.32 and EM by 3.06, more than twice the improvements observed on the Japanese SQuAD dataset. We attribute the larger gain to the superior performance of the French-to-English NMT model on paragraph and question translation.

5.4 Drawback of the Back-translation System

We observe that the baseline back-translation method consistently makes mistakes because it loses critical information from the context or question. This loss of information often has a large negative impact on the extractive RC task, in which answers are expected to be precisely retrieved from the context. For instance, we observe instances in which the RC model identifies the correct answer span for the Japanese question “最初にアメリカ人を宇宙に送ったプログラムは何か? (What project put the Americans into space for the first time?)”, but the answer “one-person project Mercury” is incorrectly translated into “一人称水銀プロジェクト (the first-person mercury (the chemical element) project)” by Google Translate, which is far from a reasonable answer to the given question. In this case, lacking the crucial relevant context, the MT system cannot distinguish the correct translations of “Mercury” and “first-person” from other words with the same spelling. This kind of MT issue has been studied by previous work in MLQA (Mitamura et al., 2006; Ture and Boschee, 2016). In addition, in extractive RC the answer spans are sub-phrases of the context paragraph, so generating answer spans by back-translation is not a desirable approach, as it produces homographic variations. For the baseline on the Japanese SQuAD dataset, we find that only 143 out of 327 answers (44%) were sub-phrases of the context, because of these translation errors or homographic variants.

In contrast with back-translation, our method directly identifies the correct answer in the Japanese context without losing critical relevant information. It also guarantees that the answers are precisely extracted from the given context.

5.5 Error Analysis

We conduct an error analysis of our proposed method on the Japanese and French SQuAD tasks. First, we omit the 41 out of 327 questions (13%) in the Japanese and French SQuAD datasets for which the RC model failed to answer the corresponding questions in the original English SQuAD dataset, so as to focus on errors that do not stem from the RC model. We randomly sample 100 questions from the remaining 286 paragraph-question pairs and manually classify errors into three categories: (1) wrong translation of questions, (2) wrong translation of context, and (3) others. We find 49 errors in the Japanese sample and 28 in the French sample; some of these errors are caused by multiple factors (14 cases in Japanese and 6 cases in French). Table 5 shows the error types and their frequencies.

                            Japanese   French
Type of error               # (%)      # (%)
Wrong question translation  29 (59%)   15 (54%)
Wrong context translation   27 (55%)   11 (39%)
Others                      6 (12%)    6 (21%)
Table 5: Types of errors and their frequency on the Japanese and French SQuAD datasets.

Wrong Translation of Questions.

Previous work reports that SQuAD models, including BiDAF, tend to rely on superficial cues or the interrogative words of questions (Hu et al., 2018; Zhang et al., 2017; Jia and Liang, 2017). Thus, incorrectly translating a single word in a question sentence can result in a fatal error. For instance, the French-to-English NMT model generated the translation “in the cycle of cycle , that becomes the water when it is heated ?” when given the question “Dans le cycle de Rankine, que devient l’eau lorsqu’elle est chauffée ? (In the Rankine cycle, what does water turn into when heated?)”. In this case, the RC model failed to find the correct answer “vapeur (vapour)”, likely because the original interrogative (“what”) is lost.

On the other hand, this implies that even if a question is not translated appropriately, keyword matching and heuristics such as interrogatives can lead the model to the correct answer. Compared to French-to-English translation, Japanese-to-English translation tends to produce more incorrect question translations, as shown in Table 5. Nevertheless, the RC model sometimes succeeds in extracting the correct answer spans by leveraging such artificial cues.

Wrong Translation of Context.

The performance of the RC model deteriorates in both languages when the context is incorrectly translated or when parts of the context are missing. One of the biggest issues in context translation is the under-translation problem, where some words are mistakenly left untranslated. Tu et al. (2016) characterize this behaviour of NMT models, which tend to generate shorter translations (Cho et al., 2014) that are valid but miss some phrases. This is especially common when translating long sentences (Goto and Tanaka, 2017). We find that some answer spans are missing or inappropriately translated, which poses significant problems for the RC model. For instance, consider the Japanese context “1562年までにユグノーの数はピークに達し、200万人と推定され、フランスの南部及び中部に主に集まり、フランスカトリック協会の構成員の約8分の1であった。 (Huguenot numbers peaked near an estimated two million by 1562, concentrated mainly in the southern and central parts of France, about one-eighth the number of French Catholics.)”, where the underlined words denote the answer to the question “What was the proportion of Huguenots to Catholics at their peak?”. For this paragraph, the MT model generates “by 1562, the number of huguenots …, and was estimated to be two million to two million, and was a major gathering of the french catholic society members.”; that is, the expected answer sub-phrase is completely missing. This problem is also noted by Lee et al. (2018), who observe that when translating the SQuAD training dataset into Korean, the answer spans are often lost in the translated sentences. We hypothesize that adding explicit constraints to avoid under-translation of important cues would improve performance on the Japanese and French SQuAD datasets.

Others.

Six errors in the Japanese dataset and six in the French dataset occur due to the lack of robustness of the RC model with respect to paraphrasing.

Figure 2: A comparison among the RC results on the English, Japanese, and French SQuAD datasets.

Figure 2 shows such an example from the French dataset. Our French-to-English translation model translates the original paragraph and question appropriately, without losing the information crucial to answering the given question. Nevertheless, the subsequent RC model fails because the translation paraphrases a key phrase in the context, “spread”, as “diffusion”. Weissenborn et al. (2017) report that many questions in SQuAD can be answered with heuristics based on type and keyword matching, and Jia and Liang (2017) find, in their adversarial experiments on SQuAD, that many SQuAD models are not confused by adversarial examples when the question has an exact $n$-gram match with the original paragraph. These studies suggest that the performance of a SQuAD model relies on superficial cues, which explains the negative impact of MT paraphrasing on overall RC performance.

6 Related Work

End-to-end RC.

End-to-end models have achieved strong performance on extractive RC. Seo et al. (2017) proposed the BiDAF network, which represents the context at different levels of granularity and uses a bidirectional attention flow mechanism to obtain a query-aware context representation. Wang et al. (2017) used a self-matching attention mechanism to refine the question-aware paragraph representation. However, the large-scale hand-annotated extractive RC datasets required to train these models are often exclusive to English. Our work proposes a method for leveraging existing English models to produce RC systems for target languages that have no RC training data.

MLQA

MLQA is a question answering task in which the questions are formulated in a language different from that of the paragraphs. Examples of MLQA tasks include QA@CLEF (http://www.clef-initiative.eu/track/qaclef) and EQueR (Ayache et al., 2006). A key difference between extractive RC and MLQA is that the questions in extractive RC share the same language as their context, whereas questions in MLQA are formulated in a language different from the context. Moreover, MLQA contains abstractive answers that must be generated by the model, whereas extractive RC answers are spans in the context document. For these reasons, approaches to these two tasks are not easily transferable without significant performance degradation.

A common approach in MLQA is to translate all non-English text, or the keywords in queries, into English beforehand, and then to treat the task as a monolingual one (Ture and Boschee, 2016; Mitamura et al., 2006; Hartrumpf et al., 2009; Esplà-Gomis et al., 2012).

While both the common MLQA approach and our work combine MT with question answering, the goals of the tasks are distinct: the former emphasizes joint reasoning across languages, while the latter emphasizes building practical RC systems for target languages without any training data.

Datasets for non-English RC

Lee et al. (2018) proposed a method to create a large-scale training dataset for Korean by translating an existing large-scale English RC dataset into Korean and adding a few thousand manually annotated Korean paragraph-question pairs. He et al. (2017) created DuReader, a new large-scale open-domain Chinese RC dataset. These works focus on training an RC model in a language of interest by creating large-scale datasets for that language manually or semi-automatically. Such approaches require additional annotation of RC datasets, which can be costly, and do not leverage the wealth of existing resources in English RC. In contrast, we propose a method to build an RC system without any additional RC training data in the language of interest, while leveraging existing resources in English RC.

7 Conclusion

We proposed an RC system for a target language with no RC training dataset by combining existing English RC models with an attentive NMT model, using soft alignments from the latter to recover answers in the target language. Our results showed that our approach significantly outperforms a back-translation method built on a state-of-the-art MT system on the Japanese and French SQuAD tasks. In future work, we will investigate how to improve the system's robustness to paraphrasing, and how to alleviate the problem of key phrases going missing during NMT.

Acknowledgments

We thank Victor Zhong for his help in revising the paper. This project is partially financially supported by Microsoft Japan. This work is also partially supported by JST CREST Grant Number JPMJCR1513, Japan.

References

  • García Cumbreras et al. (2006) Miguel Á. García Cumbreras, L. Alfonso Ureña López, and Fernando Martínez Santiago. 2006. BRUJA: Question classification for Spanish using machine translation and an English classifier. In Workshop on Multilingual Question Answering. ACL.
  • Ayache et al. (2006) Christelle Ayache, Brigitte Grau, and Anne Vilnat. 2006. EQueR: The French evaluation campaign of question-answering systems. In LREC.
  • Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
  • Balahur and Turchi (2012) Alexandra Balahur and Marco Turchi. 2012. Comparative experiments for multilingual sentiment analysis using machine translation. In Workshop on Sentiment Discovery from Affective Data. ACL.
  • Cho et al. (2014) Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Workshop on Syntax, Semantics and Structure in Statistical Translation. ACL.
  • Chu et al. (2017) Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. An empirical comparison of domain adaptation methods for neural machine translation. In ACL.
  • Clark and Gardner (2017) Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehension. NAACL.
  • Esplà-Gomis et al. (2012) Miquel Esplà-Gomis, Felipe Sánchez-Martínez, and Mikel L. Forcada. 2012. Ualacant: Using online machine translation for cross-lingual textual entailment. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics.
  • Goto and Tanaka (2017) Isao Goto and Hideki Tanaka. 2017. Detecting untranslated content for neural machine translation. In Workshop on Neural Machine Translation. ACL.
  • Hartrumpf et al. (2009) Sven Hartrumpf, Ingo Glöckner, and Johannes Leveling. 2009. Efficient Question Answering with Question Decomposition and Multiple Answer Streams. Springer.
  • He et al. (2017) Wei He, Kai Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, et al. 2017. DuReader: A Chinese machine reading comprehension dataset from real-world applications. In Workshop on Machine Reading for Question Answering. ACL.
  • Hermann et al. (2015) Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS.
  • Hu et al. (2018) Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2018. Reinforced mnemonic reader for machine reading comprehension. In IJCAI.
  • Jia and Liang (2017) Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In EMNLP.
  • Johnson et al. (2017) Melvin Johnson, Mike Schuster, Quoc Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. TACL.
  • Joshi et al. (2017) Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In ACL.
  • Lee et al. (2018) Kyungjae Lee, Kyoungho Yoon, Sunghyun Park, and Seung-won Hwang. 2018. Semi-supervised training data generation for multilingual question answering. In LREC.
  • Luong et al. (2015) Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP.
  • Mitamura et al. (2006) Teruko Mitamura, Mengqiu Wang, Hideki Shima, and Frank Lin. 2006. Keyword translation accuracy and cross-lingual question answering in Chinese and Japanese. In Workshop on Multilingual Question Answering. ACL.
  • Nakazawa et al. (2016) Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. Aspec: Asian scientific paper excerpt corpus. In LREC.
  • Oda et al. (2017) Yusuke Oda, Katsuhito Sudoh, Satoshi Nakamura, Masao Utiyama, and Eiichiro Sumita. 2017. A Simple and Strong Baseline: NAIST-NICT Neural Machine Translation System for WAT2017 English-Japanese Translation Task. In Workshop on Asian Translation. ACL.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL.
  • Pascanu et al. (2013) Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In ICML.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
  • Peters et al. (2018) Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. NAACL.
  • Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
  • Seo et al. (2017) Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. ICLR.
  • Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR.
  • Tiedemann et al. (2014) Jörg Tiedemann, Željko Agić, and Joakim Nivre. 2014. Treebank translation for cross-lingual parser induction. In CoNLL.
  • Tu et al. (2016) Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In ACL.
  • Ture and Boschee (2016) Ferhan Ture and Elizabeth Boschee. 2016. Learning to translate for multilingual question answering. In EMNLP.
  • Varga et al. (2007) Dániel Varga, Péter Halácsy, András Kornai, Viktor Nagy, László Németh, and Viktor Trón. 2007. Parallel corpora for medium density languages. Amsterdam Studies in the Theory and History of Linguistic Science.
  • Wang et al. (2017) Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In ACL.
  • Weissenborn et al. (2017) Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. In CoNLL.
  • Xiong et al. (2018) Caiming Xiong, Victor Zhong, and Richard Socher. 2018. DCN+: Mixed objective and deep residual coattention for question answering. In ICLR.
  • Yu et al. (2018) Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. In ICLR.
  • Zhang et al. (2017) Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, and Hui Jiang. 2017. Exploring question understanding and adaptation in neural-network-based question answering. In ICCC.
  • Zoph et al. (2016) Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In EMNLP.

Appendix

A. Details of Wikipedia-based Bilingual Corpus Creation

To enable our system to translate paragraphs and questions on a wide range of topics, we train our NMT model on a large bilingual corpus built from Wikipedia, created in the following steps.

First, we collected Japanese, French, and English Wikipedia dump files and extracted the plain text from each article. For the Japanese-to-English corpus, we used the Japanese and English Wikipedia pages-articles datasets dumped in December 2017 (https://dumps.wikimedia.org/jawiki/latest/jawiki-latest-pages-articles.xml.bz2-rss.xml, https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2-rss.xml), containing 1,085,986 Japanese and 5,523,723 English Wikipedia articles. For the French-to-English corpus, we used the French Wikipedia pages-articles dataset dumped in April 2018 (https://dumps.wikimedia.org/frwiki/20180420/frwiki-20180420-pages-meta-current.xml.bz2), consisting of 1,976,603 French Wikipedia articles, together with the same English Wikipedia articles used for the Japanese-to-English corpus.

We extracted the plain text of each article using a Wikipedia article extraction tool (WikiExtractor, https://github.com/attardi/wikiextractor). After collecting the plain text data, we used the latest langlinks data for the Japanese and French Wikipedia (https://dumps.wikimedia.org/jawiki/latest/jawiki-latest-langlinks.sql.gz, https://dumps.wikimedia.org/frwiki/20180420/frwiki-20180420-langlinks.sql.gz), which lists all interlanguage links from the given pages to other languages. 479,551 Japanese articles (around 44% of the entire Japanese Wikipedia) and 1,525,465 French articles (around 77% of the entire French Wikipedia) have interlanguage links to English Wikipedia articles. We then aligned the sentences in the Japanese and French articles with the sentences in the corresponding English articles using a sentence-level alignment tool. We employed the hunalign sentence aligner (https://github.com/danielvarga/hunalign), which aligns bilingual text at the sentence level based on sentence lengths and dictionary-based translations (Varga et al., 2007). Although the aligner does not handle changes in sentence order, we assume that sentence order rarely changes between language versions of the same Wikipedia article. Hunalign takes a bilingual phrase dictionary and translates sentences from the source language into the target language using that dictionary, so that it can compute a similarity score between a pair of sentences. For the Japanese-to-English dictionary-based translation, we used MUSE's ground-truth Japanese-English bilingual dictionaries (https://github.com/facebookresearch/MUSE) and the EDICT dictionary file (http://www.edrdg.org/jmdict/edict.html), a Japanese-English dictionary containing about 175,000 entries.
For the French-to-English bilingual phrase dictionary, we used MUSE's ground-truth French-English bilingual dictionaries (https://github.com/facebookresearch/MUSE). We discarded sentence pairs whose alignment score was lower than -0.3 for Japanese-to-English and lower than 0.0 for French-to-English. We also filtered out translation pairs whose sentence lengths were longer than 50 or shorter than 5. As a result, we collected 4,567,800 sentence pairs for Japanese-to-English and 6,398,489 sentence pairs for French-to-English, and kept the best-aligned 1,002,000 pairs to build our {Japanese, French}-to-English NMT training and development datasets.
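The score and length filtering described above can be sketched as follows. This is an illustrative simplification: the function name and the (source, target, score) tuple format are our own, and the thresholds are passed in rather than fixed per language pair.

```python
def filter_pairs(pairs, min_score, min_len=5, max_len=50):
    """Keep aligned sentence pairs whose alignment score meets the
    threshold and whose sentences fall within the length bounds.

    Each pair is (source_sentence, target_sentence, alignment_score).
    """
    kept = []
    for src, tgt, score in pairs:
        if score < min_score:
            continue  # drop low-confidence alignments
        src_len, tgt_len = len(src.split()), len(tgt.split())
        if not (min_len <= src_len <= max_len and min_len <= tgt_len <= max_len):
            continue  # drop overly short or long sentences
        kept.append((src, tgt, score))
    # Sort by alignment score so the best-aligned pairs can be taken first.
    kept.sort(key=lambda p: p[2], reverse=True)
    return kept
```

Sorting by score lets the pipeline take the top N pairs (here, the best-aligned 1,002,000) for the final corpus.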

B. Details of Manually Translated SQuAD Dataset Questions Creation

To obtain Japanese and French translations of 200 randomly sampled question sentences, we used Amazon Mechanical Turk (https://www.mturk.com/) and asked bilingual workers to translate the English questions into the target languages (Japanese and French) accurately, without using any translation software or web service such as Google Translate. We assigned 20 question sentences to each worker.

C. Details of Experimental Settings of MT

We use the Wikipedia bilingual corpora introduced in Section 4.1 to train the {Japanese, French}-to-English NMT models. The word embeddings and the weight matrices of the NMT model are initialized uniformly at random. We train with batched stochastic gradient descent using a batch size of 128, a momentum of 0.75, and an initial learning rate of 1.0, with 512-dimensional hidden states and embeddings. We use gradient clipping with a threshold of 1.0 (Pascanu et al., 2013). While training, we compute the BLEU score (Papineni et al., 2002) of greedy translations on each development set every half epoch. We use L2-norm regularization and apply dropout (Srivastava et al., 2014) with a dropout rate of 0.2. We build the vocabulary from words appearing more than five times in the corpus. When oversampling the questions introduced in Section 4.2, we duplicate each question a fixed number of times. At test time, we use the beam search method proposed by Oda et al. (2017) with a beam size of 5.

D. Pre-Processing for Japanese and French Sentences

For the Japanese sentences in the NMT training dataset and the SQuAD test dataset, we first normalized the sentences with Unicode Normalization Form Compatibility Composition (NFKC) and tokenized them with MeCab (http://taku910.github.io/mecab/). For French, we normalized punctuation with the Moses punctuation-normalization script (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/normalize-punctuation.perl) and tokenized the sentences with the Moses tokenizer (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl).
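The NFKC normalization step is standard Unicode normalization and can be reproduced with Python's unicodedata module; the tokenization itself is delegated to MeCab and is not shown here.

```python
import unicodedata

def normalize_nfkc(text):
    """Apply Unicode NFKC normalization, e.g. mapping full-width
    Latin characters and digits (common in Japanese text) to their
    half-width ASCII forms."""
    return unicodedata.normalize("NFKC", text)
```

This matters for Japanese, where the same Latin word or number can appear in full-width or half-width form and would otherwise produce spurious vocabulary entries.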

E. Details of Experimental Settings of RC

We train BiDAF (Seo et al., 2017) and the BiDAF + Self Attention + ELMo model (Clark and Gardner, 2017) on the original SQuAD v1.1 English training dataset. For BiDAF, we follow the training settings of Seo et al. (2017), except that we set the batch size to 40 and train the model for 20 epochs. For the BiDAF + Self Attention + ELMo model, we follow the settings of Clark and Gardner (2017), adding ELMo embeddings (Peters et al., 2018); the only difference is that we use 100-dimensional pre-trained GloVe vectors (Pennington et al., 2014) instead of the 300-dimensional ones. To evaluate performance, we compute EM, which measures exact match with the ground-truth answers, and F1, the harmonic mean of precision and recall at the character level (Rajpurkar et al., 2016). The initial learning rate is set to 0.5 and is halved when the EM score does not improve for two epochs.
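The EM and character-level F1 metrics described above can be sketched as follows. This is a simplified illustration assuming a single reference answer and no answer normalization, not the official SQuAD evaluation script.

```python
from collections import Counter

def exact_match(prediction, truth):
    """EM: 1 only if the predicted span equals the ground truth exactly."""
    return prediction == truth

def char_f1(prediction, truth):
    """Character-level F1: harmonic mean of precision and recall over
    the multiset of characters shared by prediction and truth."""
    common = Counter(prediction) & Counter(truth)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(prediction)
    recall = overlap / len(truth)
    return 2 * precision * recall / (precision + recall)
```

Character-level overlap, rather than token-level, is the natural choice for languages such as Japanese where word boundaries are tokenizer-dependent.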

F. Comparison to translating the entire pivot language dataset

Contrary to our approach, which translates the target language into the pivot language, one could instead translate the existing large-scale dataset from the pivot language into the target language, and subsequently train an RC model on the large translated dataset. Transferring English resources to a target language using MT has been proposed for a variety of NLP tasks (Tiedemann et al., 2014; Lee et al., 2018; Balahur and Turchi, 2012) and is a common approach in multilingual NLP. Yet, given the nature of extractive RC, transferring existing large-scale datasets into the target language faces several obstacles:

  • in extractive RC, the answer must be a contiguous sub-span of the target-language context, so even small surface variations introduced by translation prevent the model from finding the correct answer.

  • a long context in the pivot language is likely to yield a shorter translated context, because NMT models tend to generate shorter translations (Cho et al., 2014) and may drop important information from the original text (Goto and Tanaka, 2017). As a result, the answer may be missing from the target-language context entirely.

We sampled 100 paragraph-question pairs with ground-truth answers from the SQuAD training set, translated the sampled pairs into Japanese using Google Translate (translations obtained at https://translate.google.com in October 2018), and checked how many of the translated answers were actually preserved in the translated paragraph. We observed that for 51 of the 100 questions (51%), the translated answer did not match any span in the translated context. The answer span was lost entirely during translation in some cases (14 out of 51); in the others, the answer was preserved but differed slightly from every span in the translated paragraph (36 out of 51).
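The preservation check described above amounts to a substring test on each independently translated (paragraph, answer) pair. A sketch, where the translations are assumed to have been produced beforehand by a black-box MT system and the function name is our own:

```python
def count_preserved(examples):
    """Count how many translated answers still occur verbatim as a
    span of the corresponding translated paragraph.

    Each example is a (translated_paragraph, translated_answer) pair.
    Returns (number preserved, number lost or mismatched).
    """
    preserved = sum(1 for paragraph, answer in examples if answer in paragraph)
    return preserved, len(examples) - preserved
```

Because the paragraph and the answer are translated independently, even a correct answer translation can fail this verbatim test, which is exactly the failure mode observed above.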

In addition to our own finding, Lee et al. (2018) report that training an RC model only on an automatically translated SQuAD training dataset results in poor performance because of translation errors. We therefore conclude that translating the existing training data from the pivot language into the target language to train an RC model in that language would fail to achieve good performance.