Machine Reading Comprehension (MRC) is one of the main components of today’s open-domain question answering systems [1, 2, 3]. Its aim is to answer questions from the related context(s). One of the main issues with MRC models is that they are highly dependent on their training dataset and are fragile to domain changes, so that their accuracy drops sharply on out-of-domain datasets (in this paper, we do not distinguish conceptually between out-of-domain and out-of-distribution, following the MRQA shared task). However, in real-world question answering systems, it is necessary to be able to answer questions from a wide range of domains with acceptable accuracy.
In recent years, some studies have focused on domain adaptation and knowledge transfer in MRC models [6, 7, 8, 9]. However, the aim of most of them is to adapt or transfer an existing model’s knowledge to a specific target domain, while obtaining a domain-independent MRC model remains an unresolved issue.
On the other hand, the common approach used in most previous studies is supervised or semi-supervised transfer learning, which needs some data from the target domain. Even in the unsupervised domain adaptation approach, some raw texts from the target domain are assumed to be available. However, in general-purpose question answering systems, due to the diversity of natural language, one cannot designate a specific domain as the target. Therefore, the zero-shot setting, which assumes no data from the target domain is available, remains an unexplored area.
In this paper, we investigate the generalization stability of MRC models on out-of-domain datasets and propose a simple zero-shot method to improve the robustness of MRC models against domain changes. The proposed method uses an accuracy-based weighted ensemble, which includes several base models trained on separate datasets, a weight estimation module, and an ensemble module that aggregates the base models’ outputs on the target domain based on their estimated out-of-domain weights. In this method, there is no need for any target data in the training phase.
The rest of this paper is organized as follows. In Section 2, the related work is reviewed. The proposed method is introduced in Section 3. The experiments are presented and discussed in Section 4, and the final section is dedicated to the conclusion and future work.
II Related Work
The MRC task is a popular natural language processing task with several studies in recent years [1, 10, 11, 12]. Among these, some studies focus on generalization capability and transfer learning using supervised or unsupervised learning approaches across question answering (QA) or MRC models.
II-A Supervised Transfer Learning
Chung et al. investigated the effect of transferring knowledge from one question answering model to others with different training datasets. They showed that pretraining a QA model on a source dataset and fine-tuning it on a target one can improve the performance of the target model. MultiQA investigated the transfer and generalization capability of MRC models across different datasets. They showed that pretraining models on multiple datasets can reduce the need for a large amount of data from the target domain. They also stated that MRC models have low generalizability in the zero-shot setting: although training an MRC model on multiple datasets can lead to a more generalized model, it does not perform as well as the best model on the target dataset. Also, the MRQA shared task focused on the generalization capability of MRC models on out-of-domain data using 18 different datasets. They split these datasets into three parts (train, evaluation, and test) and explored various ideas to tackle the generalization problem in this task, which has been followed by multiple studies like [13, 14, 15]. While these studies use labeled target data for training a model, our focus is on the zero-shot setting where no data is available from the target domain.
II-B Unsupervised Transfer Learning
Some studies leveraged the unsupervised approach to transfer knowledge from a labeled source domain to an unlabeled target domain. In , an adversarial domain adaptation model is introduced, where knowledge is transferred from a richly labeled source domain to an unlabeled target domain. Cao et al. introduced a self-training structure for domain adaptation in MRC. Their model uses BERT to predict labels for target-domain samples, filters out low-confidence ones, and then trains the target MRC model with an adversarial network. In , a multi-task learning approach is used to train both a source-domain MRC model and a target-domain language model with shared layers. They showed that this approach improves accuracy in the target domain using only unlabeled passages.
In the mentioned studies, the aim is to improve performance on the specific target domain, while in our work, the aim is to have a general model that has robust performance on a wide range of datasets.
III Proposed Method
III-A Machine Reading Comprehension
Machine Reading Comprehension is a supervised learning task that learns to answer an input question from the related input context(s):

$$A = f(Q, C), \quad Q = (q_1, \dots, q_n), \quad C = (c_1, \dots, c_m),$$

where $Q$, $C$, $A$, $n$, and $m$ are the input question, input context, output answer, question length, and context length, respectively.
The output of an MRC model can be classified as selective or generative. In the selective mode, the answer is an exact span of the input context, while in the generative mode, the answer is free-form text. In this paper, we focus on selective MRC models. The outputs of a selective MRC model are two probability distributions over the context tokens, for the start and end positions of the answer.
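As a concrete illustration, the step that decodes these two distributions into an answer span can be sketched as follows. The paper does not detail its decoding rule, so this uses the common product-of-probabilities heuristic with a maximum answer length; all names are illustrative:

```python
def extract_best_span(start_probs, end_probs, max_answer_len=30):
    """Pick the answer span (i, j) maximizing start_probs[i] * end_probs[j],
    subject to i <= j < i + max_answer_len."""
    best_score, best_span = -1.0, (0, 0)
    for i, p_start in enumerate(start_probs):
        # only consider end positions at or after the start, within the length cap
        for j in range(i, min(i + max_answer_len, len(end_probs))):
            score = p_start * end_probs[j]
            if score > best_score:
                best_score, best_span = score, (i, j)
    return best_span, best_score

# Toy distributions over a 4-token context:
span, score = extract_best_span([0.1, 0.6, 0.2, 0.1], [0.05, 0.1, 0.7, 0.15])
```

Here the span `(1, 2)` wins because positions 1 and 2 carry most of the start and end mass, respectively; the predicted answer is then the context substring covering those tokens.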
III-B Accuracy-Based Weighted Ensemble
As stated in the previous section, most studies on domain adaptation in the MRC task focus on transferring knowledge from a source domain to a desired target domain using some data from the target (at least raw passages). The resulting models are fragile against domain changes and are not domain-independent. In this study, we propose a simple zero-shot method to create a model that is robust against domain changes. This method, motivated by the work of Large et al., leverages several base models, a weight estimation module, and an ensemble module to generate the final prediction. The proposed framework is shown in Figure 1. Instead of adapting the model to a new domain, the proposed method aggregates the predictions of several base models, which is not only low-cost compared to previous approaches but can also lead to more stability against domain changes.
The base models have similar structures but are trained on separate training datasets ($D_1$ to $D_N$). In the weight estimation module, another set of datasets ($D'_1$ to $D'_M$) is used to estimate the accuracy of the base models as the models’ out-of-domain weights. In the test phase, the prediction for an out-of-domain sample $x$ is obtained using a weighted ensemble of the base models’ predictions:

$$\hat{p}(x) = \sum_{j=1}^{N} w_j \, p_j(x),$$

where $w_j$ is the $j$-th model weight and $p_j(x)$ is the prediction of the $j$-th model on the target sample $x$. The exponent $\alpha$ is the only hyper-parameter of this method. The weight estimation module calculates $w_j$ as follows:

$$w_j = \frac{a_j^{\alpha}}{\sum_{k=1}^{N} a_k^{\alpha}},$$

where $a_j$ is the accuracy of the $j$-th model on the set of samples from $D'_1$ to $D'_M$.
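A minimal sketch of this aggregation, assuming (as in Large et al.’s scheme) that each weight is the model’s held-out accuracy raised to the power alpha and then normalized; function and variable names are illustrative:

```python
def estimate_weights(accuracies, alpha=2):
    """Accuracy-based weights: each base model's weight is its
    out-of-domain accuracy raised to the power alpha, normalized to sum to 1."""
    powered = [a ** alpha for a in accuracies]
    total = sum(powered)
    return [p / total for p in powered]

def weighted_ensemble(predictions, weights):
    """Aggregate per-model probability vectors (e.g. start- or end-position
    distributions over context tokens) into one weighted average."""
    n_positions = len(predictions[0])
    return [
        sum(w * model_pred[t] for w, model_pred in zip(weights, predictions))
        for t in range(n_positions)
    ]

# Three base models with held-out accuracies 0.6, 0.5, 0.3:
weights = estimate_weights([0.6, 0.5, 0.3], alpha=2)
combined = weighted_ensemble([[0.7, 0.3], [0.2, 0.8], [0.5, 0.5]], weights)
```

A larger alpha sharpens the weighting toward the most accurate base model, while alpha close to zero approaches a simple arithmetic mean.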
Table I (excerpt). Dataset | Question source | Context source | Train size | Dev size
Natural Questions | Search logs | Wikipedia | 104071 | 12836
BioASQ | Domain experts | Science articles | - | 1504
(The MRQA shared task does not have some of these train sets, so we used them from the MultiQA project.)
IV Experiments

IV-A Datasets

In this paper, we use 12 MRC datasets with different sizes and domains. The detailed information is shown in Table I. The datasets are configured corresponding to the MRQA shared task. These datasets are split into three groups, which are respectively used for base model learning, weight estimation, and testing the final model.
IV-B Experimental Results
We performed three sets of experiments in this study. First, we trained base models with the simple and popular MRC model BiDAF on the group 1 datasets and evaluated their in-domain and out-of-domain accuracies. The AllenNLP library (https://github.com/allenai/allennlp-reading-comprehension) is used for training the base models. As shown in Table II, in almost all datasets, the best results are obtained when the source and target domains are the same, and accuracy drops significantly for the unseen datasets. The evaluation measure is the F1 score, which calculates the weighted average of the precision and recall between the predicted answer and the ground truth at the word level.
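The word-level F1 measure can be sketched as follows, in the style of the standard SQuAD evaluation (the actual script additionally normalizes punctuation and articles, which is omitted here for brevity):

```python
from collections import Counter

def f1_score(prediction, ground_truth):
    """Word-level F1: the harmonic mean of precision and recall
    over the tokens shared by the predicted and ground-truth answers."""
    pred_tokens = prediction.lower().split()
    gt_tokens = ground_truth.lower().split()
    # tokens common to both answers, counted with multiplicity
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, the prediction "the cat sat" against the ground truth "the cat" shares two tokens, giving precision 2/3, recall 1, and F1 0.8.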
The next experiment, presented in Table III, investigates the proposed accuracy-based weighted ensemble method, which ensembles the base models’ outputs according to Equation 2. Each of the datasets from group 2 (unseen during training) is used as a train set in the weight estimation module. For simplicity and speed considerations, we only used a subset of 5000 train samples from each dataset. The hyper-parameter α is chosen from 1 to 4, where the out-of-fold predictions over the train set are used as the validation set. We also compare this method with the fine-tuning approach, where the best base model is fine-tuned on the unseen datasets and evaluated on the other datasets to measure its out-of-domain accuracy. As shown in Table III, the proposed weighted ensemble method obtains the highest accuracies. In particular, the results show that the fine-tuning approach cannot improve the model’s performance on the unseen datasets. The simple ensemble method in this table is the arithmetic mean of the base models’ outputs.
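The hyper-parameter selection above amounts to a small grid search. A sketch under stated assumptions: `validation_score` stands in for evaluating the weighted ensemble (e.g. its mean F1) on the out-of-fold predictions, and the accuracy-power weighting follows the scheme of Large et al.; all names are hypothetical:

```python
def estimate_weights(accuracies, alpha):
    """Accuracy-power weights, normalized to sum to 1."""
    powered = [a ** alpha for a in accuracies]
    total = sum(powered)
    return [p / total for p in powered]

def select_alpha(accuracies, validation_score, candidates=(1, 2, 3, 4)):
    """Grid-search the exponent alpha: score each candidate's weighting on
    held-out (out-of-fold) predictions and keep the best-scoring value."""
    best_alpha, best_score = None, float("-inf")
    for alpha in candidates:
        weights = estimate_weights(accuracies, alpha)
        score = validation_score(weights)  # e.g. ensemble F1 on out-of-fold data
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha
```

The callback keeps the sketch independent of any particular MRC model: it only needs a way to score a candidate weighting on held-out predictions.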
Table III (excerpt): weighted ensemble | Group 2 | 27.15 | 34.69 | 38.68 | 65.39
The last experiment explores the effect of the simultaneous usage of multiple datasets (group 2) for weight estimation in the proposed method, as well as in the fine-tuning approach. The performance of the methods is tested on the group 3 datasets. To this end, we randomly select 5000 samples from each dataset in group 2, resulting in a total of 15000 samples, and estimate the models’ weights using the proposed module. The results are shown in Table IV. Despite using multiple datasets, the accuracy of the fine-tuning approach is still fragile on the unseen datasets, while the weighted ensemble method remains stable.
According to the experiments, despite its simplicity, the proposed weighted ensemble method obtains robust results on out-of-domain data. It seems that the lack of a direct dependence on one particular data distribution is one of the factors contributing to its robustness. In addition, compared to training or fine-tuning an MRC model with many parameters, the proposed method does not need a high volume of data to estimate the base models’ weights. All this shows the high capability of the proposed method for robust generalization on unseen data, and it can be a starting point for future research in this area.
V Conclusion and Future Work
In this paper, we investigated the generalization capability of MRC models on out-of-domain datasets and proposed a simple weighted ensemble method to robustly generalize on the unseen datasets. We compared the robustness of our method with the fine-tuning approach, in which the base models are fine-tuned on one or multiple out-of-domain datasets and tested on other ones.
In the experiments, we used 5 base models trained on separate datasets. In the first step, we explored the generalization capability of the base models on out-of-domain datasets. The experimental results indicated the poor performance of MRC models on out-of-domain datasets. Then, we evaluated the proposed method and the fine-tuning approach trained on one or multiple unseen datasets. The results indicated that our method leads to more robust generalization: its accuracy on unseen datasets was better than that of all the base models and the fine-tuning approach.
The advantages of the proposed method are its simple implementation, its flexibility in adding new base models, and the lack of a direct dependence on the specific test data, which leads to more stable results on out-of-domain data.
For future work, we want to explore more sophisticated methods for the weighting module, such as sample-based weighting, which considers each input sample features for weighting different base models.
-  D. Chen, A. Fisch, J. Weston, and A. Bordes, “Reading Wikipedia to Answer Open-Domain Questions,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017, pp. 1870–1879.
-  W. Yang, Y. Xie, A. Lin, X. Li, L. Tan, K. Xiong, M. Li, and J. Lin, “End-to-End Open-Domain Question Answering with BERTserini,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), 2019, pp. 72–77.
-  L. Frermann, D. Marcheggiani, R. Blanco, and L. Màrquez, “Book QA: Stories of Challenges and Opportunities,” in Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 2019, pp. 78–85.
-  A. Fisch, A. Talmor, R. Jia, M. Seo, E. Choi, and D. Chen, “MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension,” in Proceedings of the 2nd Workshop on Machine Reading for Question Answering, Nov 2019, pp. 1–13.
-  A. Talmor and J. Berant, “An Empirical Investigation of Generalization and Transfer in Reading Comprehension,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, August 2019, pp. 4911–4921.
-  Y. Cao, M. Fang, B. Yu, and J. T. Zhou, “Unsupervised Domain Adaptation on Reading Comprehension,” in AAAI, 2020, pp. 7480–7487.
-  Y. Chung, H. Lee, and J. Glass, “Supervised and Unsupervised Transfer Learning for Question Answering,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, 2018, pp. 1585–1594.
-  K. Nishida, K. Nishida, I. Saito, H. Asano, and J. Tomita, “Unsupervised Domain Adaptation of Language Models for Reading Comprehension,” in Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), May 2020, pp. 5392–5399.
-  H. Wang, Z. Gan, X. Liu, J. Liu, and H. Wang, “Adversarial Domain Adaptation for Machine Reading Comprehension,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 2510–2520.
-  K. Van Nguyen, K. V. Tran, S. T. Luu, A. G. Nguyen, and N. L. Nguyen, “Enhancing Lexical-Based Approach With External Knowledge for Vietnamese Multiple-Choice Machine Reading Comprehension,” IEEE Access, pp. 201404–201417, 2020.
-  M. Hu, F. Wei, Y. Peng, Z. Huang, N. Yang, and D. Li, “Read + Verify: Machine Reading Comprehension with Unanswerable Questions,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, pp. 6529–6537.
-  S. Liu, X. Zhang, S. Zhang, H. Wang, and W. Zhang, “Neural Machine Reading Comprehension: Methods and Trends,” Applied Sciences, 2019.
-  T. Takahashi, M. Taniguchi, T. Taniguchi, and T. Ohkuma, “CLER: Cross-task Learning with Expert Representation to Generalize Reading and Understanding,” in Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 2019, pp. 183–190.
-  M. Wu, N. Moosavi, A. Rücklé, and I. Gurevych, “Improving QA Generalization by Concurrent Modeling of Multiple Biases,” arXiv:2010.03338, 2020.
-  M. Guo, Y. Yang, D. Cer, Q. Shen, and N. Constant, “MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models,” arXiv:2005.02507, 2020.
-  J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019, pp. 4171–4186.
-  R. Baradaran, R. Ghiasi, and H. Amirkhani, “A Survey on Machine Reading Comprehension Systems,” arXiv:2001.01582, 2020.
-  J. Large, J. Lines, and A. Bagnall, “A Probabilistic Classifier Ensemble Weighting Scheme Based on Cross-validated Accuracy Estimates,” Data Mining and Knowledge Discovery, vol. 33, pp. 1674–1709, 2019.
-  M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi, “Bidirectional Attention Flow for Machine Comprehension,” in Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.