Zero-Shot Estimation of Base Models' Weights in Ensemble of Machine Reading Comprehension Systems for Robust Generalization

06/30/2021
by Razieh Baradaran, et al.
University of Qom

One of the main challenges of machine reading comprehension (MRC) models is their fragile out-of-domain generalization, which prevents them from being reliably applied to real-world, general-purpose question answering problems. In this paper, we leverage a zero-shot weighted ensemble method to improve the robustness of out-of-domain generalization in MRC models. In the proposed method, a weight estimation module estimates out-of-domain weights, and an ensemble module aggregates several base models' predictions based on those weights. The experiments indicate that the proposed method not only improves final accuracy but is also robust against domain changes.


I Introduction

Machine Reading Comprehension (MRC) is one of the main components of today's open-domain question answering systems [1, 2, 3]. Its aim is to answer questions from the related context(s). One of the main issues with MRC models is that they are highly dependent on their training dataset and fragile to domain changes, so their accuracy drops sharply on out-of-domain datasets [5]. (In this paper, following the MRQA shared task [4], we do not distinguish conceptually between out-of-domain and out-of-distribution.) However, real-world question answering systems must be able to answer questions from a wide range of domains with acceptable accuracy.

In recent years, some studies have focused on domain adaptation and knowledge transfer in MRC models [6, 7, 8, 9]. However, most of them aim to adapt or transfer an existing model's knowledge to a specific target domain, while building a domain-independent MRC model remains an unresolved issue.

Moreover, the common approach in most previous studies is supervised or semi-supervised transfer learning, which needs some labeled data from the target domain. Even in unsupervised domain adaptation, some raw texts from the target domain are assumed to be available. In general-purpose question answering systems, however, the diversity of natural language makes it impossible to designate a single domain as the target. Therefore, the zero-shot setting, in which no data from the target domain is available, remains largely unexplored.

In this paper, we investigate the generalization stability of MRC models on out-of-domain datasets and propose a simple zero-shot method to improve the robustness of MRC models against domain changes. The proposed method uses an accuracy-based weighted ensemble, which consists of several base models trained on separate datasets, a weight estimation module, and an ensemble module that aggregates the base models' outputs on the target domain according to their estimated out-of-domain weights. The method needs no target data in the training phase.

The rest of this paper is organized as follows. In Section 2, the related work is reviewed. The proposed method is introduced in Section 3. The experiments are presented and discussed in Section 4, and the final section is dedicated to the conclusion and future work.

II Related Work

The MRC task is a popular natural language processing task that has attracted several studies in recent years [1, 10, 11, 12]. Among these, some studies focus on generalization capability and transfer learning, using supervised or unsupervised learning approaches, across question answering (QA) or MRC models.

II-A Supervised Transfer Learning

Chung et al. [7] investigated the effect of transferring knowledge from one question answering model to others with different training datasets. They showed that pretraining a QA model on a source dataset and fine-tuning it on a target one can improve the target model's performance. MultiQA [5] investigated the transfer and generalization capability of MRC models across different datasets, showing that pretraining on multiple datasets can reduce the need for large amounts of target-domain data. They also observed that MRC models have low generalizability in the zero-shot setting: although training an MRC model on multiple datasets leads to a more generalized model, it does not perform as well as the best single model on the target dataset. The MRQA shared task [4] likewise focused on the generalization capability of MRC models on out-of-domain data, using 18 different datasets split into train, evaluation, and test groups, and explored various ideas for tackling the generalization problem; it has been followed by multiple studies such as [13, 14, 15]. Whereas these studies use labeled target data for training, our focus is on the zero-shot setting, where no data is available from the target domain.

II-B Unsupervised Transfer Learning

Some studies leverage an unsupervised approach to transfer knowledge from a labeled source domain to an unlabeled target domain. In [9], an adversarial domain adaptation model is introduced that transfers knowledge from a richly labeled source domain to an unlabeled target domain. Cao et al. [6] introduced a self-training structure for domain adaptation in MRC; their model uses BERT [16] to predict labels for target-domain samples, filters out low-confidence ones, and then trains the target MRC model with an adversarial network. In [8], a multi-task learning approach trains the source-domain MRC model and a target-domain language model with shared layers; the authors showed that this improves accuracy in the target domain using only unlabeled passages.

In the studies mentioned above, the aim is to improve performance on a specific target domain, while our aim is a general model with robust performance on a wide range of datasets.

III Proposed Method

III-A Machine Reading Comprehension

Machine Reading Comprehension is a supervised learning task that learns to respond to an input question from the related input context(s):

$$A = \mathrm{MRC}(Q, C), \quad Q = (q_1, \dots, q_n), \quad C = (c_1, \dots, c_m) \qquad (1)$$

where $Q$, $C$, $A$, $n$, and $m$ are the input question, input context, output answer, question length, and context length, respectively.

The output of an MRC model can be classified as selective or generative [17]. In the selective mode, the answer is an exact span of the input context, while in the generative mode the answer is free-form text. In this paper, we focus on selective MRC models, whose outputs are two probability distributions over the context tokens: one for the start and one for the end position of the answer.
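
As a concrete illustration, the following minimal sketch (ours, not from the paper) shows how such a pair of start/end distributions can be decoded into an answer span by scoring all valid spans up to a maximum length; the function name and the max_len cutoff are illustrative assumptions.

```python
import numpy as np

def decode_span(p_start, p_end, max_len=30):
    """Return the span (i, j) maximizing p_start[i] * p_end[j],
    subject to i <= j < i + max_len, the usual decoding rule for
    selective MRC models that output start/end distributions."""
    best_score, best_span = -1.0, (0, 0)
    for i in range(len(p_start)):
        for j in range(i, min(i + max_len, len(p_end))):
            score = p_start[i] * p_end[j]
            if score > best_score:
                best_score, best_span = score, (i, j)
    return best_span, best_score

# Toy 5-token context: the most likely answer covers tokens 1..2.
p_start = np.array([0.10, 0.60, 0.10, 0.10, 0.10])
p_end   = np.array([0.05, 0.10, 0.70, 0.10, 0.05])
print(decode_span(p_start, p_end))  # ((1, 2), 0.42)
```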

III-B Accuracy-Based Weighted Ensemble

As stated in the previous section, most studies on domain adaptation in the MRC task focus on transferring knowledge from a source domain to a desired target domain using some data from the target (at least raw passages). The resulting models are fragile against domain changes and are not domain independent. In this study, we propose a simple zero-shot method to create a model that is robust against domain changes. The method, motivated by the work of Large et al. [18], leverages several base models, a weight estimation module, and an ensemble module to generate the final prediction. The proposed framework is shown in Figure 1. Instead of adapting a model to the new domain, the proposed method aggregates several base models' predictions, which is not only low cost compared to previous approaches but can also lead to more stability against domain changes.

The base models have similar structures but are trained on separate training datasets ($D_1$ to $D_n$). In the weight estimation module, another set of datasets ($D'_1$ to $D'_k$) is used to estimate the accuracy of the base models as their out-of-domain weights. In the test phase, the prediction for an out-of-domain sample $x$ is obtained as a weighted ensemble of the base models' predictions:

$$p(x) = \sum_{j=1}^{n} w_j^{\alpha} \, p_j(x) \qquad (2)$$

where $w_j$ is the $j$-th model's weight, $p_j(x)$ is the prediction of the $j$-th model on the target sample $x$, and $\alpha$ is the only hyper-parameter of this method. The weight estimation module calculates $w_j$ as follows:

$$w_j = \mathrm{acc}_j \qquad (3)$$

where $\mathrm{acc}_j$ is the accuracy of the $j$-th model on the set of samples from $D'_1$ to $D'_k$.
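
To make the two modules concrete, here is a minimal sketch under our reading of Equations (2) and (3); it assumes each base model's prediction is a probability vector over context tokens (Section III-A) and that per-sample accuracy scores on the weight-estimation sets have already been computed. Renormalizing the weights to sum to one is our own detail and may differ from the paper.

```python
import numpy as np

def estimate_weights(per_sample_scores):
    """Equation (3): the weight w_j of base model j is its mean
    accuracy (e.g., F1) over held-out samples drawn from D'_1..D'_k."""
    return np.array([np.mean(scores) for scores in per_sample_scores])

def weighted_ensemble(weights, prob_vectors, alpha=2):
    """Equation (2): combine the base models' probability vectors,
    weighting model j by w_j ** alpha (alpha is the method's only
    hyper-parameter) and renormalizing the weights to sum to one."""
    w = weights ** alpha
    w = w / w.sum()
    return w @ np.stack(prob_vectors)  # (n,) @ (n, L) -> (L,)

# Toy example: 3 base models, a 4-token context, start distributions.
weights = estimate_weights([[0.50, 0.60], [0.30, 0.20], [0.80, 0.90]])
starts = [np.array([0.7, 0.1, 0.1, 0.1]),
          np.array([0.1, 0.7, 0.1, 0.1]),
          np.array([0.6, 0.2, 0.1, 0.1])]
p_start = weighted_ensemble(weights, starts)  # the ensembled distribution
```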

[Fig. 1: The framework of the proposed method. In the train phase, the base models are trained and the weight estimation module computes their out-of-domain weights; in the test phase, the ensemble module aggregates the base models' predictions into the final prediction.]

IV Experiments

IV-A Datasets


Group | Dataset            | Question        | Context          | Train   | Test
1     | SQuAD              | Crowdsourced    | Wikipedia        | 86,588  | 10,507
1     | NewsQA             | Crowdsourced    | News             | 74,160  | 4,212
1     | Natural Questions  | Search logs     | Wikipedia        | 104,071 | 12,836
1     | DROP*              | Crowdsourced    | Wikipedia        | 77,409  | 1,503
1     | DuoRC*             | Crowdsourced    | Movie plots      | 60,721  | 1,501
2     | TriviaQA           | Trivia          | Web snippets     | 61,688  | 7,785
2     | HotpotQA           | Crowdsourced    | Wikipedia        | 72,928  | 5,904
2     | SearchQA           | Jeopardy        | Web snippets     | 117,384 | 16,980
3     | RACE               | Domain experts  | Examinations     | -       | 674
3     | TextbookQA         | Domain experts  | Textbook         | -       | 1,503
3     | BioASQ             | Domain experts  | Science articles | -       | 1,504
3     | RelationExtraction | Synthetic       | Wikipedia        | -       | 2,948

* The MRQA shared task does not provide these train sets, so we used the versions from the MultiQA project [5].

TABLE I: The datasets used in our experiments.

In this paper, we use 12 MRC datasets with different sizes and domains; detailed information is shown in Table I. The datasets are configured following the MRQA shared task [4] and are split into three groups, used respectively for training the base models, estimating the weights, and testing the final model.

IV-B Experimental Results

We performed three sets of experiments in this study. First, we trained base models with the simple and popular MRC model BiDAF [19] on the group 1 datasets and evaluated their in-domain and out-of-domain accuracies. The AllenNLP library (https://github.com/allenai/allennlp-reading-comprehension) was used for training the base models. As shown in Table II, in almost all datasets the best results are obtained when the source and target domains are the same, and accuracy drops significantly on unseen datasets. The evaluation measure is the F1 score, the harmonic mean of precision and recall between the predicted answer and the ground truth at the word level.
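
For reference, a compact version of this word-level F1 (the standard SQuAD-style metric; real evaluation scripts additionally lowercase the strings and strip punctuation and articles before tokenizing) might look like:

```python
from collections import Counter

def f1_score(prediction: str, ground_truth: str) -> float:
    """Word-level F1: the harmonic mean of precision and recall over
    the tokens shared by the predicted and ground-truth answers."""
    pred_tokens = prediction.split()
    gt_tokens = ground_truth.split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

# Two of three tokens overlap on each side -> F1 = 2/3.
assert abs(f1_score("the cat sat", "cat sat down") - 2 / 3) < 1e-9
```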


train \ test | SQuAD | NewsQA | NQ    | DROP  | DuoRC
SQuAD        | 77.83 | 43.89  | 34.86 |  5.79 | 39.45
NewsQA       | 53.22 | 51.12  | 31.72 | 11.44 | 32.88
NQ           | 38.83 | 23.88  | 66.04 | 13.88 | 19.90
DROP         | 17.41 | 10.65  |  8.40 | 79.90 |  9.98
DuoRC        | 37.06 | 22.98  |  7.10 |  5.91 | 35.48

TABLE II: The F1 score of the BiDAF model trained and tested on different datasets. The Natural Questions dataset is abbreviated as NQ.

The next experiment, presented in Table III, investigates the proposed accuracy-based weighted ensemble method, which combines the base models' outputs according to Equation 2. Each dataset from group 2 (unseen during training) is used as the train set of the weight estimation module. For simplicity and speed, we used a subset of only 5,000 train samples from each dataset. The hyper-parameter α is chosen from 1 to 4, using the out-of-fold predictions over the train set as the validation set. We also compare this method with the fine-tuning approach, where the best base model is fine-tuned on an unseen dataset and evaluated on the other datasets to measure its out-of-domain accuracy. As shown in Table III, the proposed weighted ensemble method obtains the highest accuracies. Notably, the results show that the fine-tuning approach cannot improve the model's performance on unseen datasets. The simple ensemble method in this table is the arithmetic mean of the base models' outputs.
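
A sketch of how this search for α could be implemented under our reading of the paper: grid-search over {1, 2, 3, 4}, scoring each candidate ensemble on the out-of-fold validation predictions. The evaluate callback here is hypothetical, standing in for a routine that maps ensembled distributions to an F1 score against the validation answers.

```python
import numpy as np

def select_alpha(weights, oof_preds, evaluate, alphas=(1, 2, 3, 4)):
    """Grid-search the exponent alpha of Equation (2).

    weights   : (n,) held-out accuracies of the n base models
    oof_preds : (n, num_samples, L) out-of-fold start (or end)
                distributions of each base model on the validation set
    evaluate  : callable mapping the (num_samples, L) ensembled
                distributions to a scalar score such as F1
    """
    best_alpha, best_score = alphas[0], -1.0
    for alpha in alphas:
        w = weights ** alpha
        w = w / w.sum()
        ensembled = np.tensordot(w, oof_preds, axes=1)
        score = evaluate(ensembled)
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha
```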

model             | train    | HotpotQA | SearchQA | TriviaQA
base models       | SQuAD    | 43.30    | 19.27    | 40.94
                  | NewsQA   | 37.31    | 14.69    | 31.79
                  | NQ       | 20.10    | 13.35    | 18.04
                  | DROP     | 12.28    |  2.56    | 12.55
                  | DuoRC    | 29.63    | 14.43    | 28.72
fine-tune         | HotpotQA | -        | 18.15    | 35.49
                  | SearchQA | 36.42    | -        | 36.99
                  | TriviaQA | 36.43    | 21.63    | -
simple ensemble   | -        | 42.41    | 14.84    | 40.27
weighted ensemble | HotpotQA | -        | 21.97    | 43.40
                  | SearchQA | 45.81    | -        | 43.26
                  | TriviaQA | 46.23    | 19.95    | -

TABLE III: The F1 score of the base models, the fine-tuning approach, the simple ensemble method, and the proposed weighted ensemble method. Columns are test datasets.
model             | train   | RACE  | TextbookQA | BioASQ | RelationExtraction
base models       | SQuAD   | 26.90 | 33.19      | 36.40  | 64.21
                  | NewsQA  | 21.75 | 23.03      | 24.58  | 44.96
                  | NQ      | 15.32 | 21.36      | 16.93  | 39.77
                  | DROP    |  6.47 |  3.24      | 10.51  | 13.97
                  | DuoRC   |  9.93 | 14.39      |  8.40  | 23.18
fine-tune         | Group 2 | 17.71 | 22.63      | 26.80  | 60.14
weighted ensemble | Group 2 | 27.15 | 34.69      | 38.68  | 65.39

TABLE IV: The F1 score of the base models, the fine-tuning approach, and the proposed weighted ensemble method trained on a combination of all group 2 datasets and tested on each group 3 dataset.

The last experiment explores the effect of using multiple datasets (group 2) simultaneously for weight estimation in the proposed method, as well as in the fine-tuning approach; performance is tested on the group 3 datasets. To this end, we randomly select 5,000 samples from each dataset in group 2, for a total of 15,000 samples, and estimate the models' weights with the proposed module. The results are shown in Table IV. Despite using multiple datasets, the accuracy of the fine-tuning approach is still fragile on unseen datasets, while the weighted ensemble method remains stable.

According to these experiments, despite its simplicity, the proposed weighted ensemble method obtains robust results on out-of-domain data. The lack of a direct dependence on any one data distribution appears to be one of the factors behind this robustness. In addition, estimating the base models' weights requires far less data than training or fine-tuning an MRC model with many parameters. Together, these results demonstrate the method's strong capability for robust generalization on unseen data, and they can serve as a starting point for future research in this area.

V Conclusion and Future Work

In this paper, we investigated the generalization capability of MRC models on out-of-domain datasets and proposed a simple weighted ensemble method for robust generalization on unseen datasets. We compared the robustness of our method with the fine-tuning approach, in which the base models are fine-tuned on one or more out-of-domain datasets and tested on the others.

In the experiments, we used five base models trained on separate datasets. First, we explored the generalization capability of the base models on out-of-domain datasets; the results confirmed the poor out-of-domain performance of MRC models. We then evaluated the proposed method and the fine-tuning approach trained on one or multiple unseen datasets. The results indicated that our method leads to more robust generalization, with accuracy on unseen datasets better than that of all base models and the fine-tuning approach.

The advantages of the proposed method are its simple implementation, its flexibility in adding new base models, and the lack of a direct dependence on specific test data, which leads to more stable results on out-of-domain data.

For future work, we plan to explore more sophisticated methods for the weighting module, such as sample-based weighting, which considers each input sample's features when weighting the different base models.

References

  • [1] D. Chen, A. Fisch, J. Weston, and A. Bordes, "Reading Wikipedia to answer open-domain questions," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017, pp. 1870–1879.
  • [2] W. Yang, Y. Xie, A. Lin, X. Li, L. Tan, K. Xiong, M. Li, and J. Lin, "End-to-end open-domain question answering with BERTserini," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), 2019, pp. 72–77.
  • [3] L. Frermann, D. Marcheggiani, R. Blanco, and L. Màrquez, "Book QA: Stories of challenges and opportunities," in Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 2019, pp. 78–85.
  • [4] A. Fisch, A. Talmor, R. Jia, M. Seo, E. Choi, and D. Chen, "MRQA 2019 shared task: Evaluating generalization in reading comprehension," in Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 2019, pp. 1–13.
  • [5] A. Talmor and J. Berant, "MultiQA: An empirical investigation of generalization and transfer in reading comprehension," in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019, pp. 4911–4921.
  • [6] Y. Cao, M. Fang, B. Yu, and J. T. Zhou, "Unsupervised domain adaptation on reading comprehension," in Proceedings of the AAAI Conference on Artificial Intelligence, 2020, pp. 7480–7487.
  • [7] Y. Chung, H. Lee, and J. Glass, "Supervised and unsupervised transfer learning for question answering," in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, 2018, pp. 1585–1594.
  • [8] K. Nishida, K. Nishida, I. Saito, H. Asano, and J. Tomita, "Unsupervised domain adaptation of language models for reading comprehension," in Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020), 2020, pp. 5392–5399.
  • [9] H. Wang, Z. Gan, X. Liu, J. Liu, and H. Wang, "Adversarial domain adaptation for machine reading comprehension," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 2510–2520.
  • [10] K. Van Nguyen, K. V. Tran, S. T. Luu, A. G. Nguyen, and N. L. Nguyen, "Enhancing lexical-based approach with external knowledge for Vietnamese multiple-choice machine reading comprehension," IEEE Access, pp. 201404–201417, 2020.
  • [11] M. Hu, F. Wei, Y. Peng, Z. Huang, N. Yang, and D. Li, "Read + Verify: Machine reading comprehension with unanswerable questions," in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, pp. 6529–6537.
  • [12] S. Liu, X. Zhang, S. Zhang, H. Wang, and W. Zhang, "Neural machine reading comprehension: Methods and trends," Applied Sciences, p. 3698, 2019.
  • [13] T. Takahashi, M. Taniguchi, T. Taniguchi, and T. Ohkuma, "CLER: Cross-task learning with expert representation to generalize reading and understanding," in Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 2019, pp. 183–190.
  • [14] M. Wu, N. Moosavi, A. Rücklé, and I. Gurevych, "Improving QA generalization by concurrent modeling of multiple biases," arXiv:2010.03338, 2020.
  • [15] M. Guo, Y. Yang, D. Cer, Q. Shen, and N. Constant, "MultiReQA: A cross-domain evaluation for retrieval question answering models," arXiv:2005.02507, 2020.
  • [16] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019, pp. 4171–4186.
  • [17] R. Baradaran, R. Ghiasi, and H. Amirkhani, "A survey on machine reading comprehension systems," arXiv:2001.01582, 2020.
  • [18] J. Large, J. Lines, and A. Bagnall, "A probabilistic classifier ensemble weighting scheme based on cross-validated accuracy estimates," Data Mining and Knowledge Discovery, vol. 33, pp. 1674–1709, 2019.
  • [19] M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi, "Bidirectional attention flow for machine comprehension," in Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.