Reading comprehension (RC) is the task of answering a question given a context passage. Closely related to question answering (QA), RC can be seen as a module in the full QA pipeline: it assumes a relevant context passage has already been retrieved, and the goal is to produce an answer based on that passage. In recent years, the creation of large-scale open-domain comprehension data sets [27, 15, 18, 5, 17, 8] has spurred the development of a host of end-to-end neural comprehension systems with promising results.
In spite of these successes, it is difficult to train modern comprehension systems on narrow-domain data (e.g. biomedical), as these models often have a large number of parameters. A better approach is to transfer knowledge via fine-tuning, i.e. by first pre-training the model on data from a large source domain and then continuing to train it with examples from the small target domain. This is an effective strategy, although a fine-tuned model often performs poorly when re-applied to the source domain, a phenomenon known as catastrophic forgetting [4, 26, 7, 19]. This is generally not an issue if the goal is to optimise purely for the target domain, but in real-world applications where model robustness is an important quality, over-optimising for a development set often leads to unexpectedly poor performance on test cases in the wild.
In this paper, we explore strategies to reduce forgetting in comprehension systems during domain adaptation. Our goal is to preserve the source domain's performance as much as possible, while keeping the target domain's performance optimal and assuming no access to the source data. We experiment with a number of auxiliary penalty terms to regularise the fine-tuning process for three modern RC models: QANet, decaNLP and BERT. We observe that combining different auxiliary penalty terms yields the best performance, outperforming benchmark methods that require source data.
Technically speaking, the methods we propose are not limited to domain transfer for reading comprehension; we also show that the methodology can be used for transferring to entirely different tasks. That said, we focus on comprehension here because it is a practical problem in real-world applications, where the target domain often has a small number of QA pairs and over-fitting occurs easily when we fine-tune based on a small development set. In this scenario, developing a robust model is as important as achieving optimal development performance.
To demonstrate the applicability of our approach, we apply topic modelling to msmarco, a comprehension data set based on internet search queries, and collect examples that belong to a number of salient topics, producing 6 small-to-medium-sized RC data sets for the following domains: biomedical, computing, film, finance, law and music. We focus on extractive RC, where the answer is a contiguous sub-span of the context passage (although RC with free-form answers is arguably a more challenging and interesting task, its evaluation is generally more difficult). Scripts to generate the data sets are available at: https://github.com/ibm-aur-nlp/domain-specific-QA.
Most large comprehension data sets are open-domain because non-experts can be readily recruited via crowdsourcing platforms to collect annotations. Development of domain-specific RC data sets, on the other hand, is costly due to the need for subject-matter experts, and as such these data sets are typically limited in size. Examples include bioasq in the biomedical domain, which has fewer than 3K QA pairs, orders of magnitude smaller than most large-scale open-domain data sets [15, 18, 5, 8].
Prior work explores supervised domain adaptation for reading comprehension by pre-training a model first on large open-domain comprehension data and fine-tuning it further on biomedical data. This approach improves the biomedical domain's performance substantially compared to training the model from scratch. At the same time, performance on the source domain decreases dramatically due to catastrophic forgetting [4, 14, 20].
This issue of catastrophic forgetting is less of a problem when data from multiple domains or tasks is present during training. For example, decaNLP is trained on 10 tasks simultaneously, all cast as QA problems, and forgetting is minimal. For multi-domain adaptation, prior work proposes using a K+1 model to capture domain-general patterns shared by the K domains, resulting in a more robust model. Using multi-task learning to tackle catastrophic forgetting is effective and produces robust models. The drawback, however, is that when training for each new domain or task, data from the previous domains or tasks has to be available.
Several studies present methods to reduce forgetting with limited or no access to previous data [21, 10, 7, 22, 19]. Inspired by synaptic consolidation, elastic weight consolidation selectively penalises parameter change during fine-tuning: significant updates to parameters deemed important to the source task incur a large penalty. Gradient episodic memory (gem) allows beneficial transfer of knowledge from previous tasks. More specifically, a subset of data from previous tasks is stored in an episodic memory, against which reference gradient vectors are calculated, and the angle between these and the gradient vector for the current task is constrained to lie between -90° and 90° (i.e. a non-negative dot product). A later method combines gem with optimisation-based meta-learning to overcome forgetting. Among these three methods, only elastic weight consolidation assumes zero access to previous data. In comparison, the latter two rely on access to a memory storing data from previous tasks, which is not always feasible in real-world applications (e.g. due to data privacy concerns).
We use squad v1.1  as the source domain data for pre-training the comprehension model. It contains over 100K extractive (context, question, answer) triples with only answerable questions.
To create the target domain data, we leverage msmarco, a large RC data set where questions are sampled from Bing™ search queries and answers are manually generated by users based on passages in web documents. We apply an LDA topic model to passages in msmarco and learn 100 topics (when collecting the passages for topic modelling, we include only those selected as useful for answering a query, i.e. is_selected; we tokenise the passages with Stanford CoreNLP and use MALLET for topic modelling). Given the topics, we label them and select 6 salient domains: biomedical (ms-bm), computing (ms-cp), film (ms-fm), finance (ms-fn), law (ms-lw) and music (ms-ms). A QA pair is categorised into one of these domains if its passage's top topic belongs to that domain. We create multiple (context, question, answer) training examples if a QA pair has multiple contexts (again considering only context passages marked as useful by annotators, i.e. is_selected), and filter them to keep only extractive examples (a (context, question, answer) triple is defined to be extractive if the answer has a case-insensitive match in the context).
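The extractive filtering step can be sketched as follows (a minimal illustration; the function names are ours, not from the released scripts):

```python
def is_extractive(context, answer):
    """A triple is kept only if the answer appears verbatim
    (case-insensitively) somewhere in the context passage."""
    return answer.lower() in context.lower()

def filter_extractive(triples):
    """triples: iterable of (context, question, answer) tuples."""
    return [t for t in triples if is_extractive(t[0], t[2])]
```

For example, a triple whose answer is "Paris" survives the filter when the context contains "paris", regardless of case, while a paraphrased answer is discarded.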
In addition to the msmarco data sets, we also experiment with a real biomedical comprehension data set: bioasq . Each question in bioasq is associated with a set of snippets as context, and the snippets are single sentences extracted from a scientific publication’s abstract/title in PubMed Central™. There are four types of questions: factoid, list, yes/no, and summary. As our focus is on extractive RC, we use only the extractive factoid questions from bioasq. As before, we create multiple training examples for QA pairs with multiple contexts.
For each target domain, we split the examples into 70%/15%/15% training/development/test partitions (partitioning is done at the question level to ensure the same question does not appear in more than one partition). We present statistics for the data sets in Table 1.
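The question-level split can be sketched as below (a hypothetical helper; the 70/15/15 proportions follow the text, everything else is an illustrative assumption):

```python
import random

def split_by_question(examples, seed=0):
    """Partition (context, question, answer) examples 70/15/15 so that
    all examples sharing a question land in the same partition."""
    questions = sorted({q for _, q, _ in examples})
    rng = random.Random(seed)
    rng.shuffle(questions)
    n = len(questions)
    train_q = set(questions[:int(0.7 * n)])
    dev_q = set(questions[int(0.7 * n):int(0.85 * n)])
    splits = {"train": [], "dev": [], "test": []}
    for ex in examples:
        q = ex[1]
        key = "train" if q in train_q else ("dev" if q in dev_q else "test")
        splits[key].append(ex)
    return splits
```

Shuffling unique questions rather than examples guarantees that a question with multiple contexts never leaks across partitions.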
We first pre-train a general-domain RC model on squad, our source domain. Given the pre-trained model, we then perform fine-tuning (finetune) on the msmarco and bioasq data sets: 7 target domains in total. By fine-tuning we mean taking the pre-trained model parameters as initial parameters and updating them based on data from the new domain. To reduce forgetting on the source domain (squad), we experiment with incorporating auxiliary penalty terms (e.g. L2 distance between new and old parameters) into the standard cross-entropy loss to regularise the fine-tuning process.
We explore 3 modern RC models in our experiments: QANet, decaNLP and BERT. QANet is a Transformer-based comprehension model, where the encoder consists of stacked convolution and self-attention layers. The objective of the model is to predict the positions of the starting and ending indices of the answer words in the context. decaNLP is a recurrent network-based comprehension model trained on ten NLP tasks simultaneously, all cast as question-answer problems. Much of decaNLP's flexibility is due to its pointer-generator network, which allows it to generate words by extracting them from the question or context passages, or by drawing them from a vocabulary. BERT is a deep bi-directional encoder model based on Transformers. It is pre-trained on a large corpus in an unsupervised fashion using masked language model and next-sentence prediction objectives. To apply BERT to a specific task, the standard practice is to add output layers on top of the pre-trained BERT and fine-tune the whole model for the task. In our case, for RC, 2 output layers are added: one for predicting the start index and another for the end index.
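The two span-prediction output layers can be sketched with numpy as follows (a minimal sketch; `H`, `w_start` and `w_end` are hypothetical names for the encoder's token representations and the two output-layer weight vectors, and in training the logits would feed a cross-entropy loss rather than a hard argmax):

```python
import numpy as np

def predict_span(H, w_start, w_end):
    """H: (seq_len, hidden) token representations from the encoder.
    w_start, w_end: (hidden,) weights of the start/end output layers.
    Returns the argmax start and end indices over the sequence."""
    start_logits = H @ w_start   # one score per token for "answer starts here"
    end_logits = H @ w_end       # one score per token for "answer ends here"
    return int(np.argmax(start_logits)), int(np.argmax(end_logits))
```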
This transfer learning strategy produces state-of-the-art performance on a range of NLP tasks. For RC specifically, BERT (BERT-Large) achieved an F1 score of 93.2 on squad, outperforming human performance by 2 points.
Note that the BERT and QANet RC models are extractive models (the goal is to predict 2 indices), while decaNLP is a generative model (the goal is to generate the correct word sequence). Also, unlike QANet and decaNLP, BERT is not designed specifically for RC. It represents a growing trend in the literature where large models are pre-trained on big corpora and further adapted to downstream tasks.
To reduce the forgetting of source domain knowledge, we introduce auxiliary penalty terms to regularise the fine-tuning process. We favour this approach as it does not require storing data samples from the source domain. In general, there are two types of penalty: selective and non-selective. The former penalises the model when certain parameters diverge significantly from the source model, while the latter uses a pre-defined distance function to measure the change of all parameters.
For selective penalty, we use elastic weight consolidation (EWC), which weighs the importance of a parameter based on its gradient when training the source model. For non-selective penalty, we explore L2 and cosine distance. We detail the methods below.
Given a source and target domain, we pre-train the model first on the source domain and fine-tune it further on the target domain. We denote the optimised parameters of the source model as $\theta^*$ and those of the target model as $\theta$. For vanilla fine-tuning (finetune), the loss function is:

$$\mathcal{L}_{finetune}(\theta) = \mathcal{L}_{CE}(\theta)$$

where $\mathcal{L}_{CE}$ is the cross-entropy loss.
For non-selective penalty, we measure the change of parameters based on a distance function (treating all parameters as equally important), and add it as a loss term in addition to the cross-entropy loss. One distance function we test is the L2 distance:

$$\mathcal{L}_{l2}(\theta) = \mathcal{L}_{CE}(\theta) + \lambda_{l2} \sum_i (\theta_i - \theta^*_i)^2$$

where $\theta^*$ denotes the source model's parameters and $\lambda_{l2}$ is a scaling hyper-parameter to weigh the contribution of the penalty. Henceforth all scaling hyper-parameters are denoted using $\lambda$.
We also experiment with cosine distance, based on the idea that we want to encourage the parameters to point in the same direction after fine-tuning. In this case, we group parameters by the variables they are defined in, and measure the cosine distance between variables:

$$\mathcal{L}_{cd}(\theta) = \mathcal{L}_{CE}(\theta) + \lambda_{cd} \sum_v \left(1 - \cos(\theta_v, \theta^*_v)\right)$$

where $\theta_v$ denotes the vector of parameters belonging to variable $v$.
For selective penalty, EWC uses the diagonal Fisher matrix to measure the importance of each parameter in the source domain. Unlike non-selective penalty, where all parameters are considered equally important, EWC provides a mechanism to weigh the update of individual parameters:

$$\mathcal{L}_{ewc}(\theta) = \mathcal{L}_{CE}(\theta) + \lambda_{ewc} \sum_i F_i (\theta_i - \theta^*_i)^2$$

where $F_i = \left(\frac{\partial \mathcal{L}_{CE}(f(x; \theta^*), y)}{\partial \theta^*_i}\right)^2$ is the squared gradient of the parameter update in the source domain, with $f$ representing the model and $x$/$y$ the data/label from the source domain.
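A sketch of how the diagonal Fisher values might be estimated (a hypothetical helper, assuming per-example gradients of the source-domain loss have already been collected):

```python
import numpy as np

def diagonal_fisher(per_example_grads):
    """per_example_grads: (n_examples, n_params) array, each row the
    gradient of the source-domain cross-entropy loss for one example.
    The diagonal Fisher is estimated as the mean squared gradient."""
    g = np.asarray(per_example_grads)
    return np.mean(g ** 2, axis=0)
```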
In preliminary experiments, we notice that EWC tends to assign most of the weight to a small subset of parameters. Figure 0(a) plots the mean Fisher values for all variables in QANet after it was trained on squad, the source domain. We see that only the last two variables have significant weights (the rest receive only a tiny amount). We therefore propose a new variation of EWC, normalised EWC, which min-max normalises the weights within each variable, bringing up the weights of parameters in the other variables (Figure 0(b)):

$$\mathcal{L}_{ewcn}(\theta) = \mathcal{L}_{CE}(\theta) + \lambda_{ewcn} \sum_i \frac{F_i - \min(F_{v(i)})}{\max(F_{v(i)}) - \min(F_{v(i)})} (\theta_i - \theta^*_i)^2$$

where $F_{v(i)}$ denotes the set of Fisher values for the variable $v$ to which parameter $i$ belongs.
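The per-variable min-max normalisation can be sketched as follows (a minimal illustration; `fisher_by_variable` is a hypothetical dict mapping each variable name to its parameters' Fisher values):

```python
def normalise_fisher(fisher_by_variable, eps=1e-12):
    """Min-max normalise Fisher values within each variable, so every
    variable contributes parameter weights in the range [0, 1]."""
    normed = {}
    for name, vals in fisher_by_variable.items():
        lo, hi = min(vals), max(vals)
        normed[name] = [(v - lo) / (hi - lo + eps) for v in vals]
    return normed
```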
Among the four auxiliary penalty terms, L2 and EWC were proposed in previous work, while cosine distance and normalised EWC are novel penalty terms. Observing that EWC and normalised EWC are essentially weighted L2 distances, while cosine distance focuses on the angle between variables (and ignores the magnitude), we propose combining them all, as these different distance metrics may complement each other in regularising the fine-tuning process:

$$\mathcal{L}_{all}(\theta) = \mathcal{L}_{CE}(\theta) + \lambda_{ewcn} \sum_i \hat{F}_i (\theta_i - \theta^*_i)^2 + \lambda_{cd} \sum_v \left(1 - \cos(\theta_v, \theta^*_v)\right) + \lambda_{l2} \sum_i (\theta_i - \theta^*_i)^2$$

where $\hat{F}_i$ denotes the min-max normalised Fisher value of parameter $i$.
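Putting the pieces together, the combined penalty could be sketched as below (a simplified numpy version over a dict of variables; the names and flat per-variable representation are illustrative assumptions, not the actual implementation):

```python
import numpy as np

def combined_penalty(theta, theta_src, fisher_norm, lam_ewcn, lam_cd, lam_l2):
    """theta, theta_src, fisher_norm: dicts mapping a variable name to a
    1-D parameter (or normalised-Fisher) array. Returns the auxiliary
    loss that is added to the cross-entropy term."""
    ewcn = sum(float(np.sum(fisher_norm[v] * (theta[v] - theta_src[v]) ** 2))
               for v in theta)
    l2 = sum(float(np.sum((theta[v] - theta_src[v]) ** 2)) for v in theta)
    cd = sum(1.0 - float(np.dot(theta[v], theta_src[v]))
             / (np.linalg.norm(theta[v]) * np.linalg.norm(theta_src[v]))
             for v in theta)
    return lam_ewcn * ewcn + lam_cd * cd + lam_l2 * l2
```

When the fine-tuned parameters equal the source parameters, all three terms vanish, so the penalty only activates as the model drifts away from the source model.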
We test 3 comprehension models: QANet, decaNLP and BERT. To pre-process the data, we use the models' original tokenisation methods (spaCy (https://spacy.io/), revtok (https://github.com/jekbradbury/revtok) and WordPiece for QANet, decaNLP and BERT, respectively). For BERT, we use the smaller pre-trained model with 110M parameters (BERT-Base).
Fine-Tuning with Auxiliary Penalty
We first pre-train QANet and decaNLP on squad, tuning their hyper-parameters based on its development partition (we tune dropout, batch size, learning rate and the number of training iterations, and keep other hyper-parameters in their default configuration). For BERT, we fine-tune the released pre-trained model on squad by adding 2 output layers to predict the start/end indices (we made no changes to the hyper-parameters). We initialise the word vectors of QANet and decaNLP with pre-trained GloVe embeddings and keep them fixed during training. We also freeze the input embeddings for BERT (the input embeddings of BERT are a sum of token, segment and position embeddings; we freeze only the token embeddings).
To measure performance, we use the standard macro-averaged F1 as the evaluation metric, which measures the average overlap of word tokens between the predicted and ground-truth answers (if there are multiple ground truths, the maximum F1 is taken). Our pre-trained QANet, decaNLP and BERT achieve F1 scores of 80.47, 75.50 and 87.62 respectively on the development partition of squad. Note that the test partition of squad is not released publicly, so all squad performance reported in this paper is on the development set.
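The token-overlap F1 metric can be sketched as follows (a standard implementation of squad-style F1; the function names are ours):

```python
from collections import Counter

def token_f1(prediction, ground_truths):
    """Token-overlap F1 between a predicted answer string and a list of
    ground-truth answer strings; the maximum over ground truths is taken."""
    def f1(pred, gold):
        p, g = pred.lower().split(), gold.lower().split()
        common = sum((Counter(p) & Counter(g)).values())
        if common == 0:
            return 0.0
        precision, recall = common / len(p), common / len(g)
        return 2 * precision * recall / (precision + recall)
    return max(f1(prediction, gt) for gt in ground_truths)
```

For example, predicting "the cat" against the gold answer "the cat sat" gives precision 1.0 and recall 2/3, i.e. F1 = 0.8.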
Given the pre-trained squad models, we fine-tune them on the msmarco and bioasq domains. We test vanilla fine-tuning (finetune) and 5 variants of fine-tuning with auxiliary penalty terms: (1) EWC (+ewc); (2) normalised EWC (+ewcn); (3) cosine distance (+cd); (4) L2 (+l2); and (5) combined normalised EWC, cosine distance and L2 (+all). As a benchmark, we also perform fine-tuning with gradient episodic memory (gem), noting that this approach stores a number of training examples from squad in its episodic memory.
To find the best hyper-parameter configuration, we tune it based on the development partition for each target domain. For a given domain, finetune and its variants (+ewc, +ewcn, +cd, +l2 and +all) all share the same hyper-parameter configuration (the only exceptions are the $\lambda$ scaling hyper-parameters, which we tune separately for each model). Detailed hyper-parameter settings are given in the supplementary material.
As a baseline, we train QANet, decaNLP and BERT from scratch (scratch) using the target domain data. As before, we tune their hyper-parameters based on development performance. We present the full results in Table 2.
For each target domain, we display two F1 scores: the source squad development performance ("squad"); and the target domain's test performance ("Test"). We first compare the performance of scratch and finetune. Across all domains for QANet, decaNLP and BERT, finetune substantially improves the target domain's performance compared to scratch. The largest improvement is seen in bioasq for QANet, where F1 more than doubles (from 29.83 to 65.81). Among the three RC models, BERT has the best performance for both scratch and finetune in most target domains (with a few exceptions such as ms-fn and ms-lw). Between QANet and decaNLP, we see that decaNLP tends to have better scratch performance but the pattern is reversed for finetune, where QANet produces higher F1 than decaNLP in all domains except ms-lw.
In terms of squad performance, we see that finetune degrades it considerably compared to its pre-trained performance. The average drop across all domains compared to their pre-trained performance is 20.30, 15.30 and 15.07 points for QANet, decaNLP and BERT, respectively. For most domains, F1 scores drop by 10-20 points, while for ms-cp the performance is much worse for QANet, with a drop of 41.34. Interestingly, we see BERT suffers from catastrophic forgetting just as much as the other models, even though it is a larger model with orders of magnitude more parameters.
We now turn to the fine-tuning results with auxiliary penalties (+ewc, +ewcn, +cd and +l2). Between +ewc and +ewcn, the normalised version consistently produces better recovery for the source domain (one exception is ms-ms for decaNLP), demonstrating that normalisation helps. Among +ewcn, +cd and +l2, performance varies depending on the domain and there is no clear winner. Combining all of these losses (+all), however, produces the best squad performance for all models across most domains. The average recovery (+all - finetune) of squad performance is 4.54, 3.93 and 8.77 F1 points for QANet, decaNLP and BERT respectively, implying that BERT benefits from these auxiliary penalties more than decaNLP and QANet.
Compared to gem, +all preserves squad performance substantially better: on average 2.86 points more for QANet and 5.57 points more for BERT. For decaNLP, the improvement is minute (0.02); gem generally has the upper hand for most domains, but the advantage is cancelled out by its poor performance in one domain (ms-fn). As gem requires storing training data from the source domain (squad training examples in this case), the auxiliary penalty techniques are more favourable for real-world applications.
Does adding these penalty terms harm target performance? Looking at the "Test" performance of finetune and +all, we see that they are generally comparable. The average performance difference (+all - finetune) is 0.23, 0.42 and 0.34 for QANet, decaNLP and BERT respectively, implying that it does not (in fact, it has a small positive net impact for QANet and BERT). In some cases it improves target performance substantially, e.g. in bioasq for BERT, target performance improves from 71.62 to 76.93 when +all is applied.
Based on these observations, we see benefits in incorporating these penalties when adapting comprehension models, as doing so produces a more robust model that preserves its source performance (to a certain extent) without trading off its target performance. In some cases, it can even improve the target performance.
In the previous experiments, we fine-tuned the pre-trained model on each domain independently. With continuous learning, we investigate the performance of finetune and four of its variants (+l2, +cd, +ewcn and +all) when they are applied to a series of fine-tuning steps over multiple domains. For the remainder of the experiments in this paper, we test only with decaNLP.
When computing the penalties, we treat the last trained model as the source model (the implication is that we have to re-compute the Fisher matrix for the last domain before we fine-tune the model on a new domain). Figure 2 shows the performance of the models on the development set of squad and the test sets of ms-bm and ms-cp as they are adapted to ms-bm, ms-cp, ms-fn, ms-ms, ms-fm and ms-lw in sequence (for hyper-parameters, we choose a configuration that is generally good for most domains). We exclude plots for the later domains as they are similar to that of ms-cp.
Including the pre-training on squad, all models are trained for a total of 170K iterations: squad from 0–44K, ms-bm from 45K–65K, ms-cp from 66K–86K, ms-fn from 87K–107K, ms-ms from 108K–128K, ms-fm from 129K–149K and ms-lw from 150K–170K.
We first look at the recovery for squad in Figure 1(a). +all (black line; legend in Figure 1(c)) stays well above all other models after a series of fine-tuning steps, followed by +ewcn and +cd, while finetune produces the most forgetting. At the end of the continuous learning, +all recovers more than 5 F1 points compared to finetune. We see a similar trend for ms-bm (Figure 1(b)), although the difference is less pronounced. The largest gap between finetune and +all occurs when we fine-tune for ms-fm (iterations 129K–149K). Note that we are not trading off target performance when we first tune for ms-bm (iterations 45K–65K), where finetune and +all produce comparable F1.
For ms-cp (Figure 1(c)), we first notice that there is considerably less forgetting overall (ms-cp performance ranges from 65–75 F1, while squad performance in Figure 1(a) ranges from 45–75 F1). This is perhaps unsurprising, as the model is already generally well-tuned (e.g. it takes fewer iterations to reach optimal performance for ms-cp than for ms-bm and squad). Most models perform similarly here; +all produces stronger recovery when fine-tuning on ms-fm (129K–149K) and ms-lw (150K–170K). At the end of the continuous learning, the gap between all models is around 2 F1 points.
In decaNLP, curriculum learning was used to train models for different NLP tasks. More specifically, decaNLP was first pre-trained on squad and then fine-tuned on 10 tasks (including squad) jointly. During the training process, each minibatch consists of examples from a particular task, and they are sampled in an alternating fashion among different tasks.
In situations where we do not have access to training data from previous tasks, catastrophic forgetting occurs when we adapt the model for a new task. In this section, we test our methods for task transfer (as opposed to domain transfer in previous sections). To this end, we experiment with decaNLP and monitor its squad
performance when we fine-tune it for other tasks, including semantic role labelling (SRL), summarisation (SUM), semantic parsing (SP), machine translation (MT) and sentiment analysis (SA). Note that we are not doing joint or continuous learning here: we take the pre-trained model (on squad) and adapt it to each new task independently. Descriptions of these tasks are given in the original decaNLP work.
A core novelty of decaNLP is that its design allows it to generate words by extracting them from the question, the context or its vocabulary, with this decision made by the pointer-generator network. Based on the pointer-generator analysis in the original decaNLP work, we know that the network favours generating words using: (1) the context for SRL, SUM and SP; (2) the question for SA; and (3) the vocabulary for MT.
As before, finetune serves as our baseline, and we have 5 variants with auxiliary penalty terms. Table 3 displays the F1 performance on squad and the target task; the table shares the same format as Table 2.
In terms of target task performance ("Test"), we see similar performance for all models. This mirrors our earlier observation, and shows that incorporating the auxiliary penalty terms does not harm target task or domain performance.
For the source task squad, +all produces substantial recovery for SUM, SRL, SP and SA, but not for MT. We hypothesise that this is due to the difference in nature between the target task and the source task: for SUM, SRL and SP, the output is generated by selecting words from the context, which is similar to squad; MT, on the other hand, generates output using words from the vocabulary and the question, so it is likely to be difficult to find an optimal model that performs well for both tasks.
Observing that the model tends to focus on optimising for the target domain/task in early iterations (as the penalty term has a very small value), we explore using a dynamic scale that starts at a larger value and decays over time. With simple linear decay, we found substantial improvement in +ewc for recovering squad's performance, although the results are mixed for the other penalties (particularly +ewcn). We therefore only report results based on static $\lambda$ values in this paper. That said, we contend that this might be an interesting avenue for further research, e.g. by exploring more complex decay functions.
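The linear decay we experimented with can be illustrated as follows (a minimal sketch; the parameter names are illustrative):

```python
def linear_decay(lam_start, lam_end, step, total_steps):
    """Linearly interpolate the penalty scale from lam_start down to
    lam_end over the course of fine-tuning (clamped after total_steps)."""
    frac = min(step / float(total_steps), 1.0)
    return lam_start + (lam_end - lam_start) * frac
```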
To validate the assumption made by gem, we conduct a gradient analysis for the auxiliary penalty terms. During fine-tuning, at each step $t$ we calculate the gradient cosine similarity $\cos(g_t, g_m)$, where $g_t = \nabla_\theta \mathcal{L}_{CE}(f(x_t; \theta), y_t)$, $g_m = \nabla_\theta \mathcal{L}_{CE}(f(x_m; \theta), y_m)$, $(x_m, y_m)$ is drawn from a memory containing squad examples, and $x_t$/$y_t$ is the training data/label from the current domain. We smooth the scores by averaging over every 1K steps, resulting in 20 cosine similarity values for 20K steps. Figure 3 plots the gradient cosine similarity for our models on ms-fn.
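The per-step similarity can be computed as below (a minimal sketch, where `g_task` and `g_mem` are hypothetical flattened gradient vectors for the current domain and the squad memory):

```python
import numpy as np

def grad_cosine(g_task, g_mem):
    """Cosine similarity between the current-domain gradient and the
    reference gradient computed on the squad memory."""
    g_task, g_mem = np.asarray(g_task), np.asarray(g_mem)
    return float(np.dot(g_task, g_mem)
                 / (np.linalg.norm(g_task) * np.linalg.norm(g_mem)))
```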
Curiously, our best performing model +all produces the lowest cosine similarity at most steps (the only exception is between 0–1K steps). finetune, on the other hand, maintains relatively high similarity throughout. Similar trends are found for the other domains. These observations imply that the intuition gem draws on, namely that catastrophic forgetting can be reduced by constraining the dot product between $g_t$ and $g_m$ to be positive, is perhaps not as empirically effective as one might expect, and that our auxiliary penalty methods represent an alternative (and very different) direction for preserving source performance.
To reduce catastrophic forgetting when adapting comprehension models, we explore several auxiliary penalty terms to regularise the fine-tuning process. We experiment with selective and non-selective penalties, and found that a combination of them consistently produces the best recovery for the source domain without harming its performance in the target domain. We also found similar observations when we apply our approach for adaptation to other tasks, demonstrating its general applicability. To test our approach, we develop and release six narrow domain reading comprehension data sets for the research community.
-  (2003) Latent Dirichlet allocation. Journal of Machine Learning Research 3, pp. 993–1022. Cited by: Data Set.
-  (2007) Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pp. 256–263. Cited by: Related Work.
-  (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: Introduction, Methodology.
-  (1999) Catastrophic forgetting in connectionist networks. Trends in cognitive sciences 3 (4), pp. 128–135. Cited by: Introduction, Related Work.
-  (2017) TriviaQA: a large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601–1611. Cited by: Introduction, Related Work.
-  (2016) Frustratingly easy neural domain adaptation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 387–396. Cited by: Related Work.
-  (2017) Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114, pp. 3521–3526. Cited by: Introduction, Related Work, Methodology.
-  (2018) The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics 6, pp. 317–328. Cited by: Introduction, Related Work.
-  (2019) Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics. Cited by: footnote 1.
-  (2017) Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pp. 6467–6476. Cited by: Related Work, Discussion.
-  (2014) The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pp. 55–60. Cited by: footnote 2.
-  (2002) MALLET: a machine learning for language toolkit. Note: http://mallet.cs.umass.edu Cited by: footnote 2.
-  (2018) The natural language decathlon: multitask learning as question answering. CoRR abs/1806.08730. Cited by: Introduction, Related Work, Methodology, Task Transfer, Task Transfer.
-  (1989) Catastrophic interference in connectionist networks: the sequential learning problem. In Psychology of learning and motivation, Vol. 24, pp. 109–165. Cited by: Related Work.
-  (2016) MS MARCO: A human generated machine reading comprehension dataset. CoRR abs/1611.09268. Cited by: Introduction, Introduction, Related Work, Data Set.
-  (2014) GloVe: global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Cited by: Fine-Tuning with Auxiliary Penalty.
-  (2018) Know what you don’t know: unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 784–789. Cited by: Introduction.
-  (2016) SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 2383–2392. Cited by: Introduction, Related Work, Data Set.
-  (2018) Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910. Cited by: Introduction, Related Work.
-  (2017) Representation stability as a regularizer for improved text analytics transfer learning. arXiv preprint arXiv:1704.03617. Cited by: Related Work.
-  (2017) Scalable recollections for continual lifelong learning. arXiv preprint arXiv:1711.06761. Cited by: Related Work.
-  (2018) Overcoming catastrophic forgetting with hard attention to the task. arXiv preprint arXiv:1801.01423. Cited by: Related Work.
-  (2015) An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. BMC bioinformatics 16 (1), pp. 138. Cited by: Related Work.
-  (2012) Bioasq: a challenge on large-scale biomedical semantic indexing and question answering. In 2012 AAAI Fall Symposium Series, Cited by: Data Set.
-  (2017) Attention is all you need. In Advances in Neural Information Processing Systems 30, pp. 5998–6008. Cited by: Methodology.
-  (2017) Neural domain adaptation for biomedical question answering. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, pp. 281–289. Cited by: Introduction, Related Work, Methodology.
-  (2015) WikiQA: a challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 2013–2018. Cited by: Introduction.
-  (2018) QANet: combining local convolution with global self-attention for reading comprehension. CoRR abs/1804.09541. Cited by: Introduction, Methodology.