Making a computer system understand text in context and answer questions is challenging, and it has attracted long-standing interest from the Artificial Intelligence community and the general audience. In recent years, many Machine Reading Comprehension (MRC) datasets have been published, with different genres and formats of context and questions. The context can take the form of text passages or of dialogues. The questions can be open-ended (e.g. HotPotQA), asking the system either to extract the answers as spans from the context or from external knowledge, or to abstractively summarize the answers; the questions can also ask the system to choose the best answer from multiple choices. In this note we focus on multi-choice MRC tasks, more specifically the DREAM task.
RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The average passage length is 322 words. Each question provides 4 answer options to choose from. The human ceiling performance is 94.5%.
DREAM is a much smaller reading comprehension dataset with more than 6,000 dialogues and over 10,000 questions. The average dialogue length is 86 words. Each question provides 3 answer options to choose from. The human ceiling performance is 98.6%.
1.2 Related Work
Early works on the DREAM task include feature-based GBDT and FTLM, which is based on the Transformer architecture. The top system accuracy on the DREAM leaderboard has gradually advanced to above 90 percent since the breakthrough of large pretrained Transformer-based text encoders (BERT, XLNet, RoBERTa, Albert).
Transfer learning is a widely used practice in machine learning (ML) that utilizes knowledge gained while solving one problem and applies it to a different but related problem. Using pretrained language models (LMs) such as ELMo and BERT in downstream tasks is an example of sequential transfer learning. Multi-task learning, on the other hand, learns several similar tasks simultaneously and is thereby able to share the knowledge learned among the tasks.
On the DREAM leaderboard (https://dataset.org/dream/), the recent top systems include RoBERTa-large+MMM and Albert-xxlarge+DUMA. Both systems employ model architectures composed of a Transformer-based encoder and a matching/attention mechanism between the context and the question-answer pair. RoBERTa-large+MMM additionally employs two stages of transfer learning: coarse-tuning with natural language inference (NLI) tasks and multi-task learning with multi-choice reading comprehension tasks.
We used Albert-xxlarge+DUMA as our model architecture and performed multi-task learning on top of it, as our experiments showed that this efficiently boosts performance on top of the powerful Albert-xxlarge model.
The model architecture is composed of a Transformer-based encoder, a linear-layer classifier, and one or more extra attention layers in between that model the reasoning between the context and the question-answer pair, following the DUMA architecture. We use pretrained Albert-xxlarge as the encoder and fine-tune it during training. Since the DREAM dataset is small, joint training on both the DREAM task and the RACE task gives a good boost on the DREAM task.
2.1 Model Architecture
When encoding an answer option for a question, we concatenate it with not only the question but also the context (a passage for the RACE task, a dialogue for the DREAM task) to form a single sequence, with the parts separated by the <sep> token. The sequence is fed through a Transformer-based encoder (Albert in our case), and the sequence output of the encoder is then sent to the next part of the model.
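As a concrete illustration, the sketch below builds one encoder input from a (context, question, answer-option) triple. The `build_input` and `tokenize` names are illustrative (a real system would use Albert's subword tokenizer), and the [CLS]/[SEP] markers stand in for the encoder's actual special tokens:

```python
def build_input(context, question, option, max_len=512):
    """Concatenate context, question, and one answer option into a single
    token sequence: [CLS] context [SEP] question [SEP] option [SEP]."""
    def tokenize(text):
        # Placeholder for a real subword tokenizer.
        return text.lower().split()

    ctx, q, opt = tokenize(context), tokenize(question), tokenize(option)
    # Truncate the (usually much longer) context first, so the question
    # and the answer option always survive within the length budget.
    budget = max_len - len(q) - len(opt) - 4  # 4 special tokens
    ctx = ctx[:budget]
    tokens = ["[CLS]"] + ctx + ["[SEP]"] + q + ["[SEP]"] + opt + ["[SEP]"]
    # Remember where the context ends; the extra attention layer later
    # splits the encoder output at this boundary.
    ctx_end = 1 + len(ctx)
    return tokens, ctx_end
```

One such sequence is built per answer option, so a DREAM question yields three encoder inputs and a RACE question four.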
- Extra Attention layer
In the next part of the model, we use the Dual Multi-head Co-Attention (DUMA) module, as described in Section 4.2 of the DUMA paper. Basically, it 1) splits the output sequence from the encoder into two parts, one for the context and one for the question-answer pair; 2) computes two attention representations from the two sequences, one with the context attending to the question-answer pair and the other vice versa; 3) mean-pools the two attention representations individually, concatenates them, and sends the result to the next part of the model: the classifier.
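A simplified single-head version of these three steps can be sketched as follows. The actual DUMA module is multi-headed with learned projection matrices; here `attend` and `duma` are illustrative names, and NumPy stands in for a deep-learning framework:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, key_value):
    """Single-head scaled dot-product attention: rows of `query` attend to
    rows of `key_value`. Learned projections are omitted for clarity."""
    d = query.shape[-1]
    scores = query @ key_value.T / np.sqrt(d)   # (L_q, L_kv)
    return softmax(scores) @ key_value          # (L_q, d)

def duma(encoder_out, ctx_end):
    """Dual co-attention: split the encoder output into context and
    question-answer parts, attend in both directions, mean-pool, concat."""
    ctx, qa = encoder_out[:ctx_end], encoder_out[ctx_end:]
    ctx_attends_qa = attend(ctx, qa)  # context queries, QA keys/values
    qa_attends_ctx = attend(qa, ctx)  # QA queries, context keys/values
    return np.concatenate([ctx_attends_qa.mean(axis=0),
                           qa_attends_ctx.mean(axis=0)])  # shape (2d,)
```

The output is one fixed-size vector per answer option, regardless of sequence length, which is what makes the mean-pooling step necessary before classification.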
For each question, the classifier takes the outputs from the Extra Attention layer for all the answer options and feeds them through a linear layer. The answer option with the highest probability is picked as the predicted answer. The cross-entropy between the ground truth and the predicted probabilities is used as the loss.
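The option-scoring step can be sketched like this (illustrative NumPy code; `classify` and `cross_entropy` are hypothetical helper names, and `w`, `b` stand in for the learned linear-layer parameters):

```python
import numpy as np

def classify(pooled_per_option, w, b):
    """Score each answer option with a shared linear layer, then softmax
    over the options of the same question to get probabilities."""
    logits = np.array([p @ w + b for p in pooled_per_option])
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    return probs / probs.sum()

def cross_entropy(probs, gold):
    """Negative log-likelihood of the ground-truth option."""
    return -np.log(probs[gold])
```

Note that the softmax is taken across the answer options of one question, not across a vocabulary, so the same linear layer can serve both the 3-option DREAM questions and the 4-option RACE questions.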
2.2 Multi-task Learning
The question/answer pairs in the DREAM task have syntactic and semantic characteristics that are generally different from the text sequences that are used to pre-train Albert. Because the DREAM dataset is relatively small, it is reasonable to hypothesize that adding a larger amount of similar multi-choice MRC data in the training will be beneficial for the DREAM task.
Inspired by MMM, we did multi-task learning on the DREAM task and the RACE task. Although the number of answer options differs between the two tasks, we are still able to share all parts of the model between them. We sampled mini-batches from the DREAM and RACE datasets in proportion to the relative sizes of the two datasets.
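The proportional sampling can be sketched as below, assuming the rough question counts from Section 1.1 (over 10,000 for DREAM, nearly 100,000 for RACE); `batch_schedule` is an illustrative helper, not the actual training code:

```python
import random

def batch_schedule(n_dream, n_race, n_batches, seed=0):
    """For each mini-batch, draw the source task with probability
    proportional to dataset size, so each task is visited at roughly
    its natural frequency over one epoch of the combined data."""
    p_dream = n_dream / (n_dream + n_race)
    rng = random.Random(seed)
    return ["DREAM" if rng.random() < p_dream else "RACE"
            for _ in range(n_batches)]
```

With these sizes, roughly one mini-batch in ten comes from DREAM, so the much larger RACE data dominates the gradient updates while DREAM is still revisited regularly.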
3 Experiment Settings
We used the pretrained Albert-xxlarge-v2 model as our encoder, with one layer of DUMA. Since we do not have the implementation details on the number of DUMA attention heads and the head size, for our re-implementation we used the same setting as the Albert-xxlarge self-attention layers: 64 attention heads, each with a dimension of 64. We used the same setting in our multi-task learning. Our code is written on top of the Transformers library (https://github.com/huggingface/transformers).
The maximum sequence length was set to 512. We used a mini-batch size of 24 and a learning rate of 1e-05. The gradient norm was clipped to 1. We adopted a linear scheduler with a weight decay of 0.01 and trained the model for 5 epochs. For the multi-task learning, we used a fraction of the total steps for warm-up, evaluated on the dev set every 1000 steps, and saved the best model on the dev set. For the single-task training on the DREAM dataset (the second-to-last line in Table 1), we evaluated on the dev set every 100 steps and saved the best model on the dev set.
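The learning-rate schedule can be sketched as a pure function. The exact warm-up fraction is not specified above, so `warmup_steps` is left as a free parameter here; this mirrors the shape of the linear warm-up/decay scheduler in the Transformers library:

```python
def linear_schedule(step, warmup_steps, total_steps, base_lr=1e-5):
    """Linear warm-up from 0 to base_lr over `warmup_steps`, then
    linear decay back to 0 by `total_steps`."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(
        0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

Warm-up keeps the fine-tuned Albert weights from being disrupted by large early updates, while the decay anneals the learning rate to zero by the end of training.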
We did not do hyper-parameter search, and so far have had only one run of multi-task learning. For the single-task training we had three runs and picked the model with the best accuracy on the dev set. All the experiments were run on four V100 GPUs in a single machine.
Table 1 summarizes the experiment results. The baseline numbers are taken from the corresponding papers. Note that the second-to-last line is our implementation of Albert-xxlarge+DUMA, with 64 DUMA heads of dimension 64. The multi-task learning in the last line was run with similar settings and parameters.
Compared to the published Albert-xxlarge+DUMA results, our implementation obtained a higher accuracy on the dev set but a lower accuracy on the test set. Possible reasons are that we did not have the exact settings, such as the attention head number and size, and/or randomness. Nonetheless, the model from multi-task learning had a good boost over both the published numbers and our re-implementation of the single-task learning. This shows that although the context in the DREAM task is in dialogue style rather than the passage style of the RACE task, the DREAM task can still benefit a lot from learning together with the RACE task, thanks to the similar domain and similar question-answer style.
| Our model (above model with multi-task learning) | 91.9 (dev) | 91.8 (test) |
The author would like to thank Luis Lastras, Sachindra Joshi, and Chulaka Gunasekara for helpful discussions.
- Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171–4186.
- Di Jin, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, and Dilek Hakkani-Tur (2020) MMM: multi-stage multi-task learning for multi-choice reading comprehension.
- Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy (2017) RACE: large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp. 785–794.
- Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut (2020) ALBERT: a lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.
- Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov (2019) RoBERTa: a robustly optimized BERT pretraining approach. CoRR abs/1907.11692.
- Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer (2018) Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, pp. 2227–2237.
- Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever (2018) Improving language understanding by generative pre-training.
- Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie (2019) DREAM: a challenge dataset and models for dialogue-based reading comprehension. Transactions of the Association for Computational Linguistics.
- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
- Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le (2019) XLNet: generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), Vancouver, BC, Canada, pp. 5754–5764.
- Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning (2018) HotpotQA: a dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
- Pengfei Zhu, Hai Zhao, and Xiaoguang Li (2020) Dual multi-head co-attention for multi-choice reading comprehension.