Question answering is a common tool for assessing how well computers can understand language and reason with it. To this end, the NLP community has introduced several distinct QA formats, four popular examples of which are illustrated in Figure 1. These formats differ not only in how the question is presented but also in their implicit assumptions: for instance, that the expected answer is either “yes” or “no”, or that there is always a unique answer span in the associated paragraph (as opposed to multiple spans or none). These differences have motivated their study in silos, often encoding the QA format and its assumptions into the model architecture itself. Efforts to exploit multiple datasets remain largely restricted to a single format. For example, Clark et al. (2019c) limit consideration to multiple-choice datasets, while Talmor and Berant (2019) focus their generalization study on extractive span prediction models. To the best of our knowledge, no single QA system targets, let alone excels at, all of these formats.
This raises the question: Can QA models learn linguistic reasoning abilities that generalize across formats? Our intuition is simple: while question format and relevant knowledge may vary across QA datasets, the underlying linguistic understanding and reasoning abilities are largely common. A multiple-choice model may, therefore, benefit from training on an extractive-answer dataset. Building upon this intuition, we present a single pre-trained QA system, named UnifiedQA, that exploits information across 4 different QA formats to achieve surprisingly strong performance across the 17 different datasets listed in Figure 2.
Our work is enabled by recent progress in text-to-text neural architectures Raffel et al. (2019); Lewis et al. (2019); Radford et al. (2019). This paradigm conceptually unifies many NLP models that formerly had task-specific designs. While there have been hopes of—and a few attempts at—using this paradigm to achieve strong generalization and transfer across tasks, success so far has been limited. Most approaches fine-tune a different set of parameters for each end task Raffel et al. (2019); Radford et al. (2019), and when a single model has been trained for different NLP tasks Raffel et al. (2019), it has underperformed the standard pre-training plus fine-tuning setup and required explicit task-specific prefixes.
In contrast, by narrowing the scope to tasks that stay within the boundaries of QA, we are able to demonstrate that the text-to-text paradigm can, in fact, be quite powerful for multi-task learning across QA formats. We find that out-of-format training can lead to a single pre-trained QA model that can be applied as-is to different QA tasks, takes in natural text inputs without explicitly specifying a task-specific prefix, generalizes better to other unseen datasets, and with further fine-tuning can achieve state-of-the-art results on many QA tasks.
This work advocates for a unified view of different QA formats, and for building format-agnostic QA systems. To support this view, we present UnifiedQA, a single pre-trained QA system that works well on and generalizes to datasets with different formats (§6.2), while performing on par with state-of-the-art dedicated systems tailored to each dataset (§6.1). Additionally, fine-tuning UnifiedQA into specialized systems sets a new state of the art for 6 datasets (§6.3), establishing it as a powerful starting point for QA research. Our findings demonstrate that crossing QA format boundaries is not only qualitatively desirable but also quantitatively beneficial.
2 Related Work
Several QA efforts have studied generalization across datasets of a single format. For instance, in MultiQA, Talmor and Berant (2019) study generalization and transfer, but only across extractive span selection datasets. Further, while they show strong leave-one-out style results, they find a single system performs substantially worse than one tuned to each dataset. In ORB, Dua et al. (2019a) propose a multi-dataset evaluation benchmark spanning extractive and abstractive formats. However, that study is limited to an evaluation of systems, falling short of addressing how to build such generalizable models. Similarly, the MRQA shared task Fisch et al. (2019) focuses on span-prediction datasets. Unlike all these efforts, our goal is to investigate transfer and generalization across different QA formats, as well as to build a single system that does this well.
3 UnifiedQA: Multi-format Training
Suppose we would like to train a unified QA model that can operate over k question-answering formats F1, …, Fk. For each format Fi, suppose we have a collection of datasets S(i,1), …, S(i,ni), where each S(i,j) includes both training and evaluation examples. In some cases, the training set may be empty or we may want to ignore it in order to treat S(i,j) as an ‘unseen’, evaluation-only dataset and assess a model’s generalization to it.
We use the text-to-text paradigm to convert each training question q in format Fi into a plain-text input representation enc(q). This conversion uses a natural encoding process that will be described shortly (§3.1) for four common QA formats, and is easily extensible to other formats as well. We follow the simplest possible approach of creating a mixed training pool T consisting of the union of all available training instances.

Training batches are drawn from this pooled data T by including each example with probability inversely proportional to the size of its training set. Each batch thus, on average, contains the same number of instances from each training set, regardless of its size. As we will see in the experiments section, our multi-format mixing approach works surprisingly well. It clearly highlights the value of training on out-of-format data and confirms our intuition that there are strong ties across QA formats in terms of the underlying reasoning abilities. (A more sophisticated teaching curriculum Sachan and Xing (2016) or approaches such as model distillation and teacher annealing Clark et al. (2019b) are likely to further improve the performance of the resulting unified model, bolstering the strength of our advocacy for a unified view of all QA formats. We leave their exploration to future work.)
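This sampling scheme—pick a dataset uniformly at random, then an example uniformly within it, so each example's probability is proportional to 1/|its training set|—can be sketched in a few lines. The dataset names and contents below are illustrative placeholders, not the actual training pools:

```python
import random

# Illustrative training pools; in practice each list would hold encoded
# (input_text, output_text) pairs from one dataset (see Section 3.1).
pools = {
    "squad11": [("q%d \n para" % i, "ans") for i in range(1000)],  # large
    "boolq":   [("q%d \n para" % i, "yes") for i in range(100)],   # small
}

def sample_batch(pools, batch_size, rng=random):
    """Draw a batch so each dataset contributes equally in expectation:
    choose a dataset uniformly, then an example uniformly within it, giving
    each example probability proportional to 1 / |its training set|."""
    names = list(pools)
    return [rng.choice(pools[rng.choice(names)]) for _ in range(batch_size)]

batch = sample_batch(pools, 8)
```

Note that under this scheme a small dataset like the illustrative "boolq" pool contributes as many examples per batch, on average, as one ten times its size.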
Our unified question-answering system is based on the T5 framework Raffel et al. (2019), a recent text-to-text transformer model. For all experiments, we use token limits of 512 and 100 for input and output sequences, respectively. All models are trained for a fixed number of steps on top of T5's pretraining.
We first define a unifying encoding of instances across various formats (§3.1). We then introduce UnifiedQA (§3.2), a QA system trained on datasets in multiple formats, which achieves new state-of-the-art results on 7 datasets and generalizes to unseen datasets.
3.1 Text-to-Text Encoding
We convert each of our target datasets into a text-in/text-out format Raffel et al. (2019); Lewis et al. (2019); Radford et al. (2019). The question always comes first, followed by additional information (a context paragraph, candidate answers, or both). We use “\n” separators between the different parts of the input. This yields a human-like encoding while not making it overly specific to any one format.
The unified model explored in this work incorporates the following four common question-answering formats:
- Extractive (EX)
questions include a context (typically a paragraph) and require models to extract the answer as a substring from that context. In some datasets, ‘unanswerable’ can be the correct response.
- Abstractive (AB)
questions require models to produce answers that are often not mere substrings of the provided context paragraph.
- Multiple-choice (MC)
questions come with a set of candidate answers, of which generally exactly one is correct. In some cases, they also include a context paragraph.
- Yes/No (YN)
questions expect a ‘yes’ or ‘no’ answer as the response, and may include a context paragraph.
Further details of these formats and specific datasets within them are deferred to Section 4.1.
Table 1 provides examples of the natural input and output encoding for each of these formats. Importantly, both input and output representations are raw text. There is no explicit information regarding a question being an MC question or having exactly four candidate answers. The expectation is that a model will learn to infer such notions from the raw text training data, just like humans are able to do.
Specifically, MC questions without any context paragraph are encoded as question \n (A) c1 (B) c2 …, where c1, c2, … are the candidate answers (see the example from the ARC dataset in Table 1). If the question includes a context paragraph, it is appended after the candidate answers: question \n (A) c1 (B) c2 \n paragraph, as shown in the example from the MCTest dataset in Table 1. Questions in the other three formats (EX, AB, and YN) are encoded simply as question \n paragraph.
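As a rough sketch (illustrative only, not the authors' released code), this encoding can be written as a single function; the function and argument names are our own:

```python
def encode_example(question, paragraph=None, candidates=None):
    """Encode a QA instance into the unified plain-text input:
    question first, then MC candidates labeled (A), (B), ..., then any
    context paragraph, with "\\n" separating the parts."""
    parts = [question.strip()]
    if candidates:  # MC format
        parts.append(" ".join(
            "(%s) %s" % (chr(ord("A") + i), c) for i, c in enumerate(candidates)
        ))
    if paragraph:   # context for EX/AB/YN, or MC with a paragraph
        parts.append(paragraph.strip())
    return " \n ".join(parts)
```

Whether the separator carries spaces around “\n” is a detail we do not pin down here; the point is that the format is conveyed purely through the surface form of the text, with no explicit format flag.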
To re-emphasize, unlike prior work Raffel et al. (2019), we do not specify any task-, dataset-, or format-specific prefixes in the input representation. Whether the answer should be extracted or abstracted, and whether from the provided context paragraph or candidate answers (or the fact that these even are candidate answers) is expected to be inferred by the system.
3.2 UnifiedQA: The Pre-Trained Model
The specific pre-trained QA model we provide and use in all our experiments is trained on representative datasets for each of the 4 formats discussed earlier. We empirically chose the following 9 datasets for training UnifiedQA, based on their effectiveness in our pilot study (details deferred to Section 5) assessing which datasets are most valuable for out-of-format training:
EX: SQuAD 1.1, SQuAD 2.0

AB: NarrativeQA

MC: RACE, Regents, ARC, OBQA, MCTest

YN: BoolQ
One can obviously use other combinations of formats and datasets to create variants of our UnifiedQA model, or extend it as future datasets become available or new formats are introduced.
Unless otherwise noted, we use the largest available T5 model (11B parameters) as the starting point for training UnifiedQA. Similar to pre-trained language models, the resulting pre-trained QA model can be used as a starting point for fine-tuning on other QA datasets.
4 Formats and Datasets
We selected 17 existing datasets that target various formats as well as various complex linguistic phenomena. Most importantly, they are grouped into several formats/categories described below. Table 2 shows different properties of these datasets (whether a dataset comes with a paragraph, whether the paragraph explicitly contains the answer, whether candidate answers are part of the input, etc.), along with their summary statistics.
Extractive QA (EX).
All the datasets in this format require models to extract the answer to a given question as a substring from a context paragraph. SQuAD 1.1 Rajpurkar et al. (2016) contains questions about Wikipedia paragraphs. A later version of this dataset, SQuAD 2 Rajpurkar et al. (2018), includes unanswerable questions, which empirically makes the task much harder. For our evaluation, we use the development sets of SQuAD 1.1 and SQuAD 2. NewsQA Trischler et al. (2017) focuses on paraphrased questions, collected from CNN/DailyMail news articles, that require predicate-argument structure understanding. Quoref Dasigi et al. (2019) contains questions that require coreference resolution in Wikipedia articles and can even have disjoint spans as answers. ROPES Lin et al. (2019) centers around situation understanding, where the model must understand the causes and effects implicit in the given situation.
Abstractive QA (AB).
All the datasets in this format require models to produce answers that are often not mere substrings of the given context paragraph. NarrativeQA Kociský et al. (2018) focuses on understanding various events that happen in a given movie plot, based on summaries of movie adaptations from various web resources. Many of the answers do not have high overlap with the context. DROP Dua et al. (2019b) contains questions that involve rudimentary mathematical skills (such as counting, addition, subtraction, maximum, minimum, etc.) and that query multiple parts of the paragraph. The answer can be either a number or a date inferred from the paragraph, or several spans from the context paragraph.
Table 2: Dataset statistics (columns: dataset, train set size, evaluation set size, 95% CI as %, input length, output length). CQA, OBQA, and NQA refer to CommonsenseQA, OpenBookQA, and NarrativeQA, respectively. The CI column shows the 95% confidence interval for the evaluation set as a percentage, around a mean score of 50%. Input and output representation lengths are measured in number of tokens and averaged across the dataset.
Multiple-choice QA (MC).
All the datasets in this format contain questions that come with candidate answers. MCTest Richardson et al. (2013) contains questions about simple, fictional stories. RACE Lai et al. (2017) is a challenging set of English reading comprehension multiple-choice exams given in Chinese middle and high schools. OpenBookQA Mihaylov et al. (2018), ARC Clark et al. (2018), Regents Science Exams Clark et al. (2016), and QASC Khot et al. (2019) are different MC tests focusing on elementary/high-school-style science exams. A slightly different dataset within this format is CommonsenseQA Talmor et al. (2019), which is geared towards activity/concept commonsense questions. Other than MCTest and RACE, these datasets do not come with accompanying paragraphs. On such datasets, a retrieval system is occasionally used to supplement each question with a relevant retrieved context paragraph. For most of this work, we keep the questions as-is with no additional retrieval, except in §6.3 where we use IR to obtain numbers comparable to earlier work. Another source of variability among these datasets is the number of candidate answers. While many datasets have four candidates (see Figure 2), others have more. Later, in §6.2, we will see that our approach generalizes to datasets with different numbers of candidates, even when such counts were not seen during training.
Yes/No QA (YN).
All the datasets in this format contain questions that can be answered with ‘yes’ or ‘no’. One can think of these as multiple-choice questions with 2 candidates; however, they are usually treated differently. The examples we use are BoolQ Clark et al. (2019a), a version of this dataset with natural perturbations, BoolQ-NP Khashabi et al. (2020), and the subset of MultiRC Khashabi et al. (2018) that has binary (yes/no) answers.
Additionally, we use contrast-sets Gardner et al. (2020) corresponding to several of our datasets (indicated by “CS” tag): BoolQ-CS, ROPES-CS, Quoref-CS, DROP-CS. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.
4.2 Evaluation Metrics for Textual Output
We evaluate each dataset using the metric most often used for it in prior work. Specifically, for the EX format, we use the F1 score of the extracted span relative to the gold label. For the AB format, we use the ROUGE-L metric Lin et al. (2006); Min et al. (2019); Nishida et al. (2019). For the MC format, we match the generated text with the closest answer candidate (by token overlap) and measure how often this is the correct answer. For the YN format, we follow Clark et al. (2019a) to measure whether the generated output matches the correct ‘yes’ or ‘no’ label. In rare cases where the output is longer than one word (e.g., ‘yes it is’), we check that it contains the correct label but not the incorrect one.
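A minimal sketch of the MC and YN scoring rules just described (illustrative; the exact tokenization and tie-breaking in the evaluation code may differ):

```python
def token_overlap(a, b):
    """Number of word types shared by two strings (case-insensitive)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def pick_candidate(generated, candidates):
    """MC scoring: map the generated text to the candidate answer with
    the highest token overlap."""
    return max(candidates, key=lambda c: token_overlap(generated, c))

def yes_no_match(generated, gold):
    """YN scoring: gold is 'yes' or 'no'; a longer output counts as
    correct if it contains the correct label but not the incorrect one."""
    words = generated.lower().split()
    wrong = "no" if gold == "yes" else "yes"
    return gold in words and wrong not in words
```

For example, a generated output of “yes it is” is scored as correct for a gold label of ‘yes’, while “yes and no” is not.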
5 Pilot Study: Can Out-of-Format Training Help?
We first answer the question: Is the broad idea of benefiting from out-of-format training even viable? For instance, is our intuition correct that an MC dataset can, in practice, benefit from training on an EX dataset? Before discussing our main experimental results, we briefly report on a pilot study that assesses the following basic question: Given a training set T (the anchor dataset) of QA format F, is there an out-of-format training set T′ of a different format such that training jointly on T ∪ T′ improves performance relative to training only on T? To this end, we evaluate both on the evaluation set matching T as well as on ‘unseen’ data of the same format F.
The results are summarized in Table 3. The two rows in each individual table correspond to training on T (the anchor dataset) and on T ∪ T′, where T′ is an out-of-format dataset as described above. The columns represent various evaluation sets of format F. For each column, the dataset named at the very bottom is the out-of-format dataset T′ that was most helpful in improving performance on that column’s evaluation set. (Appendix A.3 reports extended results, including the performance with various choices of T′.)
Consider, for example, the case where the anchor set is BoolQ and the evaluation set is BoolQ-NP, both of format YN. Here, including the out-of-format training data SQuAD 2 boosts performance from 51% to as much as 59%. The gain in other cases is often not this extreme. Nevertheless, across all anchor and evaluation datasets, we generally observe that there is at least one out-of-format training set whose inclusion improves performance.
This pilot study thus provides a proof of concept that out-of-format training can indeed help a QA model in nearly every case. Of course, this study only shows the existence of such an out-of-format dataset, rather than providing a single unified model. Nevertheless, it helps identify the representative training sets from each format that were most helpful. As hinted at earlier, we used this empirical data to guide which training sets to include when building UnifiedQA in Section 3.2.
6 Experimental Results
We now discuss our main experimental results, evaluating our proposed UnifiedQA system on seen (used for training the system) and unseen datasets.
6.1 UnifiedQA vs. 9 Dedicated Models
Is UnifiedQA, a single pre-trained multi-format QA system, as good as dedicated systems trained for individual datasets? We emphasize that the answer to this question is not as simple as it may seem, since earlier work has observed that a system addressing multiple tasks often underperforms a focused system Raffel et al. (2019).
Figure 3 summarizes the results of the relevant experiment. As can be observed from the figure, UnifiedQA performs almost as well as the best single-dataset experts. In some cases, UnifiedQA performs even better than the single-dataset experts (e.g., on OBQA or NQA). On average (last column), UnifiedQA does much better than dataset/format-specific systems. In conclusion, UnifiedQA offers flexibility across multiple QA formats while compromising almost nothing compared to dataset-specific experts.
6.2 Generalization to Unseen Datasets
The question we want to explore here is whether UnifiedQA generalizes well to other unseen datasets. Table 4 summarizes the results of experiments during which we evaluate various models on datasets that are not used to train them.
The first few rows of the table show T5 models trained for individual datasets, followed by UnifiedQA. For completeness, we include the highest previous scores for each dataset; one must be careful when reading these numbers, as the best previous numbers follow the fully supervised protocol (for NewsQA Zhang et al. (2020), Quoref Dasigi et al. (2019), DROP Dua et al. (2019b), ROPES Lin et al. (2019), QASC Khot et al. (2019), CommonsenseQA Zhu et al. (2020), and the x-CS datasets Gardner et al. (2020)).
The key observations are: (1) On average (last column), UnifiedQA shows much stronger generalization across a wide range of datasets. (2) On 5 (out of 12) datasets, UnifiedQA generalizes better than any single-dataset expert. For example, while the system is trained on multiple-choice questions with 4 candidate answers, it works well on datasets with more than 4 candidate answers (QASC and CommonsenseQA have 8 and 5 candidate answers per question, respectively). (3) Single-dataset experts generalize better only when the source and target datasets are quite similar (for instance, SQuAD and Quoref).
6.3 State-of-the-art via Simple Fine-tuning
Fine-tuning of pre-trained language models has become the standard paradigm for building dataset-specific state-of-the-art systems Devlin et al. (2019); Liu et al. (2019). The question we address here is whether there is value in using UnifiedQA as a starting point for fine-tuning, as opposed to a vanilla language model that has not seen other QA datasets before.
To address this question, we fine-tune both UnifiedQA and T5 on several datasets. Table 5 summarizes the results of these experiments. The last two rows of the table show the performance of UnifiedQA and T5, both fine-tuned for the target task. The fine-tuning process involves selecting the best checkpoint on the dev set and evaluating on the test set.
The columns indicate evaluation on the test set corresponding to the data used for training. For several multiple-choice datasets that do not come with evidence paragraphs, we include two variants: one that uses the questions as-is, and another that uses paragraphs fetched via an Information Retrieval (IR) system as additional evidence, indicated with “w/ IR” tags. We use the same IR sentences as the baselines on these datasets: the Aristo corpus for the ARC and OBQA datasets Clark et al. (2019c), and 2-step IR for QASC Khot et al. (2019).
Additionally, we show the best published scores on each dataset: ALBERT Lan et al. (2019) (on RACE), RoBERTa Clark et al. (2019c) (on OBQA and ARC), KF+SIR Banerjee and Baral (2020) (on OBQA and QASC), FreeLB+RoBERTa Zhu et al. (2020) (on ARC-easy and CommonsenseQA).
As can be seen, fine-tuning on UnifiedQA consistently dominates fine-tuning on T5, as well as the best previous scores on each dataset. Intuitively, since UnifiedQA has already seen different formats, it should be better positioned to achieve high scores after a small amount of fine-tuning, compared to fine-tuning on a vanilla T5. This could be especially effective when a user has limited training data for a target QA task.
6.4 Ablation: Training Set Contributions
In this experiment, we seek to better understand the contribution of each dataset to UnifiedQA through a leave-one-out ablation.
We take the system from §3.2 and evaluate the model when individual datasets are dropped from the union. Table 6 summarizes the results of this experiment, comparing the performance of UnifiedQA trained on all the default datasets (the first row) with ablated versions that exclude one dataset at a time. The rows are sorted by the last column: the datasets with the biggest contributions appear first.
The top few datasets have the highest contributions: BoolQ, SQuAD 2.0, OBQA, and NQA are the top four contributing datasets, each with a different format. SQuAD 1.1 has the least importance in the union, since it is mostly covered by SQuAD 2.0.
The conclusion here is that to build an effective variant of UnifiedQA, one can use a relatively small number of datasets, so long as it is trained on representative members of each format.
7 Conclusion

The question-answering community has fruitfully explored the design of strong models, while staying within the boundaries of individual QA formats. We argued that such boundaries are artificial and can even limit the performance of systems, because the desired reasoning abilities being taught and probed are not tied to specific formats. Training data in one format should, in principle, help QA systems perform better even on questions in another format.
With this intuition in mind, we presented UnifiedQA, a single pre-trained QA system based on the T5 text-to-text language model, seeking to bring unification across four common QA formats. We showed that even with its simple multi-format training methodology, UnifiedQA achieves performance on par with nearly a dozen dataset-specific expert models (§6.1), while also generalizing well to many unseen datasets (of seen formats) (§6.2). At the same time, we demonstrated that UnifiedQA is a strong starting point for building QA systems: it can achieve state-of-the-art performance by simply fine-tuning on target datasets (§6.3).
We hope this effort will inspire a future line of work in the QA and NLP communities, moving towards more general and broader system designs. We leave extensions of UnifiedQA to other formats such as to direct-answer questions Kwiatkowski et al. (2019) as a promising avenue for future work.
Acknowledgments

The authors would like to thank Colin Raffel, Adam Roberts, and Nicholas Lourie for their help with the T5 framework and for providing suggestions on an earlier version of this work. TPU machines for conducting experiments were provided by Google.
References

- Knowledge fusion and semantic knowledge ranking for open domain question answering. arXiv preprint arXiv:2004.03101. Cited by: §6.3.
- Multitask learning. Machine learning 28 (1), pp. 41–75. Cited by: §2.
- BoolQ: exploring the surprising difficulty of natural yes/no questions. In NAACL-HLT, Cited by: §4.1, §4.2.
- BAM! Born-again multi-task networks for natural language understanding. In ACL, pp. 5931–5937. Cited by: §2, footnote 2.
- Think you have solved question answering? Try ARC, the AI2 reasoning challenge. ArXiv abs/1803.05457. Cited by: §4.1.
- From ’F’ to ’A’ on the NY Regents science exams: an overview of the Aristo project. ArXiv abs/1909.01958. Cited by: §1, §6.3, §6.3.
- Combining retrieval, statistics, and inference to answer elementary science questions. In AAAI, Cited by: §4.1.
- Quoref: a reading comprehension dataset with questions requiring coreferential reasoning. In EMNLP/IJCNLP, Cited by: §4.1, §6.2.
- BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL, Cited by: §6.3.
- Comprehensive multi-dataset evaluation of reading comprehension. In 2nd Workshop on Machine Reading for Question Answering, Cited by: §2.
- DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In NAACL, Cited by: §4.1, §6.2.
- MRQA 2019 shared task: evaluating generalization in reading comprehension. In 2nd Workshop on Machine Reading for Question Answering, at EMNLP, Cited by: §2.
- Evaluating NLP models via contrast sets. ArXiv abs/2004.02709. Cited by: §4.1, §6.2.
- Unifying question answering, text classification, and regression via span extraction. arXiv preprint arXiv:1904.09286. Cited by: §2.
- Looking beyond the surface: a challenge set for reading comprehension over multiple sentences. In NAACL-HLT, Cited by: §4.1.
- Natural perturbation for robust question answering. ArXiv abs/2004.04849. Cited by: §4.1.
- QASC: a dataset for question answering via sentence composition. In AAAI, Cited by: §4.1, §6.2, §6.3.
- The NarrativeQA reading comprehension challenge. TACL 6, pp. 317–328. Cited by: §4.1.
- Natural questions: a benchmark for question answering research. TACL 7, pp. 453–466. Cited by: §7.
- RACE: Large-scale reading comprehension dataset from examinations. In EMNLP, Cited by: §4.1.
- ALBERT: a lite BERT for self-supervised learning of language representations. In ICLR, Cited by: §6.3.
- BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Cited by: §1, §3.1.
- An information-theoretic approach to automatic evaluation of summaries. In NAACL, Cited by: §4.2.
- Reasoning over paragraph effects in situations. In 2nd Workshop on Machine Reading for Question Answering, at EMNLP, Cited by: §4.1, §6.2.
- RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §6.3.
- The natural language decathlon: multitask learning as question answering. arXiv preprint arXiv:1806.08730. Cited by: §2.
- Can a suit of armor conduct electricity? a new dataset for open book question answering. In EMNLP, Cited by: §4.1.
- A discrete hard EM approach for weakly supervised question answering. In EMNLP/IJCNLP, Cited by: §4.2.
- Answering while summarizing: multi-task learning for multi-hop qa with evidence extraction. In ACL, Cited by: §4.2.
- Language models are unsupervised multitask learners. OpenAI Blog 1 (8), pp. 9. Cited by: §1, §3.1.
- Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv abs/1910.10683. Cited by: §1, §2, §3.1, §3.1, §3, §6.1.
- Know what you don’t know: unanswerable questions for squad. In ACL, Cited by: §4.1.
- SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, Cited by: §4.1.
- MCTest: a challenge dataset for the open-domain machine comprehension of text. In EMNLP, Cited by: §4.1.
- Easy questions first? a case study on curriculum learning for question answering. In ACL, pp. 453–463. Cited by: footnote 2.
- MultiQA: an empirical investigation of generalization and transfer in reading comprehension. In ACL, Cited by: §1, §2.
- CommonsenseQA: a question answering challenge targeting commonsense knowledge. In NAACL-HLT, Cited by: §4.1.
- NewsQA: a machine comprehension dataset. In Rep4NLP@ACL, Cited by: §4.1.
- Retrospective reader for machine reading comprehension. ArXiv abs/2001.09694. Cited by: §6.2.
- FreeLB: enhanced adversarial training for natural language understanding. In ICLR, Cited by: §6.2, §6.3.
Appendix A Appendices
a.1 UnifiedQA: Different Sizes
For completeness, we also show the scores of UnifiedQA models of different sizes on each dataset. In these results, each row corresponds to a single system.
a.2 Comparison with the Dedicated Models: extended results
Here we summarize an extension of the results in §6.1. Table 8 summarizes the results of the relevant experiment. In the top portion of the table, we show evaluations of T5 models fine-tuned for individual datasets, followed by UnifiedQA. As can be observed from the table, UnifiedQA performs almost as well as the best single-dataset experts. In some cases, UnifiedQA performs even better than the single-dataset experts (e.g., on OBQA or NQA). On average (last column), UnifiedQA does much better than dataset/format-specific systems. In conclusion, UnifiedQA offers flexibility across multiple QA formats while compromising almost nothing compared to dataset-specific experts.
a.3 Pairwise Mixing: extended results
Here we summarize an extension of the results in §5. The question addressed here is whether there is value in mixing datasets of different formats. We evaluated this by adding one dataset of a different format to four different datasets (one for each format). The results are summarized in Table 9. The goal of each sub-table is to measure the within-format generalization one can gain via out-of-format training. Each sub-table has an anchor dataset, indicated in the first column; for example, in the first sub-table the anchor dataset is SQuAD. Each sub-table combines datasets of other formats with the anchor dataset (e.g., SQuAD + RACE, etc.). The columns of the sub-tables contain evaluations on datasets of the same format as the anchor dataset. For example, in the first sub-table, evaluation is done on SQuAD 1.1/2.0, NewsQA, and Quoref, which have the same format as SQuAD 1.1, the anchor dataset. The results show that one can achieve gains for question answering in a certain format by incorporating resources in other formats. In the first two sub-tables, we see that NQA (AB) and OBQA (MC) help SQuAD models generalize better to other EX datasets. In the third sub-table, where the anchor dataset is NQA (AB), EX datasets help an NQA model generalize better to other AB datasets. In the 4th/5th sub-tables, EX and AB datasets help RACE/OBQA (MC) models generalize better to other MC datasets. Similarly, in the final sub-table, an MC dataset helps improve scores on YN datasets.