Beat the AI: Investigating Adversarial Human Annotations for Reading Comprehension

02/02/2020
by Max Bartolo, et al.
UCL

Innovations in annotation methodology have been a propellant for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation approach and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalisation to data collected without a model. We find that training on adversarially collected samples leads to strong generalisation to non-adversarially collected datasets, yet with progressive deterioration as the model-in-the-loop strength increases. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models in the loop: when trained on data collected with a BiDAF model in the loop, RoBERTa achieves 36.0 F1 on questions that it cannot answer when trained on SQuAD, only marginally lower than when trained on data collected using RoBERTa itself.


1 Introduction

Data collection is a fundamental prerequisite for Machine Learning-based approaches to Natural Language Processing (NLP). Innovations in data acquisition methodology, such as crowdsourcing, have led to major breakthroughs in scalability and preceded the “deep learning revolution”, for which they can arguably be seen as co-responsible Deng et al. (2009); Bowman et al. (2015); Rajpurkar et al. (2016). Annotation approaches include expert annotation, e.g. by relying on trained linguists Marcus et al. (1993), crowd-sourced annotation by non-experts Snow et al. (2008), distant supervision Mintz et al. (2009); Joshi et al. (2017), and leveraging document structure for annotation purposes Hermann et al. (2015). The concrete data collection paradigm chosen dictates the degree of scalability, annotation cost, precise task structure (which often arises as a compromise of the above), domain coverage, task difficulty, as well as resulting dataset biases and model blind spots Jia and Liang (2017); Schwartz et al. (2017); Gururangan et al. (2018).

A recently emerging trend in NLP dataset assembly is the use of a model in the loop when composing the samples: a contemporary model is used either as a filter or directly during annotation, retaining only samples wrongly predicted by the model. Examples of this method are realised in Build It Break It, The Language Edition Ettinger et al. (2017), SWAG Zellers et al. (2018), HotpotQA Yang et al. (2018), DROP Dua et al. (2019), CODAH Chen et al. (2019), Quoref Dasigi et al. (2019) and Adversarial NLI Nie et al. (2019). (Richardson et al. (2013) alluded to this idea in their work, but it has only recently seen wider adoption.) The practice probes model robustness and ensures that the resulting datasets pose a challenge to current models, in turn driving research and modelling efforts to tackle the new problem set.

But how robust is the approach itself in the face of continuously progressing models – do such datasets quickly become outdated in their usefulness as models become stronger Devlin et al. (2019)? Based on models trained on the widely used SQuAD dataset, and following the same basic annotation protocol, we investigate the additional annotation requirement that the annotator has to compose questions for which the model predicts the wrong answer. As a result, only samples which the model fails to predict correctly are retained in the dataset – see Fig. 1 for an example.

We apply this annotation strategy with three distinct models in the loop, resulting in datasets with 12,000 samples each. We then study the reproducibility of the adversarial effect when retraining the models with the same data, as well as the generalisation abilities of models trained on the resulting datasets to datasets composed with and without a model adversary. Models can, to a considerable degree, learn to generalise to these challenging questions, based on training sets collected with both stronger and weaker models in the loop. Compared to training on SQuAD, training on adversarially composed questions leads to a similar degree of generalisation to non-adversarially written questions, both for SQuAD and NaturalQuestions Kwiatkowski et al. (2019). It furthermore leads to general improvements across the model-in-the-loop datasets we collected, as well as improvements of more than 20.0 F1 for both BERT and RoBERTa on an extractive subset of DROP Dua et al. (2019), another adversarially composed dataset. A systematic analysis of the questions that different models fail to answer correctly, alongside non-adversarially composed questions, shows that the nature of the resulting questions changes: questions composed with a model in the loop are overall more diverse, use more paraphrasing, multi-hop inference, background knowledge and comparisons, and are less often answerable by matching an explicit statement that states the required information literally. Given our observations, we believe a model-in-the-loop approach to annotation shows promise and should be considered as an option when creating future RC datasets.

To summarise, our contributions are as follows:

  1. An investigation into the model-in-the-loop approach to RC data collection based on three progressively stronger RC models.

  2. An empirical performance comparison of models trained on datasets constructed with adversaries of different strength.

  3. A comparative investigation into the nature of questions composed to be unsolvable by a sequence of progressively stronger RC models.

2 Related Work

Constructing Challenging Datasets

Recent efforts in dataset construction have driven considerable progress in the RC task, yet dataset structures are diverse and annotation methodologies vary. With its large size and combination of free-form questions with answers as extracted spans, SQuAD1.1 Rajpurkar et al. (2016) has become an established benchmark which has inspired the construction of a series of similarly structured datasets. However, mounting evidence suggests that models can achieve strong generalisation performance merely by relying on superficial cues – such as lexical overlap, term frequencies, or entity type matching Chen et al. (2016); Weissenborn et al. (2017); Sugawara et al. (2018). It has thus become an increasingly important consideration to construct datasets which RC models find both challenging, and for which natural language understanding is a requisite for generalisation. Attempts to achieve this non-trivial aim have typically revolved around extensions to the SQuAD dataset annotation methodology. They include unanswerable questions Trischler et al. (2016); Rajpurkar et al. (2018); Reddy et al. (2019); Choi et al. (2018), adding the option of “Yes” or “No” answers Dua et al. (2019); Kwiatkowski et al. (2019), questions requiring reasoning over multiple sentences or documents Welbl et al. (2018); Yang et al. (2018), questions requiring rule interpretation or context awareness Saeidi et al. (2018); Choi et al. (2018); Reddy et al. (2019), limiting annotator passage exposure by sourcing questions first Kwiatkowski et al. (2019), controlling answer types by including options for dates, numbers, or spans from the question Dua et al. (2019), as well as questions with free form answers Nguyen et al. (2016); Kočiský et al. (2018); Reddy et al. (2019).

Adversarial Annotation

One recently adopted approach to constructing challenging datasets involves the use of an adversarial model to select examples which it does not perform well on. Here, we make a sub-distinction between two categories: i) adversarial filtering, where the adversarial model is applied offline in a separate stage of the process, usually after data generation; examples include SWAG Zellers et al. (2018), ReCoRD Zhang et al. (2018), HotpotQA Yang et al. (2018), and HellaSWAG Zellers et al. (2019); ii) model-in-the-loop adversarial annotation, where the annotator can directly interact with the adversary during the annotation process and uses the feedback to further inform the generation process; examples include CODAH Chen et al. (2019), Quoref Dasigi et al. (2019), DROP Dua et al. (2019), Adversarial NLI Nie et al. (2019), as well as work by Kaushik et al. (2019) and Wallace et al. (2019) on the Quizbowl task.

We are primarily interested in the latter category, as this feedback loop creates an environment where the annotator can probe the model directly to explore its weaknesses and formulate targeted adversarial attacks. While Dua et al. (2019) and Dasigi et al. (2019) make use of adversarial annotations for RC, both annotation setups limit the reach of the model-in-the-loop: in DROP, primarily due to the imposition of specific answer types, and in Quoref by focusing on co-reference, which is already a known RC model weakness.

In contrast, we investigate a scenario where annotators interact with a performant model in its original task setting – annotators must thus explore a range of natural adversarial attacks, as opposed to merely filtering out “easy” samples during the annotation process.

3 Annotation Methodology

3.1 Annotation Protocol

The protocol used for data annotation is based on SQuAD1.1, but with the additional instruction that questions should have only one possible answer in the passage – as well as a model adversary in the loop.

Formally, provided with a passage p, a human annotator generates a question q and selects a (human) answer a_h by highlighting the corresponding span in the passage. The input (p, q) is then given to the model, which returns a predicted (model) answer a_m. To compare the two, a word-overlap F1 score between a_h and a_m is computed; a score above a fixed threshold is considered a win for the model. (This threshold is set after initial experiments so as not to be overly restrictive given acceptable answer spans; e.g. a human answer of ‘New York’ vs. a model answer of ‘New York City’ would still lead to a model “win”.) This process is repeated until the human “wins”; Figure 2 gives a schematic overview of the process. All successful (p, q, a_h) triples, i.e. those which the model is unable to answer correctly, are then retained for further validation.
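To make the acceptance criterion concrete, the following is a minimal sketch of the word-overlap comparison, assuming a simplified SQuAD-style token F1; the exact normalisation and the threshold value used in the actual pipeline are not reproduced here, so the threshold below is only a placeholder.

```python
import re
from collections import Counter

def f1_word_overlap(pred: str, gold: str) -> float:
    """Token-level F1 between a predicted and a reference answer span (simplified)."""
    pred_toks = re.findall(r"\w+", pred.lower())
    gold_toks = re.findall(r"\w+", gold.lower())
    common = Counter(pred_toks) & Counter(gold_toks)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_toks)
    recall = num_same / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def model_wins(model_answer: str, human_answer: str, threshold: float = 0.4) -> bool:
    """A sample is rejected (the model 'wins') if the overlap exceeds the threshold.

    The threshold value here is illustrative only, not the paper's exact setting.
    """
    return f1_word_overlap(model_answer, human_answer) > threshold
```

Under this simplified criterion, a human answer of ‘New York’ against a model answer of ‘New York City’ yields an F1 of 0.8 and would therefore count as a model “win”.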

Figure 2: Overview of the annotation process to collect adversarially written questions from humans using a model in the loop.

3.2 Annotation Details

Models in the Annotation Loop

We begin by training three different models, which are used as adversaries during data annotation. As a seed dataset for training the models we select the widely used SQuAD1.1 Rajpurkar et al. (2016) dataset, a large-scale resource for which a variety of mature and well-performing models are readily available. Furthermore, unlike cloze-based datasets, SQuAD is robust to passage/question-only adversarial attacks Kaushik and Lipton (2018). We will compare dataset annotation with a series of three progressively stronger models as adversary in the loop, namely BiDAF Seo et al. (2017), BERT Devlin et al. (2019) and RoBERTa Liu et al. (2019). Each of these will serve as a model adversary in a separate annotation experiment and result in a separate dataset; we will refer to these as D_BiDAF, D_BERT and D_RoBERTa, respectively. We rely on the AllenNLP Gardner et al. (2017) and Transformers Wolf et al. (2019) model implementations, and our models achieve EM/F1 scores of 65.5%/77.5%, 82.7%/90.3% and 86.9%/93.6% for BiDAF, BERT and RoBERTa, respectively, on the SQuAD1.1 validation set.
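As an illustration of how such adversaries can be queried during annotation, the sketch below loads publicly available SQuAD-tuned checkpoints through the Transformers pipeline API; these checkpoint names are stand-ins rather than the models trained for this work, and BiDAF (served via AllenNLP in our setup) is omitted for brevity.

```python
from transformers import pipeline

# Publicly available SQuAD-tuned checkpoints, used here only as stand-ins for
# the fine-tuned models described in the text; BiDAF is served via AllenNLP.
adversaries = {
    "bert": pipeline(
        "question-answering",
        model="bert-large-uncased-whole-word-masking-finetuned-squad",
    ),
    "roberta": pipeline("question-answering", model="deepset/roberta-base-squad2"),
}

def model_answer(adversary: str, question: str, passage: str) -> str:
    """Query a model adversary and return its predicted answer span."""
    prediction = adversaries[adversary](question=question, context=passage)
    return prediction["answer"]
```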

Our choice of models reflects both the transition from LSTM-based to pre-trained transformer-based models, as well as a graduation among the latter; we will investigate how this is reflected in datasets collected with each of these different models in the annotation loop. For each of the models we collect 10,000 training, 1,000 validation and 1,000 test examples. Dataset sizes are motivated by the improved data efficiency of transformer-based pretrained models Devlin et al. (2019); Liu et al. (2019), which has improved the viability of smaller-scale data collection efforts for investigative and analysis purposes.

To ensure the experimental integrity provided by reporting all results on a held-out test set, we split the existing SQuAD1.1 validation set in half (stratified by document title), since the official test set is not publicly available. We maintain passage consistency across the training, validation and test sets of all analysis datasets to enable like-for-like comparisons. Since SQuAD1.1 validation questions commonly have multiple reference answers and the standard SQuAD1.1 evaluation takes the maximum score over all possible answers, we enforce an additional evaluation constraint by taking the majority-vote answer as ground truth. This ensures that all our experimental resources have one valid answer per question, enabling us to draw fair, direct comparisons. For clarity, we will hereafter refer to this modified version of SQuAD1.1 as D_SQuAD.
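A minimal sketch of these two preprocessing steps is given below, assuming the standard SQuAD1.1 JSON layout in which each article carries a "title" field; the seed and the tie-breaking rule are illustrative choices, not necessarily those used in our pipeline.

```python
import random
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

def majority_vote_answer(answers: List[str]) -> str:
    """Reduce multiple reference answers to a single ground truth by majority vote;
    ties go to the answer seen first."""
    return Counter(answers).most_common(1)[0][0]

def split_dev_by_title(articles: List[Dict], seed: int = 0) -> Tuple[List[Dict], List[Dict]]:
    """Split the SQuAD1.1 validation articles in half, keeping all passages of a
    given document title on the same side of the split."""
    by_title = defaultdict(list)
    for article in articles:
        by_title[article["title"]].append(article)
    titles = sorted(by_title)
    random.Random(seed).shuffle(titles)
    half = len(titles) // 2
    dev = [a for t in titles[:half] for a in by_title[t]]
    test = [a for t in titles[half:] for a in by_title[t]]
    return dev, test
```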

Crowdsourcing

We use custom-designed Human Intelligence Tasks (HITs) served through Amazon Mechanical Turk (AMT) for all annotation efforts (see Appendix B). Workers are required to be based in Canada, the UK, or the US, to have a high HIT Approval Rate, and to have previously completed at least 1,000 HITs successfully. We experiment with and without the AMT Master requirement and find no substantial difference in quality, yet a throughput reduction of nearly 90%. We pay $2 for every question generation HIT, during which workers are required to compose up to five questions which “beat” the model in the loop. The mean HIT completion times for BiDAF, BERT and RoBERTa are 551.8s, 722.4s and 686.4s, respectively. Furthermore, we find that the rate at which human workers generate questions which successfully “beat” the model in the loop decreases from BiDAF to BERT to RoBERTa; these metrics broadly reflect the relative strength of the models.

3.3 Quality Control

Training and Qualification

We provide a two-part worker training interface in order to i) familiarise workers with the process, and ii) conduct a first screening based on workers’ outputs. The interface familiarises workers with formulating questions, and answering them through span selection controls. Workers are asked to highlight two answers for provided questions, generate two questions for provided answers, generate one full question-answer pair, and finally complete a question generation HIT with BiDAF as the model in the loop. Each worker’s output is then manually reviewed; those who pass the screening are qualified to a second annotation stage.

Manual Worker Validation

In the second annotation stage, workers produce data for the “Beat the AI” question generation task. A sample of every worker’s question generation HITs, sized according to their total number of completed tasks, is manually reviewed. This is done after every annotation batch; if workers fall below a set success threshold at any point, their qualification is revoked and their work is discarded in its entirety.

Question Answerability

As the models used in the annotation task become stronger, the resulting questions tend to become more complex. However, this also makes it more challenging to disentangle measures of dataset quality from inherent question difficulty. As such, we define the condition of human answerability for an annotated question-answer pair as follows: it is answerable if at least one of three additional non-expert human validators can provide an answer matching the original. We conduct answerability checks on the validation and test sets of D_BiDAF, D_BERT and D_RoBERTa, discard all questions deemed unanswerable, and further discard all data from any worker with less than half of their questions considered answerable. It should be emphasised that the main purpose of this process is to create a level playing field for comparison across datasets constructed with different model adversaries; it can inevitably result in valid questions being discarded. The total cost for training and qualification, dataset construction and validation is approximately $27,000.
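The answerability filter can be summarised as in the following sketch; exact match after light normalisation is used here as the matching criterion, which is an assumption of this illustration rather than the exact rule applied during validation.

```python
from typing import Dict, List

def normalise(text: str) -> str:
    """Lowercase and collapse whitespace; a deliberately simple normalisation."""
    return " ".join(text.lower().split())

def is_answerable(original_answer: str, validator_answers: List[str]) -> bool:
    """A question counts as answerable if at least one of the (typically three)
    validator answers matches the original annotator's answer."""
    return any(normalise(v) == normalise(original_answer) for v in validator_answers)

def keep_worker(questions: List[Dict]) -> bool:
    """Discard all data from workers with fewer than half of their questions answerable."""
    answerable = sum(q["answerable"] for q in questions)
    return answerable >= len(questions) / 2
```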

Human Performance

We select a randomly chosen validator’s answer to each question and compute Exact Match (EM) and word-overlap F1 scores against the original to calculate non-expert human performance; Table 1 shows the result. We observe a clear trend: the stronger the model in the loop used to construct the dataset, the harder the resulting questions become for humans.

Resource: Dev EM/F1, Test EM/F1
D_BiDAF: 63.0/76.9, 62.6/78.5
D_BERT: 59.2/74.3, 63.9/76.9
D_RoBERTa: 58.1/72.0, 58.7/73.7
Table 1: Non-expert human performance results for a randomly-selected validator per question.
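The human performance estimate scores one randomly chosen validator answer per question against the original; a sketch is given below, parameterised by the EM and word-overlap F1 metric functions (e.g. the simplified F1 sketched earlier), with field names that are illustrative.

```python
import random
from typing import Callable, Dict, List

def human_performance(
    examples: List[Dict],
    em_fn: Callable[[str, str], float],
    f1_fn: Callable[[str, str], float],
    seed: int = 0,
) -> Dict[str, float]:
    """Average EM and F1 of one randomly selected validator answer per question."""
    rng = random.Random(seed)
    em_scores, f1_scores = [], []
    for ex in examples:
        validator_answer = rng.choice(ex["validator_answers"])
        em_scores.append(em_fn(validator_answer, ex["answer"]))
        f1_scores.append(f1_fn(validator_answer, ex["answer"]))
    n = max(len(examples), 1)
    return {"EM": 100 * sum(em_scores) / n, "F1": 100 * sum(f1_scores) / n}
```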

3.4 Dataset Statistics

In Table 2 we provide general details on the number of passages and question-answer pairs used in the different dataset splits. The average number of words in questions and answers, as well as the average longest n-gram overlap between passage and question, are given in Table 3.

Figure 3: Distribution of longest n-gram overlap between passage and question for different datasets. μ: mean; σ: standard deviation.

We can again observe two clear trends: from weaker towards stronger models used in the annotation loop, the average length of answers increases, and the largest n-gram overlap drops from 3 to 2 tokens. That is, on average there is a trigram overlap between passage and question for D_SQuAD, but only a bigram overlap for the datasets collected with a model in the loop (Figure 3). (Note that the original SQuAD1.1 dataset can be considered a limit case of the adversarial annotation framework, in which the model in the loop always predicts the wrong answer, thus every question is accepted.) This is in line with prior observations on lexical overlap as a predictive cue in SQuAD Weissenborn et al. (2017); Min et al. (2018); questions with less overlap are harder to answer for any of the three models.
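The longest n-gram overlap statistic can be computed as in the following sketch, assuming simple whitespace tokenisation (the exact tokenisation used for Table 3 may differ).

```python
def longest_ngram_overlap(passage: str, question: str) -> int:
    """Length, in tokens, of the longest n-gram shared between passage and question."""
    p_toks = passage.lower().split()
    q_toks = question.lower().split()
    best = 0
    for n in range(1, len(q_toks) + 1):
        p_ngrams = {tuple(p_toks[i:i + n]) for i in range(len(p_toks) - n + 1)}
        if any(tuple(q_toks[i:i + n]) in p_ngrams for i in range(len(q_toks) - n + 1)):
            best = n  # any shared n-gram implies a shared (n-1)-gram, so keep extending
        else:
            break
    return best
```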

Resource: #Passages (Train/Dev/Test), #QAs (Train/Dev/Test)
D_SQuAD: 18,891/971/1,096, 87,599/5,278/5,292
D_BiDAF: 2,523/278/277, 10,000/1,000/1,000
D_BERT: 2,444/283/292, 10,000/1,000/1,000
D_RoBERTa: 2,552/341/333, 10,000/1,000/1,000
Table 2: Number of passages and question-answer pairs for each data resource.
Resource: D_SQuAD, D_BiDAF, D_BERT, D_RoBERTa
Question length: 10.3, 9.8, 9.8, 10.0
Answer length: 2.6, 2.9, 3.0, 3.2
N-gram overlap: 3.0, 2.2, 2.1, 2.0
Table 3: Average number of words per question and answer, and average longest n-gram overlap between passage and question.

We furthermore perform analyses of question types based on the question wh-word. We find that, in contrast to D_SQuAD, the datasets collected with a model in the loop have fewer when, how and in questions, and instead more which, where and why questions, as well as more questions in the other category, which indicates increased question diversity. In terms of answer types, we observe more common noun and verb phrase clauses than in D_SQuAD, as well as fewer dates, names, and numeric answers. This reflects the strong answer-type matching capabilities of contemporary RC models. For further dataset statistics, see Appendix A.
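The wh-word analysis can be approximated with a simple heuristic over the first question tokens, as sketched below; the category list and the two-token window are assumptions of this illustration, not the exact rules used for the analysis in Appendix A.

```python
WH_WORDS = ("what", "which", "who", "where", "when", "why", "how", "in")

def question_type(question: str) -> str:
    """Assign a coarse wh-word category based on the first two tokens; 'other' otherwise."""
    tokens = question.lower().split()
    for token in tokens[:2]:
        if token in WH_WORDS:
            return token
    return "other"
```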

While D_BiDAF, D_BERT and D_RoBERTa were created for the investigation and analysis of human-sourced adversarial examples in a model-in-the-loop setting for RC, we recognise their potential value to the community and plan to release all three training and validation sets publicly.

Model, Resource: Seed 1 EM/F1, Seed 2 EM/F1
BiDAF, D_BiDAF-dev: 0.0/5.3, 10.3/19.4
BERT, D_BERT-dev: 0.0/4.9, 20.5/30.3
RoBERTa, D_RoBERTa-dev: 0.0/6.1, 16.5/26.4
BiDAF, D_BiDAF-test: 0.0/5.5, 12.2/21.7
BERT, D_BERT-test: 0.0/5.3, 18.6/29.6
RoBERTa, D_RoBERTa-test: 0.0/5.9, 16.2/27.3
Table 4: Consistency of the adversarial effect (or lack thereof) for different models in the loop when retraining the model on the same data again, but with a new random seed.

4 Experiments

4.1 Consistency of the Model in the Loop

We begin with an experiment about the consistency of the adversarial nature of the models in the annotation loop. Our annotation pipeline is designed to reject any samples where the model correctly predicts the answer. How reproducible is this when retraining the same model with the same data? To measure this, we evaluate the performance of two models of identical setup for each respective architecture, which differ only in their random initialisation and data order during SGD sampling. We can thus isolate how strongly the resulting dataset depends on the particular random initialisation and order of data points used to train the model. The results of this experiment are shown in Table 4.

First, we observe – as expected given our annotation constraints – that model performance is 0.0EM on datasets created with the same respective model in the annotation loop. We observe however that a retrained model does not reliably perform as poorly on those samples. For example, BERT reaches as much as 20.5EM, whereas the initial model (Seed 1, used during annotation) has no correct answer and 0.0EM. We observed this effect repeatedly when re-running this experiment several times for other random re-initialisations. This demonstrates that random components in the model can substantially affect the adversarial annotation process. The evaluation furthermore serves as a baseline for subsequent model evaluations: this much of the performance range can be learned merely by retraining the same model. A possible takeaway for employing the model-in-the-loop annotation strategy in the future is to rely on ensembles of adversaries and reduce the dependency on one particular model instantiation.
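The retraining experiment varies only the random components of training; a minimal sketch of how one run is seeded is given below, where `train_squad_model` and `evaluate` are placeholders standing in for the training and EM/F1 evaluation routines, which are not shown here.

```python
import random

import numpy as np
import torch

def set_seed(seed: int) -> None:
    """Fix the random components (parameter initialisation and data-order shuffling)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

# Two otherwise identical runs differing only in their seed; the seed-1 model is the
# one used in the annotation loop, and both are evaluated on the same dev split.
# for seed in (1, 2):
#     set_seed(seed)
#     model = train_squad_model(architecture="bert", train_data=squad_train)
#     print(seed, evaluate(model, d_bert_dev))
```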

Evaluation (Test) Dataset, EM/F1 on: D_SQuAD, D_BiDAF, D_BERT, D_RoBERTa, D_DROP, D_NQ
BiDAF trained on D_SQuAD (10k): 40.6/54.6, 7.0/15.1, 5.3/12.8, 5.7/13.2, 4.5/9.3, 26.7/40.6
BiDAF trained on D_BiDAF: 12.1/22.1, 5.7/12.9, 6.4/13.6, 6.0/13.2, 6.1/12.0, 14.1/26.7
BiDAF trained on D_BERT: 9.9/18.8, 6.4/13.3, 8.5/15.6, 8.8/15.7, 8.3/14.5, 14.9/27.5
BiDAF trained on D_RoBERTa: 10.9/20.8, 6.6/13.8, 10.1/18.0, 9.7/16.7, 14.8/23.3, 13.3/26.0
BERT trained on D_SQuAD (10k): 70.5/83.6, 36.4/50.3, 15.0/26.5, 10.6/21.2, 20.0/31.3, 54.9/69.5
BERT trained on D_BiDAF: 67.9/81.6, 46.5/62.4, 37.5/49.0, 32.3/44.2, 41.1/51.5, 55.8/71.0
BERT trained on D_BERT: 60.9/75.2, 42.2/57.8, 36.4/46.6, 28.3/39.6, 35.7/44.4, 50.7/65.4
BERT trained on D_RoBERTa: 57.6/71.8, 36.8/50.9, 34.1/44.9, 31.0/41.7, 37.6/45.9, 48.2/63.8
RoBERTa trained on D_SQuAD (10k): 70.0/83.7, 39.4/55.4, 21.5/33.7, 11.1/22.1, 20.3/30.9, 48.0/64.8
RoBERTa trained on D_BiDAF: 65.0/80.4, 46.6/62.3, 38.9/50.8, 25.1/36.0, 40.0/51.3, 46.9/65.3
RoBERTa trained on D_BERT: 58.7/74.1, 42.5/58.0, 34.8/45.6, 24.7/34.6, 37.8/48.5, 42.7/60.4
RoBERTa trained on D_RoBERTa: 55.4/71.4, 37.9/53.5, 37.5/48.6, 28.2/38.9, 39.5/49.0, 38.8/57.9
Table 5: Training models on various datasets, each with 10,000 samples, and measuring their generalisation (EM/F1) to different evaluation datasets. Results in bold indicate the best result per model.
Evaluation (Test) Dataset, EM/F1 on: D_SQuAD, D_BiDAF, D_BERT, D_RoBERTa
BiDAF trained on D_SQuAD: 58.7/71.9, 0.0/5.5, 8.9/17.6, 8.3/17.0
BiDAF trained on D_SQuAD + D_BiDAF: 57.3/70.6, 14.9/25.8, 16.9/25.5, 15.3/24.2
BiDAF trained on D_SQuAD + D_BERT: 57.0/70.4, 16.3/26.5, 14.5/24.1, 14.7/24.1
BiDAF trained on D_SQuAD + D_RoBERTa: 55.9/69.6, 16.2/25.6, 17.3/26.2, 15.6/25.0
BERT trained on D_SQuAD: 70.7/84.0, 36.7/50.2, 0.0/5.3, 15.2/25.8
BERT trained on D_SQuAD + D_BiDAF: 74.5/85.9, 47.2/61.1, 33.7/43.6, 29.1/39.4
BERT trained on D_SQuAD + D_BERT: 74.3/85.8, 48.1/61.1, 37.8/47.3, 31.1/41.5
BERT trained on D_SQuAD + D_RoBERTa: 73.2/85.2, 47.3/60.5, 36.8/46.2, 30.1/39.7
RoBERTa trained on D_SQuAD: 74.1/86.8, 50.4/64.9, 31.9/44.1, 0.0/5.9
RoBERTa trained on D_SQuAD + D_BiDAF: 75.2/87.6, 56.3/71.2, 47.8/58.0, 31.3/42.8
RoBERTa trained on D_SQuAD + D_BERT: 76.2/88.0, 56.3/70.8, 48.3/58.2, 33.4/44.4
RoBERTa trained on D_SQuAD + D_RoBERTa: 75.1/87.5, 58.2/73.2, 52.8/62.7, 36.4/47.2
Table 6: Training models on SQuAD, as well as SQuAD combined with different adversarially created datasets. Results in bold indicate the best result per model.

4.2 Adversarial Generalisation

A potential problem with the focus on challenging questions is that they might all be very distinct from one another, leading to difficulties in learning to generalise from and to them. We next conduct a series of experiments in which we train on D_BiDAF, D_BERT and D_RoBERTa, and observe how well models can then learn to generalise to the respective test portions of these datasets. Table 5 shows the results, which support several observations.

First, one clear trend we observe across all training data setups is a negative performance progression as models are evaluated against datasets constructed with progressively stronger models in the loop. This trend holds for all but the BiDAF model, in each of the training configurations, and for each of the evaluation datasets. For example, RoBERTa trained on D_RoBERTa achieves 71.4, 53.5, 48.6 and 38.9 F1 when evaluated on D_SQuAD, D_BiDAF, D_BERT and D_RoBERTa, respectively.

Second, we observe that the BiDAF model is not able to generalise well to datasets constructed with a model in the loop, independent of its training setup. In particular, it is unable to learn from D_BiDAF, thus failing to overcome some of its own blind spots through adversarial training. Both when training only on D_BiDAF and when adding D_BiDAF to D_SQuAD during training (cf. Table 6), BiDAF performs poorly across all the adversarial datasets.

In contrast, BERT and RoBERTa are able to partially overcome their blind spots through training on data collected with a model in the annotation loop, and to a degree that far exceeds what one would expect from random retraining (cf. Table 4). For example, RoBERTa trained on D_RoBERTa reaches 38.9 F1 on D_RoBERTa, and this number further increases to 47.2 F1 when SQuAD is included during training (cf. Table 6). These observations suggest that there exists learnable structure among harder questions which some models can pick up, yet not all, as BiDAF fails to do so. The fact that even BERT can learn to generalise to D_RoBERTa, while BiDAF cannot even to D_BiDAF, suggests an inherent limitation in what the BiDAF model can learn from these new samples, compared to BERT and RoBERTa.

Next, we observe that training on data collected with a stronger model in the loop helps generalise to data collected with a weaker one, e.g. training on D_RoBERTa and testing on D_BiDAF. On the other hand, training on data collected with a weaker model also leads to generalisation towards data collected with a stronger one: for example, the baseline of RoBERTa trained on 10,000 SQuAD samples reaches 22.1 F1 on D_RoBERTa, whereas training RoBERTa on D_BiDAF and D_BERT bumps this number to 36.0 F1 and 34.6 F1, respectively. This suggests an encouraging takeaway for the model-in-the-loop annotation paradigm: even though a particular model chosen as adversary in the annotation loop may at some point fall behind more competitive state-of-the-art models, these future models can still use the data collected with the weaker model in the loop, and generalise better even to samples composed with the stronger model in the loop.

In Table 6 we show experimental results for the same models and training datasets, but now including SQuAD as additional training data. In this training setup we generally see improved generalisation to D_BiDAF, D_BERT and D_RoBERTa. Interestingly, the relative differences between D_BiDAF, D_BERT and D_RoBERTa as training sets used in conjunction with SQuAD are now much diminished, and especially D_RoBERTa as (part of the) training set now generalises substantially better. RoBERTa achieves the strongest results on any of the D_BiDAF, D_BERT and D_RoBERTa evaluation sets, in particular when trained on D_SQuAD + D_RoBERTa. This stands in contrast to the previous results in Table 5, where training on D_BiDAF in several cases led to better generalisation than training on D_RoBERTa. A possible explanation for this observation is that training on D_RoBERTa alone leads to a larger degree of adversarial overfitting than training on D_BiDAF, and the inclusion of a large number of standard SQuAD training samples can mitigate this effect.

Finally, we identify a risk of datasets constructed with weaker models in the loop becoming outdated. For example, RoBERTa achieves 58.2 EM / 73.2 F1 on D_BiDAF, in contrast to 0.0 EM / 5.5 F1 for BiDAF – which is not far from the non-expert human performance of 62.6 EM / 78.5 F1.

4.3 Generalisation to Non-Adversarial Data

Compared to standard annotation, the model-in-the-loop approach generally results in a new question distribution. Consequently, models trained on adversarially composed questions might not be able to generalise to standard (“easy”) questions, thus limiting the usefulness of the resulting data resource in practice. To what extent do models trained on model-in-the-loop questions generalise differently to standard (“easy”) questions, compared to training on standard (“easy”) questions?

To measure this we further train each of our three models on either D_BiDAF, D_BERT, or D_RoBERTa and test on D_SQuAD, with results in the D_SQuAD columns of Table 5. For comparison, the models are also trained on 10,000 SQuAD1.1 samples (referred to as D_SQuAD (10k)) chosen from the same passages as the adversarial datasets, thus eliminating size and paragraph choice as potential confounding factors. The models are tuned for Exact Match (EM) on our held-out validation data derived from the split SQuAD1.1 validation set after applying the majority vote (D_SQuAD-dev). Note that, for the reasons described earlier, performance values are lower on the majority-vote dataset than on the unaltered one, but this importantly enables us to make direct comparisons across datasets.

Remarkably, neither BERT nor RoBERTa shows a substantial drop when trained on D_BiDAF compared to training on SQuAD data (2.0 F1 and 3.3 F1, respectively): training these models on a dataset with a weaker model in the loop still leads to strong generalisation, even to data from the original SQuAD distribution on which all models in the loop are trained. BiDAF, on the other hand, fails to learn such information from the adversarially collected data, and drops by more than 30 F1 for each of the new training sets, compared to training on SQuAD.

We furthermore observe a gradual decrease in generalisation to SQuAD when moving from training on D_BiDAF towards training on D_RoBERTa. This suggests that the stronger the model used in the annotation loop, the more dissimilar the resulting data distribution becomes from the original SQuAD distribution. We will later find further support for this explanation in a qualitative analysis (Section 5). It may, however, also be due to a limitation of BERT and RoBERTa – similar to BiDAF – in learning from a data distribution designed to beat these models; an even stronger model might learn more, e.g. from D_RoBERTa.

4.4 Generalisation to DROP and NaturalQuestions

Finally, we investigate to what extent models can transfer skills learned on datasets created with a model in the loop to other datasets, concretely DROP and NaturalQuestions. In this experiment we select the subsets of DROP and NaturalQuestions which align with the structural constraints of SQuAD to ensure a like-for-like analysis. Specifically, we only consider questions in DROP where the answer is a span in the passage and where there is only one candidate answer. For NaturalQuestions, we consider all non-tabular long answers as passages, remove HTML tags and use the short answer as the extracted span. We apply this filtering to the validation sets of both datasets and then split them, stratifying by passage (as we did for D_SQuAD), which yields validation and test sets for both DROP and NaturalQuestions. We denote these filtered datasets as D_DROP and D_NQ for clarity and distinction from their unfiltered versions. We consider the same models and training datasets as before, but tune on the respective validation portions of D_DROP and D_NQ. Table 5 shows the results of these experiments in the respective D_DROP and D_NQ columns.
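A sketch of the DROP filtering step is given below, assuming examples have already been flattened into records with passage, question and answer fields loosely following the DROP release format; the field names are illustrative, and the NaturalQuestions filtering (non-tabular long answers, HTML stripping) is handled analogously.

```python
from typing import Dict, List

def filter_drop_examples(examples: List[Dict]) -> List[Dict]:
    """Keep only questions whose gold answer is a single span occurring in the passage."""
    kept = []
    for ex in examples:
        spans = ex["answer"].get("spans", [])
        if len(spans) == 1 and spans[0] in ex["passage"]:
            kept.append({
                "passage": ex["passage"],
                "question": ex["question"],
                "answer": spans[0],
            })
    return kept
```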

First, we observe clear generalisation improvements towards D_DROP across all models when using any of D_BiDAF, D_BERT, or D_RoBERTa for training, compared to training on D_SQuAD (10k). That is, including a model in the loop for the training dataset leads to improved transfer towards D_DROP. Note that the DROP dataset also makes use of a BiDAF model in the loop during annotation; these results are in line with our prior observations when testing the same setups on D_BiDAF, D_BERT and D_RoBERTa, compared to training on D_SQuAD (10k).

Second, we observe overall strong transfer results towards D_NQ: up to 71.0 F1 for a BERT model trained on D_BiDAF. Note that this result is similar to, and even slightly improves over, training with SQuAD data of the same size. That is, relative to training on SQuAD data, training on adversarially collected data does not impede generalisation to D_NQ, which was created without a model in the annotation loop. We then, however, see a similar negative performance progression as observed before when testing on D_SQuAD: the stronger the model in the annotation loop of the training dataset, the lower the test accuracy on data from a distribution composed without a model in the loop.

Figure 4: Comparison of comprehension types of the questions in different datasets. The label types are neither mutually exclusive nor comprehensive. Values above columns indicate excess of the axis range.

5 Qualitative Analysis

Having applied the general model-in-the-loop methodology on models of varying strength, we next perform a qualitative comparison of the nature of the resulting questions. As reference points we also include the original SQuAD questions, as well as DROP and NaturalQuestions in this comparison: these datasets are both constructed to overcome limitations in SQuAD and have subsets which overlap sufficiently with SQuAD to make analysis possible. Specifically, we seek to understand the qualitative differences in terms of reading comprehension challenges posed by the questions in each of these datasets.

5.1 Comprehension Requirements

There exists a variety of prior work which seeks to understand the types of knowledge, comprehension skills or types of reasoning required to answer questions based on text Rajpurkar et al. (2016); Clark et al. (2018); Sugawara et al. (2019); Dua et al. (2019); Dasigi et al. (2019); we are, however, unaware of any commonly accepted formalism. We take inspiration from these works and develop our own taxonomy of comprehension requirements suited to the datasets being analysed; see Appendix D for a detailed breakdown and examples from our annotation catalogue. The labels are neither mutually exclusive nor fully comprehensive; developing such a catalogue is itself very challenging. Instead, we focus on capturing the most salient characteristics of each given question and assign it up to three labels from our catalogue. In total, we analyse 100 samples from the validation set of each dataset; Fig. 4 displays the results of this analysis.

5.2 Observations

An initial observation is that the majority (57%) of answers to SQuAD questions are stated explicitly, without comprehension requirements beyond the literal level. This number decreases substantially (to as low as 8%) for the model-in-the-loop datasets derived from SQuAD, and also for D_DROP, yet 42% of questions in D_NQ share this property. In contrast to SQuAD, the model-in-the-loop questions generally tend to involve more paraphrasing. They also require more external knowledge and multi-hop inference (beyond co-reference resolution), with an increasing trend for stronger models used in the annotation loop. Model-in-the-loop questions further fan out into a variety of small, but non-negligible, proportions of more specific types of inference required for comprehension, e.g. spatial or temporal inference (both going beyond explicitly stated spatial or temporal information) – SQuAD rarely requires these at all. Some of these more particular inference types are common features of the other two datasets, in particular comparative questions for DROP (60%) and, to a small extent, NaturalQuestions. Interestingly, D_BiDAF possesses the largest proportion of comparison questions (11%) among our model-in-the-loop datasets, whereas D_BERT and D_RoBERTa contain only 1% and 3% of such questions. This offers an explanation for our previous observation in Table 5, where models trained on D_BiDAF outperformed those trained on D_BERT or D_RoBERTa when evaluated on D_DROP. It is likely that BiDAF as a model in the loop is weaker than BERT and RoBERTa at comparative questions, as evidenced by the results in Table 5, with BiDAF reaching 9.3 F1 and RoBERTa reaching 30.9 F1 on D_DROP (when trained on D_SQuAD (10k)).

The distribution of NaturalQuestions contains elements of both the SQuAD distribution and the model-in-the-loop distributions, which offers a potential explanation for the strong performance on D_NQ of models trained on D_SQuAD and on D_BiDAF. Finally, the gradual shift of the question distribution away from both SQuAD and NaturalQuestions as the model-in-the-loop strength increases reflects our prior observations on the decreasing SQuAD and NaturalQuestions performance of models trained on datasets with progressively stronger models in the annotation loop.

6 Discussion and Conclusions

We have in this work investigated an RC annotation paradigm which includes a model in the loop that has to be “beaten” by the annotator. Applying this approach with a series of progressively stronger RC models in the annotation loop, we arrived at three separate RC datasets, graduated by the difficulty of the model adversary. Based on this dataset series we investigated several questions surrounding the annotation paradigm, in particular whether such datasets grow outdated as stronger models emerge, and about their generalisation to standard (non-adversarially collected) questions. We found that stronger RC models can still learn from data collected with a weak adversary in the loop, and their generalisation improves even on datasets collected with a very strong adversary. Models trained on data collected with a model in the loop furthermore generalise well towards non-adversarially collected data, both on SQuAD and on NaturalQuestions, yet we observe a slow deterioration with progressively stronger adversaries.

We see our work as a contribution towards the emerging paradigm of model-in-the-loop annotation, both in RC and potentially other tasks. While the scope of this paper is focused on RC, with SQuAD as the original dataset used to train model adversaries, we see no reason in principle why similar findings would not be made for other tasks using the same annotation paradigm, when crowdsourcing the creation of challenging samples with a current model in the loop. We would expect the insights and benefits conveyed by model-in-the-loop annotation to be greatest on mature datasets where models exceed human performance: here the resulting data provides a magnifying glass on model performance, focused in particular on samples which models struggle on. On the other hand, applying the method on datasets where performance increments have not plateaued yet would likely result in a more similar distribution to the original data, which is challenging to models a priori. We hope that the series of experiments on replication, transfer between datasets collected with model adversaries of different strength, as well as our findings regarding generalisation to non-adversarially collected data, can support and inform future research and annotation efforts following the model-in-the-loop data collection paradigm.

References

  • S. R. Bowman, G. Angeli, and C. D. Manning (2015) A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), Cited by: §1.
  • D. Chen, J. Bolton, and C. D. Manning (2016) A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, pp. 2358–2367. External Links: Link, Document Cited by: §2.
  • M. Chen, M. D’Arcy, A. Liu, J. Fernandez, and D. Downey (2019) CODAH: an adversarially-authored question answering dataset for common sense. In Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP, pp. 63–69. Cited by: §1, §2.
  • E. Choi, H. He, M. Iyyer, M. Yatskar, W. Yih, Y. Choi, P. Liang, and L. Zettlemoyer (2018) QuAC: question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 2174–2184. External Links: Link, Document Cited by: §2.
  • P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord (2018) Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge. arXiv preprint arXiv:1803.05457. Cited by: §5.1.
  • P. Dasigi, N. F. Liu, A. Marasović, N. A. Smith, and M. Gardner (2019) Quoref: a reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 5925–5932. External Links: Link, Document Cited by: §1, §2, §2, §5.1.
  • J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, Cited by: §1.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171–4186. External Links: Link, Document Cited by: §1, §3.2, §3.2.
  • D. Dua, Y. Wang, P. Dasigi, G. Stanovsky, S. Singh, and M. Gardner (2019) DROP: a reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 2368–2378. External Links: Link, Document Cited by: §1, §1, §2, §2, §2, §5.1.
  • A. Ettinger, S. Rao, H. Daumé III, and E. M. Bender (2017) Towards linguistically generalizable NLP systems: A workshop and shared task. CoRR abs/1711.01505. External Links: Link, 1711.01505 Cited by: §1.
  • M. Gardner, J. Grus, M. Neumann, O. Tafjord, P. Dasigi, N. F. Liu, M. Peters, M. Schmitz, and L. S. Zettlemoyer (2017) AllenNLP: a deep semantic natural language processing platform. External Links: arXiv:1803.07640 Cited by: §3.2.
  • S. Gururangan, S. Swayamdipta, O. Levy, R. Schwartz, S. Bowman, and N. A. Smith (2018) Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana, pp. 107–112. External Links: Link, Document Cited by: §1.
  • K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom (2015) Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.), pp. 1693–1701. External Links: Link Cited by: §1.
  • R. Jia and P. Liang (2017) Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp. 2021–2031. External Links: Link, Document Cited by: §1.
  • M. Joshi, E. Choi, D. S. Weld, and L. Zettlemoyer (2017) TriviaQA: a large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada. Cited by: §1.
  • D. Kaushik, E. Hovy, and Z. C. Lipton (2019) Learning the difference that makes a difference with counterfactually-augmented data. arXiv preprint arXiv:1909.12434. Cited by: §2.
  • D. Kaushik and Z. C. Lipton (2018) How much reading does reading comprehension require? a critical investigation of popular benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 5010–5015. External Links: Link, Document Cited by: §3.2.
  • T. Kočiský, J. Schwarz, P. Blunsom, C. Dyer, K. M. Hermann, G. Melis, and E. Grefenstette (2018) The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics 6, pp. 317–328. External Links: Link, Document Cited by: §2.
  • T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M. Chang, A. Dai, J. Uszkoreit, Q. Le, and S. Petrov (2019) Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics. Cited by: §1, §2.
  • Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: A robustly optimized BERT pretraining approach. CoRR abs/1907.11692. External Links: Link, 1907.11692 Cited by: §3.2, §3.2.
  • M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz (1993) Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics 19 (2), pp. 313–330. External Links: Link Cited by: §1.
  • S. Min, V. Zhong, R. Socher, and C. Xiong (2018) Efficient and robust question answering from minimal context over documents. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 1725–1735. External Links: Link, Document Cited by: §3.4.
  • M. Mintz, S. Bills, R. Snow, and D. Jurafsky (2009) Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, Suntec, Singapore, pp. 1003–1011. External Links: Link Cited by: §1.
  • T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng (2016) MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv preprint arXiv:1611.09268. External Links: Link Cited by: §2.
  • Y. Nie, A. Williams, E. Dinan, M. Bansal, J. Weston, and D. Kiela (2019) Adversarial NLI: a new benchmark for natural language understanding. External Links: 1910.14599 Cited by: §1, §2.
  • P. Rajpurkar, R. Jia, and P. Liang (2018) Know what you don’t know: unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Melbourne, Australia, pp. 784–789. External Links: Link, Document Cited by: §2.
  • P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang (2016) SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 2383–2392. External Links: Link, Document Cited by: §1, §2, §3.2, §5.1.
  • S. Reddy, D. Chen, and C. D. Manning (2019) CoQA: a conversational question answering challenge. Transactions of the Association for Computational Linguistics 7, pp. 249–266. External Links: Link, Document Cited by: §2.
  • M. Richardson, C. J.C. Burges, and E. Renshaw (2013) MCTest: a challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA, pp. 193–203. External Links: Link Cited by: footnote 1.
  • M. Saeidi, M. Bartolo, P. Lewis, S. Singh, T. Rocktäschel, M. Sheldon, G. Bouchard, and S. Riedel (2018) Interpretation of natural language rules in conversational machine reading. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 2087–2097. External Links: Link, Document Cited by: §2.
  • R. Schwartz, M. Sap, I. Konstas, L. Zilles, Y. Choi, and N. A. Smith (2017) The effect of different writing tasks on linguistic style: a case study of the ROC story cloze task. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, pp. 15–25. External Links: Link, Document Cited by: §1.
  • M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi (2017) Bidirectional attention flow for machine comprehension. In The International Conference on Learning Representations (ICLR), Cited by: §3.2.
  • R. Snow, B. O’Connor, D. Jurafsky, and A. Ng (2008) Cheap and fast – but is it good? evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, Honolulu, Hawaii, pp. 254–263. External Links: Link Cited by: §1.
  • S. Sugawara, K. Inui, S. Sekine, and A. Aizawa (2018) What makes reading comprehension questions easier?. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 4208–4219. External Links: Link, Document Cited by: §2.
  • S. Sugawara, P. Stenetorp, K. Inui, and A. Aizawa (2019) Assessing the benchmarking capacity of machine reading comprehension datasets. arXiv preprint arXiv:1911.09241. Cited by: §5.1.
  • A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman (2016) NewsQA: a machine comprehension dataset. arXiv preprint arXiv:1611.09830. Cited by: §2.
  • E. Wallace, P. Rodriguez, S. Feng, I. Yamada, and J. Boyd-Graber (2019) Trick me if you can: human-in-the-loop generation of adversarial examples for question answering. Transactions of the Association for Computational Linguistics 7, pp. 387–401. External Links: Link, Document Cited by: §2.
  • D. Weissenborn, G. Wiese, and L. Seiffe (2017) Making neural QA as simple as possible but not simpler. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, pp. 271–280. External Links: Link, Document Cited by: §2, §3.4.
  • J. Welbl, P. Stenetorp, and S. Riedel (2018) Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics 6, pp. 287–302. External Links: Link, Document Cited by: §2.
  • T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew (2019) HuggingFace’s transformers: state-of-the-art natural language processing. ArXiv abs/1910.03771. Cited by: §3.2.
  • Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. Cohen, R. Salakhutdinov, and C. D. Manning (2018) HotpotQA: a dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 2369–2380. External Links: Link, Document Cited by: §1, §2, §2.
  • R. Zellers, Y. Bisk, R. Schwartz, and Y. Choi (2018) SWAG: a large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 93–104. External Links: Link, Document Cited by: §1, §2.
  • R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi (2019) HellaSwag: can a machine really finish your sentence?. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp. 4791–4800. External Links: Link, Document Cited by: §2.
  • S. Zhang, X. Liu, J. Liu, J. Gao, K. Duh, and B. Van Durme (2018) ReCoRD: bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885. Cited by: §2.

Appendix A Additional Dataset Statistics

Question statistics

In Figure 6 we analyse question lengths across SQuAD1.1 and compare them to questions constructed with different models in the annotation loop. While the means of the distributions are similar, there is more question length variability when using a model in the loop. We also perform an analysis of question types by wh-word, as described earlier (see Figure 5). This is displayed in further detail using sunburst plots of the first three question tokens for each of D_SQuAD, D_BiDAF, D_BERT and D_RoBERTa (cf. Figures 10-13). We observe a general trend towards more diverse questions with increasing model-in-the-loop strength.

Figure 5: Analysis of question types across datasets.
Figure 6: Question length distribution across datasets.
Figure 7: Analysis of answer types across datasets.
Figure 8: Answer length distribution across datasets.

Answer statistics

Figure 8 allows for further analysis of answer lengths across datasets. We observe that answers for all datasets constructed with a model in the loop tend to be longer than in SQuAD. There is furthermore a trend of increasing answer length and variability with increasing model-in-the-loop strength. We show an analysis of answer types in Figure 7.

Figure 9: Worker distribution, together with the number of manually validated QA pairs per worker.

Appendix B Annotation Interface Details

We have three key steps in the dataset construction process: i) training and qualification, ii) “Beat the AI” annotation and iii) answer validation.

Training and Qualification

This is a combined training and qualification task; a screenshot of the interface is shown in Figure 14. The first step involves a set of five assignments requiring the worker to demonstrate the ability to generate questions and indicate answers by highlighting the corresponding spans in the passage. Once complete, the worker is shown a sample “Beat the AI” HIT for a pre-determined passage, which helps facilitate manual validation. In earlier experiments these two steps were presented as separate interfaces; however, this created a bottleneck between the two layers of qualification and slowed down annotation considerably. In total, 1,386 workers completed this task, with 752 being assigned the qualification.

“Beat the AI” Annotation

The “Beat the AI” question generation HIT presents workers with a randomly selected passage from SQuAD1.1, about which they are expected to generate questions and provide answers. This data is sent to the corresponding model-in-the-loop API, running on AWS infrastructure and primarily consisting of a load balancer and a t2.xlarge EC2 instance with the T2/T3 Unlimited setting enabled to allow high sustained CPU performance during annotation runs. The model API returns a prediction, which is scored against the worker’s answer to determine whether the worker has successfully managed to “beat” the model. Only questions which the model fails to answer are considered valid; a screenshot of this interface is shown in Figure 15. Workers are asked to submit at least three valid questions where possible; fewer are also accepted, in particular for very short passages. A sample of each worker’s HITs is manually validated; those who do not satisfy the question quality requirements have their qualification revoked and all their annotated data discarded. This was the case for 99 workers. Worker validation distributions are shown in Figure 9.

Answer Validation

The answer validation interface (cf. Figure 16) is used to validate the answerability of the validation and test sets for each different model used in the annotation loop. Every previously collected question generation HIT from these dataset parts, which had not been discarded during manual validation, is submitted to at least 3 distinct annotators. Workers are shown the passage and previously generated questions and are asked to highlight the answer in the passage. In a post-processing step, only questions with at least 1 valid matching answer out of 3 are finally retained.

Appendix C Examples of Annotated Questions

In Table 7 we provide a few examples of the questions collected with each different model in the annotation loop.

Appendix D Catalogue of Comprehension Requirements

We give a description for each of the items in our catalogue of comprehension requirements in Table 8, accompanied with an example for illustration. These are the labels used for the qualitative analysis performed in Section 5.

Figures 10–13: Question sunburst plots for each of the four datasets.
Figure 14: Training and qualification interface. Workers are first expected to familiarise themselves with the interface and then complete a sample “Beat the AI” task for validation.
Figure 15: “Beat the AI” question generation interface. Human annotators are tasked with asking questions about a provided passage which the model in the loop fails to answer correctly.
Figure 16: Answer validation interface. Workers are expected to provide answers to questions generated in the “Beat the AI” task. The additional answers are used to determine question answerability and non-expert human performance.
Model-in-the-loop Passage Question
BiDAF […] the United Methodist Church established and is affiliated with around one hundred colleges and universities in the United States, including Syracuse University, Boston University, Emory University, Duke University, Drew University, University of Denver, University of Evansville, and Southern Methodist University. Most are members of the International Association of Methodist-related Schools, Colleges, and Universities. The church operates three hundred sixty schools and institutions overseas. Where does the United Methodist Church have more educational affiliates, in the US or overseas?
BiDAF Agreement was achieved on fourteen points out of fifteen, the exception being the nature of the Eucharist - the sacrament of the Lord’s Supper-an issue crucial to Luther. The Eucharist was one of how many issues debated by those in attendance of the meeting?
BiDAF In a purely capitalist mode of production (i.e. where professional and labor organizations cannot limit the number of workers) the workers wages will not be controlled by these organizations, or by the employer, but rather by the market. Wages work in the same way as prices for any other good. Thus, wages can be considered as a function of market price of skill. And therefore, inequality is driven by this price. What determines worker wages?
BERT Jochi died in 1226, during his father’s lifetime. Some scholars, notably Ratchnevsky, have commented on the possibility that Jochi was secretly poisoned by an order from Genghis Khan. Rashid al-Din reports that the great Khan sent for his sons in the spring of 1223, and while his brothers heeded the order, Jochi remained in Khorasan. Juzjani suggests that the disagreement arose from a quarrel between Jochi and his brothers in the siege of Urgench. Who went to Khan after his order in 1223?
BERT In the Sandgate area, to the east of the city and beside the river, resided the close-knit community of keelmen and their families. They were so called because they worked on the keels, boats that were used to transfer coal from the river banks to the waiting colliers, for export to London and elsewhere. In the 1630s about 7,000 out of 20,000 inhabitants of Newcastle died of plague […] Where did almost half the people die?
BERT The Grainger Market replaced an earlier market originally built in 1808 called the Butcher Market. Which market came first?
RoBERTa […] Luther developed his original four-stanza psalm paraphrase into a five-stanza Reformation hymn […]. Luther’s reformed hymn did not feature stanzas of what quantity?
RoBERTa Aken, adopted by Mexican movie actress Lupe Mayorga, grew up in the neighboring town of Madera and his song chronicled the hardships faced by the migrant farm workers he saw as a child. When did Aken encounter the topic of his song?
RoBERTa Newton’s leading receivers were tight end Greg Olsen, who caught a career-high 77 passes for 1,104 yards and seven touchdowns, and wide receiver Ted Ginn, Jr., who caught 44 passes for 739 yards and 10 touchdowns; Ginn also rushed for 60 yards and returned 27 punts for 277 yards. Other key receivers included veteran Jerricho Cotchery (39 receptions for 485 yards), rookie Devin Funchess (31 receptions for 473 yards and five touchdowns), and second-year receiver Corey Brown (31 receptions for 447 yards). Who caught the second most passes?

Table 7: Examples of questions collected using different models in the annotation loop. The annotated answer is highlighted in yellow.
Type Description Passage Question
Explicit Answer stated nearly word-for-word in the passage, using the same phrasing as the question. Sayyid Abul Ala Maududi was an important early twentieth-century figure in the Islamic revival in India […] Who was an important early figure in the Islamic revival in India?
Paraphrasing Question paraphrases parts of the passage, generally relying on context-specific synonyms. Seamans’ establishment of an ad-hoc committee […] Who created the ad-hoc committee?
External Knowledge The question cannot be answered without access to sources of knowledge beyond the passage. […] the 1988 film noir thriller Stormy Monday, directed by Mike Figgis and starring Tommy Lee Jones, Melanie Griffith, Sting and Sean Bean. Which musician was featured in the film Stormy Monday?
Co-reference Requires resolution of a relationship between two distinct words referring to the same entity. Tamara de Lempicka was a famous artist born in Warsaw. […] Better than anyone else she represented the Art Deco style in painting and art […] Through what creations did Lempicka express a kind of art popular after WWI?
Multi-Hop Requires more than one step of inference, often across multiple sentences. […] and in 1916 married a Polish lawyer Tadeusz Lempicki. Better than anyone else she represented the Art Deco style in painting and art […] Into what family did the artist who represented the Art Deco style marry?
Comparative Requires a comparison between two or more attributes (e.g. smaller than, last). The previous chairs were Rajendra K. Pachauri, elected in May 2002; Robert Watson in 1997; and Bert Bolin in 1988. Who was elected earlier, Robert Watson or Bert Bolin?
Numeric Any numeric reasoning (e.g. some form of calculation is required to arrive at the correct answer). […] it has been estimated that Africans will make up at least 30% of the delegates at the 2012 General Conference, and it is also possible that 40% of the delegates will be from outside […] From which continent is it estimated that members will make up nearly a third of participants in 2012?
Negation Requires interpreting a single or multiple negations. Subordinate to the General Conference are the jurisdictional and central conferences which also meet every four years. What is not in charge?
Filtering Narrowing down a set of answers to select one by some particular distinguishing feature. […] was engaged with Johannes Bugenhagen, Justus Jonas, Johannes Apel, Philipp Melanchthon and Lucas Cranach the Elder and his wife as witnesses […] Whose partner could testify to the couple’s agreement to marry?
Temporal Requires an understanding of time and change, and related aspects. Goes beyond directly stated answers to When questions or external knowledge. In 2010 the Amazon rainforest experienced another severe drought, in some ways more extreme than the 2005 drought. What occurred in 2005 and then again five years later?
Spatial Requires an understanding of the concept of space, location, or proximity. Goes beyond finding directly stated answers to Where questions. Warsaw lies in east-central Poland about 300 km (190 mi) from the Carpathian Mountains and about 260 km (160 mi) from the Baltic Sea, 523 km (325 mi) east of Berlin, Germany. Is Warsaw closer to the Baltic Sea or Berlin, Germany?
Inductive A particular case is addressed in the passage but inferring the answer requires generalisation to a broader category. […] frequently evoked by particular events in his life and the unfolding Reformation. This behavior started with his learning of the execution of Johann Esch and Heinrich Voes, the first individuals to be martyred by the Roman Catholic Church for Lutheran views […] How did the Roman Catholic Church deal with non-believers?
Implicit Builds on information implied in the passage and does not otherwise require any of the above types of reasoning. Despite the disagreements on the Eucharist, the Marburg Colloquy paved the way for the signing in 1530 of the Augsburg Confession, and for the […] What could not keep the Augsburg confession from being signed?

Table 8: Comprehension requirement definitions and examples from adversarial model-in-the-loop annotated RC datasets. Note that these types are not mutually exclusive. The annotated answer is highlighted in yellow.