1 Introduction
Significant advances in Question Answering (QA) have recently been achieved by pretraining deep transformer language models on large amounts of unlabeled text data, and finetuning the pretrained models on hand-labeled QA datasets, e.g. with BERT (Devlin et al., 2018).
Language modeling is, however, just one example of how an auxiliary prediction task can be constructed from widely available natural text, namely by masking some words from each passage and training the model to predict them. It seems plausible that other auxiliary tasks might exist that are better suited for QA, but can still be constructed from widely available natural text. It also seems intuitive that such auxiliary tasks will be more helpful the closer they are to the particular QA task we are attempting to solve.
| Input (C) | … in 1903, boston participated in the first modern world series, going up against the pittsburgh pirates … |
| (1) C ⇒ A | 1903 |
| (2) C, A ⇒ Q | when did the red sox first go to the world series |
| (3) C, Q ⇒ A′ | 1903 |
| (4) A′ =? A | Yes |

Table 1: Illustration of the proposed auxiliary tasks: from an unlabeled passage C we extract an answer A, generate a question Q, re-answer the question to obtain A′, and keep the example only if A and A′ match.
Based on this intuition we construct auxiliary tasks for QA, generating millions of synthetic question-answer-context triples from unlabeled passages of text, pretraining a model on these examples, and finally finetuning on a particular labeled dataset. Our auxiliary tasks are illustrated in Table 1.
For a given passage $C$, we sample an extractive short answer $A$ (Step (1) in Table 1). In Step (2), we generate a question $Q$ conditioned on $C$ and $A$, then (Step (3)) predict the extractive answer $A'$ conditioned on $C$ and $Q$. If $A$ and $A'$ match we finally emit $(C, Q, A)$ as a new synthetic training example (Step (4)). We train a separate model on labeled QA data for each of the first three steps, and then apply the models in sequence on a large number of unlabeled text passages. We show that pretraining on synthetic data generated through this procedure gives significant improvements on two challenging datasets, SQuAD2 (Rajpurkar et al., 2018) and NQ (Kwiatkowski et al., 2019), achieving a new state of the art on the latter.
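As a minimal sketch of this procedure (with hypothetical `extract_answers`, `generate_question`, and `answer_question` wrappers standing in for the three trained models, not our actual implementation), the pipeline is:

```python
def generate_synthetic_examples(passages, extract_answers, generate_question,
                                answer_question):
    """Illustrative roundtrip-consistency pipeline (hypothetical model wrappers).

    extract_answers(c)      -> candidate answer spans, approximating p(a|c)
    generate_question(c, a) -> question string,        approximating p(q|c,a)
    answer_question(c, q)   -> answer span,            approximating p(a|c,q)
    """
    for c in passages:                        # unlabeled text passages
        for a in extract_answers(c):          # Step (1): sample answer A from C
            q = generate_question(c, a)       # Step (2): generate Q from C, A
            a_prime = answer_question(c, q)   # Step (3): re-answer Q given C
            if a_prime == a:                  # Step (4): roundtrip consistency
                yield (c, q, a)               # emit a synthetic training triple
```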
2 Related Work
Question generation is a well-studied task in its own right (Heilman and Smith, 2010; Du et al., 2017; Du and Cardie, 2018). Yang et al. (2017) and Dhingra et al. (2018) both use generated question-answer pairs to improve a QA system, showing large improvements in low-resource settings with few gold labeled examples. Validating and improving the accuracy of these generated QA pairs, however, is relatively unexplored.
In machine translation, modeling consistency with dual learning (He et al., 2016) or back-translation (Sennrich et al., 2016) across both translation directions improves the quality of translation models. Back-translation, which adds synthetically generated parallel data as training examples, was an inspiration for this work, and has led to state-of-the-art results in both the supervised (Edunov et al., 2018) and the unsupervised settings (Lample et al., 2018).
Lewis and Fan (2019) model the joint distribution of questions and answers given a context and use this model directly, whereas our work uses generative models to generate synthetic data to be used for pretraining. Combining these two approaches could be an area of fruitful future work.
3 Model
Given a dataset of contexts, questions, and answers $\{(c^{(i)}, q^{(i)}, a^{(i)}) : i = 1, \ldots, N\}$, we train three models: (1) answer extraction, $p(a \mid c; \theta_A)$; (2) question generation, $p(q \mid c, a; \theta_Q)$; and (3) question answering, $p(a \mid c, q; \theta_{A'})$.
We use BERT (Devlin et al., 2018)¹ to model each of these distributions. Inputs to each of these models are fixed-length sequences of wordpieces, listing the tokenized question (if one is available) followed by the context $c$. The answer extraction model is detailed in §3.1 and two variants of question generation models in §3.2 and §3.3. The question answering model follows Alberti et al. (2019).

¹ Some experiments use a variant of BERT that masks out whole words at training time, similar to Sun et al. (2019). See https://github.com/google-research/bert for both the original and whole word masked versions of BERT.
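As a minimal sketch of this input packing, the snippet below builds the fixed-length wordpiece sequence; `tokenizer` stands for any wordpiece tokenizer exposing `tokenize` and `convert_tokens_to_ids` (names are illustrative), and the [CLS]/[SEP] layout follows the standard BERT convention.

```python
def pack_inputs(tokenizer, context, question=None, max_len=512):
    """Pack an optional question and a context into fixed-length BERT inputs."""
    q_tokens = tokenizer.tokenize(question) if question else []
    c_tokens = tokenizer.tokenize(context)
    tokens = ["[CLS]"] + q_tokens + ["[SEP]"] + c_tokens + ["[SEP]"]
    # segment 0 for the question part, segment 1 for the context part
    type_ids = [0] * (len(q_tokens) + 2) + [1] * (len(c_tokens) + 1)
    ids = tokenizer.convert_tokens_to_ids(tokens)[:max_len]
    type_ids = type_ids[:max_len]
    pad = max_len - len(ids)                 # pad to the fixed sequence length
    return ids + [0] * pad, type_ids + [0] * pad
```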
3.1 Question (Un)Conditional Extractive QA
We define a question-unconditional extractive answer model $p(a \mid c; \theta_A)$ and a question-conditional extractive answer model $p(a \mid c, q; \theta_{A'})$ as follows:

$$p(a \mid c; \theta_A) = \frac{e^{f_J(a, c; \theta_A)}}{\sum_{a''} e^{f_J(a'', c; \theta_A)}}, \qquad p(a \mid c, q; \theta_{A'}) = \frac{e^{f_I(a, c, q; \theta_{A'})}}{\sum_{a''} e^{f_I(a'', c, q; \theta_{A'})}}$$

where $a$, $a''$ are defined to be token spans over $c$. For $p(a \mid c; \theta_A)$, $a$ and $a''$ are constrained to be of length up to $L_A$, set to 32 word piece tokens. The key difference between the two expressions is that $f_I$ scores the start and the end of each span independently, while $f_J$ scores them jointly.

Specifically we define $f_J$ and $f_I$ to be transformations of the final token representations computed by a BERT model:

$$f_J(a, c; \theta_A) = \mathrm{MLP}_J\big(\mathrm{CONCAT}(\mathrm{BERT}(c)[s], \mathrm{BERT}(c)[e])\big)$$
$$f_I(a, c, q; \theta_{A'}) = \mathrm{AFF}_I(\mathrm{BERT}(q, c)[s]) + \mathrm{AFF}_I(\mathrm{BERT}(q, c)[e])$$

Here $h$ is the hidden representation dimension, $a = (s, e)$ is the answer span, $\mathrm{BERT}(t)[i] \in \mathbb{R}^h$ is the BERT representation of the $i$'th token in token sequence $t$, $\mathrm{MLP}_J$ is a multi-layer perceptron with a single hidden layer, and $\mathrm{AFF}_I$ is an affine transformation. We found it was critical to model span start and end points jointly in $p(a \mid c; \theta_A)$ because, when the question is not given, there are usually multiple acceptable answers for a given context, so that the start point of an answer span cannot be determined separately from the end point.
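To make the two heads concrete, here is a minimal PyTorch sketch; the dimensions, class names, and the use of two separate start/end projections in the independent head are our illustrative choices, not the exact implementation.

```python
import torch
import torch.nn as nn

class JointSpanScorer(nn.Module):
    """f_J-style head: scores a span (s, e) jointly from its two endpoints."""
    def __init__(self, h):
        super().__init__()
        # single-hidden-layer MLP over the concatenated endpoint representations
        self.mlp = nn.Sequential(nn.Linear(2 * h, h), nn.ReLU(), nn.Linear(h, 1))

    def forward(self, reps, s, e):
        # reps: [seq_len, h] final BERT token representations of c
        return self.mlp(torch.cat([reps[s], reps[e]], dim=-1)).squeeze(-1)

class IndependentSpanScorer(nn.Module):
    """f_I-style head: start and end are scored independently and summed."""
    def __init__(self, h):
        super().__init__()
        self.start = nn.Linear(h, 1)   # affine score for the start token
        self.end = nn.Linear(h, 1)     # affine score for the end token

    def forward(self, reps, s, e):
        # reps: [seq_len, h] final BERT token representations of (q, c)
        return (self.start(reps[s]) + self.end(reps[e])).squeeze(-1)
```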
3.2 Question Generation: Finetuning Only
Text generation allows for a variety of choices in model architecture and training data. In this section we opt for a simple adaptation of the public BERT model for text generation. This adaptation does not require any additional pretraining and no extra parameters need to be trained from scratch at finetuning time. This question generation system can be reproduced by simply finetuning a publicly available pretrained BERT model on the extractive subsets of datasets like SQuAD2 and NQ.
Finetuning
We define the $p(q \mid c, a; \theta_Q)$ model as a left-to-right language model

$$p(q \mid c, a; \theta_Q) = \prod_{i=1}^{L_Q} p(q_i \mid q_1, \ldots, q_{i-1}, a, c; \theta_Q),$$

where $q = (q_1, \ldots, q_{L_Q})$ is the sequence of question tokens and $L_Q$ is a predetermined maximum question length, but, unlike the more usual encoder-decoder approach, we compute $p(q_i \mid q_1, \ldots, q_{i-1}, a, c; \theta_Q)$ using the single encoder stack from the BERT model:

$$p(q_i \mid q_1, \ldots, q_{i-1}, a, c; \theta_Q) = \mathrm{softmax}\big(\mathrm{BERT}(q_1, \ldots, q_{i-1}, a, c)[i-1] \cdot W_{\mathrm{BERT}}\big),$$

where $W_{\mathrm{BERT}}$ is the word piece embedding matrix in BERT. All parameters of BERT including $W_{\mathrm{BERT}}$ are finetuned. In the context of question generation, the input answer is encoded by introducing a new token type id for the tokens in the extractive answer span, e.g. the question tokens being generated have type 0 and the context tokens have type 1, except for the ones in the answer span that have type 2. We always pad or truncate the question being input to BERT to a constant length $L_Q$ to avoid giving the model information about the length of the question we want it to generate.

This model can be trained efficiently by using an attention mask that forces to zero all the attention weights from $c$ to $q$ and from $q_i$ to $q_{i+1} \ldots q_{L_Q}$ for all $i$.
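A minimal numpy sketch of such a mask follows, assuming the packed sequence puts the $L_Q$ question positions first, followed by the answer-marked context positions (the actual layout may differ); ones mark allowed attention.

```python
import numpy as np

def generation_attention_mask(L_q, L_c):
    """Mask for left-to-right question generation with a single BERT stack."""
    n = L_q + L_c
    mask = np.zeros((n, n), dtype=np.int32)
    # each q_i may attend to q_1 .. q_i (causal within the question) ...
    mask[:L_q, :L_q] = np.tril(np.ones((L_q, L_q), dtype=np.int32))
    # ... and to every answer/context token.
    mask[:L_q, L_q:] = 1
    # context tokens attend only among themselves; weights from c to q stay zero.
    mask[L_q:, L_q:] = 1
    return mask
```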
Question Generation
At inference time we generate questions through iterative greedy decoding, by computing $\arg\max_{q_i} p(q_i \mid q_1, \ldots, q_{i-1}, a, c; \theta_Q)$ for $i = 1, \ldots, L_Q$. Question-answer pairs are kept only if they satisfy roundtrip consistency.
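A sketch of this decoding loop, with a hypothetical `next_token_logits(question_prefix, answer, context)` wrapper around the masked BERT forward pass (the [SEP] stopping criterion is an assumption):

```python
def greedy_decode(next_token_logits, answer, context, max_len, sep_id):
    """Pick the argmax token at each position until [SEP] or the length limit."""
    question = []
    for _ in range(max_len):
        logits = next_token_logits(question, answer, context)
        token = max(range(len(logits)), key=logits.__getitem__)  # argmax q_i
        if token == sep_id:        # treat [SEP] as the end-of-question marker
            break
        question.append(token)
    return question
```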
3.3 Question Generation: Full Pretraining
The prior section addressed a restricted setting in which a BERT model was finetuned without any further changes. In this section, we describe an alternative approach for question generation that fully pretrains and finetunes a sequence-to-sequence generation model.
Pretraining
Section 3.2 used only an encoder for question generation. In this section, we use a full sequence-to-sequence Transformer (both encoder and decoder). The encoder is trained identically (BERT pretraining on Wikipedia data), while the decoder is trained to output the next sentence.
Finetuning
Finetuning is done identically to Section 3.2, where the input is $(c, a)$ and the output is $q$, using $(c, q, a)$ tuples from a supervised question-answering dataset (e.g., SQuAD).
Question Generation
To obtain synthetic $(c, q, a)$ triples, we sample questions from the decoder with both beam search and Monte Carlo search. As before, we use roundtrip consistency to keep only the high-precision triples.
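Monte Carlo search can be sketched as temperature sampling from the per-step distribution; the `temperature` knob and the `next_token_logits` wrapper are illustrative assumptions, not specified in this paper.

```python
import math
import random

def sample_question(next_token_logits, answer, context, max_len, sep_id,
                    temperature=1.0):
    """Sample q_i from softmax(logits / temperature) instead of taking argmax."""
    question = []
    for _ in range(max_len):
        logits = next_token_logits(question, answer, context)
        m = max(logits)  # subtract the max for numerical stability
        weights = [math.exp((l - m) / temperature) for l in logits]
        token = random.choices(range(len(weights)), weights=weights)[0]
        if token == sep_id:
            break
        question.append(token)
    return question
```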
3.4 Why Does Roundtrip Consistency Work?
A key question for future work is to develop a more formal understanding of why the roundtrip method improves accuracy on question answering tasks (similar questions arise for the back-translation methods of Edunov et al. (2018) and Sennrich et al. (2016); a similar theory may apply to those methods). In the supplementary material we sketch a possible approach, inspired by the method of Balcan and Blum (2005) for learning with labeled and unlabeled data. This section is intentionally rather speculative but is intended to develop intuition about the methods, and to propose possible directions for future work on developing a formal grounding.
In brief, the approach discussed in the supplementary material suggests optimizing the log-likelihood of the labeled training examples, under a constraint that some measure $R(\theta)$ of roundtrip consistency on unlabeled data is greater than some value $\gamma$. The value for $\gamma$ can be estimated using performance on development data. The auxiliary function $R$ is chosen such that: (1) the constraint eliminates a substantial part of the parameter space, and hence reduces sample complexity; (2) the constraint nevertheless includes 'good' parameter values that fit the training data well. The final step in the argument is to make the case that the algorithms described in the current paper may effectively be optimizing a criterion of this kind. Specifically, the auxiliary function is defined as the log-likelihood of noisy $(c, q, a)$ triples generated from unlabeled data using the $p(a \mid c; \theta_A)$ and $p(q \mid c, a; \theta_Q)$ models; constraining the parameters to achieve a relatively high value on $R(\theta)$ is achieved by pretraining the $p(a \mid c, q; \theta_{A'})$ model on these examples. Future work should consider this connection in more detail.

4 Experiments
4.1 Experimental Setup
|  | Dev EM | Dev F1 | Test EM | Test F1 |
| Finetuning Only |  |  |  |  |
| BERT-Large (Original) | 78.7 | 81.9 | 80.0 | 83.1 |
| + 3M synth SQuAD2 | 80.1 | 82.8 | – | – |
| + 4M synth NQ | 81.2 | 84.0 | 82.0 | 84.8 |
| Full Pretraining |  |  |  |  |
| BERT (Whole Word Masking)² | 82.6 | 85.2 | – | – |
| + 50M synth SQuAD2 | 85.1 | 87.9 | 85.2 | 87.7 |
| + ensemble | 86.0 | 88.6 | 86.7 | 89.1 |
| Human | – | – | 86.8 | 89.5 |

Table 2: Results on SQuAD2.

² See https://github.com/google-research/bert.
|  | Long Answer Dev |  |  | Long Answer Test |  |  | Short Answer Dev |  |  | Short Answer Test |  |  |
|  | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 |
| BERT_joint | 61.3 | 68.4 | 64.7 | 64.1 | 68.3 | 66.2 | 59.5 | 47.3 | 52.7 | 63.8 | 44.0 | 52.1 |
| + 4M synth NQ | 62.3 | 70.0 | 65.9 | 65.2 | 68.4 | 66.8 | 60.7 | 50.4 | 55.1 | 62.1 | 47.7 | 53.9 |
| Single Human | 80.4 | 67.6 | 73.4 | – | – | – | 63.4 | 52.6 | 57.5 | – | – | – |
| Super-annotator | 90.0 | 84.6 | 87.2 | – | – | – | 79.1 | 72.6 | 75.7 | – | – | – |

Table 3: Results on NQ (P = precision, R = recall).
|  | Question | Answer |
| NQ | what was the population of chicago in 1857? | over 90,000 |
| SQuAD2 | what was the weight of the brigg’s hotel? | 22,000 tons |
| NQ | where is the death of the virgin located? | louvre |
| SQuAD2 | what person replaced the painting? | carlo saraceni |
| NQ | when did rick and morty get released? | 2012 |
| SQuAD2 | what executive suggested that rick be a grandfather? | nick weidenfeld |

Table 4: Example question-answer pairs generated by NQ-trained and SQuAD2-trained models from the same passage.
We considered two datasets in this work: SQuAD2 (Rajpurkar et al., 2018) and the Natural Questions (NQ) (Kwiatkowski et al., 2019). SQuAD2 is a dataset of questions about Wikipedia passages, formulated and answered by human annotators. NQ is a dataset of Google queries with answers from Wikipedia pages provided by human annotators. We used the full text from the training set of NQ (1B words) as a source of unlabeled data.
In our finetuning-only experiments (Section 3.2) we trained two triples of models on the extractive subsets of SQuAD2 and NQ. We extracted 8M unlabeled windows of 512 tokens from the NQ training set. For each unlabeled window we generated one example from the SQuAD2-trained models and one example from the NQ-trained models. For $A$ we picked an answer uniformly from the top 10 extractive answers according to $p(a \mid c; \theta_A)$. For $A'$ we picked the best extractive answer according to $p(a \mid c, q; \theta_{A'})$. Filtering for roundtrip consistency gave us 2.4M and 3.2M synthetic positive instances from the SQuAD2- and NQ-trained models respectively. We then added synthetic unanswerable instances by taking the question generated from a window and associating it with a non-overlapping window from the same Wikipedia page. We then sampled negatives to obtain a total of 3M and 4M synthetic training instances for SQuAD2 and NQ respectively. We trained models analogous to Alberti et al. (2019), initializing from the public BERT model, with a batch size of 128 examples for one epoch on each of the two sets of synthetic examples and on the union of the two, with no learning rate decay. We then finetuned the resulting models on SQuAD2 and NQ.

In our full pretraining experiments (Section 3.3) we only trained on SQuAD2. However, we pretrained our question generation model on all of the BERT pretraining data, generating the next sentence left-to-right. We created a synthetic, roundtrip-filtered corpus with 50M examples. We then finetuned the model on SQuAD2 as previously described. We experimented with both the single-model setting and an ensemble of 6 models.
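The unanswerable-instance construction can be sketched as follows; the window bookkeeping (ids and page grouping) is illustrative, not the exact data format used.

```python
import random

def make_unanswerable(synthetic, windows_by_page):
    """Pair each generated question with a non-overlapping window.

    synthetic:       (page_id, window_id, question, answer) tuples
    windows_by_page: page_id -> list of (window_id, window_text)
    """
    negatives = []
    for page_id, window_id, question, _ in synthetic:
        others = [text for wid, text in windows_by_page[page_id]
                  if wid != window_id]
        if others:
            # the question is very likely unanswerable from the other window
            negatives.append((random.choice(others), question, None))
    return negatives
```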
4.2 Results
The final results are shown in Tables 2 and 3. We found that pretraining on SQuAD2 and NQ synthetic data increases the performance of the finetuned model by a significant margin. On the NQ short answer task, the relative reduction in headroom is 50% with respect to single human performance and 10% with respect to human ensemble performance. We additionally found that pretraining on the union of synthetic SQuAD2 and NQ data is very beneficial on the SQuAD2 task, but does not improve NQ results.
The full pretraining approach with ensembling obtains the highest EM and F1 listed in Table 2. This result is only 0.1 EM and 0.4 F1 from human performance and is the third best model on the SQuAD2 leaderboard as of this writing (5/31/19).
Roundtrip Filtering
Roundtrip filtering appears to be consistently beneficial. As shown in Figure 1, models pretrained on roundtrip-consistent data outperform their counterparts pretrained without filtering. From manual inspection, 39% of 46 roundtrip-consistent triples were correct, while only 16% of 44 discarded triples were correct.
Data Source
Generated question-answer pairs are illustrative of the differences in the style of questions between SQuAD2 and NQ. We show a few examples in Table 4, where the same passage is used to create a SQuAD2-style and an NQ-style question-answer pair. The SQuAD2 models seem better at creating questions that directly query a specific property of an entity expressed in the text. The NQ models seem instead to attempt to create questions around popular themes, like famous works of art or TV shows, and then extract the answer by combining information from the entire passage.
5 Conclusion
We presented a novel method to generate synthetic QA instances and demonstrated improvements from this data on SQuAD2 and on NQ. We additionally proposed a possible direction for formal grounding of this method, which we hope to develop more thoroughly in future work.
References
 Alberti et al. (2019) Chris Alberti, Kenton Lee, and Michael Collins. 2019. A BERT baseline for the Natural Questions. arXiv preprint arXiv:1901.08634.
 Balcan and Blum (2005) Maria-Florina Balcan and Avrim Blum. 2005. A PAC-style model for learning from labeled and unlabeled data. In Proceedings of the 18th Annual Conference on Learning Theory, COLT'05, pages 111–126, Berlin, Heidelberg. Springer-Verlag.
 Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
 Dhingra et al. (2018) Bhuwan Dhingra, Danish Danish, and Dheeraj Rajagopal. 2018. Simple and effective semi-supervised question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 582–587.
 Du and Cardie (2018) Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from Wikipedia. arXiv preprint arXiv:1805.05942.
 Du et al. (2017) Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342–1352. Association for Computational Linguistics.
 Edunov et al. (2018) Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489–500. Association for Computational Linguistics.
 He et al. (2016) Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828.
 Heilman and Smith (2010) Michael Heilman and Noah A. Smith. 2010. Good question! Statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617. Association for Computational Linguistics.
 Kwiatkowski et al. (2019) Tom Kwiatkowski, Jennimaria Palomaki, Olivia Rhinehart, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics.
 Lample et al. (2018) Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049.
 Lewis and Fan (2019) Mike Lewis and Angela Fan. 2019. Generative question answering: Learning to answer the whole question. In International Conference on Learning Representations (ICLR).
 Rajpurkar et al. (2018) Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789.
 Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96.
 Sun et al. (2019) Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced representation through knowledge integration. CoRR, abs/1904.09223.
 Yang et al. (2017) Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William Cohen. 2017. Semi-supervised QA with generative domain-adaptive nets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1040–1050.
Appendix A Supplementary Material: a Sketch of a Formal Justification for the Approach
This section sketches a potential approach to giving a formal justification for the roundtrip method, inspired by the method of Balcan and Blum (2005) for learning with labeled and unlabeled data. This section is intentionally rather speculative but is intended to develop intuition about the methods, and to propose possible directions for future work in developing a more formal grounding.
Assume that we have parameter estimates $\hat{\theta}_A$ and $\hat{\theta}_Q$ derived from labeled examples. The log-likelihood function for the remaining parameters $\theta_{A'}$ is then

$$L(\theta_{A'}) = \sum_{i=1}^{N} \log p(a^{(i)} \mid c^{(i)}, q^{(i)}; \theta_{A'})$$

Estimation from labeled examples alone would involve the following optimization problem:

$$\tilde{\theta}_{A'} = \arg\max_{\theta \in \Theta} L(\theta) \quad (1)$$

where $\Theta$ is a set of possible parameter values—typically $\Theta$ would be unconstrained, or would impose some regularization on $\theta_{A'}$.
Now assume we have some auxiliary function $R(\theta)$ that measures the roundtrip consistency of parameter values $\theta$ on a set of unlabeled examples. We will give concrete proposals for $R$ below. A natural alternative to Eq. 1 is then to define

$$\Theta(\gamma) = \{\theta \in \Theta : R(\theta) \geq \gamma\}$$

for some value of $\gamma$, and to derive new parameter estimates

$$\hat{\theta}_{A'} = \arg\max_{\theta \in \Theta(\gamma)} L(\theta) \quad (2)$$

The value for $\gamma$ can be estimated using cross-validation of accuracy on tuning data.
Intuitively a good choice of auxiliary function $R$ would have the property that there is some value of $\gamma$ such that: (1) $\Theta(\gamma)$ is much "smaller" or less complex than $\Theta$, and hence many fewer labeled examples are required for estimation (Balcan and Blum (2005) give precise guarantees of this type); (2) $\Theta(\gamma)$ nevertheless contains 'good' parameter values that perform well on the labeled data.
A first suggested auxiliary function is the following, which makes use of unlabeled examples $c^{(i)}$ for $i = N+1, \ldots, M$:

$$R(\theta_{A'}) = \sum_{i=N+1}^{M} \log p\big(\hat{a}^{(i)} \mid c^{(i)}, \hat{q}^{(i)}; \theta_{A'}\big)$$

where $(\hat{a}^{(i)}, \hat{q}^{(i)}) = \arg\max_{a, q}\, p(a \mid c^{(i)}; \hat{\theta}_A)\, p(q \mid c^{(i)}, a; \hat{\theta}_Q)$.

This auxiliary function encourages roundtrip consistency under parameters $\theta_{A'}$. It is reasonable to assume that the optimal parameters achieve a high value for $R$, and hence that this will be a useful auxiliary function.
A second auxiliary function, which may be more closely related to the approach in the current paper, is derived as follows. Assume we have some method of deriving triples $(c^{(i)}, q^{(i)}, a^{(i)})$ for $i = N+1, \ldots, M$ from unlabeled data, where a significant proportion of these examples are 'correct' question-answer pairs. Define the following auxiliary function:

$$R(\theta_{A'}) = \sum_{i=N+1}^{M} g\big(p(a^{(i)} \mid c^{(i)}, q^{(i)}; \theta_{A'})\big)$$

Here $g$ is some function that encourages high values for $p(a^{(i)} \mid c^{(i)}, q^{(i)}; \theta_{A'})$. One choice would be $g(z) = \log z$; another choice would be $g(z) = \log z$ if $z \leq \mu$, $g(z) = \log \mu$ otherwise, where $\mu$ is a target 'margin'. Thus under this auxiliary function the constraint $R(\theta_{A'}) \geq \gamma$ would force the parameters $\theta_{A'}$ to fit the triples derived from unlabeled data.
A remaining question is how to solve the optimization problem in Eq. 2. One obvious approach would be to perform gradient ascent on the objective

$$L(\theta) + \lambda R(\theta)$$

where $\lambda \geq 0$ dictates the relative weight of the two terms, and can be estimated using cross-validation on tuning data (each value for $\lambda$ implies a different value for $\gamma$).
A second approach may be to first pretrain the parameters on the auxiliary function $R$, then finetune on the log-likelihood $L$. In practice this may lead to final parameter values with relatively high values for both objective functions. This latter approach appears to be related to the algorithms described in the current paper; future work should investigate this more closely.