Synthetic QA Corpora Generation with Roundtrip Consistency

06/12/2019, by Chris Alberti et al., Google

We introduce a novel method of generating synthetic question answering corpora by combining models of question generation and answer extraction, and by filtering the results to ensure roundtrip consistency. By pretraining on the resulting corpora we obtain significant improvements on SQuAD2 and NQ, establishing a new state-of-the-art on the latter. Our synthetic data generation models, for both question generation and answer extraction, can be fully reproduced by finetuning a publicly available BERT model on the extractive subsets of SQuAD2 and NQ. We also describe a more powerful variant that does full sequence-to-sequence pretraining for question generation, obtaining exact match and F1 less than 0.1% and 0.4% from human performance on SQuAD2.


1 Introduction

Significant advances in Question Answering (QA) have recently been achieved by pretraining deep transformer language models on large amounts of unlabeled text data, and finetuning the pretrained models on hand labeled QA datasets, e.g. with BERT Devlin et al. (2018).

Language modeling is however just one example of how an auxiliary prediction task can be constructed from widely available natural text, namely by masking some words from each passage and training the model to predict them. It seems plausible that other auxiliary tasks might exist that are better suited for QA, but can still be constructed from widely available natural text. It also seems intuitive that such auxiliary tasks will be more helpful the closer they are to the particular QA task we are attempting to solve.

Input (C)       … in 1903, boston participated in the first modern world series, going up against the pittsburgh pirates …
(1) C → A       1903
(2) C, A → Q    when did the red sox first go to the world series
(3) C, Q → A′   1903
(4) A′ =? A     Yes

Table 1: Example of how synthetic question-answer pairs are generated. The model’s predicted answer (A′) matches the original answer (A) the question was generated from, so the example is kept.

Based on this intuition we construct auxiliary tasks for QA, generating millions of synthetic question-answer-context triples from unlabeled passages of text, pretraining a model on these examples, and finally finetuning on a particular labeled dataset. Our auxiliary tasks are illustrated in Table 1.

For a given passage C, we sample an extractive short answer A (Step (1) in Table 1). In Step (2), we generate a question Q conditioned on C and A, then (Step (3)) predict the extractive answer A′ conditioned on C and Q. If A and A′ match we finally emit (C, Q, A) as a new synthetic training example (Step (4)). We train a separate model on labeled QA data for each of the first three steps, and then apply the models in sequence on a large number of unlabeled text passages. We show that pretraining on synthetic data generated through this procedure provides us with significant improvements on two challenging datasets, SQuAD2 Rajpurkar et al. (2018) and NQ Kwiatkowski et al. (2019), achieving a new state of the art on the latter.
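The overall pipeline can be summarized in a short sketch. The functions extract_answer_candidates, generate_question, and answer_question below are hypothetical placeholders for the three fine-tuned models of Section 3, not a released API; this is a minimal illustration of the roundtrip filter under that assumption.

```python
from typing import List, Optional, Tuple

# Hypothetical wrappers around the three fine-tuned models of Section 3.
def extract_answer_candidates(context: str) -> List[str]:
    """p(a|c): propose extractive short-answer spans from the passage."""
    raise NotImplementedError

def generate_question(context: str, answer: str) -> str:
    """p(q|a,c): generate a question about `answer` given `context`."""
    raise NotImplementedError

def answer_question(context: str, question: str) -> str:
    """p(a|c,q): extract the best answer span for `question`."""
    raise NotImplementedError

def make_synthetic_example(context: str) -> Optional[Tuple[str, str, str]]:
    """Steps (1)-(4) of Table 1: keep (C, Q, A) only if roundtrip consistent."""
    for answer in extract_answer_candidates(context):    # Step (1): C -> A
        question = generate_question(context, answer)    # Step (2): C, A -> Q
        predicted = answer_question(context, question)   # Step (3): C, Q -> A'
        if predicted == answer:                          # Step (4): A' =? A
            return context, question, answer
    return None
```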

2 Related Work

Question generation is a well-studied task in its own right Heilman and Smith (2010); Du et al. (2017); Du and Cardie (2018). Yang et al. (2017) and Dhingra et al. (2018) both use generated question-answer pairs to improve a QA system, showing large improvements in low-resource settings with few gold labeled examples. Validating and improving the accuracy of these generated QA pairs, however, is relatively unexplored.

In machine translation, modeling consistency with dual learning He et al. (2016) or back-translation Sennrich et al. (2016) across both translation directions improves the quality of translation models. Back-translation, which adds synthetically generated parallel data as training examples, was an inspiration for this work, and has led to state-of-the-art results in both the supervised Edunov et al. (2018) and the unsupervised settings Lample et al. (2018).

Lewis and Fan (2018) model the joint distribution of questions and answers given a context and use this model directly, whereas our work uses generative models to generate synthetic data to be used for pretraining. Combining these two approaches could be an area of fruitful future work.

3 Model

Given a dataset of contexts, questions, and answers $\{(c^{(i)}, q^{(i)}, a^{(i)}) : i = 1, \ldots, N\}$, we train three models: (1) answer extraction: $p(a \mid c; \theta_A)$, (2) question generation: $p(q \mid a, c; \theta_Q)$, and (3) question answering: $p(a \mid c, q; \theta_{A'})$.

We use BERT Devlin et al. (2018) to model each of these distributions. (Some experiments use a variant of BERT that masks out whole words at training time, similar to Sun et al. (2019); see https://github.com/google-research/bert for both the original and whole word masked versions of BERT.) Inputs to each of these models are fixed-length sequences of wordpieces, listing the tokenized question $q$ (if one was available) followed by the context $c$. The answer extraction model is detailed in §3.1 and two variants of question generation models in §3.2 and §3.3. The question answering model follows Alberti et al. (2019).

3.1 Question (Un)Conditional Extractive QA

We define a question-unconditional extractive answer model and a question-conditional extractive answer model as follows:

$$p(a \mid c; \theta_A) = \frac{e^{f_J(a, c;\, \theta_A)}}{\sum_{a''} e^{f_J(a'', c;\, \theta_A)}}, \qquad
p(a \mid c, q; \theta_{A'}) = \frac{e^{f_I(a, c, q;\, \theta_{A'})}}{\sum_{a''} e^{f_I(a'', c, q;\, \theta_{A'})}}$$

where $a$, $a''$ are defined to be token spans over $c$. For $p(a \mid c; \theta_A)$, $a$ and $a''$ are constrained to be of length up to $L_A$, set to 32 word piece tokens. The key difference between the two expressions is that $f_I$ scores the start and the end of each span independently, while $f_J$ scores them jointly.

Specifically we define $f_J$ and $f_I$ to be transformations of the final token representations computed by a BERT model:

$$f_J(a, c; \theta_A) = \mathrm{MLP}_J\big(\mathrm{CONCAT}(\mathrm{BERT}(c)[s], \mathrm{BERT}(c)[e])\big)$$
$$f_I(a, c, q; \theta_{A'}) = \mathrm{AFF}_I\big(\mathrm{CONCAT}(\mathrm{BERT}(q, c)[s], \mathrm{BERT}(q, c)[e])\big)$$

Here $h \in \mathbb{N}$ is the hidden representation dimension, $a = (s, e)$ is the answer span, and $\mathrm{BERT}(t)[i] \in \mathbb{R}^h$ is the BERT representation of the $i$'th token in token sequence $t$. $\mathrm{MLP}_J$ is a multi-layer perceptron with a single hidden layer, and $\mathrm{AFF}_I$ is an affine transformation.

We found it was critical to model span start and end points jointly in $p(a \mid c; \theta_A)$ because, when the question is not given, there are usually multiple acceptable answers for a given context, so the start point of an answer span cannot be determined separately from the end point.
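As a concrete illustration, the sketch below scores candidate spans with a joint MLP scorer and an independent affine scorer over precomputed token representations, and forms a softmax over spans of length up to $L_A$. The parameter shapes and random initialization are assumptions for illustration only, not the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)
H, L_A = 768, 32          # hidden size and max answer-span length (assumed values)

# Illustrative parameters (randomly initialized here; learned in practice).
W1, b1 = rng.normal(size=(256, 2 * H)) * 0.01, np.zeros(256)
w2, b2 = rng.normal(size=256) * 0.01, 0.0
w_s, w_e, b = rng.normal(size=H) * 0.01, rng.normal(size=H) * 0.01, 0.0

def f_joint(h_s, h_e):
    """Joint scorer: single-hidden-layer MLP over CONCAT(start, end),
    so start and end representations can interact non-linearly."""
    x = np.concatenate([h_s, h_e])
    return float(w2 @ np.maximum(0.0, W1 @ x + b1) + b2)

def f_indep(h_s, h_e):
    """Independent scorer: affine map that decomposes into separate
    start and end contributions."""
    return float(w_s @ h_s + w_e @ h_e + b)

def span_distribution(token_reprs, score_fn):
    """Softmax over all spans (s, e) with length at most L_A."""
    spans, scores = [], []
    n = len(token_reprs)
    for s in range(n):
        for e in range(s, min(n, s + L_A)):
            spans.append((s, e))
            scores.append(score_fn(token_reprs[s], token_reprs[e]))
    scores = np.array(scores)
    probs = np.exp(scores - scores.max())
    return spans, probs / probs.sum()

# Example with dummy "BERT" outputs for a 50-token context.
spans, probs = span_distribution(rng.normal(size=(50, H)), f_joint)
```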

3.2 Question Generation: Fine-tuning Only

Text generation allows for a variety of choices in model architecture and training data. In this section we opt for a simple adaptation of the public BERT model for text generation. This adaptation requires no additional pretraining and trains no extra parameters from scratch at finetuning time. The resulting question generation system can be reproduced by simply finetuning a publicly available pretrained BERT model on the extractive subsets of datasets like SQuAD2 and NQ.

Fine-tuning

We define the model $p(q \mid a, c; \theta_Q)$ as a left-to-right language model

$$p(q \mid a, c; \theta_Q) = \prod_{i=1}^{L_Q} p(q_i \mid q_1, \ldots, q_{i-1}, a, c; \theta_Q),$$

where $q = (q_1, \ldots, q_{L_Q})$ is the sequence of question tokens and $L_Q$ is a predetermined maximum question length, but, unlike the more usual encoder-decoder approach, we compute $p(q_i \mid q_1, \ldots, q_{i-1}, a, c; \theta_Q)$ using the single encoder stack from the BERT model:

$$p(q_i \mid q_1, \ldots, q_{i-1}, a, c; \theta_Q) = \mathrm{softmax}\big(\mathrm{BERT}(q_1, \ldots, q_{i-1}, a, c)[i-1] \cdot W_{\mathrm{BERT}}^{\top}\big),$$

where $W_{\mathrm{BERT}}$ is the word piece embedding matrix in BERT. All parameters of BERT including $W_{\mathrm{BERT}}$ are finetuned. In the context of question generation, the input answer is encoded by introducing a new token type id for the tokens in the extractive answer span, e.g. the question tokens being generated have type 0 and the context tokens have type 1, except for the ones in the answer span that have type 2. We always pad or truncate the question being input to BERT to a constant length $L_Q$ to avoid giving the model information about the length of the question we want it to generate.

This model can be trained efficiently by using an attention mask that forces to zero all the attention weights from $c$ to $q$ and from $q_i$ to $q_{i+1} \ldots q_{L_Q}$ for all $i$.
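A minimal sketch of how such inputs might be assembled is shown below. The packing layout, type-id assignment, and mask construction follow the description above, but the exact conventions (ordering, special tokens) are assumptions for illustration rather than the authors' released implementation.

```python
import numpy as np

def build_inputs(question_ids, context_ids, answer_start, answer_end, L_Q=48):
    """Pack [question | context] with token type ids 0 / 1 / 2 (2 = answer span),
    padding or truncating the question slot to a constant length L_Q."""
    q = (question_ids + [0] * L_Q)[:L_Q]                 # pad/truncate question slot
    types_q = [0] * L_Q
    types_c = [2 if answer_start <= i <= answer_end else 1
               for i in range(len(context_ids))]
    return q + context_ids, types_q + types_c

def build_attention_mask(L_Q, L_C):
    """mask[i, j] = True means position i may attend to position j.
    Question tokens attend causally within the question and to all of the
    context; context tokens attend only to the context, never to the question."""
    n = L_Q + L_C
    mask = np.zeros((n, n), dtype=bool)
    for i in range(L_Q):
        mask[i, :i + 1] = True            # causal within the question
        mask[i, L_Q:] = True              # question attends to the context
    mask[L_Q:, L_Q:] = True               # context attends only to itself
    return mask
```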

Question Generation

At inference time we generate questions through iterative greedy decoding, by computing $\arg\max_{q_i} p(q_i \mid q_1, \ldots, q_{i-1}, a, c; \theta_Q)$ for $i = 1, \ldots, L_Q$. Question-answer pairs are kept only if they satisfy roundtrip consistency.
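The decoding loop below is a schematic sketch, assuming a hypothetical next_token_distribution function that wraps one forward pass of the fine-tuned BERT language model described above; the end-of-question token id is likewise an assumption.

```python
import numpy as np

def next_token_distribution(prefix_ids, context_ids, answer_span):
    """Hypothetical wrapper: one BERT forward pass returning the distribution
    over the next question wordpiece (the softmax of Section 3.2)."""
    raise NotImplementedError

def greedy_decode(context_ids, answer_span, L_Q=48, eos_id=102):
    """Iterative greedy decoding of a question, one wordpiece at a time.
    eos_id is an assumed end-of-question marker (e.g. [SEP] in the BERT vocab)."""
    question = []
    for _ in range(L_Q):
        probs = next_token_distribution(question, context_ids, answer_span)
        token = int(np.argmax(probs))
        if token == eos_id:
            break
        question.append(token)
    return question
```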

3.3 Question Generation: Full Pretraining

The prior section addressed a restricted setting in which a BERT model was fine-tuned, without any further changes. In this section, we describe an alternative approach for question generation that fully pretrains and fine-tunes a sequence-to-sequence generation model.

Pretraining

Section 3.2 used only an encoder for question generation. In this section, we use a full sequence-to-sequence Transformer (both encoder and decoder). The encoder is trained identically (BERT pretraining, Wikipedia data), while the decoder is trained to output the next sentence.

Fine-tuning

Fine-tuning is done identically as in Section 3.2, where the input is a context and answer pair $(c, a)$ and the output is a question $q$, taken from $(c, q, a)$ triples of a supervised question-answering dataset (e.g., SQuAD).

Question Generation

To get examples of synthetic $(c, q, a)$ triples, we sample from the decoder with both beam search and Monte Carlo search. As before, we use roundtrip consistency to keep only the high-precision triples.
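A sketch of how such candidates might be drawn and filtered is below; decode_beam and decode_sample are hypothetical stand-ins for beam search and Monte Carlo (sampled) decoding from the sequence-to-sequence model, and answer_question reuses the roundtrip QA model from Section 3.

```python
from typing import Callable, List, Tuple

def roundtrip_filter(
    passages_with_answers: List[Tuple[str, str]],
    decode_beam: Callable[[str, str], List[str]],
    decode_sample: Callable[[str, str], List[str]],
    answer_question: Callable[[str, str], str],
) -> List[Tuple[str, str, str]]:
    """Generate candidate questions with both decoding strategies and keep
    only (c, q, a) triples whose predicted answer matches the sampled answer."""
    kept = []
    for context, answer in passages_with_answers:
        candidates = decode_beam(context, answer) + decode_sample(context, answer)
        for question in candidates:
            if answer_question(context, question) == answer:
                kept.append((context, question, answer))
    return kept
```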

3.4 Why Does Roundtrip Consistency Work?

A key question for future work is to develop a more formal understanding of why the roundtrip method improves accuracy on question answering tasks (similar questions arise for the back-translation methods of Edunov et al. (2018) and Sennrich et al. (2016); a similar theory may apply to those methods). In the supplementary material we sketch a possible approach, inspired by the method of Balcan and Blum (2005) for learning with labeled and unlabeled data. This discussion is intentionally speculative, but it is intended to develop intuition about the methods and to propose possible directions for future work on developing a formal grounding.

In brief, the approach discussed in the supplementary material suggests optimizing the log-likelihood of the labeled training examples under a constraint that some measure $R(\theta)$ of roundtrip consistency on unlabeled data is greater than some value $b$. The value for $b$ can be estimated using performance on development data. The auxiliary function $R(\theta)$ is chosen such that: (1) the constraint eliminates a substantial part of the parameter space, and hence reduces sample complexity; (2) the constraint nevertheless includes ‘good’ parameter values that fit the training data well. The final step in the argument is to make the case that the algorithms described in the current paper may effectively be optimizing a criterion of this kind. Specifically, the auxiliary function is defined as the log-likelihood of noisy $(c, q, a)$ triples generated from unlabeled data using the $p(a \mid c; \theta_A)$ and $p(q \mid a, c; \theta_Q)$ models; constraining the parameters to achieve a relatively high value on this function is achieved by pretraining the question answering model on these examples. Future work should consider this connection in more detail.

4 Experiments

4.1 Experimental Setup

                               Dev            Test
                               EM    F1       EM    F1
Fine-tuning Only
BERT-Large (Original)          78.7  81.9     80.0  83.1
  + 3M synth SQuAD2            80.1  82.8     -     -
    + 4M synth NQ              81.2  84.0     82.0  84.8
Full Pretraining
BERT (Whole Word Masking)      82.6  85.2     -     -
  + 50M synth SQuAD2           85.1  87.9     85.2  87.7
    + ensemble                 86.0  88.6     86.7  89.1
Human                          -     -        86.8  89.5

Table 2: Our results on SQuAD2. For our fine-tuning only setting, we compare a BERT baseline (BERT single model - Google AI Language on the SQuAD2 leaderboard) to similar models pretrained on our synthetic SQuAD2-style corpus and on a corpus containing both SQuAD2- and NQ-style data. For the full pretraining setting, we report our best single model and ensemble results. The whole word masking variant of BERT is available at https://github.com/google-research/bert.
Figure 1: Learning curves for pretraining using synthetic question-answering data (fine-tuning only setting). “no-RT” refers to omitting the roundtrip consistency check. Best exact match is reported after fine-tuning on SQuAD2. Performance improves with the amount of synthetic data. For a fixed amount of synthetic data, having a more diverse source (NQ+SQuAD vs. just SQuAD) yields higher accuracies. Roundtrip filtering gives further improvements.
                 Long Answer Dev      Long Answer Test     Short Answer Dev     Short Answer Test
                 P     R     F1       P     R     F1       P     R     F1       P     R     F1
BERT_joint       61.3  68.4  64.7     64.1  68.3  66.2     59.5  47.3  52.7     63.8  44.0  52.1
  + 4M synth NQ  62.3  70.0  65.9     65.2  68.4  66.8     60.7  50.4  55.1     62.1  47.7  53.9
Single Human     80.4  67.6  73.4     -     -     -        63.4  52.6  57.5     -     -     -
Super-annotator  90.0  84.6  87.2     -     -     -        79.1  72.6  75.7     -     -     -

Table 3: Our results on NQ, compared to the previous best system and to the performance of a human annotator and of an ensemble of human annotators. BERT_joint is the model described in Alberti et al. (2019).
        Question                                               Answer
NQ      what was the population of chicago in 1857?            over 90,000
SQuAD2  what was the weight of the brigg’s hotel?              22,000 tons
NQ      where is the death of the virgin located?              louvre
SQuAD2  what person replaced the painting?                     carlo saraceni
NQ      when did rick and morty get released?                  2012
SQuAD2  what executive suggested that rick be a grandfather?   nick weidenfeld

Table 4: Comparison of question-answer pairs generated by NQ and SQuAD2 models for the same passage of text.

We considered two datasets in this work: SQuAD2 Rajpurkar et al. (2018) and the Natural Questions (NQ) Kwiatkowski et al. (2019). SQuAD2 is a dataset of questions about Wikipedia passages, formulated and answered by human annotators. NQ is a dataset of Google queries with answers from Wikipedia pages provided by human annotators. We used the full text from the training set of NQ (1B words) as a source of unlabeled data.

In our fine-tuning only experiments (Section 3.2) we trained two triples of models, one on the extractive subset of SQuAD2 and one on the extractive subset of NQ. We extracted 8M unlabeled windows of 512 tokens from the NQ training set. For each unlabeled window we generated one example from the SQuAD2-trained models and one example from the NQ-trained models. For $A$ we picked an answer uniformly from the top 10 extractive answers according to $p(a \mid c; \theta_A)$. For $A'$ we picked the best extractive answer according to $p(a \mid c, q; \theta_{A'})$. Filtering for roundtrip consistency gave us 2.4M and 3.2M synthetic positive instances from the SQuAD2- and NQ-trained models respectively. We then added synthetic unanswerable instances by taking the question generated from a window and associating it with a non-overlapping window from the same Wikipedia page, as sketched below. We then sampled negatives to obtain a total of 3M and 4M synthetic training instances for SQuAD2 and NQ respectively. We trained models analogous to Alberti et al. (2019), initializing from the public BERT model, with a batch size of 128 examples for one epoch on each of the two sets of synthetic examples and on the union of the two, with a constant learning rate and no learning rate decay. We then fine-tuned the resulting models on SQuAD2 and NQ.
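The construction of synthetic unanswerable instances can be sketched as follows. Only the general recipe (pair a generated question with a non-overlapping window from the same page, then subsample) comes from the description above; the data layout and helper names are illustrative assumptions.

```python
import random
from typing import Dict, List, Tuple

def add_unanswerable_negatives(
    positives: List[Tuple[str, str, str, str]],   # (page_id, window, question, answer)
    windows_by_page: Dict[str, List[str]],
    num_negatives: int,
    seed: int = 0,
) -> List[Tuple[str, str, str]]:
    """Pair each generated question with a different window from the same
    Wikipedia page, yielding (window, question, NO_ANSWER) instances, then
    subsample to the desired number of negatives."""
    rng = random.Random(seed)
    negatives = []
    for page_id, window, question, _answer in positives:
        others = [w for w in windows_by_page.get(page_id, []) if w != window]
        if others:
            negatives.append((rng.choice(others), question, "NO_ANSWER"))
    return rng.sample(negatives, min(num_negatives, len(negatives)))
```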

In our full pretraining experiments (Section 3.3) we only trained on SQuAD2. However, we pretrained our question generation model on all of the BERT pretraining data, generating the next sentence left-to-right. We created a synthetic, roundtrip filtered corpus with 50M examples. We then fine-tuned the model on SQuAD2 as previously described. We experimented with both the single model setting and an ensemble of 6 models.

4.2 Results

The final results are shown in Tables 2 and 3. We found that pretraining on SQuAD2 and NQ synthetic data increases the performance of the fine-tuned model by a significant margin. On the NQ short answer task, the relative reduction in headroom is 50% to the single human performance and 10% to human ensemble performance. We additionally found that pretraining on the union of synthetic SQuAD2 and NQ data is very beneficial on the SQuAD2 task, but does not improve NQ results.

The full pretraining approach with ensembling obtains the highest EM and F1 listed in Table 2. This result is only 0.1 EM and 0.4 F1 from human performance and is the third best model on the SQuAD2 leaderboard as of this writing (5/31/19).

Roundtrip Filtering

Roundtrip filtering appears to be consistently beneficial. As shown in Figure 1, models pretrained on roundtrip-consistent data outperform their counterparts pretrained without filtering. From manual inspection, 39% of 46 sampled triples that were roundtrip consistent were correct, while only 16% of 44 discarded triples were correct.

Data Source

Generated question-answer pairs are illustrative of the differences in the style of questions between SQuAD2 and NQ. We show a few examples in Table 4, where the same passage is used to create a SQuAD2-style and an NQ-style question-answer pair. The SQuAD2 models seem better at creating questions that directly query a specific property of an entity expressed in the text. The NQ models seem instead to attempt to create questions around popular themes, like famous works of art or TV shows, and then extract the answer by combining information from the entire passage.

5 Conclusion

We presented a novel method to generate synthetic QA instances and demonstrated improvements from this data on SQuAD2 and on NQ. We additionally proposed a possible direction for formal grounding of this method, which we hope to develop more thoroughly in future work.


Appendix A Supplementary Material: a Sketch of a Formal Justification for the Approach

This section sketches a potential approach to giving a formal justification for the roundtrip method, inspired by the method of Balcan:2005 for learning with labeled and unlabeled data. This section is intentionally rather speculative but is intended to develop intuition about the methods, and to propose possible directions for future work in developing a more formal grounding.

Assume that we have parameter estimates $\hat\theta_A$ and $\hat\theta_Q$ derived from labeled examples. The log-likelihood function for the remaining parameters $\theta_{A'}$ is then

$$L(\theta_{A'}) = \sum_{i} \log p\big(a^{(i)} \mid c^{(i)}, q^{(i)}; \theta_{A'}\big).$$

Estimation from labeled examples alone would involve the following optimization problem:

$$\hat\theta_{A'} = \arg\max_{\theta \in \Theta} L(\theta) \qquad (1)$$

where $\Theta$ is a set of possible parameter values; typically $\Theta$ would be unconstrained, or would impose some regularization on $\theta$.

Now assume we have some auxiliary function $R(\theta)$ that measures the roundtrip consistency of parameter values $\theta$ on a set of unlabeled examples. We will give concrete proposals for $R(\theta)$ below. A natural alternative to Eq. 1 is then to define

$$\Theta(b) = \{\theta \in \Theta : R(\theta) \ge b\}$$

for some value of $b$, and to derive new parameter estimates

$$\hat\theta_{A'} = \arg\max_{\theta \in \Theta(b)} L(\theta). \qquad (2)$$

The value for $b$ can be estimated using cross-validation of accuracy on tuning data.

Intuitively, a good choice of auxiliary function $R(\theta)$ would have the property that there is some value of $b$ such that: (1) $\Theta(b)$ is much ”smaller” or less complex than $\Theta$, and hence many fewer labeled examples are required for estimation (Balcan and Blum (2005) give precise guarantees of this type); (2) $\Theta(b)$ nevertheless contains ‘good’ parameter values that perform well on the labeled data.

A first suggested auxiliary function is the following, which makes use of unlabeled examples $c^{(j)}$ for $j = 1, \ldots, M$:

$$R_1(\theta) = \sum_{j=1}^{M} \log \sum_{a, q} p\big(a \mid c^{(j)}; \hat\theta_A\big)\, p\big(q \mid a, c^{(j)}; \hat\theta_Q\big)\, p\big(a \mid c^{(j)}, q; \theta\big),$$

where $\hat\theta_A$ and $\hat\theta_Q$ are the fixed estimates derived from the labeled data.

This auxiliary function encourages roundtrip consistency under parameters $\theta$: it is high when answers sampled from $p(a \mid c; \hat\theta_A)$, paired with questions from $p(q \mid a, c; \hat\theta_Q)$, are recovered by the question answering model. It is reasonable to assume that the optimal parameters achieve a high value for $R_1(\theta)$, and hence that this will be a useful auxiliary function.
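As a rough illustration, an auxiliary function of this kind could be approximated by Monte Carlo sampling. The sketch below assumes hypothetical sample_answer, sample_question, and qa_log_prob wrappers around the three models; it is not the authors' code, only an illustration of estimating a roundtrip-consistency score on unlabeled passages.

```python
def estimate_roundtrip_consistency(
    unlabeled_contexts,      # passages c^(j)
    sample_answer,           # hypothetical: c -> a, sampled from p(a|c)
    sample_question,         # hypothetical: (c, a) -> q, sampled from p(q|a,c)
    qa_log_prob,             # hypothetical: (c, q, a) -> log p(a|c,q; theta)
    num_samples=4,
):
    """Monte Carlo estimate of an R(theta)-style roundtrip score: the average
    log-probability that the QA model recovers the sampled answer."""
    total, count = 0.0, 0
    for c in unlabeled_contexts:
        for _ in range(num_samples):
            a = sample_answer(c)
            q = sample_question(c, a)
            total += qa_log_prob(c, q, a)
            count += 1
    return total / max(count, 1)
```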

A second auxiliary function, which may be more closely related to the approach in the current paper, is derived as follows. Assume we have some method of deriving $(c^{(j)}, q^{(j)}, a^{(j)})$ triples for $j = 1, \ldots, M$ from unlabeled data, where a significant proportion of these examples are ‘correct’ question-answer pairs. Define the following auxiliary function:

$$R_2(\theta) = \sum_{j=1}^{M} g\Big(\log p\big(a^{(j)} \mid c^{(j)}, q^{(j)}; \theta\big)\Big).$$

Here $g$ is some function that encourages high values for $\log p(a^{(j)} \mid c^{(j)}, q^{(j)}; \theta)$. One choice would be $g(z) = z$; another choice would be $g(z) = z$ if $z \le \gamma$, and $g(z) = \gamma$ otherwise, where $\gamma$ is a target ‘margin’. Thus under this auxiliary function the constraint $R_2(\theta) \ge b$ would force the parameters to fit the triples derived from unlabeled data.

A remaining question is how to solve the optimization problem in Eq. 2. One obvious approach would be to perform gradient ascent on the objective

$$L(\theta) + \lambda R(\theta),$$

where $\lambda \ge 0$ dictates the relative weight of the two terms, and can be estimated using cross-validation on tuning data (each value for $\lambda$ implies a different value for $b$).
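A toy sketch of this penalized objective and of an ascent step on it is given below; the callables standing in for $L$, $R$, and their gradients are illustrative assumptions rather than anything taken from the paper.

```python
def combined_objective(theta, labeled_log_likelihood, roundtrip_score, lam=0.1):
    """The relaxed form of Eq. (2): L(theta) + lambda * R(theta).
    `labeled_log_likelihood` and `roundtrip_score` are hypothetical callables
    that evaluate L and R at the current parameters."""
    return labeled_log_likelihood(theta) + lam * roundtrip_score(theta)

def gradient_ascent_step(theta, grad_L, grad_R, lam=0.1, lr=1e-3):
    """One ascent step on the combined objective, given gradient callables
    that return per-parameter gradients of L and R."""
    return [t + lr * (gl + lam * gr)
            for t, gl, gr in zip(theta, grad_L(theta), grad_R(theta))]
```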

A second approach may be to first pre-train the parameters on the auxiliary function $R(\theta)$, then fine-tune on the function $L(\theta)$. In practice this may lead to final parameter values with relatively high values for both objective functions. This latter approach appears to be related to the algorithms described in the current paper; future work should investigate this more closely.