Variations on masked language models (MLMs) (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019b; Conneau et al., 2019; Lewis et al., 2019a; Raffel et al., 2019; Clark et al., 2020) provide highly effective self-supervision for pre-training by removing and then reconstructing parts of an input text. In this paper, we present the first viable pre-training alternative to MLMs; self-supervision is instead provided by learning to paraphrase collections of related documents in many languages.
More specifically, we introduce MARGE, a Multilingual Autoencoder that Retrieves and Generates. We train MARGE by self-supervising the reconstruction of target text: the model first retrieves a set of related texts (in many languages) and then conditions on them to maximize the likelihood of generating the original. We pre-train a multi-source sequence-to-sequence model that separately encodes each retrieved document and decodes the target, piecing together and translating content from the appropriate inputs as needed to provide the best reconstruction possible. The retrieval model scores are used to bias the cross-attention to the most relevant retrieved documents, allowing the retrieval model to be trained jointly from the reconstruction loss.
Our approach can be viewed as a new type of denoising auto-encoder where the noise comes from the retrieval step and is much more diverse than masking; retrieved documents may have little lexical overlap with the target, and may not even be in the same language, but should communicate the same underlying information. In this way, the pre-training task is designed to emphasize paraphrasing and reduce the amount of encyclopedic knowledge the model must memorize. The set of retrieved documents and relevance scores are an autoencoder bottleneck from which the input must be reconstructed.

MARGE is related to recent work that learns to do retrieval as part of the end task model, for example to find evidence documents in open domain question answering (Guu et al., 2020; Lewis et al., 2020). This leads to a more challenging retrieval problem that, unlike ours, requires a separate pre-training phase.
Overall, our pre-trained models capture elements of traditional paraphrasing, translation, multi-document summarization, and information retrieval tasks — without any fine-tuning. (Masked language models, in contrast, are less directly related to target fine-tuning tasks, and significant ongoing research focuses on understanding why they work so well; see Rogers et al. (2020) for a survey.) This allows effective zero-shot learning in many cases; with no fine-tuning we achieve BLEU scores of up to 35.8 for document translation, and outperform strong baselines for cross-lingual transfer in summarization. These results provide a step towards pre-trained models that can perform any task with little or no fine-tuning. With fine-tuning, we achieve competitive performance with masked language models on a range of discriminative and generative tasks in many languages, making MARGE the most generally applicable pre-training method to date.
2.1 Overview

During pre-training, the input to the model is a batch of evidence documents z_1, …, z_M and target documents x_1, …, x_N. (We use document to refer to contiguous chunks of text up to a maximum length, here 512 tokens.) The model is trained to maximize the likelihood of the targets, conditioned on the evidence documents, and the relevance of each evidence document to each target:
- The model first computes a relevance score f(x_i, z_j) between every pair of documents x_i and z_j, by embedding each document and computing their cosine similarity (§2.2).
- The model then computes the likelihood of reconstructing each x_i conditioned on z_1, …, z_M and each f(x_i, z_j), using a modified seq2seq model. The similarity score encourages the model to attend more to relevant evidence documents. Backpropagating the reconstruction loss therefore improves both the sequence-to-sequence model and the relevance model (§2.3).
- We construct batches so that evidence documents are relevant to the targets, using the relevance model for retrieval (§2.4).
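To make the interaction between these components concrete, the following is a minimal sketch of a single pre-training step. The callables relevance_model and reconstruction_model are hypothetical stand-ins for the components described in §2.2 and §2.3, not the actual implementation.

```python
# Minimal sketch of one MARGE-style pre-training step (hypothetical helper names).
def training_step(targets, evidences, relevance_model, reconstruction_model, optimizer):
    # f(x_i, z_j): relevance scores between every target and evidence document, shape [N, M]
    scores = relevance_model(targets, evidences)
    # -log p(x_i | z_1..z_M, f(x_i, .)): per-target reconstruction loss, shape [N]
    nll = reconstruction_model(targets, evidences, scores)
    loss = nll.sum()
    loss.backward()   # gradients reach the relevance model through the attention bias
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```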
Training this model is a chicken-and-egg problem. The reconstruction and relevance models cannot be effectively updated if the batches do not contain relevant evidence documents, but batch construction relies on a relevance model. However, we found that, in practice, the model is able to learn from a random initialization, which effectively provides a type of hashing of random features for each word.
2.2 Relevance Scores
To learn the relevance scores for a pair of documents, we train a document encoder g that maps a list of tokens to a fixed-size representation. We apply the same encoder to both the target and evidence document, and take the cosine similarity between their representations:

f(x_i, z_j) = cos(g(x_i), g(z_j))    (1)
Using the same encoder for both the target and evidence documents allows even random models to compute meaningful similarity functions, as documents with higher lexical overlap are more likely to be projected to more similar representations (Wieting and Kiela (2019) demonstrate this for recurrent models). This property is crucial at initialization.
We encode documents by taking the representation of the first token from the top of a 4-layer Transformer (Vaswani et al., 2017). We share parameters with the first four layers of the reconstruction-model encoder, which saves computation and allows multitask learning.
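As an illustration, here is a sketch of such a relevance model in PyTorch. The 4-layer encoder and first-token pooling follow the description above, but the class names, vocabulary size, and number of heads are illustrative assumptions rather than the exact MARGE configuration.

```python
# Sketch of the relevance model (section 2.2): embed each document with a shared encoder,
# take the first-token representation, and score pairs by cosine similarity.
# Sizes are illustrative, not the exact MARGE configuration.
import torch
import torch.nn.functional as F

class DocEncoder(torch.nn.Module):
    def __init__(self, vocab_size=50000, dim=1024, layers=4, heads=16):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)
        layer = torch.nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4096, batch_first=True)
        self.encoder = torch.nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, tokens):                 # tokens: [batch, seq_len]
        hidden = self.encoder(self.embed(tokens))
        return hidden[:, 0]                    # first-token (beginning-of-sentence) embedding

def relevance_scores(encoder, targets, evidences):
    """f(x_i, z_j) = cos(g(x_i), g(z_j)) for all target/evidence pairs."""
    g_x = F.normalize(encoder(targets), dim=-1)    # [N, dim]
    g_z = F.normalize(encoder(evidences), dim=-1)  # [M, dim]
    return g_x @ g_z.T                             # [N, M] cosine similarities
```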
2.3 Reconstruction Model
Given a set of evidence documents z_1, …, z_M and similarity scores f(x_i, z_1), …, f(x_i, z_M), the reconstruction model computes the likelihood of target document x_i, giving the loss:

L = − Σ_i log p(x_i | z_1, …, z_M, f(x_i, z_1), …, f(x_i, z_M))    (2)
This provides an auto-encoder loss where the reconstruction of document x_i is indirectly conditioned on x_i, but with an intermediate bottleneck provided by the retrieved documents and relevance scores, as described in more detail below.
First, the input documents are encoded individually with a bidirectional Transformer, and then the resulting embeddings are concatenated. The similarity score is used to bias the cross-attention from the decoder to the encoder, so that the decoder will pay more attention to more relevant evidence documents. Using more relevant evidence documents will improve the likelihood of reconstructing x_i, so gradient descent on (2) will improve the quality of the similarity scores.
Standard Transformer sequence-to-sequence models (Vaswani et al., 2017) compute a matrix of cross-attention probabilities between all elements of target document x_i and evidence document z_j:

α^{lh} = softmax_{z_j}( q^{lh}(x_i) k^{lh}(z_j) )    (3)

where q^{lh} and k^{lh} compute query and key representations for layer l and head h, and softmax_{z_j} denotes a softmax normalised over the elements of z_j.
We instead compute cross-attention over the whole set of evidence documents z_1, …, z_M, biasing the attention scores with the document relevance score from (1):

α^{lh} = softmax_{z_1, …, z_M}( q^{lh}(x_i) k^{lh}(z_j) + β f(x_i, z_j) )    (4)

where β is a trainable scalar parameter that weights the importance of the document similarity score.
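A sketch of the biased cross-attention in (4) for a single head and a single target document is shown below; tensor shapes and argument names are illustrative, and details such as multi-head projections and masking are omitted.

```python
# Sketch of the biased cross-attention in (4): standard attention logits over the
# concatenated evidence tokens, shifted by beta * f(x_i, z_j) for the document each
# key token came from, then a single softmax across all evidence documents.
# Shapes and names are illustrative.
import torch

def biased_cross_attention(q, k, v, doc_ids, doc_scores, beta):
    """
    q:          [tgt_len, dim]  decoder queries for one target document x_i
    k, v:       [src_len, dim]  keys/values for all evidence tokens (z_1..z_M concatenated)
    doc_ids:    [src_len]       which evidence document each key token belongs to
    doc_scores: [M]             relevance scores f(x_i, z_j) from the retrieval model
    beta:       trainable scalar weighting the relevance bias
    """
    logits = q @ k.T / q.shape[-1] ** 0.5         # [tgt_len, src_len]
    logits = logits + beta * doc_scores[doc_ids]  # add the per-document bias to every token
    attn = torch.softmax(logits, dim=-1)          # one softmax over all evidence tokens
    return attn @ v                               # [tgt_len, dim]
```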
Guu et al. (2020) propose a related approach in which the likelihood of a target x_i is calculated by marginalizing out latent documents z: p(x_i) = Σ_z p(x_i | z) p(z). Our attention-like mechanism is (1) more expressive, because it can pay complete attention to a token from one document at one timestep and a token from another document at another timestep, and (2) more efficient, because p(x_i | z) is not computed separately for each z. However, our method does not allow attention from the evidence documents z_1, …, z_M to the target x_i.
2.4 Batch Construction
Batches are constructed to create evidence document sets that give useful information for reconstructing target documents , as detailed in this section. Overall, we divide the data into shards of related documents. Periodically, we compute the similarities between pairs of documents within each shard, using the relevance model, and apply a threshold to keep the strongest connections. The final batches are constructed to maximize connectivity between evidence and target documents.
We use simple heuristic constraints to divide documents into related shards, to improve both the accuracy and efficiency of retrieval. Specifically, for news text, documents are in the same shard iff they were published on the same date. For Wikipedia, we split articles into chunks of length 512. We create 1000 shards, where all chunks from the same article, or the equivalent article in another language, are in the same shard (otherwise dividing chunks randomly).
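A sketch of how such shard keys might be computed is given below; the metadata field names and the hashing scheme are assumptions for illustration, not the exact implementation.

```python
# Sketch of the sharding heuristics: news documents share a shard iff they have the same
# publication date; Wikipedia chunks are hashed into 1000 shards by a language-independent
# article identifier, so all chunks of an article and its versions in other languages land
# in the same shard. Field names are illustrative.
def shard_key(doc):
    if doc["domain"] == "news":
        # news: one shard per publication date
        return ("news", doc["publication_date"])
    # Wikipedia: hash a cross-lingual article id into 1000 shards
    return ("wiki", hash(doc["article_id"]) % 1000)
```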
While we backpropagate through the relevance model in (4), the construction of the batch itself is inherently non-differentiable. For convenience, we perform the nearest neighbour search offline. Every 10k model updates, we sample a set of shards of documents. For each shard, we compute f(x_i, z_j) for every pair of target and evidence documents, using the current relevance model.
We select which documents are sufficiently related by taking the top k most similar document pairs across all pairs in the shard. Some targets may have no sufficiently relevant evidence documents, and are unused until the shard is re-indexed with an updated relevance model.
We aim to construct batches containing clusters of related target and evidence documents, to maximize the available information for reconstructing each target. The output from the thresholding step is a bipartite graph of evidence and target documents with edges between them. A batch is a subgraph, and we perform a small local search to find subgraphs maximizing the sum of the weights of all edges in the subgraph. To encourage the model to build multilingual batches, edges where the evidence and target are in different languages are given weight 100, and other edges have weight 1. To create batches, we iterate over seed evidence documents with an edge to at least one target document. We then greedily add evidence and target documents to the batch to maximize the sum of the weights of edges, until the maximum number of tokens that can fit in GPU memory is reached.
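The greedy subgraph search might look roughly like the following sketch, which grows a batch from a seed document by repeatedly adding the neighbour that contributes the most edge weight, subject to a token budget. The data structures are illustrative assumptions.

```python
# Sketch of greedy batch construction over the thresholded similarity graph.
# Data structures are illustrative, not the actual implementation.
def build_batch(seed, graph, doc_len, max_tokens):
    """
    graph:   {doc_id: {neighbour_id: weight}} from the thresholding step, with
             cross-lingual edges weighted 100 and monolingual edges weighted 1
    doc_len: {doc_id: token count}
    """
    batch, tokens = {seed}, doc_len[seed]
    while True:
        # gain of a candidate = total weight of its edges into the current batch
        gains = {}
        for member in batch:
            for neighbour, weight in graph.get(member, {}).items():
                if neighbour not in batch:
                    gains[neighbour] = gains.get(neighbour, 0) + weight
        if not gains:
            return batch
        best = max(gains, key=gains.get)
        if tokens + doc_len[best] > max_tokens:
            return batch
        batch.add(best)
        tokens += doc_len[best]
```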
We use a Transformer model (Vaswani et al., 2017). The encoder consists of 12 Transformer layers of dimension 1024, with feed-forward layers of size 4096. Recent work showed that large models train more efficiently (Li et al., 2020; Kaplan et al., 2020). The decoder is similar to the encoder, but we increase the size of the feed-forward layers in the Transformer decoder to 16536. We also add 4 additional Transformer layers to the base of the decoder with only self-attention and feed-forward layers of size 4096, which allows words in the target to contextualize locally before the more expensive cross-attention and feed-forward layers. We focus on scaling up the decoder because it has access to more information than the encoder (which sees only evidence documents). In total, the model contains roughly 960M parameters. For the relevance model, we use the first 4 layers of the encoder, and take the document representation from the beginning-of-sentence token.
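The stated sizes correspond roughly to the following configuration sketch; the field names are ours, and values not stated above are inferred or omitted.

```python
# Rough configuration implied by the description above (illustrative field names).
from dataclasses import dataclass

@dataclass
class MargeConfigSketch:
    model_dim: int = 1024            # hidden size of encoder and decoder
    encoder_layers: int = 12
    encoder_ffn_dim: int = 4096
    decoder_layers: int = 12         # "similar to the encoder" (assumed)
    decoder_ffn_dim: int = 16536     # enlarged decoder feed-forward layers
    decoder_prefix_layers: int = 4   # extra self-attention-only layers at the decoder base
    decoder_prefix_ffn_dim: int = 4096
    relevance_layers: int = 4        # shared with the first four encoder layers
```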
During pre-training, workers process sub-batches containing an average of 2 evidence documents and 2 target documents, and we accumulate gradients across workers. Using the CC-NEWS corpus (Liu et al., 2019), we train initially with 64 workers for 450k steps (linearly annealing the learning rate from 1e-04 to 0, with 10k warmup steps), and then continue training with 2048 workers for 550k steps (annealing the learning rate from 2e-04 to 0). (Initially training with a smaller learning rate reduced instability with an untrained retrieval model.) We refer to this model as MARGE-NEWS. To explore domain effects, we further pre-train for 100k steps on Wikipedia data, annealing the learning rate from 1e-04 to 0, and refer to the resulting model as MARGE. We rebuild the index every 10k updates. We set retrieval thresholds such that we take on average 4 monolingual and 4 cross-lingual links per target document.
We de-duplicate the data, and identify languages using FastText (Joulin et al., 2016). We select documents published in 26 different languages (based on their prevalence in downstream tasks), summarized in the Appendix. We divide documents into chunks of length 512. We allow all chunks to be evidence documents. For the news domain, we only allow the first chunk in each document to be used as a target, which we found improved performance during development. We prepend a language identifier token as the first decoder input, to control the output language.
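As a sketch of this data preparation, the snippet below identifies a document's language with fastText, splits it into 512-token chunks, and prepends a language identifier token for the decoder. The model path, tokenizer, and token scheme are illustrative assumptions.

```python
# Sketch of data preparation: language ID with fastText, 512-token chunking, and a
# language identifier token as the first decoder input. Paths and token formats are
# illustrative, not the exact pipeline.
import fasttext

lid = fasttext.load_model("lid.176.bin")   # public fastText language-ID model (assumed)

def prepare(document_text, tokenizer, chunk_len=512):
    (label,), _ = lid.predict(document_text.replace("\n", " "))
    lang = label.replace("__label__", "")            # e.g. "de"
    tokens = tokenizer(document_text)
    chunks = [tokens[i:i + chunk_len] for i in range(0, len(tokens), chunk_len)]
    decoder_prefix = f"[{lang}]"                     # hypothetical language token format
    return lang, decoder_prefix, chunks
```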
For fine-tuning, we use a similar procedure to Lewis et al. (2019a). For generation problems, such as translation and summarization, the task input is fed into the encoder, and the output is generated by the decoder. For classification problems the task input is fed into both the encoder and decoder, and a representation is used from the decoder’s final layer hidden state. For zero-shot transfer experiments, we freeze word embeddings and the first 4 decoder layers.
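For the zero-shot transfer setting, the frozen parts might be selected as in this sketch; the attribute names (embed_tokens, decoder.layers) are assumptions about the model implementation.

```python
# Sketch of the zero-shot transfer setup: freeze the word embeddings and the first
# four decoder layers, leaving the rest of the model trainable.
# Attribute names are assumptions about the model implementation.
def freeze_for_zero_shot(model):
    for p in model.embed_tokens.parameters():
        p.requires_grad = False
    for layer in model.decoder.layers[:4]:
        for p in layer.parameters():
            p.requires_grad = False
```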
As a multi-lingual sequence-to-sequence model, MARGE is applicable to a very broad range of tasks. We focus on multi-lingual tasks with elements of retrieval, document comprehension, and document generation, because they are the most directly related to our pre-training.
Table 1 lists the strongest available multilingual pre-trained models, along with relevant model statistics. We compare performance to published numbers for these models.
4.1 Cross-lingual Sentence Retrieval
Our pre-training task requires the model to retrieve similar texts, which may be in different languages. As an extrinsic evaluation of this functionality, we study cross-lingual sentence retrieval, in which a model must identify the correct translation of a sentence from a set of distractors. We report performance on BUCC2018 (Zweigenbaum et al., 2018) and Tatoeba (Artetxe and Schwenk, 2019).
We follow the setup of Hu et al. (2020), using no fine-tuning. As a document representation, we use the average embedding of the fifth encoder layer (tuned on BUCC development data).
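A sketch of this evaluation protocol follows: mean-pool the hidden states of the chosen encoder layer as the sentence representation, then match each source sentence to its nearest neighbour by cosine similarity. Here encode_layers is a hypothetical callable returning one hidden-state tensor per encoder layer.

```python
# Sketch of cross-lingual sentence retrieval with layer-averaged embeddings.
# encode_layers is a hypothetical helper returning a list of [seq_len, dim] tensors,
# one per encoder layer (0-indexed).
import torch
import torch.nn.functional as F

def sentence_embeddings(encode_layers, sentences, layer=4):
    # layer=4 picks the fifth encoder layer under the assumed 0-indexed convention
    embs = [encode_layers(s)[layer].mean(dim=0) for s in sentences]
    return F.normalize(torch.stack(embs), dim=-1)

def retrieval_accuracy(src_embs, tgt_embs):
    # the correct translation is assumed to sit at the same index on both sides
    predictions = (src_embs @ tgt_embs.T).argmax(dim=-1)
    return (predictions == torch.arange(len(src_embs))).float().mean().item()
```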
On BUCC (Table 4), MARGE outperforms other unsupervised models by almost 10 points. On Tatoeba (see Appendix), there is significant variation across languages, but overall MARGE performs comparably to XLM-R and significantly better than other pre-trained models. Better results have been achieved on both tasks using labeled bitext for training (Artetxe and Schwenk, 2019), but our results suggest that our pre-training objective learns an effective cross-lingual retrieval function.
4.2 Document-Level Machine Translation
During pre-training, the model can retrieve evidence documents in different languages from the target—in contrast to mBERT, XLM, and mBART, where instances are monolingual. We explore how well this pre-training approach learns to translate. We focus on document-level translation tasks, and report document-level BLEU scores. (All sentences in a document are concatenated prior to calculating BLEU, using SacreBLEU (Post, 2018).) Following Liu et al. (2020), we segment documents into chunks of 512 tokens for training and generation, and then concatenate chunks of the same document.
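A small sketch of this document-level BLEU computation, assuming hypotheses and references are given as lists of sentences per document:

```python
# Sketch of document-level BLEU: concatenate all sentences of each document before
# scoring with SacreBLEU's corpus-level BLEU. The input format is illustrative.
import sacrebleu

def document_bleu(hyp_docs, ref_docs):
    """hyp_docs / ref_docs: lists of documents, each given as a list of sentences."""
    hyps = [" ".join(sents) for sents in hyp_docs]
    refs = [" ".join(sents) for sents in ref_docs]
    return sacrebleu.corpus_bleu(hyps, [refs]).score
```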
Zero-Shot Unsupervised Document Translation
Translation offers a direct measure of how well the pre-trained model encoder and decoder work for different languages, and the extent to which the interface between them is language independent. Therefore, in contrast to prior work on unsupervised translation, we do not further fine-tune the model with iterative back-translation (Lample et al., 2017; Artetxe et al., 2017), or bitext in other language pairs (Johnson et al., 2017; Liu et al., 2020).
We measure both translation into English, which tests encoder performance for other languages, and translation out of English, which tests decoder performance. Generation hyperparameters were minimally tuned on German/English development data, and are shared across all translation pairs. We use a beam of size 6 and block repeated n-grams of length 8 (Fan et al., 2017).
Results are shown in Table 2. Performance varies considerably by language, but reaches 35.8 for German to English, which is the highest score we are aware of for a system trained with no bitext. Performance is also strong for some languages with different scripts, such as Arabic to English. However, some languages work less well, notably Japanese. Generating non-English languages proves harder in all cases, particularly those with non-Latin alphabets, but English to French works well. Future work should explore up-sampling rarer languages during pre-training.
Qualitatively, we note that the translations are often good but less literal than the reference. This may cause BLEU scores to underestimate performance.
It is likely that unsupervised performance could be further improved using iterative back-translation using MARGE as an initialization, but we focus here on examining the pre-trained model directly.
Supervised Document Translation
We also evaluate how well our models can be fine-tuned for translation using labeled bitext. To compare with mBART, we use the same English-German and Chinese-English document translation tasks from WMT19 and IWSLT2015. Table 4 shows that MARGE and mBART perform similarly, with MARGE performing better on English-German and mBART on Chinese-English. Both outperform baselines by a wide margin.
4.3 Summarization

We evaluate monolingual sequence-to-sequence generation performance on text summarization tasks. We use the MLSum dataset (Scialom et al., 2020) to compare performance in several languages.
Results are shown in Table 5. MARGE outperforms an extractive mBERT model—the extractive oracle performance suggests that extractive models are very competitive on this dataset—and a seq2seq model without pre-training. In some cases, training one model on all languages (train all) improves results. Finally, we explore zero-shot summarization, where the model is trained on all languages except the test language—this model outperforms a strong lead-3 baseline, and even a supervised pointer-generator model on Spanish and Russian. On this domain, we achieve better results with MARGE-NEWS, a version of the model trained only on news.
Table 5: ROUGE-L scores on MLSum. MARGE generates abstractive summaries that outperform an extractive mBERT model. We also demonstrate zero-shot transfer, where the model is trained on all languages except the test language, and report results from training on all languages.
4.4 Paraphrasing

We measure how well our pre-training task learns paraphrasing on the PAWS-X paraphrase detection dataset (Yang et al., 2019a). Models must determine whether two sentences are paraphrases; examples were constructed adversarially to have high lexical overlap. Models are trained on English, and we test zero-shot transfer to other languages. MARGE edges out previous models to set a new state of the art (Table 1(b)).
4.5 Question Answering
Question answering offers another document level reasoning task that is easily posed in many languages. We use the MLQA dataset (Lewis et al., 2019b), in which models are trained on the English SQuAD dataset (Rajpurkar et al., 2016) and then tested in other languages.
Results in Table 1(a) show that MARGE achieves competitive performance with XLM-R, setting the state of the art for Chinese, and outperforms other models by a wide margin.
What does the reconstruction model learn?
Table 7: Zero-shot unsupervised outputs for source documents in different languages on the same topic.

French source: Katherine Coleman Goble Johnson, née le 26 août 1918 à White Sulphur Springs (Virginie-Occidentale) et morte le 24 février 2020 à Newport News (Virginie), est une physicienne, mathématicienne et ingénieure spatiale américaine. Elle contribue aux programmes aéronautiques et spatiaux du National Advisory Committee for Aeronautics (NACA) puis de la National Aeronautics and Space Administration (NASA).
Output: Katherine Coleman Goble Johnson (August 26, 1918 – February 24, 2020) was an American physicist, mathematician, and space engineer. She contributed to the aeronautics and space programs of the National Advisory Committee for Aeronautics (NACA) and later the National Aeronautics and Space Administration (NASA).

Spanish source: Katherine Coleman Goble Johnson (White Sulphur Springs, Virginia Occidental; 26 de agosto de 1918 - Newport News, Virginia; 24 de febrero de 2020) fue una física, científica espacial y matemática estadounidense que contribuyó a la aeronáutica de los Estados Unidos y sus programas espaciales con la aplicación temprana de las computadoras electrónicas digitales en la NASA. Conocida por su precisión en la navegación astronómica, calculó la trayectoria para el Proyecto Mercury y el vuelo del Apolo 11 a la Luna en 1969.
Output: Katherine Coleman Goble Johnson (August 26, 1918 – February 24, 2020) was an American physicist, space scientist, and mathematician who contributed to the United States aeronautics and its space programs with the early application of digital electronic computers at NASA. Known for her accuracy in astronomical navigation, she calculated the trajectory for Project Mercury and the Apollo 11 flight to the Moon in 1969.

German source: Katherine G. Johnson (gebürtig Coleman, zwischenzeitlich verheiratet Goble; * 26. August 1918 in White Sulphur Springs, West Virginia; † 24. Februar 2020 in Newport News, Virginia) war eine US-amerikanische Mathematikerin afroamerikanischer Abstammung. Für ihre Beiträge zur Berechnung der Flugbahnen für das Mercury-Programm und den ersten bemannten Flug zum Mond im Rahmen der Apollo-11-Mission wurde sie Ende 2015 mit der Presidential Medal of Freedom ausgezeichnet.
Output: Katherine G. Johnson (née Coleman; August 26, 1918 – February 24, 2020) was an American mathematician. She was awarded the Presidential Medal of Freedom in 2015 for her contributions to the calculation of the flight paths for the Mercury program and the first manned flight to the Moon in the Apollo 11 mission.

Italian source: Katherine Coleman Goble Johnson (White Sulphur Springs, 26 agosto 1918 – Hampton, 24 febbraio 2020) è stata una matematica, informatica e fisica statunitense. Contribuì alla scienza dell’aeronautica statunitense e ai programmi spaziali, già dal primo utilizzo dei computer elettronici digitali da parte della NASA. Venne molto apprezzata per l’accuratezza che poneva nel calcolo della navigazione spaziale computerizzata e per il lavoro tecnico dirigenziale pluridecennale svolto alla NASA: da quando calcolava le traiettorie delle orbite, paraboliche e iperboliche, le finestre di lancio e i percorsi di ritorno di emergenza per molti voli, al Project Mercury, incluse le prime missioni NASA di John Glenn, Alan Shepard, le traiettorie di inserzione lunare nei voli lunari del programma Apollo, continuando con il lavoro sul programma dello Space Shuttle, infine con la progettazione dei primi piani per la missione su Marte.
Output: Katherine Coleman Goble Johnson (White Sulphur Springs, August 26, 1918 – Hampton, February 24, 2020) was an American mathematician, computer scientist, and physicist. She contributed to the science of the U.S. Air Force and space programs, as early as the first use of digital electronic computers by NASA. She was highly regarded for the accuracy she put into computerized space navigation calculations and for the decades-long technical leadership work she performed at NASA: from calculating orbital trajectories, parabolic and hyperbolic, launch windows, and emergency return paths for many flights, to Project Mercury, including the first NASA missions of John Glenn, Alan Shepard, lunar insertion trajectories in the Apollo lunar flights, continuing work on the Space Shuttle program, and finally designing the initial plans for the Mars mission.
To build intuitions about what the reconstruction model learns, we examine model outputs for inputs in different languages on the same topic (Table 7).
Even for a fixed topic, the model output varies significantly with the input, showing that it is not simply memorizing text. Almost all facts in the outputs are supported by the input, with few hallucinations—suggesting pre-training has taught the model to translate and paraphrase information from its source, rather than memorize facts in its parameters. However, the outputs are not literal translations of the input—in particular, some important facts from the source are not expressed in the output.
The model was not trained on literal translations, so it is perhaps surprising that the output is often so closely aligned to the input. One possible explanation is that more literal translations represent a mode of a diverse distribution over paraphrases.
What does the retrieval model learn?
Figure 2 shows statistics of the retrieval model. Differences across languages are due to many factors, including the frequency of languages in the corpus, how linguistically related two languages are, and how likely two languages are to cover the same topic. Our pre-training also introduces feedback loops: if the reconstruction model is unable to translate between two languages, it may teach the retrieval model that documents in these languages are less relevant to each other.
All languages retrieve the highest proportion of documents within their own language (represented by the diagonal), but otherwise the retrieved documents tend to be distributed over a number of other languages. There tend to be closer affinities between geographically or linguistically related languages, such as Bulgarian and Russian, or Chinese and Japanese. For some languages, the model fails to retrieve many documents in other languages—particularly Indo-Iranian languages, and those which are the only example of their language family we include (such as Telugu and Thai). For these cases, the pre-training reduces to independent updates for each language, as used in multilingual models such as mBART, mBERT, and XLM.
Overall, MARGE shows strong performance on a wider range of tasks than any previous pre-trained model, and is effective at discriminative and generative tasks in many languages. Results are competitive with less general models, even XLM-R, which was trained with significantly more pre-training resources. The pre-training task is more closely related to downstream tasks than masked language modeling, allowing pre-trained models to achieve BLEU scores as high as 35.8 for translation. MARGE also broadens the range of known effective pre-training tasks beyond MLMs, which we hope will lead to further exploration and understanding of pre-training objectives.
However, there are several limitations that future work should address. We pre-trained on news and Wikipedia, where simple metadata can be used to constrain the similarity search, improving efficiency and accuracy. Broadening the domains may require approximate nearest neighbor search (Johnson et al., 2019). Learning the retrieval model requires batch sizes greater than one, so model-parallel training would be required to train significantly larger models. Finally, performance is inconsistent across languages, which may be due to feedback loops during training, where documents in worse-performing languages may be learnt to be less relevant, and therefore retrieved less often.
6 Related Work
Since BERT (Devlin et al., 2019), pre-training for NLP has been dominated by variants of masked language models. For example, Yang et al. (2019b) predict the masked tokens auto-regressively, Dong et al. (2019) multitask MLM and language modeling objectives, Clark et al. (2020) train a discriminator to classify the correctness of MLM samples, and Lewis et al. (2019a) and Raffel et al. (2019) use seq2seq models with masked inputs. MARGE departs significantly from these objectives in that the inputs during pre-training are complete, uncorrupted text.
Recent work has shown impressive results on machine translation through bitext mining (Schwenk et al., 2019), in which a retrieval model is used to search for parallel sentences in a large multilingual corpus, which are then used as training data for a machine translation model. A key conceptual difference is that literal bitext is not optimal for our approach, as we hope to learn linguistic information by training on noisy document-level paraphrases. We also learn to retrieve and translate with no manually translated sentences, unlike existing bitext mining methods.
Several attempts have been made to pre-train language-independent representations. One strand uses MLMs on the concatenation of monolingual corpora, relying on parameter sharing to learn cross-lingual representations (Lample and Conneau, 2019; Conneau et al., 2019; Liu et al., 2020). Another strand has trained machine translation systems (McCann et al., 2017; Siddhant et al., 2019), but results in Hu et al. (2020) suggest translation is a less effective pre-training task. We instead pre-train on loose cross-lingual paraphrases.
Language Models with Retrieval
Several recent approaches (Guu et al., 2020; Lewis et al., 2020) improve MLMs and text generation by learning to retrieve relevant evidence documents. Guu et al. (2018) perform language modeling by retrieving and editing sentences. kNN-LM (Khandelwal et al., 2019) shows that language models can be improved by retrieving from the training set, interpolating a language model with a nearest neighbor classifier. In contrast, we learn retrieval during training but do not require it for inference. Perhaps most relevantly, Liu et al. (2018) generate Wikipedia articles conditioned on a set of evidence documents.
We introduced a new approach to pre-training models for natural language understanding and generation, by using retrieved documents to reconstruct the original. MARGE exhibits strong performance on a range of discriminative and generative tasks in many languages, both with and without fine-tuning. These results establish MARGE as a viable alternative to masked language modeling and provide a step towards pre-trained models that can perform any task with little or no fine-tuning. Future work should scale MARGE to more domains and languages, and study how to more closely align pre-training objectives with different end tasks.
- Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041. Cited by: §4.2.
- Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics 7, pp. 597–610. Cited by: §4.1, §4.1.
- Electra: pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555. Cited by: §1, §6.
- Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. Cited by: §1, §6.
- BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171–4186. External Links: Cited by: §1, §6.
- Unified language model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197. Cited by: §6.
- Controllable abstractive summarization. arXiv preprint arXiv:1711.05217. Cited by: §4.2.
- Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics 6, pp. 437–450. Cited by: §6.
- Realm: retrieval-augmented language model pre-training. arXiv preprint arXiv:2002.08909. Cited by: §1, §2.3, §6.
- XTREME: a massively multilingual multi-task benchmark for evaluating cross-lingual generalization. arXiv preprint arXiv:2003.11080. Cited by: §4.1, §6.
- Billion-scale similarity search with gpus. IEEE Transactions on Big Data. Cited by: §5.
- Google’s multilingual neural machine translation system: enabling zero-shot translation. Transactions of the Association for Computational Linguistics 5, pp. 339–351. Cited by: §4.2.
- FastText.zip: compressing text classification models. arXiv preprint arXiv:1612.03651. Cited by: §3.
- Scaling laws for neural language models. arXiv preprint arXiv:2001.08361. Cited by: §3.
- Generalization through memorization: nearest neighbor language models. arXiv preprint arXiv:1911.00172. Cited by: §6.
- Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Cited by: §4.2.
- Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291. Cited by: §6.
- Bart: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Cited by: §1, §3, §6.
- Mlqa: evaluating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475. Cited by: §4.5.
- Retrieval-augmented generation for knowledge-intensive nlp tasks. arXiv preprint arXiv:2005.11401. Cited by: §1, §6.
- Train large, then compress: rethinking model size for efficient training and inference of transformers. arXiv preprint arXiv:2002.11794. Cited by: §3.
- Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198. Cited by: §6.
- Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210. Cited by: §4.2, §4.2, Table 4, §6.
- RoBERTa: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §1, §3.
- Learned in translation: contextualized word vectors. In Advances in Neural Information Processing Systems, pp. 6294–6305. Cited by: §6.
- Document-level neural machine translation with hierarchical attention networks. arXiv preprint arXiv:1809.01576. Cited by: Table 4.
- A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771. Cited by: footnote 4.
- Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Cited by: §1, §6.
- Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Cited by: §4.5.
- A primer in bertology: what we know about how bert works. arXiv preprint arXiv:2002.12327. Cited by: footnote 1.
- CCMatrix: mining billions of high-quality parallel sentences on the web. arXiv preprint arXiv:1911.04944. Cited by: §6.
- MLSUM: the multilingual summarization corpus. arXiv preprint arXiv:2004.14900. Cited by: §4.3.
- Evaluating the cross-lingual effectiveness of massively multilingual neural machine translation. arXiv preprint arXiv:1909.00437. Cited by: §6.
- Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §2.2, §2.3, §3.
- No training required: exploring random encoders for sentence classification. arXiv preprint arXiv:1901.10444. Cited by: §2.2.
- PAWS-x: a cross-lingual adversarial dataset for paraphrase identification. arXiv preprint arXiv:1908.11828. Cited by: §4.4.
- XLNet: generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Cited by: §1, §6.
- Overview of the third bucc shared task: spotting parallel sentences in comparable corpora. In Proceedings of 11th Workshop on Building and Using Comparable Corpora, pp. 39–42. Cited by: §4.1.
Appendix A Additional Results
Appendix B Pre-training Data