Unsupervised Neural Machine Translation with Generative Language Models Only

10/11/2021 ∙ by Jesse Michael Han, et al.

We show how to derive state-of-the-art unsupervised neural machine translation systems from generatively pre-trained language models. Our method consists of three steps: few-shot amplification, distillation, and backtranslation. We first use the zero-shot translation ability of large pre-trained language models to generate translations for a small set of unlabeled sentences. We then amplify these zero-shot translations by using them as few-shot demonstrations for sampling a larger synthetic dataset. This dataset is distilled by discarding the few-shot demonstrations and then fine-tuning. During backtranslation, we repeatedly generate translations for a set of inputs and then fine-tune a single language model on both directions of the translation task at once, ensuring cycle-consistency by swapping the roles of gold monotext and generated translations when fine-tuning. By using our method to leverage GPT-3's zero-shot translation capability, we achieve a new state-of-the-art in unsupervised translation on the WMT14 English-French benchmark, attaining a BLEU score of 42.1.

1 Introduction

Recent work on generative pre-training has shown that with sufficient data and scale (DBLP:journals/corr/abs-2001-08361; DBLP:journals/corr/abs-2010-14701), large language models (LMs) can learn a diverse suite of tasks without explicit supervision (radford2019language), and that even stronger performance on these tasks can be elicited using few-shot demonstrations (DBLP:conf/nips/BrownMRSKDNSSAA20). While few-shot prompting is flexible and enables strong performance on a diverse suite of NLP tasks to be coaxed out of generatively pre-trained LMs without further fine-tuning, its benefits are most pronounced with larger models, with commensurate training, inference, compute, and data costs. Furthermore, the very generality of the pre-training objective which enables multi-task learning can produce LMs with more knowledge than is immediately apparent, requiring carefully designed prompts to bring out fully. The desire to unlock and amplify these latent abilities while also reducing the cost of few-shot prompting motivates our present work, which allows us to continue fine-tuning our models, obtaining more performance from smaller models and pushing our larger models even further, without resorting to few-shot prompting at test time or any additional supervision at train time.

We target the domain of unsupervised neural machine translation (NMT), which typically involves bootstrapping a weak translation model before amplifying its translation ability via backtranslation. Recent work in unsupervised NMT has been dominated by large encoder-decoder architectures where the bootstrap is implemented by denoising/autoencoding tasks (e.g., multilingual Cloze (DBLP:conf/naacl/DevlinCLT19; DBLP:conf/nips/ConneauL19), masked-span prediction (DBLP:journals/jmlr/RaffelSRLNMZLL20; DBLP:conf/naacl/XueCRKASBR21), reconstruction from corrupted inputs (DBLP:conf/emnlp/WangZJLL19; DBLP:journals/tacl/LiuGGLEGLZ20)) intended to produce strong encoders and aligned multilingual representations for decoding. In our present work, we show that generative language modeling alone can implement the entire unsupervised NMT pipeline, and derive state-of-the-art unsupervised NMT systems using only generatively pre-trained language models. We implement the bootstrap by first sampling a small number of zero-shot translations from GPT-3. These are then used as few-shot prompts to sample a larger dataset of synthetic translations. The few-shot prompts are then discarded and the generated samples are distilled by fine-tuning the model on these synthetic data in the zero-shot format. This produces a language model aligned to our translation format and amenable to large-scale backtranslation. By using our method to leverage GPT-3’s zero-shot translation capability, we achieve a new state-of-the-art in unsupervised translation on the WMT14 English-French benchmark, attaining a BLEU score of 42.1.

2 Background and related work

The modern approach to unsupervised neural machine translation typically involves encoder-decoder architectures jointly trained via denoising autoencoding / reconstruction tasks (DBLP:conf/icml/VincentLBM08; DBLP:conf/nips/ConneauL19; DBLP:journals/tacl/LiuGGLEGLZ20; DBLP:journals/corr/abs-2012-15547; DBLP:journals/jmlr/RaffelSRLNMZLL20; DBLP:conf/naacl/XueCRKASBR21; DBLP:conf/emnlp/WangZJLL19; DBLP:conf/icml/SongTQLL19) and backtranslation (DBLP:conf/acl/SennrichHB16; DBLP:conf/emnlp/EdunovOAG18; DBLP:journals/corr/abs-1806-04402). This approach to unsupervised NMT is codified by DBLP:conf/iclr/ArtetxeLAC18 and DBLP:conf/iclr/LampleCDR18, although various ideas can be traced back further: unsupervised machine translation was framed as a deciphering task by DBLP:conf/acl/RaviK11, and backtranslation was first introduced for machine translation as a method for data augmentation using target-side monolingual data by DBLP:conf/acl/SennrichHB16. Denoising autoencoding with a bilingual encoder can be viewed as a kind of latent bilingual lexicon induction, necessary for producing sufficiently aligned embeddings to kick-start backtranslation; such techniques have been extensively studied in the context of machine translation (DBLP:conf/acl/ArtetxeLA17; DBLP:conf/coling/KlementievTB12; DBLP:conf/acl/VulicM15; DBLP:journals/corr/HuYLSX17; DBLP:conf/nips/GoyalLZZCB16; DBLP:conf/nips/ShenLBJ17).

At the same time, recent work on large-scale generative pre-training has demonstrated that with sufficient data and model scale (DBLP:journals/corr/abs-2001-08361; DBLP:journals/corr/abs-2010-14701), transformer language models begin learning a variety of tasks without explicit supervision (radford2019language) and that even stronger performance can be coaxed from them using few-shot prompts (DBLP:conf/nips/BrownMRSKDNSSAA20). Our present work unifies these lines of research by using generative language modeling to simplify unsupervised NMT even further: we show how with sufficient scale, pre-training, and clever prompting, a single generative language model can implement the entire unsupervised neural machine translation pipeline, avoiding optimizations such as denoising autoencoding, auxiliary / adversarial losses in latent space, or ad-hoc bilingual dictionaries.

Our reliance on large-scale generative pre-training is similar to prior work in unsupervised NMT which uses large-scale language modeling tasks on internet data as part of the bootstrap (DBLP:conf/nips/ConneauL19; DBLP:conf/acl/ConneauKGCWGGOZ20; DBLP:journals/tacl/LiuGGLEGLZ20). The role of few-shot prompting and distillation in our method is related to recent work on unsupervised data augmentation using language models (DBLP:conf/aaai/Anaby-TavorCGKK20; DBLP:journals/corr/abs-2103-00453; DBLP:journals/corr/abs-2003-02245; DBLP:journals/corr/abs-2004-13845; DBLP:journals/corr/abs-2104-07540; DBLP:conf/emnlp/YangMFSBWBCD20) and is also in the same spirit as recent work on self-training and noisy-student training (DBLP:journals/corr/abs-2108-12589; DBLP:journals/corr/abs-2109-06270; DBLP:conf/cvpr/XieLHL20). Recent work on scaling laws for neural machine translation has shown that transformer decoders exhibit more favorable scaling than encoders (DBLP:journals/corr/abs-2109-07740). The few-shot distillation component of our method bears some resemblance to contemporaneous work by DBLP:journals/corr/abs-2109-09193, which uses few-shot prompting for unsupervised data augmentation, though they focus only on inference for text classification rather than generation for sequence-to-sequence tasks like machine translation and do not study the phenomena of self-amplification or few-shot data efficiency (Section 6) as we do.

3 Backtranslation via language modeling

Input: source monotext S; target monotext T; number of iterations n; number of samples per iteration k; monotext formatter f_mono; bitext formatter f_bi; parameters θ_0 of a language model trained to complete outputs of f_mono to outputs of f_bi.
Output: final model parameters θ_n.

for i = 1 to n do
    D ← ∅
    for j = 1 to k do
        sample monotext s from S and monotext t from T
        sample a translation s' of s and a translation t' of t from the model with parameters θ_{i-1}
        add the reversed pairs (s', s) and (t', t) to D
    end for
    estimate θ_i by maximizing the likelihood of f_bi(u, v) over all (u, v) in D, starting from θ_{i-1}
end for

Algorithm 1: Iterated backtranslation using a single generative language model

Backtranslation was first introduced in the context of machine translation as a method for data augmentation using target-side monolingual data by sampling synthetic source-to-target data from another target-to-source translation model (DBLP:conf/wmt/BojarT11; DBLP:conf/acl/SennrichHB16; DBLP:journals/corr/abs-1804-06189). In our present work, we cast machine translation as a language modeling task and jointly train and sample generations from a single language model for both source-to-target and target-to-source translation.

Given bitext ⟨seq1, seq2⟩ in languages L1 and L2, we format the translation task as follows:

[L1] <seq1> [[TRANSLATE]] [L2] <seq2>

At test-time, the LM is prompted with [L1] <seq> [[TRANSLATE]] [L2] and we parse a candidate translation <sampledSeq> from the sampled completion. Backtranslation is implemented by reversing the roles of seq and sampledSeq and fine-tuning on the bitext ⟨sampledSeq, seq⟩.
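
As a concrete illustration of this format, the helpers below show one way to construct prompts, parse completions, and build the reversed training examples; the function names are ours and do not correspond to any released implementation.

  # Illustrative helpers for the translation format above; names are ours.
  def format_prompt(l1: str, seq1: str, l2: str) -> str:
      """Prompt the LM to translate seq1 from language l1 into language l2."""
      return f"[{l1}] {seq1} [[TRANSLATE]] [{l2}]"

  def format_bitext(l1: str, seq1: str, l2: str, seq2: str) -> str:
      """A full training example: prompt followed by the translation."""
      return f"{format_prompt(l1, seq1, l2)} {seq2}"

  def parse_translation(completion: str) -> str:
      """Extract the candidate translation from a sampled completion."""
      return completion.strip()

  def backtranslation_example(l1: str, gold_seq: str, l2: str, sampled_seq: str) -> str:
      """Swap roles: train the model to map the sampled translation back to the gold text."""
      return format_bitext(l2, sampled_seq, l1, gold_seq)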

We remark that in contrast to the interpretation of backtranslation as a wake-sleep algorithm (DBLP:journals/corr/abs-1806-04402), where the forwards and backwards translators are trained alternately, we use a single language model for both forwards and backwards translation and train on both directions jointly at every iteration. There are various ways to train a model using backtranslation, e.g., completely online (interleaving minibatch gradient updates and sampling) versus offline (backtranslating the entire training dataset at each epoch, potentially re-training the model from scratch after sampling new backtranslations). In practice, we find that data scaling of a model’s optimal test loss and BLEU score quickly saturates on backtranslations from previous versions of the model, and opt for a semi-online setup where we synchronously sample a relatively small number of source-to-target and target-to-source pairs before resuming training for a single epoch on the newly sampled data. We refer to this as a single iteration of backtranslation.

Formally, Algorithm 1 describes our implementation of backtranslation using a single generative language model. We assume that the model has already been trained to complete formatted monotext ([L1] <seq1> [[TRANSLATE]] [L2]) to formatted bitext ([L1] <seq1> [[TRANSLATE]] [L2] <seq2>).
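
The following Python sketch mirrors Algorithm 1, reusing the formatting helpers from the previous sketch; model.sample and model.finetune are placeholders for whatever sampling and fine-tuning stack is used, not real APIs.

  import random

  # Reuses format_prompt, parse_translation, and format_bitext from the earlier sketch.
  def iterated_backtranslation(model, source_monotext, target_monotext,
                               n_iters, n_samples, src_lang="L1", tgt_lang="L2"):
      """Semi-online backtranslation with a single generative LM (cf. Algorithm 1)."""
      for _ in range(n_iters):
          new_examples = []
          queries = (
              [(s, src_lang, tgt_lang) for s in random.sample(source_monotext, n_samples)]
              + [(t, tgt_lang, src_lang) for t in random.sample(target_monotext, n_samples)]
          )
          for seq, l_in, l_out in queries:
              completion = model.sample(format_prompt(l_in, seq, l_out))
              sampled_seq = parse_translation(completion)
              # Swap roles: train on (sampled translation -> gold monotext).
              new_examples.append(format_bitext(l_out, sampled_seq, l_in, seq))
          model.finetune(new_examples)  # one epoch on the newly sampled data
      return model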

4 The bootstrap: generative pre-training, few-shot amplification, and distillation

Figure 1: Illustration of our bootstrap procedure, which we call few-shot distillation. We use few-shot prompts sampled from GPT-3 to generate an initial dataset of synthetic translations from a generatively pre-trained language model (left). The few-shot examples are then discarded and the synthetic bitext reformatted for fine-tuning on the autoregressive language modeling objective (right).

The modern approach to unsupervised NMT is parametrized by a choice of initialization or bootstrap. The bootstrap has typically relied on some form of unsupervised cross-lingual representation learning, e.g., bilingual dictionaries initialized from unsupervised cross-lingual word embeddings (DBLP:conf/iclr/LampleCDR18; DBLP:conf/iclr/ArtetxeLAC18) or multilingual masked language modeling followed by denoising autoencoding with a shared encoder and decoder (DBLP:conf/nips/ConneauL19).

In Section 3, we formulated iterative backtranslation in terms of language modeling, assuming a language model which has already been trained to follow a particular instruction format for translation. To complete our procedure, we must supply such a language model. Unlike previous work on unsupervised NMT, we use language models from the GPT-3 family (DBLP:conf/nips/BrownMRSKDNSSAA20), which have been generatively pre-trained on a large corpus of Internet data. A key observation from the body of work around GPT-3 is that generative pre-training at scale induces strong in-context metalearning abilities, two special cases of which are (1) instruction following and (2) few-shot prompting: a sufficiently trained large language model benefits from detailed natural language descriptions of tasks and, when given in-context examples, can achieve strong performance on a diverse suite of tasks (e.g., question answering, natural language inference, translation). We implement the bootstrap by exploiting both of these abilities, using natural language instructions to produce zero-shot translations and few-shot prompting during amplification.

4.1 Few-shot amplification and distillation

It thus remains to adapt our generatively pre-trained models’ few-shot translation ability to the zero-shot format specified in Section 3. We do this in a two-stage process. We first sample a small number of zero-shot translations from GPT-3. Given bitext ⟨srcSeq, tgtSeq⟩ in srcLang and tgtLang, and a stop-sequence <sep>, we use the following format for zero-shot prompting:

  <sep> Given the following passage in <srcLang>: <sep> <srcSeq> <sep>
  a good <tgtLang> translation is: <sep> <tgtSeq> <sep>.

At test-time, we sample a completion until the stop-sequence <sep> is detected; throughout we set <sep> to be \n---\n.
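
A minimal sketch of the zero-shot prompt construction and stop-sequence handling; complete(prompt, stop=...) is a placeholder for sampling a completion from the pre-trained model and is not a real API.

  SEP = "\n---\n"  # the stop-sequence <sep> used throughout

  def zero_shot_prompt(src_lang: str, src_seq: str, tgt_lang: str) -> str:
      """Zero-shot translation prompt in the format described above."""
      return (f"{SEP}Given the following passage in {src_lang}:{SEP}{src_seq}{SEP}"
              f"a good {tgt_lang} translation is:{SEP}")

  def zero_shot_translate(complete, src_lang, src_seq, tgt_lang):
      """Sample a completion until the stop sequence and return the translation."""
      completion = complete(zero_shot_prompt(src_lang, src_seq, tgt_lang), stop=SEP)
      return completion.strip()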

We amplify these zero-shot translations by using them as few-shot prompts to sample a much larger synthetic dataset from a smaller model. We then distill this dataset by discarding the few-shot prompts and fine-tuning on formatted bitext, producing a language model aligned with our task format and amenable to backtranslation. In detail, we implement the bootstrap as follows:

  1. Generatively pre-train a language model on a large corpus of Internet data.

  2. Sample a pool of synthetic target-side translations and a pool of synthetic source-side translations zero-shot from another language model, to be used as few-shot demonstrations. Using few-shot examples randomly drawn from the target-side pool (resp. the source-side pool), sample synthetic target-side translations (resp. synthetic source-side translations) from the model being bootstrapped, using the monolingual source-side corpus (resp. the monolingual target-side corpus). A code sketch of this step follows the list.

  3. Discard the few-shot prompts, reformat the (gold prompt, sampled translation) data as specified in Section 3, and fine-tune the language model on these data.

  4. Reverse all data and continue fine-tuning the language model on the backtranslations (sampled translation, gold prompt).
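
A rough sketch of step 2 (few-shot amplification); it reuses the SEP constant and zero_shot_prompt helper from the previous sketch, and complete(prompt, stop=...) again stands in for sampling from the model being bootstrapped.

  import random

  def few_shot_prompt(demos, src_lang, src_seq, tgt_lang, k=3):
      """Prepend k (source, synthetic translation) demonstrations to a new query."""
      blocks = [zero_shot_prompt(src_lang, d_src, tgt_lang) + d_tgt
                for d_src, d_tgt in random.sample(demos, k)]
      blocks.append(zero_shot_prompt(src_lang, src_seq, tgt_lang))
      return "".join(blocks)

  def amplify(complete, demos, monotext, src_lang, tgt_lang, n_samples, k=3):
      """Sample a larger synthetic dataset of (gold monotext, sampled translation) pairs."""
      synthetic = []
      for src_seq in random.sample(monotext, n_samples):
          completion = complete(few_shot_prompt(demos, src_lang, src_seq, tgt_lang, k),
                                stop=SEP)
          synthetic.append((src_seq, completion.strip()))
      return synthetic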

Why amplify and distill?

While few-shot prompting is flexible and enables strong performance on a diverse suite of NLP tasks to be coaxed out of generatively pre-trained LMs, its benefits are most pronounced with larger models, with commensurate training, inference, compute, and data costs. It is also unclear how to iteratively fine-tune a language model in a way that preserves its few-shot ability while remaining aligned with a zero-shot format like the one in Section 3. Few-shot amplification allows us to generate data for the bootstrap in an unsupervised fashion, possibly avoiding the overhead of few-shot sampling from GPT-3 itself by few-shot prompting a smaller model, while distillation enables iterative backtranslation.

5 Results

Experimental setup

For our experiments, we focus on the well-studied WMT14 English-French benchmark. In the notation of Algorithm 1, we obtain source and target monotext by splitting the WMT14 English-French training set in half, each with approximately twenty million examples, and use only the English text from one half and only the French text from the other to avoid implicit sentence-level alignment between source and target monotext. At each iteration of backtranslation, we sample one million translations in each direction and train for one epoch on the newly sampled data. For all of our results, unless otherwise specified, we run 40 iterations of backtranslation after the bootstrap and report BLEU using the final model checkpoint.
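
For concreteness, the disjoint monolingual split described above can be sketched as follows; the function and variable names are ours.

  # Illustrative sketch of the disjoint monolingual split described above.
  def disjoint_monolingual_split(parallel_pairs):
      """Keep only English from one half of the parallel corpus and only French
      from the other, so no sentence-level alignment survives."""
      mid = len(parallel_pairs) // 2
      english_monotext = [en for en, _fr in parallel_pairs[:mid]]
      french_monotext = [fr for _en, fr in parallel_pairs[mid:]]
      return english_monotext, french_monotext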

To implement the bootstrap, we additionally set aside a held-out pool of N=2048 training examples and, for each half of this pool, sample English-French (resp. French-English) translations zero-shot from GPT-3 to use as few-shot prompts. During few-shot amplification, we sample four million initial target- and source-side translations respectively using few-shot prompts, drawing the monolingual prompts from the source- and target-side corpora defined above. We fine-tune for two epochs in the forwards direction (distillation) and for another two epochs in the backwards direction (initial backtranslation). For few-shot prompting, we use k=3 in-context examples. In Section 6.3.1 we will see that we can minimize the number of available few-shot examples to N=3 with little effect on evaluation BLEU score after iterative backtranslation.

We use the same training setup and BPE tokenizer as GPT-3. During fine-tuning, we use a constant learning rate set to a fraction of the pre-training learning rate, together with weight decay and residual dropout. When sampling during the bootstrap or during backtranslation, we default to a fixed sampling temperature; we ablate other temperature values in Section 6.1. We also filter all fine-tuning bitext by length, discarding pairs whose source/target length ratio exceeds a fixed threshold.
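
As an illustration of the length-ratio filter, a simple implementation might look like the following sketch; the default threshold and the whitespace tokenization are our assumptions, not values taken from the paper.

  # Illustrative length-ratio filter; threshold and tokenization are assumptions.
  def filter_by_length_ratio(bitext_pairs, max_ratio=2.0):
      kept = []
      for src, tgt in bitext_pairs:
          src_len = max(len(src.split()), 1)
          tgt_len = max(len(tgt.split()), 1)
          if max(src_len / tgt_len, tgt_len / src_len) <= max_ratio:
              kept.append((src, tgt))
      return kept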

We report BLEU score on the official WMT14 English-French test set with greedy (argmax) sampling and sacreBLEU (signature: BLEU+case.mixed+numrefs.1+smooth.exp+tok.intl+version.1.2.20) (DBLP:conf/wmt/Post18). In Table 3 we give a comparison to previous work on unsupervised NMT using multi-bleu.perl and the XLM (DBLP:conf/nips/ConneauL19) tokenizer.
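
A minimal sketch of the evaluation call, assuming the sacrebleu Python package; the settings mirror the reported signature (mixed case, one reference, exponential smoothing, international tokenizer), but the helper name is ours.

  import sacrebleu

  def evaluate_bleu(hypotheses, references):
      """Corpus BLEU with the international tokenizer (case.mixed, numrefs.1,
      smooth.exp, tok.intl), matching the reported sacreBLEU signature."""
      return sacrebleu.corpus_bleu(hypotheses, [references], tokenize="intl").score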

5.1 Few-shot self-distillation and backtranslation

                          small   medium   large      xl
few-shot         en-fr     1.15     7.71   13.07   14.28
                 fr-en     5.04    16.87   20.25    23.0
few-shot         en-fr     1.02     7.36   11.89   13.58
                 fr-en     4.46    16.13    20.7   22.07
few-shot         en-fr     0.25     2.12    2.68    3.38
                 fr-en     1.22     5.45    6.14    9.32
distillation     en-fr     0.61     9.51   17.68   22.19
                 fr-en     4.31    23.67   29.38   31.12
initial          en-fr     7.94    29.84   33.59   34.71
backtranslation  fr-en      1.5    23.12   28.58   30.52
after            en-fr    30.48    36.53   37.59   39.12
backtranslation  fr-en    27.24    32.15   34.79   35.43

Table 1: English-French (top) and French-English (bottom) test BLEU throughout the few-shot self-distillation bootstrap across multiple model scales.

We first report results using self-distillation, i.e., where during the bootstrap (Section 4) we sample from a single model which is then trained to imitate and then backtranslate its own few-shot prompted generations; for these experiments, the few-shot demonstrations themselves are generated zero-shot by GPT-3. This is then followed by the iterative backtranslation procedure described in Section 3. We apply this methodology to the small, medium, large, and xl models from the GPT-3 family (DBLP:conf/nips/BrownMRSKDNSSAA20), with 125M, 350M, 760M, and 1.3B parameters respectively. Table 1 displays test BLEU throughout our procedure for all model sizes. We see that translation out of English benefits significantly from the backtranslation part of the bootstrap alone. We also see that our models are much stronger at the translation task compared to few-shot prompting after only self-distillation. Finally, all models benefit significantly from iterative backtranslation, with English-French BLEU always converging to a slightly higher value than the reverse direction.

5.2 Distilling self-amplified GPT-3 into smaller models

                        small   medium   large      xl
distillation    en-fr   34.13    36.03   37.21   37.08
                fr-en   32.34    34.96   36.12   36.34
initial         en-fr   34.71    36.31   38.89   39.05
backtranslation fr-en   30.95    33.73   35.16   36.51
after           en-fr   35.62    37.79   38.91   39.79
backtranslation fr-en   31.28    34.08   35.57   35.97
after backtranslation
(+CC100)        en-fr   39.02    41.31   41.97   42.08
                fr-en   33.43    35.69   36.85   37.09

Table 2: English-French (top) and French-English (bottom) test BLEU throughout the bootstrap and after iterative backtranslation, this time using generations from self-amplified GPT-3 for the bootstrap. We observe the best performance by mixing in monotext from the English and French components of the CC100 dataset (DBLP:conf/lrec/WenzekLCCGJG20; DBLP:conf/acl/ConneauKGCWGGOZ20) during backtranslation.

Although we do not apply our full methodology to the 175B parameter GPT-3 model due to compute constraints, we observe that for few-shot distillation, instead of training a model on few-shot samples from itself, we can just as well distill on few-shot samples from a much larger model: in this case, the full-size 175B parameter GPT-3 model (henceforth just “GPT-3”). That is, we use GPT-3 to self-amplify its own zero-shot translations to produce an initial dataset for distillation.

We now proceed to apply the same method as in Section 5.1 to all model sizes, but this time using few-shot samples from GPT-3 for the bootstrap. We display the evaluation BLEU scores throughout the bootstrap and after iterative backtranslation in Table 2. Interestingly, the higher-quality samples from GPT-3 appear to saturate the smaller models, which improve very little under subsequent backtranslation. Motivated by the possibility that our models are beginning to overfit to the WMT14 English-French training data, we run another experiment in which 50% of the monotext for backtranslation is sampled from the English and French components of the CC100 dataset (DBLP:conf/acl/ConneauKGCWGGOZ20). The extra monolingual data significantly benefits all model scales, improving English-French BLEU by approximately 3 points compared to iterative backtranslation on WMT data alone. With this setup, the xl attains a new unsupervised state-of-the-art of 42.1 BLEU on the WMT14 English-French benchmark.
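
A sketch of how the CC100 monotext might be mixed in during backtranslation, assuming the monolingual corpora are held as in-memory lists; the function and parameter names are illustrative only.

  import random

  # Illustrative only: mix WMT and CC100 monolingual text for one iteration.
  def mixed_monotext(wmt_monotext, cc100_monotext, n_samples, cc100_fraction=0.5):
      """Draw the monotext inputs for a backtranslation iteration, taking roughly
      half from CC100 and the rest from the WMT training halves."""
      n_cc = int(n_samples * cc100_fraction)
      return (random.sample(cc100_monotext, n_cc)
              + random.sample(wmt_monotext, n_samples - n_cc))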

6 Discussion and further ablations

Bias towards English generation

Previous work (DBLP:conf/nips/BrownMRSKDNSSAA20) has shown that after generative pre-training on a corpus of English-dominated Internet text, GPT-3 models are far more capable of translating into English than translating out of English. This is reflected by the disparity between English-French and French-English BLEU scores immediately after few-shot distillation and before backtranslation on the few-shot prompted data. Interestingly, after only two epochs of backtranslation on the relatively scarce few-shot prompted data, this gap is reversed, with all models achieving significantly higher English-French BLEU than French-English BLEU. The data efficiency of the bootstrap suggests that coming out of pre-training, the models are merely misaligned rather than deficient in knowledge about French, and that their latent knowledge about translation out of English can be surfaced using backtranslation. Relatedly, high-quality samples in one language in the previous round of backtranslation lead to higher-quality synthetic bitext for training the reverse direction in the next. This turns the asymmetry towards English generation into an advantage during backtranslation. However, if the initial disparity between the quality of the two translation directions is extreme (as with the self-distilled small, which after distillation achieves 0.61 BLEU for English-French versus 4.31 BLEU for French-English), then the evaluation BLEU scores for both directions are unstable and oscillate between iterations, though they eventually converge upwards as backtranslation continues.

Comparison to previous work

In Table 3, we compare the BLEU scores attained by our best model (an xl distilled on self-amplified GPT-3 followed by 40 rounds of backtranslation) to prior work in unsupervised neural machine translation on the WMT14 English-French benchmark. To ensure comparability to prior work, we report tokenized BLEU using multi-bleu.perl and the XLM tokenizer. This was used to report the few- and zero-shot performance of GPT-3 in DBLP:conf/nips/BrownMRSKDNSSAA20, which we also include in Table 3 for completeness. We emphasize the improvement of our model compared to zero-shot GPT-3, which was used to initialize the bootstrap.


         XLM   MASS   CUNMT   XLM+    CBD   xl (ours)   GPT-3 (fs)   GPT-3 (zs)
en-fr   33.4   37.5    37.6   40.2   38.2        41.7         32.6         25.2
fr-en   33.3   34.9    35.2   36.9   35.5        38.0         39.2         21.2

Table 3: Comparison of our best model—an xl distilled on self-amplified GPT-3 followed by 40 rounds of iterative backtranslation—to prior work (DBLP:conf/nips/ConneauL19; DBLP:conf/icml/SongTQLL19; DBLP:conf/naacl/WangBLZ21; DBLP:journals/tacl/KeungSLS20; DBLP:conf/icml/NguyenJN0A21) in unsupervised NMT on the WMT14 English-French benchmark. Bold indicates unsupervised state-of-the-art and underline indicates few-shot state-of-the-art.
Potential data contamination from pre-training

For high-resource language pairs such as English-French, naturally occurring demonstrations of translation are virtually guaranteed to appear in Common Crawl-like datasets; indeed, radford2019language provide examples of English-French parallel text embedded in the WebText corpus. Train/test contamination is also a growing area of concern when training large language models on internet-derived data. The data contamination study conducted by DBLP:conf/nips/BrownMRSKDNSSAA20 found virtually no test set contamination for the WMT translation datasets they considered, including the WMT14 English-French dataset. We emphasize that throughout our entire procedure, no explicit supervision is given for the translation task during pre-training, distillation, or backtranslation.

6.1 Ablating temperature for few-shot distillation

        self-distill   backtrans.   backtrans.   backtrans.
en-fr           20.3         34.4         34.7         27.8
fr-en           29.9         29.3         29.6         24.7
en-fr           20.6         33.9         35.1         27.6
fr-en           29.2         28.9         29.9         24.4
en-fr           20.2         34.9         34.6         27.6
fr-en           29.0         29.2         29.2         24.9

Table 4: English-French (top) and French-English (bottom) test BLEU using few-shot prompted samples generated at different temperatures throughout the bootstrap. We see that the temperature used for sampling has little effect on evaluation BLEU after few-shot distillation, while high-temperature samples are harmful during the backtranslation part of the bootstrap.

It was shown by DBLP:conf/emnlp/EdunovOAG18 that backtranslation is more effective when the translations are slightly noisy, i.e., sampled with nonzero temperature or via a noised beam search. This motivated our default choice of sampling temperature throughout. We ablate this choice when sampling data for few-shot distillation, studying the effect of several sampling temperatures during the bootstrap using a large model. We display the results in Table 4. We see that lower temperatures lead to marginally higher test BLEU scores during distillation, whereas higher-temperature samples yield lower test loss and no overfitting after two epochs of training. However, regardless of the temperature of the samples used for self-distillation, the differences in both test BLEU and test loss almost vanish after the backtranslation part of the bootstrap when training to backtranslate low-temperature samples.

6.2 Few-shot self-amplification

We observed that few-shot prompting GPT-3 with its own zero-shot translations produced better translations than zero-shot prompting alone. We investigate this further by comparing the BLEU scores of zero-shot translations (sampled using the same prompt described in Section 4) to the BLEU scores of self-amplified few-shot prompted translations (i.e., where the few-shot demonstrations are the zero-shot translations sampled from the same model) for all the model sizes studied in this paper. Our results are displayed in Table 5. We see that self-amplification improves translation quality at all model scales.
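
For concreteness, self-amplification can be sketched by reusing the SEP constant and the zero_shot_translate and few_shot_prompt helpers from the earlier sketches; complete is again a placeholder sampling function, and none of these names come from a released implementation.

  # Reuses SEP, zero_shot_translate, and few_shot_prompt from the sketches in
  # Sections 4.1 above; `complete(prompt, stop=...)` is a placeholder sampler.
  def self_amplified_translate(complete, seed_sources, query_sources,
                               src_lang, tgt_lang, k=3):
      """Few-shot self-amplification: the in-context demonstrations are the
      model's own zero-shot translations of a small seed set."""
      demos = [(s, zero_shot_translate(complete, src_lang, s, tgt_lang))
               for s in seed_sources]
      return [(q, complete(few_shot_prompt(demos, src_lang, q, tgt_lang, k),
                           stop=SEP).strip())
              for q in query_sources]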

                         small   medium   large      xl   GPT-3
zero-shot       en-fr     0.57     1.23    1.90    2.84   26.19
                fr-en     2.00    13.92    8.14   19.60   25.49
self-amplified  en-fr     1.39     8.98   12.46   14.32   29.96
                fr-en     5.76    16.75   21.75   23.98   31.75
Table 5: Zero-shot versus few-shot self-amplified test BLEU for all model sizes studied in this paper. For zero-shot generation we use the same prompt format described in Section 4. For self-amplified generation, we use the model’s own zero-shot generations as in-context few-shot examples.

6.3 Using real few-shot examples

                          small   medium   large      xl
few-shot         en-fr     1.09     7.19    11.8   13.35
                 fr-en     3.86    14.58   20.34   23.01
few-shot         en-fr     1.09     6.83   11.38   13.08
                 fr-en     4.13    14.86   19.92   22.04
few-shot         en-fr     0.33     1.74    2.34    2.94
                 fr-en     0.94     4.18    4.64    7.25
distillation     en-fr     0.39     7.63   17.27   19.81
                 fr-en      3.9    20.29   27.65   30.89
initial          en-fr     7.77    24.71   29.64   33.78
backtranslation  fr-en      1.7     18.9   26.61   30.93
after            en-fr    31.23    34.42   37.86   39.39
backtranslation  fr-en    27.45    29.96   34.23   34.97

Table 6: English-French (top) and French-English (bottom) test BLEU throughout the few-shot self-distillation bootstrap across multiple model scales, this time using real few-shot examples. We see that performance after backtranslation is equivalent to that reported in Table 1.
                         small   large
distillation     en-fr   32.95    36.0
                 fr-en   32.45   36.29
initial          en-fr   36.32   38.72
backtranslation  fr-en   32.43   36.61
after            en-fr   36.38   39.36
backtranslation  fr-en   32.66   35.67
after backtranslation
(+CC100)         en-fr   39.01   42.03
                 fr-en   34.17   36.94

Table 7: English-French (top) and French-English (bottom) test BLEU of the small and large models throughout the bootstrap and after iterative backtranslation, where for the bootstrap we use generations from 175B GPT-3 prompted using real few-shot examples. Similarly to Table 2, we observe a boost in final BLEU score when, after the bootstrap, we additionally sample monolingual text from the English and French portions of the CC100 dataset.

So far our results have been completely unsupervised, but few-shot learning is typically studied in the context of semi-supervised learning (DBLP:journals/csur/WangYKN20), where the few-shot demonstrations are real training data. In this section, we ablate the usage of synthetic few-shot translations in our methodology and reproduce our experiments from Section 5 using real few-shot demonstrations. We observe virtually no difference in BLEU score after iterative backtranslation.

We modify the few-shot prompting described in Section 5 as follows. Rather than sampling zero-shot translations for each half of our held-out pool of N=2048 training examples, we sample from these examples directly during few-shot prompting.

Table 6 displays test BLEU throughout the bootstrap and after iterative backtranslation for the same model sizes studied in Section 5.1. We see that our models converge to the same test BLEU (cf. Section 5.1). Table 7 displays analogous results when distilling samples from GPT-3 with the small and large models, this time few-shot prompted using real examples. We again see that using real rather than synthetic few-shot demonstrations to sample the initial bootstrap data from GPT-3 has no effect on final BLEU score after iterative backtranslation.

6.3.1 Almost-unsupervised machine translation with three examples only

         N=3    N=8   N=16   N=32   N=64   N=128   N=256   N=512   N=1024   N=2048
en-fr   12.6   12.4   12.7   13.1   13.2    13.0    12.7    12.9     12.7     12.8
fr-en   21.5   21.3   22.1   22.4   21.9    22.3    22.1    22.1     22.2     22.1
Table 8: BLEU scores (calculated over 4096 random training examples) for the few-shot prompted translations from a large model, as the total number of available few-shot examples N varies from 3 to 2048. We see that N has minimal impact on the BLEU score of the sampled translations. Moreover, the difference in BLEU between the models bootstrapped using N=3 versus N=2048 few-shot examples disappears after iterative backtranslation.

Finally, we show that even in the semi-supervised setting, we can minimize the supervision available from few-shot demonstrations with no difference in test BLEU after backtranslation converges. Table 8 displays the BLEU scores of few-shot sampled translations across several orders of magnitude of N, the number of available few-shot examples. Remarkably, even when N is decreased to 3, there is only a slight negative impact on the BLEU score of the few-shot sampled translations. We do not ablate lower values of N in order to maintain the assumption of k=3 distinct in-context examples for few-shot prompting. We then run our entire procedure with a large model, using N=3 real few-shot demonstrations for the bootstrap followed by iterative backtranslation. We observe final English-French and French-English BLEU scores on par with those reported in Table 6.

7 Conclusion and future directions

We remark that backtranslation, like reinforcement learning, is simply a way of exchanging compute for data. Instead of grounding the model with a reward signal from an environment, however, backtranslation exploits the symmetry of the translation task to ground the model by training it to cross-lingually denoise its own samples. Our present work can be viewed as part of a recent trend towards data-driven architecture engineering (DBLP:conf/iclr/Rabe0BS21; DBLP:conf/icml/WuRLBGS21; DBLP:journals/corr/abs-2102-07492), where task-specific inductive biases, if any, are engineered into and learned from the training data instead of being hardcoded into the model architecture. In formulating the translation task in terms of language modeling, we see that the input-output inductive bias imposed by an encoder-decoder architecture can be simulated with prompt formatting. Similarly, we see that generative language modeling at sufficient scale, combined with clever prompting for automated data generation, can attain state-of-the-art results in unsupervised translation, rendering methods intended to produce strong encoders and aligned multilingual representations unnecessary.

Although we have focused solely on the domain of machine translation in this work, our methodology is applicable to any sequence-to-sequence task whose forwards and inverse directions are (1) jointly learnable by an autoregressive decoder-only transformer and (2) amenable to few-shot prompting after large-scale generative pre-training. Backtranslation is simply reverse self-training (DBLP:conf/wmt/BojarT11) and is fundamentally untied to the translation domain; we invite the research community at large to further explore this technique, moving beyond translation and towards applications reflecting the full generality of the transformer architecture.

References