1 Introduction
Machine translation (MT) has made remarkable advances with the advent of deep learning approaches
(Bojar et al., 2016; Wu et al., 2016; Crego et al., 2016; Junczys-Dowmunt et al., 2016). The progress was largely driven by the encoder-decoder framework (Sutskever et al., 2014; Cho et al., 2014), typically supplemented with an attention mechanism (Bahdanau et al., 2014; Luong et al., 2015b). Compared to traditional phrase-based systems (Koehn, 2009), neural machine translation (NMT) requires large amounts of data in order to reach high performance (Koehn and Knowles, 2017). Using NMT in a multilingual setting exacerbates the problem: given $k$ languages, translating between all pairs would require $O(k^2)$ parallel training corpora (and models).
In an effort to address the problem, different multilingual NMT approaches have been proposed recently. Luong et al. (2015a) and Firat et al. (2016a) proposed to use $O(k)$ encoders/decoders that are then intermixed to translate between language pairs. Johnson et al. (2016) proposed to use a single model and prepend special symbols to the source text to indicate the target language; this was later extended to other text preprocessing approaches (Ha et al., 2017) as well as language-conditional parameter generation for encoders and decoders of a single model (Platanios et al., 2018).
Johnson et al. (2016) also show that a single multilingual system could potentially enable zero-shot translation, i.e., it can translate between language pairs not seen in training. For example, given 3 languages, German (De), English (En), and French (Fr), and training parallel data only for (De, En) and (En, Fr), at test time the system could additionally translate between (De, Fr).
Zero-shot translation is an important problem. Solving it could significantly improve data efficiency: a single multilingual model would be able to generalize and translate between any of the language pairs after being trained only on the available parallel corpora. However, performance on zero-shot tasks is often unstable and significantly lags behind the supervised directions. Moreover, attempts to improve zero-shot performance by fine-tuning (Firat et al., 2016b; Sestorain et al., 2018) may negatively impact other directions.
In this work, we take a different approach and aim to improve the training procedure of Johnson et al. (2016). First, we analyze the multilingual translation problem from a probabilistic perspective and define the notion of zero-shot consistency, which gives insights as to why the vanilla training method may not yield models with good zero-shot performance. Next, we propose a novel training objective and a modified learning algorithm that achieves consistency via agreement-based learning (Liang et al., 2006, 2008) and improves zero-shot translation. Our training procedure encourages the model to produce equivalent translations of parallel training sentences into an auxiliary language (Figure 1) and is provably zero-shot consistent. In addition, we make a simple change to the neural decoder to make the agreement losses fully differentiable.
We conduct experiments on IWSLT17 (Mauro et al., 2017), the UN corpus (Ziemski et al., 2016), and Europarl (Koehn, 2017), carefully removing complete pivots from the training corpora. Agreement-based learning results in up to +3 BLEU zero-shot improvement over the baseline, compares favorably (up to +2.4 BLEU) to other approaches in the literature (Cheng et al., 2017; Sestorain et al., 2018), is competitive with pivoting, and does not lose in performance on supervised directions.
2 Related work
A simple (and yet effective) baseline for zero-shot translation is pivoting, which chain-translates: first into a pivot language, then into the target (Cohn and Lapata, 2007; Wu and Wang, 2007; Utiyama and Isahara, 2007). Despite being a pipeline, pivoting gets better as the supervised models improve, which makes it a strong baseline in the zero-shot setting. Cheng et al. (2017) proposed a joint pivoting learning strategy that leads to further improvements.
Lu et al. (2018) and Arivazhagan et al. (2018) proposed different techniques to obtain "neural interlingual" representations that are passed to the decoder. Sestorain et al. (2018) proposed another fine-tuning technique that uses dual learning (He et al., 2016), where a language model is used to provide a signal for fine-tuning zero-shot directions.
Another family of approaches is based on distillation (Hinton et al., 2014; Kim and Rush, 2016). Along these lines, Firat et al. (2016b) proposed to fine-tune a multilingual model towards a specified zero-shot direction with pseudo-parallel data, and Chen et al. (2017) proposed a teacher-student framework. While this can yield solid performance improvements, it also adds multi-staging overhead and often does not preserve performance of a single model on the supervised directions. We note that our approach (and agreement-based learning in general) is somewhat similar to distillation at training time, which has been explored for large-scale single-task prediction problems (Anil et al., 2018).
A setting harder than zero-shot is that of fully unsupervised translation (Ravi and Knight, 2011; Artetxe et al., 2017; Lample et al., 2017, 2018), in which no parallel data is available for training. The ideas proposed in these works (e.g., bilingual dictionaries (Conneau et al., 2017), back-translation (Sennrich et al., 2015a), and language models (He et al., 2016)) are complementary to our approach, which encourages agreement among different translation directions in the zero-shot multilingual setting.
3 Background
We start by establishing more formal notation and briefly reviewing some background on encoder-decoder multilingual machine translation from a probabilistic perspective.
3.1 Notation
Languages.
We assume that we are given a collection of $k$ languages, $L_1, \ldots, L_k$, that share a common vocabulary, $V$. A language, $L_i$, is defined by the marginal probability $P(x_i)$ it assigns to sentences (i.e., sequences of tokens from the vocabulary), denoted $x_i := (x_i^1, \ldots, x_i^l)$, where $l$ is the length of the sequence. All languages together define a joint probability distribution, $P(x_1, \ldots, x_k)$, over tuples of equivalent sentences.
Corpora.
While each sentence may have an equivalent representation in all languages, we assume that we have access to only partial sets of equivalent sentences, which form corpora. In this work, we consider bilingual corpora, denoted $C_{ij}$, that contain pairs of sentences sampled from $P(x_i, x_j)$, and monolingual corpora, denoted $C_i$, that contain sentences sampled from $P(x_i)$.
Translation.
Finally, we define a translation task from language $L_i$ to $L_j$ as learning to model the conditional distribution $P(x_j \mid x_i)$. The set of languages along with the translation tasks can be represented as a directed graph whose nodes represent languages and whose edges indicate translation directions. We further distinguish between two disjoint subsets of edges: (i) supervised edges, $E_S$, for which we have parallel data, and (ii) zero-shot edges, $E_0$, that correspond to zero-shot translation tasks. Figure 2 presents an example translation graph with supervised edges (En↔Es, En↔Fr, En↔Ru) and zero-shot edges (Es↔Fr, Es↔Ru, Fr↔Ru). We will use this graph as our running example.
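As a concrete illustration, the translation graph of a running example like this can be sketched in a few lines of Python. The four language codes and the English-centric split of supervised vs. zero-shot edges follow the running example above; the helper names are ours:

```python
from itertools import permutations

# Sketch of a translation graph: nodes are languages, directed edges are
# translation tasks, and the edge set splits into supervised directions
# (with parallel data) and zero-shot directions (without).
languages = ["En", "Es", "Fr", "Ru"]
all_edges = set(permutations(languages, 2))  # every ordered pair is a task

# English-centric parallel data: any direction involving En is supervised.
supervised = {(s, t) for (s, t) in all_edges if "En" in (s, t)}
zero_shot = all_edges - supervised

assert len(all_edges) == 4 * 3   # k languages -> k(k-1) directed tasks
assert len(supervised) == 6
assert ("Es", "Fr") in zero_shot  # no Es-Fr parallel data
```

With $k$ languages there are $k(k-1)$ directed tasks but only $2(k-1)$ supervised ones in an English-centric setup, which is why zero-shot directions dominate as $k$ grows.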
3.2 Encoder-decoder framework
First, consider a purely bilingual setting, where we learn to translate from a source language, $L_1$, to a target language, $L_2$. We can train a translation model by optimizing the conditional log-likelihood of the bilingual data under the model:

$$\hat{\theta} := \arg\max_{\theta} \; \mathbb{E}_{(x_1, x_2) \sim C_{12}} \left[ \log P_\theta(x_2 \mid x_1) \right] \qquad (1)$$

where $\hat{\theta}$ are the estimated parameters of the model.
The encoder-decoder framework introduces a latent sequence, $u$, and represents the model as:

$$P_\theta(x_2 \mid x_1) := P_\theta^{dec}\left(x_2 \mid u = f_\theta^{enc}(x_1)\right) \qquad (2)$$

where $f_\theta^{enc}(x_1)$ is the encoder that maps a source sequence to a sequence of latent representations, $u$, and the decoder defines $P_\theta^{dec}(x_2 \mid u)$.^1 Note that $u$ is usually deterministic with respect to $x_1$, and an accurate representation of the conditional distribution highly depends on the decoder. In neural machine translation, the exact forms of encoder and decoder are specified using RNNs (Sutskever et al., 2014), CNNs (Gehring et al., 2016), and attention (Bahdanau et al., 2014; Vaswani et al., 2017) as building blocks. The decoding distribution, $P_\theta^{dec}(x_2 \mid u)$, is typically modeled autoregressively.

^1 Slightly abusing the notation, we use $\theta$ to denote all parameters of the model: embeddings, encoder, and decoder.
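The autoregressive factorization $\log P_\theta(x_2 \mid x_1) = \sum_t \log P_\theta(x_2^t \mid x_2^{<t}, u)$ can be illustrated with a toy computation; the per-step distributions below are invented for the example, whereas a real decoder would compute them from the encoder states and the decoded prefix:

```python
import math

# Toy illustration of the autoregressive decoder: the sequence
# log-probability is the sum of per-step token log-probabilities,
# each conditioned on the prefix and the encoder output u.
def sequence_log_prob(step_distributions, target_ids):
    # step_distributions[t] maps token id -> P(token | prefix, u) at step t.
    return sum(math.log(step_distributions[t][tok])
               for t, tok in enumerate(target_ids))

steps = [
    {0: 0.7, 1: 0.2, 2: 0.1},    # P(x2_1 | u)
    {0: 0.1, 1: 0.8, 2: 0.1},    # P(x2_2 | x2_1, u)
    {0: 0.05, 1: 0.05, 2: 0.9},  # P(x2_3 | x2_1, x2_2, u)
]
ll = sequence_log_prob(steps, [0, 1, 2])
assert abs(ll - math.log(0.7 * 0.8 * 0.9)) < 1e-9
```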
3.3 Multilingual neural machine translation
In the multilingual setting, we would like to learn to translate in all directions while having access to only a few parallel bilingual corpora. In other words, we would like to learn a collection of models, $\{P_\theta(x_j \mid x_i)\}$. We can assume that the models are independent and choose to learn them by maximizing the following objective:

$$\mathcal{L}^{comp}(\theta) := \sum_{(i,j) \in E_S} \mathbb{E}_{(x_i, x_j) \sim C_{ij}} \left[ \log P_\theta(x_j \mid x_i) + \log P_\theta(x_i \mid x_j) \right] \qquad (3)$$

In the statistics literature, this estimation approach is called maximum composite likelihood (Besag, 1975; Lindsay, 1988) as it composes the objective out of (sometimes weighted) terms that represent conditional sub-likelihoods (in our example, $P_\theta(x_j \mid x_i)$ and $P_\theta(x_i \mid x_j)$). Composite likelihoods are easy to construct and tractable to optimize as they do not require representing the full likelihood, which would involve integrating out variables unobserved in the data (see Appendix A.1).
Johnson et al. (2016) proposed to train a multilingual NMT system by optimizing the composite likelihood objective (3) while representing all conditional distributions, $P_\theta(x_j \mid x_i)$, with a shared encoder and decoder and using language tags, $l_j$, to distinguish between translation directions:

$$P_\theta(x_j \mid x_i) := P_\theta(x_j \mid x_i, l_j) \qquad (4)$$

This approach has numerous advantages, including: (a) simplicity of training and of the architecture (by slightly changing the training data, we convert a bilingual NMT model into a multilingual one), and (b) sharing parameters of the model between different translation tasks, which may lead to better and more robust representations. Johnson et al. (2016) also show that the resulting models seem to exhibit some degree of zero-shot generalization enabled by parameter sharing. However, since we lack data for zero-shot directions, the composite likelihood (3) misses the terms that correspond to the zero-shot models, and hence has no statistical guarantees for performance on zero-shot tasks.^2

^2 In fact, since the objective (3) assumes that the models are independent, plausible zero-shot performance would be more indicative of the limited capacity of the model or artifacts in the data (e.g., presence of multi-parallel sentences) rather than zero-shot generalization.
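A minimal sketch of the language-tag trick of Johnson et al. (2016): the architecture stays a plain bilingual NMT model, and multilinguality comes entirely from a special token prepended to the source sequence that names the desired target language. The exact token format (`<2fr>`) is an assumption; implementations differ:

```python
# Convert a bilingual training example into a multilingual one by
# prepending a target-language tag to the source token sequence.
def add_target_tag(source_tokens, target_lang):
    return ["<2" + target_lang.lower() + ">"] + list(source_tokens)

src = ["how", "are", "you", "?"]
assert add_target_tag(src, "Fr") == ["<2fr>", "how", "are", "you", "?"]
assert add_target_tag(src, "Es") == ["<2es>", "how", "are", "you", "?"]
```

At inference time, pairing a source language with a tag combination never seen in training is exactly what makes zero-shot translation possible at all.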
4 Zero-shot generalization & consistency
Multilingual MT systems can be evaluated in terms of zero-shot performance, or quality of translation along the directions they have not been optimized for (e.g., due to lack of data). We formally define zero-shot generalization via consistency.

Definition (Expected Zero-shot Consistency). Let $E_S$ and $E_0$ be supervised and zero-shot tasks, respectively. Let $\ell$ be a non-negative loss function and $\mathcal{M}$ be a model with maximum expected supervised loss bounded by some $\epsilon > 0$. We call $\mathcal{M}$ zero-shot consistent with respect to $\ell$ if, for some $\kappa(\epsilon) > 0$, its maximum expected loss on the zero-shot tasks is bounded by $\kappa(\epsilon)$, where $\kappa(\epsilon) \to 0$ as $\epsilon \to 0$.

In other words, we say that a machine translation system is zero-shot consistent if low error on supervised tasks implies a low error on zero-shot tasks in expectation (i.e., the system generalizes). We also note that our notion of consistency somewhat resembles error bounds in the domain adaptation literature (Ben-David et al., 2010).
In practice, it is attractive to have MT systems that are guaranteed to exhibit zero-shot generalization, since access to parallel data is always limited and training is computationally expensive. While the training method of Johnson et al. (2016) does not have such guarantees, we show that our proposed approach is provably zero-shot consistent.
5 Approach
We propose a new training objective for multilingual NMT architectures with shared encoders and decoders that avoids the limitations of pure composite likelihoods. Our method is based on the idea of agreement-based learning, initially proposed for learning consistent alignments in phrase-based statistical machine translation (SMT) systems (Liang et al., 2006, 2008). In terms of the final objective function, the method ends up being reminiscent of distillation (Kim and Rush, 2016), but suitable for joint multilingual training.
5.1 Agreement-based likelihood
To introduce the agreement-based objective, we use the graph from Figure 2 that defines translation tasks between 4 languages (En, Es, Fr, Ru). In particular, consider the composite likelihood objective (3) for a pair of sentences, $(x_{En}, x_{Fr})$:

$$\mathcal{L}^{comp}_{En,Fr}(\theta) := \log \sum_{z'_{Es}, z'_{Ru}} P_\theta(x_{Fr}, z'_{Es}, z'_{Ru} \mid x_{En}) + \log \sum_{z''_{Es}, z''_{Ru}} P_\theta(x_{En}, z''_{Es}, z''_{Ru} \mid x_{Fr}) \qquad (5)$$

where we introduced latent translations into Spanish (Es) and Russian (Ru) and marginalized them out (by virtually summing over all sequences in the corresponding languages). Again, note that this objective assumes independence of the $En \rightarrow$ and $Fr \rightarrow$ models.
Following Liang et al. (2008), we propose to tie together the single-prime and double-prime latent variables, $z'$ and $z''$, to encourage agreement between the $En \rightarrow$ and $Fr \rightarrow$ models on the latent translations. We interchange the sum and the product operations inside the $\log$ in (5), denote $z := (z_{Es}, z_{Ru})$ to simplify notation, and arrive at the following new objective function:

$$\mathcal{L}^{agree}_{En,Fr}(\theta) := \log \sum_{z} P_\theta(x_{Fr}, z \mid x_{En}) \, P_\theta(x_{En}, z \mid x_{Fr}) \qquad (6)$$
Next, we factorize each term as:

$$P_\theta(x, z \mid y) = P_\theta(x \mid z, y) \, P_\theta(z \mid y)$$

Assuming $P_\theta(x_{Fr} \mid z, x_{En}) \approx P_\theta(x_{Fr} \mid x_{En})$,^3 the objective (6) decomposes into two terms:

$$\mathcal{L}^{agree}_{En,Fr}(\theta) \approx \log P_\theta(x_{Fr} \mid x_{En}) + \log P_\theta(x_{En} \mid x_{Fr}) + \log \sum_{z} P_\theta(z \mid x_{En}) \, P_\theta(z \mid x_{Fr}) \qquad (7)$$

^3 This means that it is sufficient to condition on a sentence in one of the languages to determine the probability of a translation in any other language.
We call the expression given in (7) agreement-based likelihood. Intuitively, this objective is the likelihood of observing the parallel sentences $(x_{En}, x_{Fr})$ and having the sub-models $En \rightarrow \{Es, Ru\}$ and $Fr \rightarrow \{Es, Ru\}$ agree on all translations into Es and Ru at the same time.
Lower bound.
Summation in the agreement term over $z$ (i.e., over all possible translations into Es and Ru in our case) is intractable. Switching back from $z$ to the $(z_{Es}, z_{Ru})$ notation and using Jensen's inequality, we lower bound it with cross-entropy:^4

$$\log \sum_{z} P_\theta(z \mid x_{En}) \, P_\theta(z \mid x_{Fr}) \geq \mathbb{E}_{z_{Es} \sim P_\theta(z_{Es} \mid x_{En})} \left[ \log P_\theta(z_{Es} \mid x_{Fr}) \right] + \mathbb{E}_{z_{Ru} \sim P_\theta(z_{Ru} \mid x_{En})} \left[ \log P_\theta(z_{Ru} \mid x_{Fr}) \right] \qquad (8)$$

^4 Note that the expectations in (8) are conditional on $x_{En}$. Symmetrically, we can obtain a lower bound with expectations conditional on $x_{Fr}$. In practice, we symmetrize the objective.
We can estimate the expectations in the lower bound on the agreement term by sampling $z_{Es} \sim P_\theta(z_{Es} \mid x_{En})$ and $z_{Ru} \sim P_\theta(z_{Ru} \mid x_{En})$. In practice, instead of sampling we use greedy, continuous decoding (with a fixed maximum sequence length) that also makes $z_{Es}$ and $z_{Ru}$ differentiable with respect to the parameters of the model.
5.2 Consistency by agreement
We argue that models produced by maximizing the agreement-based likelihood (7) are zero-shot consistent. Informally, consider again our running example from Figure 2. Given a pair of parallel sentences in (En, Fr), the agreement loss encourages translations from En to {Es, Ru} and translations from Fr to {Es, Ru} to coincide. Note that En → {Es, Ru} are supervised directions. Therefore, agreement ensures that translations along the zero-shot edges in the graph match the supervised translations. Formally, we state it as:
Theorem (Agreement Zero-shot Consistency). Let $L_1$, $L_2$, and $L_3$ be a collection of languages, with $L_1 \leftrightarrow L_2$ and $L_2 \leftrightarrow L_3$ being supervised while $L_1 \leftrightarrow L_3$ is a zero-shot direction. Let $P_\theta(x_j \mid x_i)$ be the sub-models represented by a multilingual MT system. If the expected agreement-based loss is bounded by some $\epsilon > 0$, then, under some mild technical assumptions on the true distribution of the equivalent translations, the zero-shot cross-entropy loss is bounded as follows:

$$\mathbb{E} \left[ -\log P_\theta(x_3 \mid x_1) \right] \leq \kappa(\epsilon)$$

where $\kappa(\epsilon) \to 0$ as $\epsilon \to 0$. For a discussion of the assumptions and details on the proof of the bound, see Appendix A.2. Note that the theorem is straightforward to extend from triplets of languages to arbitrary connected graphs, as given in the following corollary.
Corollary. Agreement-based learning yields zero-shot consistent MT models (with respect to the cross-entropy loss) for arbitrary translation graphs as long as the supervised directions span the graph.
Alternative ways to ensure consistency.
Note that there are other ways to ensure zero-shot consistency, e.g., by fine-tuning or post-processing a trained multilingual model. For instance, pivoting through an intermediate language is also zero-shot consistent, but the proof requires stronger assumptions about the quality of the supervised source-pivot model.^5 Similarly, using model distillation (Kim and Rush, 2016; Chen et al., 2017) would also be provably consistent under the same assumptions as in the theorem above, but for a single, pre-selected zero-shot direction. Note that our proposed agreement-based learning framework is provably consistent for all zero-shot directions and does not require any post-processing. For a discussion of the alternative approaches and a consistency proof for pivoting, see Appendix A.3.

^5 Intuitively, we have to assume that the source-pivot model does not assign high probabilities to unlikely translations, as the pivot-target model may react to those unpredictably.
5.3 Agreement-based learning algorithm
Having derived a new objective function (7), we can now learn consistent multilingual NMT models using a stochastic gradient method with a couple of extra tricks (Algorithm 1). The computation graph for the agreement loss is given in Figure 3.
Subsampling auxiliary languages.
Computing agreement over all languages for each pair of sentences at training time would be quite computationally expensive (to agree on $k-2$ translations, we would need to encode-decode the source and target sequences $k-2$ times each). However, since the agreement lower bound is a sum over expectations (8), we can approximate it by subsampling: at each training step (and for each sample in the mini-batch), we pick an auxiliary language uniformly at random and compute a stochastic approximation of the agreement lower bound (8) for that language only. This stochastic approximation is simple, unbiased, and reduces the per-step computational overhead for the agreement term from $O(k)$ to $O(1)$.^6

^6 In practice, note that there is still a constant-factor overhead due to the extra encoding-decoding steps to/from auxiliary languages when training on a single GPU. Parallelizing the model across multiple GPUs would easily compensate for this overhead.
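The subsampling step itself is straightforward; a sketch, with function names of our own choosing:

```python
import random

# For each minibatch example, draw one auxiliary language uniformly at
# random from the languages other than the source and target. Because each
# candidate is picked with probability 1/(k-2), the stochastic estimate of
# the summed agreement lower bound remains unbiased.
def sample_auxiliary(languages, src, tgt, rng):
    candidates = [lang for lang in languages if lang not in (src, tgt)]
    return rng.choice(candidates)

rng = random.Random(0)
langs = ["En", "Es", "Fr", "Ru"]
aux = sample_auxiliary(langs, "En", "Fr", rng)
assert aux in ("Es", "Ru")  # never the source or the target
```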
Overview of the agreement loss computation.
Given a pair of parallel sentences, $x_{En}$ and $x_{Fr}$, and an auxiliary language, say Es, an estimate of the lower bound on the agreement term (8) is computed as follows. First, we concatenate Es language tags to both $x_{En}$ and $x_{Fr}$ and encode the sequences so that both can be translated into Es (the encoding process is depicted in Figure 3A). Next, we decode each of the encoded sentences and obtain auxiliary translations, $z_{Es}(x_{En})$ and $z_{Es}(x_{Fr})$, depicted as blue blocks in Figure 3B. Note that we can now treat the pairs $(x_{En}, z_{Es}(x_{Fr}))$ and $(x_{Fr}, z_{Es}(x_{En}))$ as new parallel data for $En \rightarrow Es$ and $Fr \rightarrow Es$.

Finally, using these pairs, we can compute two log-probability terms (Figure 3B):

$$\log P_\theta(z_{Es}(x_{Fr}) \mid x_{En}) \quad \text{and} \quad \log P_\theta(z_{Es}(x_{En}) \mid x_{Fr}) \qquad (9)$$

using encoding-decoding with teacher forcing (the same way as is typically done for the supervised directions). Crucially, note that the first term corresponds to a supervised direction, $En \rightarrow Es$, while the second corresponds to the zero-shot direction, $Fr \rightarrow Es$. We want each of the components to (i) improve the zero-shot direction while (ii) minimally affecting the supervised direction. To achieve (i), we use continuous decoding, and for (ii) we use stop-gradient-based protection of the supervised directions. Both techniques are described below.
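A hypothetical sketch of one agreement-loss step: translate both sides of a parallel (En, Fr) pair into a sampled auxiliary language (Es), then score each auxiliary translation against the other source with teacher forcing. `translate`, `log_prob`, and `stop_gradient` are stand-in names rather than the paper's actual API, and the toy model exists only to make the sketch executable; the stop-gradient call mirrors the protection of supervised directions discussed later in this section:

```python
def stop_gradient(x):
    # Placeholder for tf.stop_gradient: in a real framework this blocks
    # gradients from flowing into the wrapped value; here it is a no-op.
    return x

def agreement_loss(model, x_en, x_fr, aux="Es"):
    z_en = model.translate(x_en, target=aux)  # En -> Es (supervised edge)
    z_fr = model.translate(x_fr, target=aux)  # Fr -> Es (zero-shot edge)
    # Supervised term: the zero-shot sample is detached so the agreement
    # signal does not degrade the supervised sub-model.
    term_sup = model.log_prob(stop_gradient(z_fr), source=x_en, target=aux)
    # Zero-shot term: pushes Fr -> Es toward the supervised translation.
    term_zs = model.log_prob(z_en, source=x_fr, target=aux)
    return -(term_sup + term_zs)

class ToyModel:
    # Degenerate stand-in: "translation" copies tokens, and log-probability
    # penalizes each token absent from the source sentence.
    def translate(self, tokens, target):
        return list(tokens)
    def log_prob(self, z, source, target):
        return -float(sum(1 for tok in z if tok not in source))

assert agreement_loss(ToyModel(), ["a", "b"], ["a", "b"]) == 0.0
assert agreement_loss(ToyModel(), ["a", "b"], ["a", "c"]) == 2.0
```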
Greedy continuous decoding.
In order to make the auxiliary translations differentiable with respect to $\theta$ (hence, continuous decoding), at each decoding step $t$, we treat the output of the RNN, $h_t$, as the key and use dot-product attention over the embedding vocabulary, $E$, to construct $z_t$:

$$z_t := \sum_{v \in V} \alpha_v \, e_v, \quad \text{where} \quad \alpha := \mathrm{softmax}(E h_t) \qquad (10)$$

In other words, the auxiliary translations, $z_{Es}(x_{En})$ and $z_{Es}(x_{Fr})$, are fixed-length sequences of differentiable embeddings computed in a greedy fashion.
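Under our reading of (10), one continuous-decoding step can be sketched in plain Python; the toy embedding matrix and the shapes are illustrative assumptions:

```python
import math

# One greedy continuous-decoding step: the decoder output h_t acts as a
# query, dot-product attention over the embedding matrix E yields softmax
# weights, and z_t is the resulting convex combination of embeddings,
# so it stays differentiable with respect to the model parameters.
def continuous_decode_step(h_t, E):
    scores = [sum(e_d * h_d for e_d, h_d in zip(e, h_t)) for e in E]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [x / total for x in exps]  # softmax over the vocabulary
    dim = len(E[0])
    return [sum(w * e[d] for w, e in zip(weights, E)) for d in range(dim)]

E = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy vocabulary of 3 embeddings
z_t = continuous_decode_step([5.0, 0.0], E)
assert len(z_t) == 2
# With this query, nearly all weight lands on the 1st and 3rd embeddings.
assert 0.99 < z_t[0] <= 1.0 and 0.49 < z_t[1] < 0.51
```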
Protecting supervised directions.
Algorithm 1 scales the agreement losses by a small coefficient. We found experimentally that training could be sensitive to this hyperparameter since the agreement loss also affects the supervised sub-models. For example, agreement between $En \rightarrow Es$ (supervised) and $Fr \rightarrow Es$ (zero-shot) may push the former towards a worse translation, especially at the beginning of training. To stabilize training, we apply the stop_gradient operator to the log-probabilities and samples produced by the supervised sub-models before computing the agreement terms (9), to zero out the corresponding gradient updates.

6 Experiments
We evaluate agreement-based training against baselines from the literature on three public datasets that have multi-parallel evaluation data, which allows assessing zero-shot performance. We report results in terms of the BLEU score (Papineni et al., 2002), computed using mteval-v13a.perl.
6.1 Datasets
UN corpus.
Following the setup introduced in Sestorain et al. (2018), we use two datasets, UNCorpus-1 and UNCorpus-2, derived from the United Nations Parallel Corpus (Ziemski et al., 2016). UNCorpus-1 consists of data in 3 languages (En, Es, Fr), while UNCorpus-2 additionally includes Ru as the 4th language. For training, we use parallel corpora between En and each of the other languages, each about 1M sentences, subsampled from the official training data in a way that ensures no multi-parallel training data. The dev and test sets contain 4,000 sentences and are all multi-parallel.
Europarl v7.^7

We consider the following languages: De, En, Es, Fr. For training, we use parallel data between En and the rest of the languages (about 1M sentences per corpus), preprocessed to avoid multi-parallel sentences, as was also done by Cheng et al. (2017) and Chen et al. (2017) and as described below. The dev and test sets contain 2,000 multi-parallel sentences.

^7 http://www.statmt.org/europarl/
IWSLT17.^8

We use data from the official multilingual task: 5 languages (En, De, Nl, It, Ro) and 20 translation tasks, of which 4 are zero-shot and the remaining 16 supervised. Note that this dataset has a significant overlap between parallel corpora in the supervised directions (up to 100K sentence pairs per direction). This implicitly makes the dataset multi-parallel and defeats the purpose of zero-shot evaluation (Dabre et al., 2017). To avoid spurious effects, we also derived a preprocessed version of the IWSLT17 dataset from the original one by restricting the supervised data to English-centric directions only and removing overlapping pivoting sentences. We report results on both the official and preprocessed datasets.

^8 https://sites.google.com/site/iwsltevaluation2017/TEDtasks
Preprocessing.
To properly evaluate systems in terms of zero-shot generalization, we preprocess Europarl and IWSLT to avoid multilingual parallel sentences of the form source-pivot-target, where source-target is a zero-shot direction. To do so, we follow Cheng et al. (2017) and Chen et al. (2017) and randomly split the overlapping pivot sentences of the original source-pivot and pivot-target corpora into two parts, then merge them separately with the non-overlapping parts for each pair. Along with each parallel training sentence, we save information about the source and target tags, after which all the data is combined and shuffled. Finally, we build a shared multilingual subword vocabulary (Sennrich et al., 2015b) on the training data (with 32K merge ops), separately for each dataset. Data statistics are provided in Appendix A.5.
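The overlap-splitting step can be sketched as follows, under the simplifying assumption that each corpus is a mapping from a pivot sentence to its counterpart; function and variable names are ours:

```python
import random

# Pivot sentences appearing in both the source-pivot and pivot-target
# corpora are randomly divided in two halves, and each half is kept on only
# one side, so no pivot sentence yields a complete source-pivot-target
# triple after preprocessing.
def split_pivot_overlap(src_pivot, pivot_tgt, seed=0):
    overlap = sorted(set(src_pivot) & set(pivot_tgt))
    random.Random(seed).shuffle(overlap)
    keep_in_src = set(overlap[: len(overlap) // 2])
    keep_in_tgt = set(overlap) - keep_in_src
    sp = {p: s for p, s in src_pivot.items() if p not in keep_in_tgt}
    pt = {p: t for p, t in pivot_tgt.items() if p not in keep_in_src}
    return sp, pt

sp, pt = split_pivot_overlap(
    {"p1": "s1", "p2": "s2", "p3": "s3"},
    {"p2": "t2", "p3": "t3", "p4": "t4"},
)
assert not (set(sp) & set(pt))        # no multi-parallel pivots remain
assert len(sp) == 2 and len(pt) == 2  # each overlapping pivot kept once
```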
6.2 Training and evaluation
Additional details on the hyperparameters can be found in Appendix A.4.
Models.
We use a smaller version of the GNMT architecture (Wu et al., 2016) in all our experiments: 512-dimensional embeddings (separate for the source and target sides), 2 bidirectional LSTM layers of 512 units each for encoding, and a GNMT-style, 4-layer, 512-unit LSTM decoder with residual connections from the 2nd layer onward.
Training.
We trained the above model using the standard method of Johnson et al. (2016) and using our proposed agreementbased training (Algorithm 1). In both cases, the model was optimized using Adafactor (Shazeer and Stern, 2018) on a machine with 4 P100 GPUs for up to 500K steps, with early stopping on the dev set.
Sestorain et al. (2018)  Our baselines  
PBSMT  NMT0  Dual0  Basic  Pivot  Agree  
61.26  51.93  —  56.58  56.58  56.36  
50.09  40.56  —  44.27  44.27  44.80  
59.89  51.58  —  55.70  55.70  55.24  
52.22  43.33  —  46.46  46.46  46.17  
Supervised (avg.)  55.87  46.85  —  50.75  50.75  50.64 
52.44  20.29  36.68  34.75  38.10  37.54  
49.79  19.01  39.19  37.67  40.84  40.02  
Zero-shot (avg.)  51.11  19.69  37.93  36.21  39.47  38.78
Source: https://openreview.net/forum?id=ByecAoAqK7.

Sestorain et al. (2018)  Our baselines
PBSMT  NMT0  Dual0  Basic  Pivot  Agree
61.26  47.51  44.30  55.15  55.15  54.30
50.09  36.70  34.34  43.42  43.42  42.57
43.25  30.45  29.47  36.26  36.26  35.89
59.89  48.56  45.55  54.35  54.35  54.33
52.22  40.75  37.75  45.55  45.55  45.87
52.59  39.35  37.96  45.52  45.52  44.67
Supervised (avg.)  53.22  40.55  36.74  46.71  46.71  46.27
52.44  25.85  34.51  34.73  35.93  36.02
49.79  22.68  37.71  38.20  39.51  39.94
39.69  9.36  24.55  26.29  27.15  28.08
49.61  26.26  33.23  33.43  37.17  35.01
36.48  9.35  22.76  23.88  24.99  25.13
43.37  22.43  26.49  28.52  30.06  29.53
Zero-shot (avg.)  45.23  26.26  29.88  30.84  32.47  32.29
Previous work  Our baselines  
Soft  Distill  Basic  Pivot  Agree  
—  —  34.69  34.69  33.80  
—  —  23.06  23.06  22.44  
31.40  —  33.87  33.87  32.55  
31.96  —  34.77  34.77  34.53  
26.55  —  29.06  29.06  29.07  
—  —  33.67  33.67  33.30  
Supervised (avg.)  —  —  31.52  31.52  30.95 
—  —  18.23  20.14  20.70  
—  —  20.28  26.50  22.45  
30.57  33.86  27.99  32.56  30.94  
—  —  27.12  32.96  29.91  
23.79  27.03  21.36  25.67  24.45  
—  —  18.57  19.86  19.15  
Zero-shot (avg.)  —  —  22.25  26.28  24.60
Soft: soft pivoting (Cheng et al., 2017). Distill: distillation (Chen et al., 2017).
Previous work  Our baselines  
SOTA  CPG  Basic  Pivot  Agree  
Supervised (avg.)  24.10  19.75  24.63  24.63  23.97 
Zero-shot (avg.)  20.55  11.69  19.86  19.26  20.58
SOTA numbers are from Table 2 of Dabre et al. (2017); CPG numbers are from Table 2 of Platanios et al. (2018).

Basic  Pivot  Agree
Supervised (avg.)  28.72  28.72  29.17
Zero-shot (avg.)  12.61  17.68  15.23
Evaluation.
We focus our evaluation mainly on the zero-shot performance of the following methods:
(a) Basic, which stands for directly evaluating a multilingual GNMT model after standard training (Johnson et al., 2016).
(b) Pivot, which performs pivoting-based inference using a multilingual GNMT model (after standard training); often regarded as the gold standard.
(c) Agree, which applies a multilingual GNMT model trained with agreement losses directly to zero-shot directions.
To ensure a fair comparison in terms of model capacity, all the techniques above use the same multilingual GNMT architecture described in the previous section. All other results provided in the tables are as reported in the literature.
Implementation.
All our methods were implemented using TensorFlow
(Abadi et al., 2016) on top of the tensor2tensor library (Vaswani et al., 2018). Our code will be made publicly available.^9

^9 www.cs.cmu.edu/~mshediva/code/

6.3 Results on UN Corpus and Europarl
UN Corpus.
Tables 1 and 2 show results on the UNCorpus datasets. Our approach consistently outperforms Basic and Dual-0, despite the latter being trained with additional monolingual data (Sestorain et al., 2018). We see that models trained with agreement perform comparably to Pivot, outperforming it in some cases, e.g., when the target is Russian, perhaps because Russian is quite different linguistically from the English pivot.
Furthermore, unlike Dual-0, Agree maintains high performance in the supervised directions (within 1 BLEU point of Basic), indicating that our agreement-based approach is effective as part of a single multilingual system.
Europarl.
Table 3 shows the results on the Europarl corpus. On this dataset, our approach consistently outperforms Basic by 2-3 BLEU points but lags a bit behind Pivot on average (except for one direction, on which it is better). Cheng et al. (2017)^10 and Chen et al. (2017) have reported zero-resource results on a subset of these directions, and our approach outperforms the former but not the latter on these pairs. Note that both Cheng et al. (2017) and Chen et al. (2017) train separate models for each language pair, and the approach of Chen et al. (2017) would require training a separate model for each pair to encompass all the directions. In contrast, we use a single multilingual architecture, which has more limited model capacity (although, in theory, our approach is also compatible with using separate models for each direction).

^10 We only show their best zero-resource result in the table since some of their methods require direct parallel data.
6.4 Analysis of IWSLT17 zero-shot tasks
Table 5 presents results on the original IWSLT17 task. We note that, because of the large amount of data overlap and the presence of many supervised translation pairs (16), the vanilla training method (Johnson et al., 2016) achieves very high zero-shot performance, even outperforming Pivot. While our approach gives small gains over these baselines, we believe the dataset's peculiarities make it unreliable for evaluating zero-shot generalization.

On the other hand, on our preprocessed IWSLT17, which eliminates the overlap and reduces the number of supervised directions (8), there is a considerable gap between the supervised and zero-shot performance of Basic. Agree performs better than Basic and is slightly worse than Pivot.
6.5 Small data regime
To better understand the dynamics of the different methods in the small data regime, we also trained all our methods on subsets of Europarl for 200K steps and evaluated on the dev set. The training set size varied from 50K to 450K parallel sentences. From Figure 4, Basic tends to perform extremely poorly while Agree is the most robust (also in terms of variance across zero-shot directions). We see that Agree generally upper-bounds Pivot, except for one language pair, perhaps due to fewer cascading errors along these directions.

7 Conclusion
In this work, we studied zero-shot generalization in the context of multilingual neural machine translation. First, we introduced the concept of zero-shot consistency, which implies generalization. Next, we proposed a provably consistent agreement-based learning approach for zero-shot translation. Empirical results on three datasets showed that agreement-based learning yields up to +3 BLEU zero-shot improvement over the Johnson et al. (2016) baseline, compares favorably to other approaches in the literature (Cheng et al., 2017; Sestorain et al., 2018), is competitive with pivoting, and does not lose in performance on supervised directions.
We believe that the theory and methodology behind agreement-based learning could be useful beyond translation, especially in multimodal settings. For instance, it could be applied to tasks such as cross-lingual natural language inference (Conneau et al., 2018), style transfer (Shen et al., 2017; Fu et al., 2017; Prabhumoye et al., 2018), or multilingual image or video captioning. Another interesting future direction would be to explore different hand-engineered or learned data representations that models could be encouraged to agree on during training (e.g., making translation models agree on latent semantic parses, summaries, or potentially other data representations available at training time).
Acknowledgments
We thank Ian Tenney and Anthony Platanios for many insightful discussions, Emily Pitler for the helpful comments on the early draft of the paper, and anonymous reviewers for careful reading and useful feedback.
References

Abadi et al. (2016)
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey
Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al.
2016.
Tensorflow: a system for large-scale machine learning.
In OSDI, volume 16, pages 265–283.  Anil et al. (2018) Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E Dahl, and Geoffrey E Hinton. 2018. Large scale distributed neural network training through online distillation. arXiv preprint arXiv:1804.03235.
 Arivazhagan et al. (2018) Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2018. The missing ingredient in zeroshot neural machine translation. OpenReview.net.
 Artetxe et al. (2017) Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041.
 Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
 Ben-David et al. (2010) Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning, 79(1-2):151–175.
 Besag (1975) Julian Besag. 1975. Statistical analysis of nonlattice data. The statistician, pages 179–195.
 Bojar et al. (2016) Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation. In ACL 2016 FIRST CONFERENCE ON MACHINE TRANSLATION (WMT16), pages 131–198. The Association for Computational Linguistics.
 Chen et al. (2017) Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zero-resource neural machine translation. arXiv preprint arXiv:1705.00753.
 Cheng et al. (2017) Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivot-based neural machine translation. In Proceedings of IJCAI.
 Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
 Cohn and Lapata (2007) Trevor Cohn and Mirella Lapata. 2007. Machine translation by triangulation: Making effective use of multiparallel corpora. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 728–735.
 Conneau et al. (2017) Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.
 Conneau et al. (2018) Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. arXiv preprint arXiv:1809.05053.
 Crego et al. (2016) Josep Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, et al. 2016. Systran’s pure neural machine translation systems. arXiv preprint arXiv:1610.05540.
 Dabre et al. (2017) Raj Dabre, Fabien Cromieres, and Sadao Kurohashi. 2017. Kyoto University MT system description for IWSLT 2017. Proc. of IWSLT, Tokyo, Japan.
 Firat et al. (2016a) Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016a. Multiway, multilingual neural machine translation with a shared attention mechanism. arXiv preprint arXiv:1601.01073.
 Firat et al. (2016b) Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T Yarman Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multilingual neural machine translation. arXiv preprint arXiv:1606.04164.
 Fu et al. (2017) Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2017. Style transfer in text: Exploration and evaluation. arXiv preprint arXiv:1711.06861.
 Gehring et al. (2016) Jonas Gehring, Michael Auli, David Grangier, and Yann N Dauphin. 2016. A convolutional encoder model for neural machine translation. arXiv preprint arXiv:1611.02344.
 Ha et al. (2017) Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2017. Effective strategies in zero-shot neural machine translation. arXiv preprint arXiv:1711.07893.
 He et al. (2016) Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828.
 Hinton et al. (2014) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2014. Dark knowledge. Presented as the keynote in BayLearn, 2.
 Johnson et al. (2016) Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. arXiv preprint arXiv:1611.04558.
 Junczys-Dowmunt et al. (2016) Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? A case study on 30 translation directions. arXiv preprint arXiv:1610.01108.
 Kim and Rush (2016) Yoon Kim and Alexander M Rush. 2016. Sequence-level knowledge distillation. arXiv preprint arXiv:1606.07947.
 Koehn (2017) Philip Koehn. 2017. Europarl: A parallel corpus for statistical machine translation.
 Koehn (2009) Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press.
 Koehn and Knowles (2017) Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. arXiv preprint arXiv:1706.03872.
 Lample et al. (2017) Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.
 Lample et al. (2018) Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755.
 Liang et al. (2006) Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 104–111. Association for Computational Linguistics.
 Liang et al. (2008) Percy S Liang, Dan Klein, and Michael I Jordan. 2008. Agreement-based learning. In Advances in Neural Information Processing Systems, pages 913–920.
 Lindsay (1988) Bruce G Lindsay. 1988. Composite likelihood methods. Contemporary mathematics, 80(1):221–239.
 Lu et al. (2018) Yichao Lu, Phillip Keung, Faisal Ladhak, Vikas Bhardwaj, Shaonan Zhang, and Jason Sun. 2018. A neural interlingua for multilingual machine translation. arXiv preprint arXiv:1804.08198.
 Luong et al. (2015a) Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015a. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114.
 Luong et al. (2015b) Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015b. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.
 Mauro et al. (2017) Cettolo Mauro, Federico Marcello, Bentivogli Luisa, Niehues Jan, Stüker Sebastian, Sudoh Katsuitho, Yoshino Koichiro, and Federmann Christian. 2017. Overview of the iwslt 2017 evaluation campaign. In International Workshop on Spoken Language Translation, pages 2–14.
 Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics.
 Platanios et al. (2018) Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for universal neural machine translation. arXiv preprint arXiv:1808.08493.
 Prabhumoye et al. (2018) Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. arXiv preprint arXiv:1804.09000.
 Ravi and Knight (2011) Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 12–21. Association for Computational Linguistics.
 Sennrich et al. (2015a) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.
 Sennrich et al. (2015b) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
 Sestorain et al. (2018) Lierni Sestorain, Massimiliano Ciaramita, Christian Buck, and Thomas Hofmann. 2018. Zero-shot dual machine translation. arXiv preprint arXiv:1805.10338.
 Shazeer and Stern (2018) Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235.
 Shen et al. (2017) Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from nonparallel text by crossalignment. In Advances in Neural Information Processing Systems, pages 6830–6841.

Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
 Utiyama and Isahara (2007) Masao Utiyama and Hitoshi Isahara. 2007. A comparison of pivot methods for phrase-based statistical machine translation. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 484–491.
 Vaswani et al. (2018) Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, et al. 2018. Tensor2tensor for neural machine translation. arXiv preprint arXiv:1803.07416.
 Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
 Wu and Wang (2007) Hua Wu and Haifeng Wang. 2007. Pivot language approach for phrase-based statistical machine translation. Machine Translation, 21(3):165–181.
 Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
 Ziemski et al. (2016) Michal Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations parallel corpus v1.0. In LREC.
Appendix A Appendices
A.1 Complete likelihood
Given a set of conditional models, $\{P_\theta(x_{L_j} \mid x_{L_i})\}$, we can write out the full likelihood over equivalent translations, $(x_{L_1}, \dots, x_{L_k})$, as follows:
(11) $P_\theta(x_{L_1}, \dots, x_{L_k}) := \frac{1}{Z_\theta} \prod_{(i,j) \in E} P_\theta(x_{L_j} \mid x_{L_i})$
where $Z_\theta$ is the normalizing constant and $E$ denotes all edges in the graph (Figure 5). Given only bilingual parallel corpora, $C_{ij}$, we can observe only certain pairs of variables. Therefore, the log-likelihood of the data can be written as:
(12) $\mathcal{L}(\theta) = \sum_{(i,j)} \sum_{(x_{L_i}, x_{L_j}) \in C_{ij}} \log \sum_{z_{\setminus ij}} P_\theta(x_{L_i}, x_{L_j}, z_{\setminus ij})$
Here, the outer sum iterates over the available corpora, the middle sum iterates over the parallel sentences in a corpus, and the innermost sum marginalizes out the unobservable sequences, denoted $z_{\setminus ij}$, which are sentences equivalent under this model to $x_{L_i}$ and $x_{L_j}$ in languages other than $L_i$ and $L_j$. Note that due to the innermost summation, computing the log-likelihood is intractable.
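To make the intractability of this marginalization concrete, here is a toy brute-force computation. All names, the vocabulary, and the stand-in conditional probabilities are invented for illustration; a real NMT model would sum over all sentences of a language rather than a 3-element vocabulary.

```python
import itertools
import math

# Toy "sentences" shared by every language in this illustration.
VOCAB = ["s0", "s1", "s2"]

def cond_prob(tgt, src):
    """A stand-in conditional model P(tgt | src): peaked on tgt == src."""
    return 0.8 if tgt == src else 0.1

def log_marginal(x_obs_1, x_obs_2, hidden_langs):
    """log sum_z P(x_obs_1, x_obs_2, z) for a star graph centred on x_obs_1.

    The sum enumerates every joint assignment of the unobserved
    sentences z, so its size is |VOCAB| ** hidden_langs.
    """
    total = 0.0
    for z in itertools.product(VOCAB, repeat=hidden_langs):
        p = cond_prob(x_obs_2, x_obs_1)      # observed edge
        for z_l in z:                        # edges to hidden languages
            p *= cond_prob(z_l, x_obs_1)
        total += p
    return math.log(total)

# The number of terms grows exponentially in the number of hidden languages:
for k in range(1, 5):
    print(k, len(VOCAB) ** k)
```

With a realistic vocabulary of sentences the inner sum is astronomically large, which is why the paper resorts to the agreement-based surrogate objective instead of the full likelihood.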
We claim the following: maximizing the full log-likelihood yields zero-shot consistent models (in the sense of Definition 4).
Proof.
To better understand why this is the case, let us consider the example in Figure 5 and expand the log-likelihood of an observed pair of sentences.
Among the terms of this expansion, some encourage agreement on the translation into one of the unobserved languages, while others encourage agreement on the translation into the other. Since all the remaining terms are probabilities and hence bounded by 1, the full log-likelihood lower-bounds the agreement objective (up to a constant). Since optimizing for agreement leads to consistency (Theorem 5.2), and maximizing the full likelihood necessarily improves the agreement, the claim follows. ∎
Note that the other terms in the full likelihood also serve a nontrivial purpose: (a) the reconstruction terms encourage the model to correctly reconstruct the observed sentences when back-translating from the unobserved languages, and (b) the remaining terms enforce consistency between the latent representations. In other words, the full likelihood accounts for a combination of agreement, back-translation, and latent consistency.
A.2 Proof of agreement consistency
The statement of Theorem 5.2 relies on an assumption about the true distribution of the equivalent translations. Let $p(x_l \mid x_{l_1}, x_{l_2})$ denote the ground-truth conditional distribution that specifies the probability of $x_l$ being a translation of $x_{l_1}$ and $x_{l_2}$ into language $l$, given that $x_{l_1}$ and $x_{l_2}$ are correct translations of each other in languages $l_1$ and $l_2$, respectively. We assume that this distribution is bounded from below and above by positive constants.
This assumption means that, even though there might be multiple equivalent translations, there must not be too many of them (implied by the lower bound) and none of them may be much more preferable than the rest (implied by the upper bound). Given this assumption, we can prove the following simple lemma. Let $l_1 \to l_2$ be one of the supervised directions. Then the following bound holds:
Proof.
First, using Jensen’s inequality, we have:
The bound on the supervised direction controls the first term.
To bound the second term, we use Assumption A.2:
Putting these together yields the bound. ∎
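The first step of the lemma's proof relies on Jensen's inequality for the concave logarithm, i.e., $\log \mathbb{E}[X] \ge \mathbb{E}[\log X]$. A quick numeric sanity check (illustrative only; the sample and seed are arbitrary):

```python
import math
import random

# Jensen's inequality for the concave logarithm:
# log E[X] >= E[log X], equivalently -log E[X] <= E[-log X].
random.seed(0)
xs = [random.uniform(0.1, 1.0) for _ in range(1000)]

mean = sum(xs) / len(xs)
lhs = math.log(mean)                              # log of the mean
rhs = sum(math.log(x) for x in xs) / len(xs)      # mean of the logs

assert lhs >= rhs
print(lhs, rhs)
```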
Proof.
By assumption, the agreement-based loss is bounded by $\varepsilon$. Therefore, the expected cross-entropy on all the supervised terms is bounded by $\varepsilon$. Moreover, the agreement term (which is part of the objective) is also bounded:
Expanding this expectation, we have:
Combining that with Lemma A.2, we have:
Since, by Assumption A.2, the remaining bounding factors are constants, the expected error vanishes as $\varepsilon \to 0$. ∎
A.3 Consistency of distillation and pivoting
As we mentioned in the main text of the paper, distillation (Chen et al., 2017) and pivoting yield zero-shot consistent models. Let us see why this is the case.
In our notation, given two supervised directions, distillation optimizes a KL-divergence between the two models, where the latter is a zero-shot model and the former is supervised. Noting that KL-divergence lower-bounds cross-entropy, it is a looser bound on the agreement loss. Hence, by ensuring that the KL term is low, we also ensure that the models agree, which implies consistency (a more formal proof would follow exactly the same steps as the proof of Theorem 5.2).
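The relation between the two losses follows from the standard identity $H(p, q) = H(p) + \mathrm{KL}(p \,\|\, q)$, so the KL used by distillation never exceeds the cross-entropy-based agreement loss. A small numeric check (the distributions are arbitrary toy examples):

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]   # "teacher" (supervised direction)
q = [0.5, 0.3, 0.2]   # "student" (zero-shot direction)

# Cross-entropy decomposes into entropy plus KL, hence KL <= cross-entropy.
assert abs(cross_entropy(p, q) - (entropy(p) + kl(p, q))) < 1e-12
assert kl(p, q) <= cross_entropy(p, q)
```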
To prove the consistency of pivoting, we need an additional assumption on the quality of the source-pivot model.
Let the source-pivot model be given. We assume the following bound holds for each pair of equivalent translations:
where the bound involves some fixed constant.
[Pivoting consistency] Given the conditions of Theorem 5.2 and Assumption A.3, pivoting is zero-shot consistent.
Proof.
We can bound the expected error of pivoting as follows (using Jensen's inequality and the conditions of our assumptions):
∎
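Operationally, pivoting is just the composition of the two supervised models: translate source to pivot, then pivot to target. A minimal sketch (the function names and dictionary "models" are invented stand-ins for real NMT systems):

```python
def pivot_translate(src_sentence, src2piv, piv2tgt):
    """Compose two translation functions through a pivot language."""
    pivot_sentence = src2piv(src_sentence)
    return piv2tgt(pivot_sentence)

# Toy "models": dictionary lookups standing in for supervised NMT systems,
# e.g. French -> English (pivot) and English -> German.
fr2en = {"bonjour": "hello"}.get
en2de = {"hello": "hallo"}.get

print(pivot_translate("bonjour", fr2en, en2de))
```

The assumption above is needed precisely because any error of the source-pivot step is propagated into the pivot-target step by this composition.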
A.4 Details on the models and training
Architecture.
All our NMT models used the GNMT (Wu et al., 2016) architecture with Luong attention (Luong et al., 2015b), a 2-layer bidirectional encoder, and a 4-layer decoder with residual connections. All hidden layers (including embeddings) had 512 units. Additionally, we used separate embeddings on the encoder and decoder sides, and tied the weights of the softmax that produced the logits with the decoder-side (i.e., target) embeddings. Standard dropout of 0.2 was used on all hidden layers. Most of the other hyperparameters were set to their defaults in the T2T (Vaswani et al., 2018) library for text2text problems.
Training and hyperparameters.
We scaled the agreement terms in the loss by a coefficient. Training was done using the Adafactor (Shazeer and Stern, 2018) optimizer with 10,000 burn-in steps at a 0.01 learning rate, followed by the standard square-root decay (with the default decay settings from the T2T library). Additionally, we implemented the agreement loss as a separate subgraph, so that it was not computed when the coefficient was set to 0. This allowed us to start training multilingual NMT models in the burn-in mode using the composite-likelihood objective and then switch on the agreement terms at some point during optimization (typically after the first 100K iterations; we also experimented with 0, 50K, and 200K, but did not notice any difference in final performance). Since the agreement subgraph was not computed during the initial training phase, this accelerated training of the agreement models.
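The schedule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation (which gates a TensorFlow subgraph); the function names and the 100K switch-on step are taken from the text, everything else is hypothetical:

```python
def total_loss(step, likelihood_loss, agreement_loss_fn,
               switch_on_step=100_000, coeff=1.0):
    """Composite-likelihood loss with a gated agreement term.

    During burn-in (step < switch_on_step) the agreement coefficient is 0
    and the agreement term is never evaluated, mimicking how skipping the
    agreement subgraph accelerates early training.
    """
    gamma = coeff if step >= switch_on_step else 0.0
    if gamma == 0.0:
        return likelihood_loss          # agreement subgraph not evaluated
    return likelihood_loss + gamma * agreement_loss_fn()

calls = []
def agreement():
    """Records whether the expensive agreement term was actually computed."""
    calls.append(1)
    return 0.5

assert total_loss(50_000, 2.0, agreement) == 2.0 and not calls
assert total_loss(150_000, 2.0, agreement) == 2.5 and calls
```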
A.5 Details on the datasets
Statistics of the two IWSLT17 datasets are summarized in Table 6. The UNCorpus and Europarl datasets were exactly as described by Sestorain et al. (2018) and by Chen et al. (2017); Cheng et al. (2017), respectively.
Corpus  Directions  Train  Dev (dev2010)  Test (tst2010) 
IWSLT17  206k  888  1568  
205k  923  1567  
0  1001  1567  
201k  912  1677  
206k  888  1568  
231k  929  1566  
237k  1003  1777  
220k  914  1678  
205k  923  1567  
231k  929  1566  
205k  1001  1669  
0  914  1643  
0  1001  1779  
237k  1003  1777  
233k  1001  1669  
206k  913  1680  
201k  912  1677  
220k  914  1678  
0  914  1643  
206k  913  1680  
IWSLT17  124k  888  1568  
0  923  1567  
0  1001  1567  
0  912  1677  
124k  888  1568  
139k  929  1566  
155k  1003  1777  
128k  914  1678  
0  923  1567  
139k  929  1566  
0  1001  1669  
0  914  1643  
0  1001  1779  
155k  1003  1777  
0  1001  1669  
0  913  1680  
0  912  1677  
128k  914  1678  
0  914  1643  
0  913  1680 