Consistency by Agreement in Zero-shot Neural Machine Translation

April 4, 2019 · Maruan Al-Shedivat, et al. · Carnegie Mellon University · Google

Generalization and reliability of multilingual translation often highly depend on the amount of available parallel data for each language pair of interest. In this paper, we focus on zero-shot generalization---a challenging setup that tests models on translation directions they have not been optimized for at training time. To solve the problem, we (i) reformulate multilingual translation as probabilistic inference, (ii) define the notion of zero-shot consistency and show why standard training often results in models unsuitable for zero-shot tasks, and (iii) introduce a consistent agreement-based training method that encourages the model to produce equivalent translations of parallel sentences in auxiliary languages. We test our multilingual NMT models on multiple public zero-shot translation benchmarks (IWSLT17, UN corpus, Europarl) and show that agreement-based learning often results in 2-3 BLEU zero-shot improvement over strong baselines without any loss in performance on supervised translation directions.


1 Introduction

Machine translation (MT) has made remarkable advances with the advent of deep learning approaches (Bojar et al., 2016; Wu et al., 2016; Crego et al., 2016; Junczys-Dowmunt et al., 2016). The progress was largely driven by the encoder-decoder framework (Sutskever et al., 2014; Cho et al., 2014), typically supplemented with an attention mechanism (Bahdanau et al., 2014; Luong et al., 2015b).

Compared to the traditional phrase-based systems (Koehn, 2009), neural machine translation (NMT) requires large amounts of data in order to reach high performance (Koehn and Knowles, 2017). Using NMT in a multilingual setting exacerbates the problem: given $k$ languages, translating between all pairs would require $O(k^2)$ parallel training corpora (and, potentially, $O(k^2)$ models).

In an effort to address the problem, different multilingual NMT approaches have been proposed recently. Luong et al. (2015a) and Firat et al. (2016a) proposed to use separate, per-language encoders and decoders that are then intermixed to translate between language pairs. Johnson et al. (2016) proposed to use a single model and prepend special symbols to the source text to indicate the target language, which has later been extended to other text preprocessing approaches (Ha et al., 2017) as well as language-conditional parameter generation for encoders and decoders of a single model (Platanios et al., 2018).

Figure 1: Agreement-based training of a multilingual NMT system. At training time, given English-French (En ↔ Fr) and English-German (En ↔ De) parallel sentences, the model is trained not only to translate between each pair but also to agree on translations into a third language.

Johnson et al. (2016) also show that a single multilingual system could potentially enable zero-shot translation, i.e., it can translate between language pairs not seen in training. For example, given 3 languages—German (De), English (En), and French (Fr)—and training parallel data only for (De, En) and (En, Fr), at test time, the system could additionally translate between (De, Fr).

Zero-shot translation is an important problem. Solving it could significantly improve data efficiency—a single multilingual model would be able to generalize and translate between any of the $O(k^2)$ language pairs after being trained only on $O(k)$ parallel corpora. However, performance on zero-shot tasks is often unstable and significantly lags behind the supervised directions. Moreover, attempts to improve zero-shot performance by fine-tuning (Firat et al., 2016b; Sestorain et al., 2018) may negatively impact other directions.

In this work, we take a different approach and aim to improve the training procedure of Johnson et al. (2016). First, we analyze the multilingual translation problem from a probabilistic perspective and define the notion of zero-shot consistency, which gives insights as to why the vanilla training method may not yield models with good zero-shot performance. Next, we propose a novel training objective and a modified learning algorithm that achieve consistency via agreement-based learning (Liang et al., 2006, 2008) and improve zero-shot translation. Our training procedure encourages the model to produce equivalent translations of parallel training sentences into an auxiliary language (Figure 1) and is provably zero-shot consistent. In addition, we make a simple change to the neural decoder to make the agreement losses fully differentiable.

We conduct experiments on IWSLT17 (Mauro et al., 2017), UN corpus (Ziemski et al., 2016), and Europarl (Koehn, 2017), carefully removing complete pivots from the training corpora. Agreement-based learning results in up to +3 BLEU zero-shot improvement over the baseline, compares favorably (up to +2.4 BLEU) to other approaches in the literature (Cheng et al., 2017; Sestorain et al., 2018), is competitive with pivoting, and does not lose in performance on supervised directions.

2 Related work

A simple (and yet effective) baseline for zero-shot translation is pivoting, which chain-translates: first to a pivot language, then to the target (Cohn and Lapata, 2007; Wu and Wang, 2007; Utiyama and Isahara, 2007). Despite being a pipeline, pivoting gets better as the supervised models improve, which makes it a strong baseline in the zero-shot setting. Cheng et al. (2017) proposed a joint pivoting learning strategy that leads to further improvements.

Lu et al. (2018) and Arivazhagan et al. (2018) proposed different techniques to obtain “neural interlingual” representations that are passed to the decoder. Sestorain et al. (2018) proposed another fine-tuning technique that uses dual learning (He et al., 2016), where a language model is used to provide a signal for fine-tuning zero-shot directions.

Another family of approaches is based on distillation (Hinton et al., 2014; Kim and Rush, 2016). Along these lines, Firat et al. (2016b) proposed to fine-tune a multilingual model on a specific zero-shot direction with pseudo-parallel data, and Chen et al. (2017) proposed a teacher-student framework. While this can yield solid performance improvements, it also adds multi-staging overhead and often does not preserve the performance of a single model on the supervised directions. We note that our approach (and agreement-based learning in general) is somewhat similar to distillation at training time, which has been explored for large-scale single-task prediction problems (Anil et al., 2018).

A setting harder than zero-shot is that of fully unsupervised translation (Ravi and Knight, 2011; Artetxe et al., 2017; Lample et al., 2017, 2018), in which no parallel data is available for training. The ideas proposed in these works (e.g., bilingual dictionaries (Conneau et al., 2017), backtranslation (Sennrich et al., 2015a), and language models (He et al., 2016)) are complementary to our approach, which encourages agreement among different translation directions in the zero-shot multilingual setting.

3 Background

We start by establishing more formal notation and briefly reviewing some background on encoder-decoder multilingual machine translation from a probabilistic perspective.

3.1 Notation

Languages.

We assume that we are given a collection of $k$ languages, $L_1, \dots, L_k$, that share a common vocabulary, $V$. A language, $L_i$, is defined by the marginal probability $P(\mathbf{x}^i)$ it assigns to sentences (i.e., sequences of tokens from the vocabulary), denoted $\mathbf{x}^i := (x^i_1, \dots, x^i_l)$, where $l$ is the length of the sequence. All languages together define a joint probability distribution, $P(\mathbf{x}^1, \dots, \mathbf{x}^k)$, over $k$-tuples of equivalent sentences.

Corpora.

While each sentence may have an equivalent representation in all languages, we assume that we have access to only partial sets of equivalent sentences, which form corpora. In this work, we consider bilingual corpora, denoted $C_{ij}$, that contain pairs of sentences sampled from $P(\mathbf{x}^i, \mathbf{x}^j)$, and monolingual corpora, denoted $C_i$, that contain sentences sampled from $P(\mathbf{x}^i)$.

Translation.

Finally, we define a translation task from language $L_i$ to $L_j$ as learning to model the conditional distribution $P(\mathbf{x}^j \mid \mathbf{x}^i)$. The set of $k$ languages along with the translation tasks can be represented as a directed graph $G(V, E)$ with a set of nodes, $V$, that represent languages and edges, $E$, that indicate translation directions. We further distinguish between two disjoint subsets of edges: (i) supervised edges, $E_s$, for which we have parallel data, and (ii) zero-shot edges, $E_0$, that correspond to zero-shot translation tasks. Figure 2 presents an example translation graph with supervised edges (En ↔ Es, En ↔ Fr, En ↔ Ru) and zero-shot edges (Es ↔ Fr, Es ↔ Ru, Fr ↔ Ru). We will use this graph as our running example.

Figure 2: Translation graph: languages as nodes (En, Es, Fr, Ru), parallel corpora as solid edges, and zero-shot directions as dotted edges.
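For concreteness, the running example's translation graph can be written down directly. The following sketch (ours, purely illustrative) enumerates the supervised and zero-shot directed edges of Figure 2:

# Minimal sketch of the translation graph from Figure 2 (illustration only).
# Nodes are languages; supervised edges have parallel data, the rest are zero-shot.
from itertools import permutations

languages = ["En", "Es", "Fr", "Ru"]

# Supervised directions: parallel corpora exist between English and every other language.
supervised_edges = {(s, t) for s, t in permutations(languages, 2) if "En" in (s, t)}

# Zero-shot directions: every remaining ordered pair.
zero_shot_edges = set(permutations(languages, 2)) - supervised_edges

print(sorted(supervised_edges))  # e.g., ('En', 'Es'), ('Es', 'En'), ...
print(sorted(zero_shot_edges))   # e.g., ('Es', 'Fr'), ('Ru', 'Fr'), ...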

3.2 Encoder-decoder framework

First, consider a purely bilingual setting, where we learn to translate from a source language, $L_s$, to a target language, $L_t$. We can train a translation model by optimizing the conditional log-likelihood of the bilingual data under the model:

$\hat{\theta} := \arg\max_{\theta} \sum_{(\mathbf{x}^s, \mathbf{x}^t) \in C_{st}} \log P_\theta(\mathbf{x}^t \mid \mathbf{x}^s)$   (1)

where $\hat{\theta}$ are the estimated parameters of the model.

The encoder-decoder framework introduces a latent sequence, $\mathbf{z}$, and represents the model as:

$P_\theta(\mathbf{x}^t \mid \mathbf{x}^s) := P_{\mathrm{dec}}\big(\mathbf{x}^t \mid \mathbf{z} = f_\theta(\mathbf{x}^s)\big)$   (2)

where $f_\theta(\mathbf{x}^s)$ is the encoder that maps a source sequence to a sequence of latent representations, $\mathbf{z}$, and the decoder defines $P_{\mathrm{dec}}(\mathbf{x}^t \mid \mathbf{z})$.¹ Note that $\mathbf{z}$ is usually deterministic with respect to $\mathbf{x}^s$, and accurate representation of the conditional distribution highly depends on the decoder. In neural machine translation, the exact forms of encoder and decoder are specified using RNNs (Sutskever et al., 2014), CNNs (Gehring et al., 2016), and attention (Bahdanau et al., 2014; Vaswani et al., 2017) as building blocks. The decoding distribution, $P_{\mathrm{dec}}(\mathbf{x}^t \mid \mathbf{z})$, is typically modeled autoregressively.

¹ Slightly abusing the notation, we use $\theta$ to denote all parameters of the model: embeddings, encoder, and decoder.

3.3 Multilingual neural machine translation

In the multilingual setting, we would like to learn to translate in all directions while having access to only a few parallel bilingual corpora. In other words, we would like to learn a collection of models, $\{P_\theta(\mathbf{x}^j \mid \mathbf{x}^i)\}_{(i,j) \in E}$. We can assume that the models are independent and choose to learn them by maximizing the following objective:

$\mathcal{L}(\theta) := \sum_{(i,j) \in E_s} \sum_{(\mathbf{x}^i, \mathbf{x}^j) \in C_{ij}} \log P_\theta(\mathbf{x}^j \mid \mathbf{x}^i)$   (3)

In the statistics literature, this estimation approach is called maximum composite likelihood (Besag, 1975; Lindsay, 1988), as it composes the objective out of (sometimes weighted) terms that represent conditional sub-likelihoods (in our example, $P_\theta(\mathbf{x}^j \mid \mathbf{x}^i)$). Composite likelihoods are easy to construct and tractable to optimize as they do not require representing the full likelihood, which would involve integrating out variables unobserved in the data (see Appendix A.1).

Johnson et al. (2016) proposed to train a multilingual NMT system by optimizing the composite likelihood objective (3) while representing all conditional distributions, $P_\theta(\mathbf{x}^j \mid \mathbf{x}^i)$, with a shared encoder and decoder and using language tags, $l_j$, to distinguish between translation directions:

$P_\theta(\mathbf{x}^j \mid \mathbf{x}^i) := P_{\mathrm{dec}}\big(\mathbf{x}^j \mid \mathbf{z}^{ij} = f_\theta(\mathbf{x}^i, l_j)\big)$   (4)

This approach has numerous advantages, including: (a) simplicity of training and of the architecture (by slightly changing the training data, we convert a bilingual NMT into a multilingual one), and (b) sharing of model parameters between different translation tasks, which may lead to better and more robust representations. Johnson et al. (2016) also show that the resulting models seem to exhibit some degree of zero-shot generalization enabled by parameter sharing. However, since we lack data for zero-shot directions, the composite likelihood (3) misses the terms that correspond to the zero-shot models, and hence has no statistical guarantees for performance on zero-shot tasks.²

² In fact, since the objective (3) assumes that the models are independent, plausible zero-shot performance would be more indicative of the limited capacity of the model or artifacts in the data (e.g., presence of multi-parallel sentences) rather than zero-shot generalization.

4 Zero-shot generalization & consistency

Multilingual MT systems can be evaluated in terms of zero-shot performance, or quality of translation along the directions they have not been optimized for (e.g., due to lack of data). We formally define zero-shot generalization via consistency.

Definition (Expected Zero-shot Consistency). Let $E_s$ and $E_0$ be supervised and zero-shot tasks, respectively. Let $\ell$ be a non-negative loss function and $\mathcal{M}$ be a model with maximum expected supervised loss bounded by some $\varepsilon > 0$:

$\max_{(i,j) \in E_s} \mathbb{E}_{\mathbf{x}^i, \mathbf{x}^j}\left[\ell(\mathcal{M})\right] < \varepsilon$

We call $\mathcal{M}$ zero-shot consistent with respect to $\ell(\cdot)$ if for some $\kappa(\varepsilon) > 0$

$\max_{(i,j) \in E_0} \mathbb{E}_{\mathbf{x}^i, \mathbf{x}^j}\left[\ell(\mathcal{M})\right] < \kappa(\varepsilon),$

where $\kappa(\varepsilon) \to 0$ as $\varepsilon \to 0$.

In other words, we say that a machine translation system is zero-shot consistent if low error on supervised tasks implies a low error on zero-shot tasks in expectation (i.e., the system generalizes). We also note that our notion of consistency somewhat resembles error bounds in the domain adaptation literature (Ben-David et al., 2010).

In practice, it is attractive to have MT systems that are guaranteed to exhibit zero-shot generalization since the access to parallel data is always limited and training is computationally expensive. While the training method of Johnson et al. (2016) does not have guarantees, we show that our proposed approach is provably zero-shot consistent.

5 Approach

We propose a new training objective for multilingual NMT architectures with shared encoders and decoders that avoids the limitations of pure composite likelihoods. Our method is based on the idea of agreement-based learning initially proposed for learning consistent alignments in phrase-based statistical machine translation (SMT) systems (Liang et al., 2006, 2008). In terms of the final objective function, the method ends up being reminiscent of distillation (Kim and Rush, 2016), but suitable for joint multilingual training.

5.1 Agreement-based likelihood

To introduce the agreement-based objective, we use the graph from Figure 2 that defines translation tasks between 4 languages (En, Es, Fr, Ru). In particular, consider the composite likelihood objective (3) for a pair of sentences, $(\mathbf{x}_{En}, \mathbf{x}_{Fr})$:

$\mathcal{L}_{EnFr}(\theta) := \log P_\theta(\mathbf{x}_{Fr} \mid \mathbf{x}_{En}) + \log P_\theta(\mathbf{x}_{En} \mid \mathbf{x}_{Fr}) = \log \big[ \sum_{\mathbf{z}'_{Es}, \mathbf{z}'_{Ru}} P_\theta(\mathbf{x}_{Fr}, \mathbf{z}'_{Es}, \mathbf{z}'_{Ru} \mid \mathbf{x}_{En}) \big] + \log \big[ \sum_{\mathbf{z}''_{Es}, \mathbf{z}''_{Ru}} P_\theta(\mathbf{x}_{En}, \mathbf{z}''_{Es}, \mathbf{z}''_{Ru} \mid \mathbf{x}_{Fr}) \big]$   (5)

where we introduced latent translations into Spanish (Es) and Russian (Ru) and marginalized them out (by virtually summing over all sequences in the corresponding languages). Again, note that this objective assumes independence of the En → Fr and Fr → En models.

Following Liang et al. (2008), we propose to tie together the single-prime and the double-prime latent variables, $\mathbf{z}'$ and $\mathbf{z}''$, to encourage agreement between $P_\theta(\mathbf{x}_{Fr}, \mathbf{z} \mid \mathbf{x}_{En})$ and $P_\theta(\mathbf{x}_{En}, \mathbf{z} \mid \mathbf{x}_{Fr})$ on the latent translations. We interchange the sum and the product operations inside the $\log$ in (5), denote $\mathbf{z} := (\mathbf{z}_{Es}, \mathbf{z}_{Ru})$ to simplify notation, and arrive at the following new objective function:

$\mathcal{L}^{\mathrm{agree}}_{EnFr}(\theta) := \log \sum_{\mathbf{z}} P_\theta(\mathbf{x}_{Fr}, \mathbf{z} \mid \mathbf{x}_{En})\, P_\theta(\mathbf{x}_{En}, \mathbf{z} \mid \mathbf{x}_{Fr})$   (6)

Next, we factorize each term as:

$P_\theta(\mathbf{x}, \mathbf{z} \mid \mathbf{y}) = P_\theta(\mathbf{x} \mid \mathbf{y})\, P_\theta(\mathbf{z} \mid \mathbf{x}, \mathbf{y})$

Assuming $P_\theta(\mathbf{z} \mid \mathbf{x}_{Fr}, \mathbf{x}_{En}) \approx P_\theta(\mathbf{z} \mid \mathbf{x}_{En})$ and $P_\theta(\mathbf{z} \mid \mathbf{x}_{En}, \mathbf{x}_{Fr}) \approx P_\theta(\mathbf{z} \mid \mathbf{x}_{Fr})$,³ the objective (6) decomposes into two terms:

$\mathcal{L}^{\mathrm{agree}}_{EnFr}(\theta) \approx \log P_\theta(\mathbf{x}_{Fr} \mid \mathbf{x}_{En})\, P_\theta(\mathbf{x}_{En} \mid \mathbf{x}_{Fr}) + \log \sum_{\mathbf{z}} P_\theta(\mathbf{z} \mid \mathbf{x}_{En})\, P_\theta(\mathbf{z} \mid \mathbf{x}_{Fr})$   (7)

³ This means that it is sufficient to condition on a sentence in one of the languages to determine the probability of a translation in any other language.

We call the expression given in (7) agreement-based likelihood. Intuitively, this objective is the likelihood of observing parallel sentences $(\mathbf{x}_{En}, \mathbf{x}_{Fr})$ and having the En → Es, Ru and Fr → Es, Ru sub-models agree on all translations into Es and Ru at the same time.

Lower bound.

Summation in the agreement term over $\mathbf{z}$ (i.e., over possible translations into Es and Ru in our case) is intractable. Switching back from the $\mathbf{z}$ to the $(\mathbf{z}_{Es}, \mathbf{z}_{Ru})$ notation and using Jensen's inequality, we lower bound it with cross-entropy:⁴

$\log \sum_{\mathbf{z}} P_\theta(\mathbf{z} \mid \mathbf{x}_{En})\, P_\theta(\mathbf{z} \mid \mathbf{x}_{Fr}) \;\geq\; \mathbb{E}_{\mathbf{z}_{Es} \sim P_\theta(\cdot \mid \mathbf{x}_{En})}\left[\log P_\theta(\mathbf{z}_{Es} \mid \mathbf{x}_{Fr})\right] + \mathbb{E}_{\mathbf{z}_{Ru} \sim P_\theta(\cdot \mid \mathbf{x}_{En})}\left[\log P_\theta(\mathbf{z}_{Ru} \mid \mathbf{x}_{Fr})\right]$   (8)

⁴ Note that the expectations in (8) are conditional on $\mathbf{x}_{En}$. Symmetrically, we can have a lower bound with expectations conditional on $\mathbf{x}_{Fr}$. In practice, we symmetrize the objective.

We can estimate the expectations in the lower bound on the agreement terms by sampling $\mathbf{z}_{Es}$ and $\mathbf{z}_{Ru}$. In practice, instead of sampling we use greedy, continuous decoding (with a fixed maximum sequence length) that also makes $\mathbf{z}_{Es}$ and $\mathbf{z}_{Ru}$ differentiable with respect to the parameters of the model.
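To make the estimation procedure concrete, here is a toy numerical sketch (ours, not the paper's code) of the Jensen lower bound on the agreement term: small categorical distributions stand in for the sequence models over auxiliary translations, and a single greedy decode replaces sampling, as done in practice:

import numpy as np

# Toy stand-ins for P(z | x_En) and P(z | x_Fr): distributions over 4 candidate
# auxiliary-language "translations" z. In the real model these are sequence models.
p_z_given_en = np.array([0.70, 0.15, 0.10, 0.05])
p_z_given_fr = np.array([0.60, 0.20, 0.15, 0.05])

# Exact agreement term: log sum_z P(z|x_En) P(z|x_Fr)  (intractable for sequences).
exact = np.log(np.sum(p_z_given_en * p_z_given_fr))

# Jensen lower bound in symmetrized form, with each expectation approximated by a
# single greedy "decode": score the greedy z from one side under the other side.
z_from_en = int(np.argmax(p_z_given_en))   # greedy translation of x_En into the aux language
z_from_fr = int(np.argmax(p_z_given_fr))   # greedy translation of x_Fr into the aux language
symmetric_estimate = 0.5 * (np.log(p_z_given_fr[z_from_en]) + np.log(p_z_given_en[z_from_fr]))

print(f"exact agreement term:      {exact:.4f}")
print(f"greedy symmetric estimate: {symmetric_estimate:.4f}")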

5.2 Consistency by agreement

We argue that models produced by maximizing the agreement-based likelihood (7) are zero-shot consistent. Informally, consider again our running example from Figure 2. Given a pair of parallel sentences in (En, Fr), the agreement loss encourages translations from En into Es and Ru and translations from Fr into Es and Ru to coincide. Note that En → Es and En → Ru are supervised directions. Therefore, agreement ensures that translations along the zero-shot edges Fr → Es and Fr → Ru in the graph match the supervised translations. Formally, we state it as:

Theorem 5.2 (Agreement Zero-shot Consistency). Let $L_1$, $L_2$, and $L_3$ be a collection of languages, with $L_1 \leftrightarrow L_2$ and $L_2 \leftrightarrow L_3$ being supervised while $L_1 \leftrightarrow L_3$ is a zero-shot direction. Let $P_\theta(\mathbf{x}^j \mid \mathbf{x}^i)$ be sub-models represented by a multilingual MT system. If the expected agreement-based loss (the negative of the agreement-based log-likelihood (7), computed for the supervised pairs) is bounded by some $\varepsilon > 0$, then, under some mild technical assumptions on the true distribution of the equivalent translations, the zero-shot cross-entropy loss is bounded as follows:

$\mathbb{E}_{\mathbf{x}^1, \mathbf{x}^3}\left[-\log P_\theta(\mathbf{x}^3 \mid \mathbf{x}^1)\right] \leq \kappa(\varepsilon),$

where $\kappa(\varepsilon) \to 0$ as $\varepsilon \to 0$.

For discussion of the assumptions and details on the proof of the bound, see Appendix A.2. Note that Theorem 5.2 is straightforward to extend from triplets of languages to arbitrary connected graphs, as given in the following corollary.

Corollary. Agreement-based learning yields zero-shot consistent MT models (with respect to the cross-entropy loss) for arbitrary translation graphs as long as supervised directions span the graph.

Alternative ways to ensure consistency.

Note that there are other ways to ensure zero-shot consistency, e.g., by fine-tuning or post-processing a trained multilingual model. For instance, pivoting through an intermediate language is also zero-shot consistent, but the proof requires stronger assumptions about the quality of the supervised source-pivot model.⁵ Similarly, using model distillation (Kim and Rush, 2016; Chen et al., 2017) would also be provably consistent under the same assumptions as given in Theorem 5.2, but only for a single, pre-selected zero-shot direction. Note that our proposed agreement-based learning framework is provably consistent for all zero-shot directions and does not require any post-processing. For discussion of the alternative approaches and the consistency proof for pivoting, see Appendix A.3.

⁵ Intuitively, we have to assume that the source-pivot model does not assign high probabilities to unlikely translations, as the pivot-target model may react to those unpredictably.

Input: Architecture (GNMT), agreement coefficient $\gamma$
1:  Initialize: $\theta$
2:  while not (converged or step limit reached) do
3:      Get a mini-batch of parallel src-tgt pairs, $(\mathbf{X}_s, \mathbf{X}_t)$
4:      Supervised loss: $\mathcal{L}_{\sup}(\theta) \leftarrow -\log P_\theta(\mathbf{X}_t \mid \mathbf{X}_s)$
5:      Auxiliary languages: sample one auxiliary language per example
6:      Auxiliary translations: decode $\mathbf{Z}_s$ and $\mathbf{Z}_t$ from $\mathbf{X}_s$ and $\mathbf{X}_t$ into the auxiliary languages (greedy continuous decoding)
7:      Agreement log-probabilities: $\log P_\theta(\mathbf{Z}_t \mid \mathbf{X}_s)$ and $\log P_\theta(\mathbf{Z}_s \mid \mathbf{X}_t)$
8:      Apply stop-gradients to the supervised log-probabilities and translations
9:      Total loss: $\mathcal{L}_{\mathrm{total}}(\theta) \leftarrow \mathcal{L}_{\sup}(\theta) - \gamma\left[\log P_\theta(\mathbf{Z}_t \mid \mathbf{X}_s) + \log P_\theta(\mathbf{Z}_s \mid \mathbf{X}_t)\right]$
10:     Update: $\theta \leftarrow$ optimizer step on $\mathcal{L}_{\mathrm{total}}(\theta)$
11: end while
Output: $\theta$
Algorithm 1: Agreement-based M-NMT training
Figure 3: A. Computation graph for the encoder. The representations depend on the input sequence and the target language tag. B. Computation graph for the agreement loss. First, encode the source and target sequences with the auxiliary language tags. Next, decode auxiliary translations $\mathbf{z}_s$ and $\mathbf{z}_t$ from both $\mathbf{x}_s$ and $\mathbf{x}_t$ using the continuous greedy decoder. Finally, evaluate the log probabilities, $\log P_\theta(\mathbf{z}_s \mid \mathbf{x}_t)$ and $\log P_\theta(\mathbf{z}_t \mid \mathbf{x}_s)$, and compute a sample estimate of the agreement loss.

5.3 Agreement-based learning algorithm

Having derived a new objective function (7), we can now learn consistent multilingual NMT models using stochastic gradient method with a couple of extra tricks (Algorithm 1). The computation graph for the agreement loss is given in Figure 3.

Subsampling auxiliary languages.

Computing agreement over all languages for each pair of sentences at training time would be quite computationally expensive (to agree on $k-2$ translations, we would need to encode-decode the source and target sequences $k-2$ times each). However, since the agreement lower bound (8) is a sum over expectations, we can approximate it by subsampling: at each training step (and for each sample in the mini-batch), we pick an auxiliary language uniformly at random and compute a stochastic approximation of the agreement lower bound (8) for that language only. This stochastic approximation is simple, unbiased, and reduces the per-step computational overhead for the agreement term from $O(k)$ to $O(1)$.⁶

⁶ In practice, note that there is still a constant-factor overhead due to the extra encoding-decoding steps to/from auxiliary languages when training on a single GPU. Parallelizing the model across multiple GPUs would easily compensate for this overhead.
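A minimal sketch of the subsampling step (ours, illustrative; language codes follow the running example):

import random

languages = ["En", "Es", "Fr", "Ru"]

def sample_auxiliary(src_lang, tgt_lang, rng=random):
    """Pick one auxiliary language uniformly at random for a src-tgt pair,
    so the per-step agreement cost is O(1) instead of O(k)."""
    candidates = [lang for lang in languages if lang not in (src_lang, tgt_lang)]
    return rng.choice(candidates)

# Example: one auxiliary language per mini-batch example.
batch = [("En", "Fr"), ("En", "Es"), ("Ru", "En")]
aux = [sample_auxiliary(s, t) for s, t in batch]
print(aux)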

Overview of the agreement loss computation.

Given a pair of parallel sentences, $\mathbf{x}_{En}$ and $\mathbf{x}_{Fr}$, and an auxiliary language, say Es, an estimate of the lower bound on the agreement term (8) is computed as follows. First, we concatenate Es language tags to both $\mathbf{x}_{En}$ and $\mathbf{x}_{Fr}$ and encode the sequences so that both can be translated into Es (the encoding process is depicted in Figure 3A). Next, we decode each of the encoded sentences and obtain auxiliary translations, $\mathbf{z}_{Es}(\mathbf{x}_{En})$ and $\mathbf{z}_{Es}(\mathbf{x}_{Fr})$, depicted as blue blocks in Figure 3B. Note that we can now treat the pairs $(\mathbf{x}_{En}, \mathbf{z}_{Es}(\mathbf{x}_{Fr}))$ and $(\mathbf{x}_{Fr}, \mathbf{z}_{Es}(\mathbf{x}_{En}))$ as new parallel data for En → Es and Fr → Es.

Finally, using these pairs, we can compute two log-probability terms (Figure 3B):

$\log P_\theta\big(\mathbf{z}_{Es}(\mathbf{x}_{Fr}) \mid \mathbf{x}_{En}\big), \qquad \log P_\theta\big(\mathbf{z}_{Es}(\mathbf{x}_{En}) \mid \mathbf{x}_{Fr}\big)$   (9)

using encoding-decoding with teacher forcing (the same way as typically done for the supervised directions). Crucially, note that the first term corresponds to a supervised direction, En → Es, while the second corresponds to a zero-shot direction, Fr → Es. We want each of the components to (i) improve the zero-shot direction while (ii) minimally affecting the supervised direction. To achieve (i), we use continuous decoding, and for (ii) we use stop-gradient-based protection of the supervised directions. Both techniques are described below.

Greedy continuous decoding.

In order to make $\mathbf{z}_{Es}$ and $\mathbf{z}_{Ru}$ differentiable with respect to $\theta$ (hence, continuous decoding), at each decoding step $t$, we treat the output of the RNN, $\mathbf{h}_t$, as the key and use dot-product attention over the embedding vocabulary, $\mathbf{E}$, to construct $\mathbf{z}_t$:

$\mathbf{z}_t := \mathbf{E}^\top \mathrm{softmax}\big(\mathbf{E}\, \mathbf{h}_t\big)$   (10)

In other words, the auxiliary translations, $\mathbf{z}_{Es}$ and $\mathbf{z}_{Ru}$, are fixed-length sequences of differentiable embeddings computed in a greedy fashion.
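The continuous decoding step can be sketched in a few lines of NumPy (an illustration under our own naming, not the paper's implementation); emb stands for the embedding matrix and h_t for the decoder output at step t:

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def continuous_greedy_step(h_t, emb):
    """One step of continuous 'decoding': use the decoder output h_t (d,) as a query,
    take dot-product attention over the embedding matrix emb (V, d), and return a
    convex combination of embeddings. The result is differentiable w.r.t. h_t and emb."""
    scores = emb @ h_t                 # (V,) similarity to every vocabulary embedding
    weights = softmax(scores)          # soft "argmax" over the vocabulary
    z_t = weights @ emb                # (d,) soft token embedding fed to the next step
    return z_t, weights

rng = np.random.default_rng(0)
emb = rng.normal(size=(32, 8))         # toy vocabulary of 32 tokens, 8-dim embeddings
h_t = rng.normal(size=8)               # toy decoder output at step t
z_t, w = continuous_greedy_step(h_t, emb)
print(z_t.shape, int(w.argmax()))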

Protecting supervised directions.

Algorithm 1 scales the agreement losses by a small coefficient $\gamma$. We found experimentally that training could be sensitive to this hyperparameter since the agreement loss also affects the supervised sub-models. For example, agreement of $P_\theta(\mathbf{z}_{Es} \mid \mathbf{x}_{En})$ (supervised) and $P_\theta(\mathbf{z}_{Es} \mid \mathbf{x}_{Fr})$ (zero-shot) may push the former towards a worse translation, especially at the beginning of training. To stabilize training, we apply the stop_gradient operator to the log probabilities and samples produced by the supervised sub-models before computing the agreement terms (9), which zeroes out the corresponding gradient updates.
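A minimal TensorFlow sketch (ours; scalar stand-ins instead of real decoder outputs) of how stop_gradient protects the supervised term when forming the scaled agreement loss:

import tensorflow as tf

# Toy scalar "log-probabilities" standing in for the supervised and zero-shot
# agreement terms; in the real model these come from the decoder.
theta = tf.Variable(1.0)

with tf.GradientTape() as tape:
    logp_supervised = -(theta - 2.0) ** 2   # stands in for the supervised agreement term
    logp_zero_shot = -(theta - 3.0) ** 2    # stands in for the zero-shot agreement term
    # Protect the supervised sub-model: block gradients flowing through its term,
    # so the agreement loss only updates the zero-shot side.
    agreement = tf.stop_gradient(logp_supervised) + logp_zero_shot
    loss = -0.1 * agreement                 # gamma = 0.1 scaling, purely illustrative

grad = tape.gradient(loss, theta)
print(float(grad))   # gradient comes only from the zero-shot term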

6 Experiments

We evaluate agreement-based training against baselines from the literature on three public datasets that provide multi-parallel evaluation data, which allows assessing zero-shot performance. We report results in terms of the BLEU score (Papineni et al., 2002), computed using mteval-v13a.perl.

6.1 Datasets

UN corpus.

Following the setup introduced in Sestorain et al. (2018), we use two datasets, UNcorpus-1 and UNcorpus-2, derived from the United Nations Parallel Corpus (Ziemski et al., 2016). UNcorpus-1 consists of data in 3 languages (En, Es, Fr), while UNcorpus-2 adds Ru as the 4th language. For training, we use parallel corpora between En and the rest of the languages, each about 1M sentences, sub-sampled from the official training data in a way that ensures no multi-parallel training data. The dev and test sets contain 4,000 sentences and are all multi-parallel.

Europarl v7 (http://www.statmt.org/europarl/).

We consider the following languages: De, En, Es, Fr. For training, we use parallel data between En and the rest of the languages (about 1M sentences per corpus), preprocessed to avoid multi-parallel sentences, as was also done by Cheng et al. (2017) and Chen et al. (2017) and described below. The dev and test sets contain 2,000 multi-parallel sentences.

IWSLT17 (https://sites.google.com/site/iwsltevaluation2017/TED-tasks).

We use data from the official multilingual task: 5 languages (En, De, It, Nl, Ro) and 20 translation tasks, 4 of which are zero-shot (De ↔ Nl and It ↔ Ro) and the remaining 16 supervised. Note that this dataset has a significant overlap between parallel corpora in the supervised directions (up to 100K sentence pairs per direction). This implicitly makes the dataset multi-parallel and defeats the purpose of zero-shot evaluation (Dabre et al., 2017). To avoid spurious effects, we also derived an IWSLT17* dataset from the original one by restricting supervised data to the English-centric directions only and removing overlapping pivoting sentences. We report results on both the official and the preprocessed datasets.

Preprocessing.

To properly evaluate systems in terms of zero-shot generalization, we preprocess Europarl and IWSLT17 to avoid multilingual parallel sentences of the form source-pivot-target, where source-target is a zero-shot direction. To do so, we follow Cheng et al. (2017) and Chen et al. (2017) and randomly split the overlapping pivot sentences of the original source-pivot and pivot-target corpora into two parts and merge them separately with the non-overlapping parts for each pair. Along with each parallel training sentence, we save information about the source and target tags, after which all the data is combined and shuffled. Finally, we build a shared multilingual subword vocabulary (Sennrich et al., 2015b) on the training data (with 32K merge ops), separately for each dataset. Data statistics are provided in Appendix A.5.
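The overlap-removal step can be sketched as follows (our illustration; the corpus format and sentences are toy placeholders):

# Sketch of the overlap-removal step: any English pivot sentence that occurs in both
# the En-Es and En-Fr corpora is randomly assigned to exactly one of them, so no
# Es-Fr pair can be reconstructed through a shared pivot. Corpus format is illustrative.
import random

def split_pivot_overlap(en_es, en_fr, seed=0):
    rng = random.Random(seed)
    overlap = {en for en, _ in en_es} & {en for en, _ in en_fr}
    keep_in_es = {en for en in overlap if rng.random() < 0.5}
    new_en_es = [(en, es) for en, es in en_es if en not in overlap or en in keep_in_es]
    new_en_fr = [(en, fr) for en, fr in en_fr if en not in overlap or en not in keep_in_es]
    return new_en_es, new_en_fr

en_es = [("good morning", "buenos dias"), ("thank you", "gracias")]
en_fr = [("good morning", "bonjour"), ("see you", "a bientot")]
print(split_pivot_overlap(en_es, en_fr))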

6.2 Training and evaluation

Additional details on the hyperparameters can be found in Appendix A.4.

Models.

We use a smaller version of the GNMT architecture (Wu et al., 2016) in all our experiments: 512-dimensional embeddings (separate for the source and target sides), 2 bidirectional LSTM layers of 512 units each for encoding, and a GNMT-style, 4-layer, 512-unit LSTM decoder with residual connections from the 2nd layer onward.
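For reference, the architecture above written out as a plain configuration dictionary (key names are ours, not tensor2tensor hyperparameters):

# The reduced GNMT setup used in the experiments, written out as a plain dictionary.
# Key names are illustrative, not actual tensor2tensor hyperparameter names.
gnmt_small = {
    "embedding_dim": 512,
    "separate_source_target_embeddings": True,
    "encoder": {"type": "bilstm", "layers": 2, "units": 512},
    "decoder": {
        "type": "lstm",
        "layers": 4,
        "units": 512,
        "residual_from_layer": 2,   # residual connections from the 2nd layer onward
    },
}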

Training.

We trained the above model using the standard method of Johnson et al. (2016) and using our proposed agreement-based training (Algorithm 1). In both cases, the model was optimized using Adafactor (Shazeer and Stern, 2018) on a machine with 4 P100 GPUs for up to 500K steps, with early stopping on the dev set.

Sestorain et al. (2018) Our baselines
PBSMT NMT-0 Dual-0 Basic Pivot Agree
61.26 51.93 56.58 56.58 56.36
50.09 40.56 44.27 44.27 44.80
59.89 51.58 55.70 55.70 55.24
52.22 43.33 46.46 46.46 46.17
Supervised (avg.) 55.87 46.85 50.75 50.75 50.64
52.44 20.29 36.68 34.75 38.10 37.54
49.79 19.01 39.19 37.67 40.84 40.02
Zero-shot (avg.) 51.11 19.69 37.93 36.21 39.47 38.78

Source: https://openreview.net/forum?id=ByecAoAqK7.

Sestorain et al. (2018) Our baselines
PBSMT NMT-0 Dual-0 Basic Pivot Agree
61.26 47.51 44.30 55.15 55.15 54.30
50.09 36.70 34.34 43.42 43.42 42.57
43.25 30.45 29.47 36.26 36.26 35.89
59.89 48.56 45.55 54.35 54.35 54.33
52.22 40.75 37.75 45.55 45.55 45.87
52.59 39.35 37.96 45.52 45.52 44.67
Supervised (avg.) 53.22 40.55 36.74 46.71 46.71 46.27
52.44 25.85 34.51 34.73 35.93 36.02
49.79 22.68 37.71 38.20 39.51 39.94
39.69 9.36 24.55 26.29 27.15 28.08
49.61 26.26 33.23 33.43 37.17 35.01
36.48 9.35 22.76 23.88 24.99 25.13
43.37 22.43 26.49 28.52 30.06 29.53
Zero-shot (avg.) 45.23 26.26 29.88 30.84 32.47 32.29

Table 1: Results on UNCorpus-1.
Table 2: Results on UNCorpus-2.
Previous work Our baselines
Soft Distill Basic Pivot Agree
34.69 34.69 33.80
23.06 23.06 22.44
31.40 33.87 33.87 32.55
31.96 34.77 34.77 34.53
26.55 29.06 29.06 29.07
33.67 33.67 33.30
Supervised (avg.) 31.52 31.52 30.95
18.23 20.14 20.70
20.28 26.50 22.45
30.57 33.86 27.99 32.56 30.94
27.12 32.96 29.91
23.79 27.03 21.36 25.67 24.45
18.57 19.86 19.15
Zero-shot (avg.) 22.25 26.28 24.60

Soft pivoting (Cheng et al., 2017). Distillation (Chen et al., 2017).

Table 3: Zero-shot results on Europarl. Note that Soft and Distill are not multilingual systems.
Previous work Our baselines
SOTA CPG Basic Pivot Agree
Supervised (avg.) 24.10 19.75 24.63 24.63 23.97
Zero-shot (avg.) 20.55 11.69 19.86 19.26 20.58

SOTA: Table 2 from Dabre et al. (2017). CPG: Table 2 from Platanios et al. (2018).

Basic Pivot Agree
Supervised (avg.) 28.72 28.72 29.17
Zero-shot (avg.) 12.61 17.68 15.23

Table 4: Results on the official IWSLT17 multilingual task.
Table 5: Results on our proposed IWSLT17*.

Evaluation.

We focus our evaluation mainly on zero-shot performance of the following methods:
(a) Basic, which stands for directly evaluating a multilingual GNMT model after standard training (Johnson et al., 2016).
(b) Pivot, which performs pivoting-based inference using a multilingual GNMT model (after standard training); this is often regarded as the gold standard (a minimal sketch of pivoting-based inference follows this list).
(c) Agree, which applies a multilingual GNMT model trained with agreement losses directly to zero-shot directions.
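A minimal sketch of the Pivot inference procedure (ours; translate is a hypothetical wrapper around the trained multilingual model, not an API from the paper):

# Sketch of pivoting-based inference for a zero-shot pair (src -> tgt) through English.
# `translate` is a hypothetical wrapper around a trained multilingual model that takes
# a sentence and a target-language tag and returns a decoded sentence.
def pivot_translate(sentence, src_lang, tgt_lang, translate, pivot_lang="En"):
    if src_lang == pivot_lang or tgt_lang == pivot_lang:
        return translate(sentence, tgt_lang)           # supervised direction, no pivot needed
    intermediate = translate(sentence, pivot_lang)     # src -> pivot (supervised)
    return translate(intermediate, tgt_lang)           # pivot -> tgt (supervised)

# Example with a dummy model stub:
dummy = lambda s, lang: f"[{lang}] {s}"
print(pivot_translate("hola mundo", "Es", "Fr", dummy))   # pivots through En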

To ensure a fair comparison in terms of model capacity, all the techniques above use the same multilingual GNMT architecture described in the previous section. All other results provided in the tables are as reported in the literature.

Implementation.

All our methods were implemented using TensorFlow (Abadi et al., 2016) on top of the tensor2tensor library (Vaswani et al., 2018). Our code will be made publicly available at www.cs.cmu.edu/~mshediva/code/.

6.3 Results on UN Corpus and Europarl

UN Corpus.

Tables 1 and 2 show results on the UNCorpus datasets. Our approach consistently outperforms Basic and Dual-0, despite the latter being trained with additional monolingual data (Sestorain et al., 2018). We see that models trained with agreement perform comparably to Pivot, outperforming it in some cases, e.g., when the target is Russian, perhaps because Russian is quite different linguistically from the English pivot.

Furthermore, unlike Dual-0, Agree maintains high performance in the supervised directions (within 1 BLEU point compared to Basic), indicating that our agreement-based approach is effective as a part of a single multilingual system.

Europarl.

Table 3 shows the results on the Europarl corpus. On this dataset, our approach consistently outperforms Basic by 2-3 BLEU points but lags a bit behind Pivot on average (except on one direction, where it is better). Cheng et al. (2017)¹⁰ and Chen et al. (2017) have reported zero-resource results on a subset of these directions, and our approach outperforms the former but not the latter on these pairs. Note that both Cheng et al. (2017) and Chen et al. (2017) train separate models for each language pair, and the approach of Chen et al. (2017) would require training $O(k^2)$ models to encompass all the pairs. In contrast, we use a single multilingual architecture which has more limited model capacity (although, in theory, our approach is also compatible with using separate models for each direction).

¹⁰ We only show their best zero-resource result in the table since some of their methods require direct parallel data.

Figure 4: BLEU on the dev set for Agree and the baselines trained on smaller subsets of the Europarl corpus.

6.4 Analysis of IWSLT17 zero-shot tasks

Table 4 presents results on the official IWSLT17 task. We note that because of the large amount of data overlap and the presence of many supervised translation pairs (16), the vanilla training method (Johnson et al., 2016) achieves very high zero-shot performance, even outperforming Pivot. While our approach gives small gains over these baselines, we believe the dataset's peculiarities make it unreliable for evaluating zero-shot generalization.

On the other hand, on our preprocessed IWSLT17* (Table 5), which eliminates the overlap and reduces the number of supervised directions (8), there is a considerable gap between the supervised and zero-shot performance of Basic. Agree performs better than Basic and is slightly worse than Pivot.

6.5 Small data regime

To better understand the dynamics of the different methods in the small data regime, we also trained all our methods on subsets of Europarl for 200K steps and evaluated them on the dev set. The training set size varied from 50K to 450K parallel sentences. From Figure 4, Basic tends to perform extremely poorly, while Agree is the most robust (also in terms of variance across zero-shot directions). We see that Agree generally upper-bounds Pivot, except for one language pair, perhaps due to fewer cascading errors along those directions.

7 Conclusion

In this work, we studied zero-shot generalization in the context of multilingual neural machine translation. First, we introduced the concept of zero-shot consistency that implies generalization. Next, we proposed a provably consistent agreement-based learning approach for zero-shot translation. Empirical results on three datasets showed that agreement-based learning results in up to +3 BLEU zero-shot improvement over the Johnson et al. (2016) baseline, compares favorably to other approaches in the literature (Cheng et al., 2017; Sestorain et al., 2018), is competitive with pivoting, and does not lose in performance on supervised directions.

We believe that the theory and methodology behind agreement-based learning could be useful beyond translation, especially in multi-modal settings. For instance, it could be applied to tasks such as cross-lingual natural language inference (Conneau et al., 2018), style transfer (Shen et al., 2017; Fu et al., 2017; Prabhumoye et al., 2018), or multilingual image or video captioning. Another interesting future direction would be to explore different hand-engineered or learned data representations, which one could use to encourage models to agree on during training (e.g., make translation models agree on latent semantic parses, summaries, or potentially other data representations available at training time).

Acknowledgments

We thank Ian Tenney and Anthony Platanios for many insightful discussions, Emily Pitler for the helpful comments on the early draft of the paper, and anonymous reviewers for careful reading and useful feedback.

References

Appendix A Appendices

a.1 Complete likelihood

Given a set of conditional models, $\{P_\theta(\mathbf{x}^j \mid \mathbf{x}^i)\}_{(i,j) \in E}$, we can write out the full likelihood over equivalent translations, $(\mathbf{x}^1, \dots, \mathbf{x}^k)$, as follows:

$P_\theta(\mathbf{x}^1, \dots, \mathbf{x}^k) := \frac{1}{Z_\theta} \prod_{(i,j) \in E} P_\theta(\mathbf{x}^j \mid \mathbf{x}^i)$   (11)

where $Z_\theta$ is the normalizing constant and $E$ denotes all edges in the graph (Figure 5). Given only bilingual parallel corpora, $C_{ij}$ for $(i,j) \in E_s$, we can observe only certain pairs of variables. Therefore, the log-likelihood of the data can be written as:

$\mathcal{L}(\theta) := \sum_{(i,j) \in E_s} \sum_{(\mathbf{x}^i, \mathbf{x}^j) \in C_{ij}} \log \sum_{\mathbf{z}} P_\theta(\mathbf{x}^i, \mathbf{x}^j, \mathbf{z})$   (12)

Here, the outer sum iterates over the available corpora. The middle sum iterates over parallel sentences in a corpus. The innermost sum marginalizes out the unobservable sequences, denoted $\mathbf{z}$, which are sentences equivalent under this model to $\mathbf{x}^i$ and $\mathbf{x}^j$ in languages other than $L_i$ and $L_j$. Note that due to the innermost summation, computing the log-likelihood is intractable.

We claim the following.

Claim. Maximizing the full log-likelihood yields zero-shot consistent models (in the sense of the definition given in Section 4).

Proof.

To better understand why this is the case, let us consider the example in Figure 5 and compute the log-likelihood of $(\mathbf{x}_{En}, \mathbf{x}_{Fr})$:

$\log P_\theta(\mathbf{x}_{En}, \mathbf{x}_{Fr}) = \log \sum_{\mathbf{z}_{Es}, \mathbf{z}_{Ru}} P_\theta(\mathbf{x}_{En}, \mathbf{x}_{Fr}, \mathbf{z}_{Es}, \mathbf{z}_{Ru})$

where the joint factorizes over the edges of the graph as in (11). Note that among its factors are the terms $P_\theta(\mathbf{z}_{Es} \mid \mathbf{x}_{En})\, P_\theta(\mathbf{z}_{Es} \mid \mathbf{x}_{Fr})$, which encourage agreement on the translation into Es (similarly, the analogous terms for Ru encourage agreement on the translation into Ru). Since all other terms are probabilities and are bounded by 1, we have:

$\log P_\theta(\mathbf{x}_{En}, \mathbf{x}_{Fr}) \leq \log \sum_{\mathbf{z}_{Es}, \mathbf{z}_{Ru}} P_\theta(\mathbf{z}_{Es} \mid \mathbf{x}_{En})\, P_\theta(\mathbf{z}_{Es} \mid \mathbf{x}_{Fr})\, P_\theta(\mathbf{z}_{Ru} \mid \mathbf{x}_{En})\, P_\theta(\mathbf{z}_{Ru} \mid \mathbf{x}_{Fr}) - \log Z_\theta$

In other words, the full log-likelihood lower-bounds the agreement objective (up to a constant $\log Z_\theta$). Since optimizing for agreement leads to consistency (Theorem 5.2), and maximizing the full likelihood would necessarily improve the agreement, the claim follows. ∎

Note that the other terms in the full likelihood also have a non-trivial purpose: (a) the terms $P_\theta(\mathbf{x}_{En} \mid \mathbf{z}_{Es})$, $P_\theta(\mathbf{x}_{En} \mid \mathbf{z}_{Ru})$, $P_\theta(\mathbf{x}_{Fr} \mid \mathbf{z}_{Es})$, $P_\theta(\mathbf{x}_{Fr} \mid \mathbf{z}_{Ru})$ encourage the model to correctly reconstruct $\mathbf{x}_{En}$ and $\mathbf{x}_{Fr}$ when back-translating from the unobserved languages, Es and Ru, and (b) the terms $P_\theta(\mathbf{z}_{Ru} \mid \mathbf{z}_{Es})$ and $P_\theta(\mathbf{z}_{Es} \mid \mathbf{z}_{Ru})$ enforce consistency between the latent representations. In other words, the full likelihood accounts for a combination of agreement, back-translation, and latent consistency.

Figure 5: Probabilistic graphical model for a multilingual system with four languages (En, Es, Fr, Ru). Variables can be observed only in pairs (shaded in the graph).

a.2 Proof of agreement consistency

The statement of Theorem 5.2 mentions an assumption on the true distribution of the equivalent translations. The assumption is as follows.

Assumption A.2. Let $P^\star(\mathbf{x}^k \mid \mathbf{x}^i, \mathbf{x}^j)$ be the ground-truth conditional distribution that specifies the probability of $\mathbf{x}^k$ being a translation of $\mathbf{x}^i$ and $\mathbf{x}^j$ into language $L_k$, given that $(\mathbf{x}^i, \mathbf{x}^j)$ are correct translations of each other in languages $L_i$ and $L_j$, respectively. We assume that $P^\star(\mathbf{x}^k \mid \mathbf{x}^i, \mathbf{x}^j)$ is bounded from below and from above on its support by positive constants.

This assumption means that, even though there might be multiple equivalent translations, there must not be too many of them (implied by the lower bound) and none of them must be much more preferable than the rest (implied by the upper bound). Given this assumption, we can prove the following simple lemma.

Lemma A.2. Let $(i \to j)$ be one of the supervised directions, i.e., $\mathbb{E}_{\mathbf{x}^i, \mathbf{x}^j}\left[-\log P_\theta(\mathbf{x}^j \mid \mathbf{x}^i)\right] \leq \varepsilon$. Then the following holds:

Proof.

First, using Jensen’s inequality, we have:

The bound on the supervised direction implies that

To bound the second term, we use Assumption A.2:

Putting these together yields the bound. ∎

Now, using Lemma A.2, we can prove Theorem 5.2.

Proof.

By assumption, the expected agreement-based loss is bounded by $\varepsilon$. Therefore, the expected cross-entropy on all supervised terms is bounded by $\varepsilon$. Moreover, the agreement term (which is part of the objective) is also bounded:

Expanding this expectation, we have:

Combining that with Lemma A.2, we have:

Since, by Assumption A.2, the remaining factors are bounded by constants, $\kappa(\varepsilon) \to 0$ as $\varepsilon \to 0$. ∎

a.3 Consistency of distillation and pivoting

As we mentioned in the main text of the paper, distillation (Chen et al., 2017) and pivoting yield zero-shot consistent models. Let us understand why this is the case.

In our notation, given $L_1 \to L_2$ and $L_2 \to L_3$ as the supervised directions, distillation optimizes a KL divergence between $P_\theta(\mathbf{x}^3 \mid \mathbf{x}^2)$ and $P_\theta(\mathbf{x}^3 \mid \mathbf{x}^1)$, where the latter is a zero-shot model and the former is supervised. Noting that the KL divergence lower-bounds cross-entropy, it is a lower bound on the agreement loss. Hence, by ensuring that the KL divergence is low, we also ensure that the models agree, which implies consistency (a more formal proof would follow exactly the same steps as the proof of Theorem 5.2).

To prove consistency of pivoting, we need an additional assumption on the quality of the source-pivot model.

Assumption A.3. Let $P_\theta(\mathbf{x}^2 \mid \mathbf{x}^1)$ be the source-pivot model. We assume that a corresponding bound holds for each pair of equivalent translations, $(\mathbf{x}^1, \mathbf{x}^2)$, with some constant $C$.

Claim (Pivoting consistency). Given the conditions of Theorem 5.2 and Assumption A.3, pivoting is zero-shot consistent.

Proof.

We can bound the expected error on pivoting as follows (using Jensen’s inequality and the conditions from our assumptions):

a.4 Details on the models and training

Architecture.

All our NMT models used the GNMT (Wu et al., 2016) architecture with Luong attention (Luong et al., 2015b), a 2-layer bidirectional encoder, and a 4-layer decoder with residual connections. All hidden layers (including embeddings) had 512 units. Additionally, we used separate embeddings on the encoder and decoder sides, and tied the weights of the softmax that produces logits with the decoder-side (i.e., target) embeddings. Standard dropout of 0.2 was used on all hidden layers. Most of the other hyperparameters were set to the defaults in the T2T (Vaswani et al., 2018) library for the text2text type of problems.

Training and hyperparameters.

We scaled the agreement terms in the loss by a small coefficient $\gamma$. The training was done using the Adafactor (Shazeer and Stern, 2018) optimizer with 10,000 burn-in steps at a 0.01 learning rate and further standard square-root decay (with the default settings for the decay from the T2T library). Additionally, we implemented the agreement loss as a separate computation subgraph that was not computed if $\gamma$ was set to 0. This allowed us to start training multilingual NMT models in the burn-in mode using the composite likelihood objective and then switch on the agreement term at some point during optimization (typically, after the first 100K iterations; we also experimented with 0, 50K, and 200K, but did not notice any difference in terms of final performance). Since the agreement subgraph was not computed during the initial training phase, this accelerated training of the agreement models.
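A sketch of the delayed activation of the agreement term described above (our illustration; the callables, the coefficient value, and the switch point are placeholders):

# Sketch: enable the agreement term only after a burn-in phase, as described above.
# `supervised_loss_fn` and `agreement_loss_fn` are hypothetical callables; the 100K-step
# switch point and the coefficient value are placeholders.
def total_loss(step, batch, supervised_loss_fn, agreement_loss_fn,
               gamma=0.01, agreement_start_step=100_000):
    loss = supervised_loss_fn(batch)
    if gamma > 0.0 and step >= agreement_start_step:
        # The agreement subgraph is only evaluated once it is switched on,
        # so the burn-in phase runs at the speed of plain composite-likelihood training.
        loss = loss + gamma * agreement_loss_fn(batch)
    return loss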

a.5 Details on the datasets

Statistics of the IWSLT17 and IWSLT17* datasets are summarized in Table 6. The UNCorpus and Europarl datasets were exactly as described by Sestorain et al. (2018) and by Chen et al. (2017) and Cheng et al. (2017), respectively.

Corpus Directions Train Dev (dev2010) Test (tst2010)
IWSLT17 206k 888 1568
205k 923 1567
0 1001 1567
201k 912 1677
206k 888 1568
231K 929 1566
237k 1003 1777
220k 914 1678
205k 923 1567
231k 929 1566
205k 1001 1669
0 914 1643
0 1001 1779
237k 1003 1777
233k 1001 1669
206k 913 1680
201k 912 1677
220k 914 1678
0 914 1643
206k 913 1680
IWSLT17* 124k 888 1568
0 923 1567
0 1001 1567
0 912 1677
124k 888 1568
139k 929 1566
155k 1003 1777
128k 914 1678
0 923 1567
139k 929 1566
0 1001 1669
0 914 1643
0 1001 1779
155k 1003 1777
0 1001 1669
0 913 1680
0 912 1677
128k 914 1678
0 914 1643
0 913 1680
Table 6: Data statistics for IWSLT17 and IWSLT17*. Note that training data in IWSLT17* was restricted to the English-centric directions only and cleaned from complete pivots through En, which also reduced the number of parallel sentences in each supervised direction.