Learning a universal NLP model that supports multiple languages is usually a necessity in practical systems, such as multilingual machine translation and multilingual sentence understanding (dong-etal-2015-multi; zoph2016multisource; firat2016multiway; wu2016googles; johnson2017googles; lee2017fully; firat2016zeroresource; gu2018universal; lewis2020mlqa; lample2019crosslingual; conneau2020unsupervised). The Transformer (vaswani2017attention) is the most widely used neural network architecture in natural language processing. To handle sentences from different languages, people design ways to provide the "language" signal to the multilingual Transformer. Language embedding (tan2019multilingual; lample2019crosslingual; huang2019unicoder; chi2019crosslingual; liu2020multilingual; tang2020multilingual) is a popular choice, which views each language as a symbol with a learnable embedding vector. Previous works provide two approaches to using the language symbol: attaching it to the beginning of the sentence, or adding its embedding to the word embedding at each position. Using such information, Transformers try to learn the word meanings in the corresponding language and obtain the contextual word representations accordingly.
In this work, we revisit the use of language embedding in Transformer and find that the current approaches may be ineffective in learning multilingual representations. To show the problem clearly, we study the interaction between word embedding and language embedding in the Transformer layers and find that in both approaches, the word-language correlation will be computed in the self-attention module. We question whether this word-language correlation is useful in capturing semantic relations of words at different positions in the sentence. According to our empirical study, we observe that this correlation seems to reflect the popularity of a word appearing in a language to a certain extent. Obviously, such popularity cannot reflect whether two words have a strong semantic relationship in a language.
The analysis inspires us to think further about the proper way to encode "language" in a multilingual Transformer model. As the vocabulary is shared (i.e., one word-unit corresponds to one embedding vector), one word-unit may appear in multiple languages and have different meanings. We hope that with the language encoding, the Transformer model can receive language-specific word meanings and learn the contextual word representations based on them. Motivated by this, we propose a novel language encoding called Cross-lingual Language Projection (XLP). Instead of viewing "languages" as vectors, we view different "languages" as different projection functions, e.g., linear transformation matrices with learnable parameters. Given any sentence, XLP projects the word embeddings into a language-specific semantic space using the corresponding projection function. The Transformer then takes the projected word embeddings as input, calculates word-word correlations in the language-specific semantic space, and obtains the representations of the words and the sentence. See Figure 1 for an illustration.
XLP is conceptually simple and easy to implement with barely any additional computational overhead, yet shows promising performance gains on a wide range of multilingual applications. Specifically, XLP gains 1.2% accuracy improvement on the zero-shot cross-lingual task XNLI (conneau2018xnli) compared to the previous state-of-the-art pre-trained model. XLP also shows better performance on multilingual translation tasks: it consistently improves BLEU scores on translating different languages to English compared to the baseline models.
2.1 Attention Module
The attention module (vaswani2017attention) is formulated as querying a dictionary with key-value pairs, e.g., $\mathrm{Attn}(Q,K,V)=\mathrm{softmax}\big(\frac{QK^{\top}}{\sqrt{d}}\big)V$, where $d$ is the dimensionality of the hidden representations, and $Q$ (Query), $K$ (Key), $V$ (Value) are specified as the hidden representations of the previous layer. The multi-head variant of the attention module is popularly used, as it allows the model to jointly attend to information from different representation sub-spaces, and is defined as

$\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}(H_1,\dots,H_h)W^{O}$, with $H_i=\mathrm{Attn}(QW_i^{Q},\,KW_i^{K},\,VW_i^{V})$,

where $W_i^{Q}\in\mathbb{R}^{d\times d_K}$, $W_i^{K}\in\mathbb{R}^{d\times d_K}$, $W_i^{V}\in\mathbb{R}^{d\times d_V}$, and $W^{O}\in\mathbb{R}^{hd_V\times d}$ are learnable projection matrices, $h$ is the number of heads, and $d_K$ and $d_V$ are the dimensionalities of Key and Value.
The self-attention module is one of the key components in the Transformer and the BERT encoder (devlin2019bert). For simplicity, we use the single-head self-attention module and set $d_K=d_V=d$ for demonstration. We denote $X=[x_1,x_2,\dots,x_n]^{\top}\in\mathbb{R}^{n\times d}$ as the input to the self-attention module in the $l$-th layer, where $n$ is the length of the sequence and each vector $x_i\in\mathbb{R}^{d}$ is the contextual representation of the token at position $i$. $Z=[z_1,z_2,\dots,z_n]^{\top}$ denotes the output of the attention module. Then, the self-attention module can be written as

$z_i=\sum_{j=1}^{n}\frac{\exp(\alpha_{ij})}{\sum_{j'=1}^{n}\exp(\alpha_{ij'})}\,x_j W^{V},\qquad \alpha_{ij}=\frac{1}{\sqrt{d}}\big(x_i W^{Q}\big)\big(x_j W^{K}\big)^{\top}.\qquad(2)$
As we can see, in any sentence, the self-attention module calculates the correlation between information at different positions and uses the correlation (i.e., attention) to obtain the contextual representation of each word by considering its surroundings.
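The single-head self-attention computation above can be sketched in a few lines of NumPy (a minimal illustration with arbitrary dimensions, not the paper's implementation):

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with max-subtraction for numerical stability.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, WQ, WK, WV):
    """Single-head self-attention with d_K = d_V = d, as in Eq. 2."""
    Q, K, V = X @ WQ, X @ WK, X @ WV
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))  # n x n word-word correlations (attention)
    return A @ V                       # contextual representation per position

rng = np.random.default_rng(0)
n, d = 5, 8                            # sequence length, hidden dimension
X = rng.normal(size=(n, d))            # one row per token
WQ, WK, WV = (rng.normal(size=(d, d)) for _ in range(3))
Z = self_attention(X, WQ, WK, WV)      # shape (5, 8)
```

Each row of the attention matrix sums to one, so every output vector is a convex combination of the (projected) inputs.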
2.2 Language Encoding in Multilingual Transformer
When training a multilingual model, a shared vocabulary (of words or sub-words) covering all the languages is first prepared. A learnable word embedding is assigned to each word in the vocabulary. Then a Transformer model, which takes the word embeddings (and the positional embeddings) as input, is optimized using pre-defined objective functions on the multilingual training data. For example, in multilingual machine translation, an encoder-decoder Transformer is trained to maximize the conditional log-likelihood of the target sentence given the source sentence, using translation pairs in all the languages.
However, we usually need to feed the model an additional signal indicating which language a sentence comes from. Sometimes, such information is essential; for example, in multilingual machine translation, the model can generate a translation only if we provide the name of the target language we are requesting. To encode such information, previous works design a specific symbol for each language with a learnable embedding vector. There are generally two approaches to using language embeddings. The first approach (attaching approach for short) attaches the corresponding symbol to the beginning of the sentence (wu2016googles; johnson2017googles; liu2020multilingual; tang2020multilingual). The second approach (additive approach for short) adds the language embedding to the word embedding at each position (tan2019multilingual; lample2019crosslingual; huang2019unicoder; chi2019crosslingual). With the help of language embeddings, the model receives the "language" information explicitly from the input and learns the sentence representations through the Transformer layers.
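The two language encoding approaches can be sketched as follows (a toy NumPy illustration; the array sizes and variable names are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_langs, d = 100, 3, 16
word_emb = rng.normal(size=(vocab_size, d))  # shared multilingual vocabulary
lang_emb = rng.normal(size=(n_langs, d))     # one learnable vector per language

sentence = np.array([5, 17, 42])             # token ids of a sentence
lang_id = 1                                  # which language the sentence is in

# Attaching approach: prepend the language symbol as an extra token.
attach_input = np.vstack([lang_emb[lang_id], word_emb[sentence]])  # (n+1) x d

# Additive approach: add the language embedding at every position.
additive_input = word_emb[sentence] + lang_emb[lang_id]            # n x d
```

In both cases the Transformer receives the language signal only through these input vectors.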
3 Cross-Lingual Language Projection
3.1 Revisiting Language Embedding
We are interested in the role of the language embeddings in learning multilingual representations through Transformer layers. Assume that there are $M$ languages in the multilingual data corpus. Following the notations in Section 2.1, we denote $l^{k}\in\mathbb{R}^{d}$ as the language embedding for the $k$-th language, where $k\in\{1,\dots,M\}$. Denote $X=[x_1,\dots,x_n]^{\top}$ as a sentence in the $k$-th language, where each $x_i$ is a word embedding. It is easy to show that when applying either the attaching approach or the additive approach, the self-attention (Eq. 2) will calculate the correlation between word and language embeddings through dot-products. We show this explicitly for the additive approach, while the analysis and results for the attaching approach are similar.
In the additive approach, the input to the Transformer model is $[x_1+l^{k},x_2+l^{k},\dots,x_n+l^{k}]$. (Usually, a positional embedding is another term added to the word embedding in the input. We omit it here for a better illustration of how language embeddings interact with words; the conclusions do not change if we take the positional embedding into consideration. Recent works also show that positional embedding is not an essential term in the input (ke2020rethinking; shaw2018self).) Then, in the self-attention module of the first Transformer layer, the correlation term $\alpha_{ij}$ in Eq. 2 can be expanded as:

$\alpha_{ij}=\frac{1}{\sqrt{d}}\big((x_i+l^{k})W^{Q}\big)\big((x_j+l^{k})W^{K}\big)^{\top}=\frac{1}{\sqrt{d}}\big(x_i W^{Q}W^{K\top}x_j^{\top}+l^{k}W^{Q}W^{K\top}x_j^{\top}+x_i W^{Q}W^{K\top}l^{k\top}+l^{k}W^{Q}W^{K\top}l^{k\top}\big).\qquad(3)$
It can be seen that there are four terms in the expansion: word–word, language–word, word–language, and language–language correlations. The first term characterizes the relationship between a pair of words, and the language embedding is involved in the calculation of the other three. The last term is obviously redundant: its value $\frac{1}{\sqrt{d}}\,l^{k}W^{Q}W^{K\top}l^{k\top}$ is the same constant for every pair $(i,j)$, and adding the same constant to all the attention logits of a query position does not change the output of the softmax function.
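The redundancy of the language–language term can be checked numerically: shifting every attention logit by the same constant leaves the softmax output unchanged (a small NumPy sanity check):

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with max-subtraction for numerical stability.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4))  # word-word attention logits for a 4-token sentence
c = 2.718                         # stands in for the constant language-language term
# Shifting every logit by the same constant leaves the attention unchanged.
shift_invariant = np.allclose(softmax(logits), softmax(logits + c))
```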
The two terms in the middle calculate the correlations between word embeddings and language embeddings. In particular, as can be seen from Eq. 3, every word $x_i$ computes its correlation with the same language embedding $l^{k}$ throughout sentence $X$. We argue that such correlations cannot reflect how two words in language $k$ correlate with each other (e.g., have similar meanings). To study the function of the two terms, we download the officially released XLM15 model (lample2019crosslingual) and calculate those values on sentences sampled from the Wikipedia data corpus. We showcase the results in Figure 2. Empirically, we observe that the values seem to reflect, to a certain extent, how frequently a word appears in a language; e.g., "the" and "of" have relatively large dot-products with the language embedding of English. But obviously, such word-frequency correlation need not reflect word-semantic correlation well.
3.2 From Language Embedding to Language Projection
The discussion above reveals some issues in the previous approaches and further motivates us to think about a better way to encode language information into a multilingual Transformer model. Note that the model uses a shared vocabulary, and a word-unit (or sub-word) may appear in multiple languages. We argue that for any language, the language encoding should provide the Transformer with language-specific meanings of the words in a sentence.
Our idea is to use language projection instead of language embedding, which provides language-specific word representations by projecting the word embeddings into a language-specific semantic space. We denote $F_k$ as the projection function for the $k$-th language. For any sentence $X=[x_1,\dots,x_n]^{\top}$ from language $k$, we first project each word embedding $x_i$ to $F_k(x_i)$, which characterizes the semantic meaning of $x_i$ in language $k$. After the projection, the language-specific word embedding $F_k(x_i)$ is used as input to the Transformer model instead of $x_i$. Mathematically, we have the following form in the self-attention calculation:

$\alpha_{ij}=\frac{1}{\sqrt{d}}\big(F_k(x_i)W^{Q}\big)\big(F_k(x_j)W^{K}\big)^{\top}.\qquad(4)$
We call our method Cross-lingual Language Projection (XLP for short). Linear projection is a popularly used semantic projection in many previous works (vaswani2017attention; conneau2017word), and we use a linear projection function in XLP. We define $F_k(x)=xW^{L_k}$, where $W^{L_k}\in\mathbb{R}^{d\times d}$ is a learnable matrix. Then Eq. 4 becomes

$\alpha_{ij}=\frac{1}{\sqrt{d}}\big(x_i W^{L_k}W^{Q}\big)\big(x_j W^{L_k}W^{K}\big)^{\top}.\qquad(5)$
It is worth noting that introducing the language projection matrix $W^{L_k}$ can be viewed as a decoupling of the original projection matrices $W^{Q}$ and $W^{K}$ in the self-attention module. Essentially, $W^{L_k}$ learns a language-specific projection that transforms the word embeddings to the language-specific semantic space. At the same time, $W^{Q}$ and $W^{K}$ still learn to project the semantic information to a proper subspace, as in the standard monolingual Transformer.
In Figure 1, we illustrate XLP and compare it with the language embedding (the additive approach). It can be seen that using the language embedding is equivalent to shifting the word embedding space by a language-specific bias. In contrast, our proposed language projection projects the word embedding to language-specific semantic space. Thus, the self-attention module can obtain the language-specific word correlations and learn more efficiently.
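The geometric difference between the two encodings can be illustrated numerically: a language-specific shift leaves the difference between any two word embeddings unchanged, whereas a language-specific projection reshapes the space per language (a toy NumPy sketch with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
x, y = rng.normal(size=d), rng.normal(size=d)  # embeddings of two word-units
l = rng.normal(size=d)                         # a language embedding (additive)
W = rng.normal(size=(d, d)) / np.sqrt(d)       # a language projection (XLP)

# The additive encoding is a rigid shift: word differences are untouched.
assert np.allclose((x + l) - (y + l), x - y)

# The projection maps words into a language-specific space, so the relative
# geometry (e.g., difference vectors) genuinely changes per language.
projected_diff = W @ x - W @ y                 # equals W @ (x - y), not x - y
```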
3.3 Implementation and Discussions
Incorporating XLP with the positional embedding. Positional encoding is an essential component in the Transformer, since the other main components of the model are entirely invariant to sequence order. The absolute positional encoding is the most popularly used one, which provides each position with an embedding vector. The positional embedding is added to the word embedding, which has been found significantly helpful for learning the contextual representations of words at different positions. In XLP, the language projection is applied only to the word embedding. That is, for any sentence, we first project the word embeddings using XLP and then add the positional embeddings.
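This ordering can be sketched as follows (a minimal NumPy illustration; the dimensions and names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 16
word_emb = rng.normal(size=(n, d))             # word embeddings of the sentence
pos_emb = rng.normal(size=(n, d))              # absolute positional embeddings
W_lang = rng.normal(size=(d, d)) / np.sqrt(d)  # the sentence's language projection

# First project the word embeddings, then add the positional embeddings.
transformer_input = word_emb @ W_lang + pos_emb
```

Note that the positional embeddings are deliberately left outside the projection, so position information is shared across languages.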
The increase of parameters and efficiency. The language embedding approaches use $M$ $d$-dimensional vectors, where $M$ is the number of languages and $d$ is the embedding dimension. XLP uses $M$ $d\times d$ matrices, which is slightly larger than previous approaches. Taking the XLM15 (lample2019crosslingual) architecture as an example, the newly introduced language projection parameters amount to about 15M, which is only about 6% of the 250M parameters in XLM15. Since the size of XLP does not depend on the number of Transformer layers, the parameter increase becomes negligible for deeper models. In terms of efficiency, XLP only needs an additional linear transformation in the input layer, which introduces barely any computational overhead compared to the stacked Transformer layers.
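The stated parameter overhead can be verified with simple arithmetic, using the sizes quoted above:

```python
n_langs, d = 15, 1024                 # XLM15: 15 languages, hidden size 1024
xlp_params = n_langs * d * d          # one d x d projection matrix per language
xlm15_params = 250_000_000            # total parameters quoted for XLM15
print(xlp_params)                     # 15728640, i.e. about 15.7M
print(round(100 * xlp_params / xlm15_params, 1))  # about 6.3 (percent)
```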
We conduct extensive experiments on multilingual language understanding and multilingual machine translation tasks to verify our proposed XLP. The code is implemented based on fairseq (https://github.com/pytorch/fairseq/) (ott2019fairseq) and XLM (https://github.com/facebookresearch/XLM/) in PyTorch (paszke2019pytorch). Models are trained on 16/8 NVIDIA Tesla V100 GPUs with mixed precision (micikevicius2018mixed) for the multilingual language understanding / machine translation tasks, respectively.
4.1 Multilingual Language Understanding
[Table 1: XNLI accuracy in the Cross-Lingual Transfer setting (fine-tune the multilingual model on the English training set) and the Translate-Train setting (fine-tune the multilingual model on each language's training set).]
Model configurations. For a fair comparison, we implement XLP based on the XLM15 (lample2019crosslingual) architecture (250M parameters). Specifically, XLP consists of 12 Transformer encoder layers. For each layer, the dimensions of the hidden representation and the feed-forward layer are set to 1024 and 4096, respectively. The number of heads in the attention module is set to 8. The XLM15 model supports 15 languages, and we accordingly use 15 projection matrices of shape 1024×1024 in XLP.
Pre-training. Following lample2019crosslingual, we use the Wikipedia corpus (15 languages) for pre-training. Detailed descriptions of the dataset can be found in Appendix A. We perform a couple of consecutive pre-processing steps following lample2019crosslingual: normalizing, lower-casing, and tokenizing the texts with the Moses decoder (https://github.com/moses-smt/mosesdecoder) (koehn-etal-2007-moses), and finally applying byte pair encoding (BPE) (https://github.com/glample/fastBPE) (sennrich2016neural) with the same codes (size 80k) and vocabulary (size 95k) as XLM15. We use masked language modeling (devlin2019bert) as the training objective. The model is trained for 750k steps. The batch size is set to 64 per GPU, the same as for XLM (lample2019crosslingual). Gradients are accumulated every 4 optimization steps. Detailed descriptions of the settings are presented in Appendix B.1.
Fine-tuning. We use XNLI (Cross-lingual Natural Language Inference) (conneau2018xnli) as the downstream evaluation benchmark to compare our proposed model with the baselines. Given a premise sentence and a hypothesis sentence in a language, the goal of the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). XNLI dataset contains 15 languages, including low-resource languages such as Swahili and Urdu.
Following lample2019crosslingual, we evaluate the pre-trained models on the XNLI tasks in two settings. The first setting is called Cross-Lingual Transfer, in which we fine-tune the model on the English training set and evaluate it on the test sets of all languages. The second setting is called Translate-Train, in which we fine-tune and evaluate the model on the dataset of each language, respectively. For all the downstream experiments, we strictly follow lample2019crosslingual for the hyperparameter configuration and search space, using the official script (https://github.com/facebookresearch/XLM/#fine-tune-your-xlm-model-on-cross-lingual-classification-xnli).
Results. The fine-tuning performance on XNLI is presented in Table 1. Languages are ordered by resource magnitude. We compare our model with five baselines: an LSTM-based model (conneau2018xnli), a supervised model trained using translation pairs (artetxe2019massively), the multilingual BERT model (mBERT) (devlin2019bert), and the officially released XLM models (XLM15 and XLM100). For all baselines, we use the numbers reported in their original papers.
It can be easily seen that XLP outperforms all baselines significantly. In the Cross-Lingual Transfer setting, XLP obtains 72.7% averaged accuracy, which outperforms the XLM (lample2019crosslingual) and mBERT (devlin2019bert) by 1.2% and 6.4% respectively. When we fine-tune the pre-trained model on each language respectively (Translate-Train), XLP still achieves 1.8% improvement over XLM.
Moreover, XLP obtains a more balanced performance over the 15 languages. For high-resource languages, our approach is competitive with (English) or slightly better than (French, Greek) previous works. For extremely low-resource languages such as Swahili and Urdu, XLP outperforms XLM by a significant margin (2.5% and 1.5%, respectively). All the results indicate that our proposed XLP helps the model learn multilingual sentence representations better.
4.2 Multilingual Machine Translation
[Table 2, baseline rows (BLEU on the six X→EN directions and their average): Language Embedding (attaching): 30.5, 33.7, 40.1, 41.9, 23.3, 18.5; avg 31.3. Language Embedding (additive): 30.6, 33.8, 40.1, 42.0, 23.2, 18.4; avg 31.4.]
Model Configurations. Following vaswani2017attention, we use a 6-layer encoder-decoder Transformer in the machine translation tasks. The dimensions of the hidden representation and the feed-forward layer are set to 512 and 1024, respectively. The number of heads in the attention module is set to 4. We evaluate the two language embedding approaches described in Section 2.2 as our baselines: the attaching approach and the additive approach. All experiments use the same training and inference configurations. Detailed descriptions of the settings are presented in Appendix B.2.
Datasets. We collect translation pairs between 6 languages and English from the IWSLT evaluation campaign (https://wit3.fbk.eu/2014-01) (IWSLT 2014). Details about the datasets can be found in Appendix A. All the sentences are first tokenized with the Moses tokenizer and then segmented into subword symbols using BPE. We learn the BPE merge operations across all the languages by setting the size of the BPE codes to 30000, and obtain a joint vocabulary of size 38413.
Training and Inference. We concatenate all the datasets to train a universal multilingual translation model. The mini-batch size is set to 4096 tokens per GPU. We use Adam (kingma2017adam) as the optimizer, setting the hyperparameter $\epsilon$ to 1e-8 and $(\beta_1,\beta_2)$ to (0.9, 0.98). The peak learning rate is set to 5e-4 with a 4k-step warm-up stage followed by an inverse square-root learning rate scheduler. We set the dropout probability to 0.3 and weight decay to 1e-4. Label-smoothed cross-entropy (szegedy2015rethinking) is used as the objective function. The number of training epochs is set to 180. We evaluate translation quality by tokenized BLEU with sacreBLEU (https://github.com/mjpost/sacrebleu) (post-2018-call).
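The warm-up plus inverse square-root schedule described above can be sketched as follows (a common fairseq-style formulation; the exact implementation may differ):

```python
def lr_at(step, peak_lr=5e-4, warmup=4000):
    """Linear warm-up to peak_lr, then inverse square-root decay."""
    if step < warmup:
        return peak_lr * step / warmup
    return peak_lr * (warmup / step) ** 0.5

# The learning rate peaks at step 4000 and halves every time steps quadruple.
assert abs(lr_at(4000) - 5e-4) < 1e-12
assert abs(lr_at(16000) - 2.5e-4) < 1e-12
```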
Results. The multilingual translation results are presented in Table 2. First, our proposed XLP consistently outperforms the language embedding approaches (attaching and additive), achieving an average 0.5 BLEU improvement over them. Besides, the experimental results also show that XLP works better on the low-resource TR→EN dataset (1.0 BLEU improvement). The overall comparison on the multilingual translation task further demonstrates the effectiveness of our proposed XLP.
Training Efficiency. By using the language projection, the Transformer model can receive concrete language-specific semantic information as input, which makes the model easier to train. To show this, we study the validation loss curves of XLP and the baselines.
In cross-lingual language model pre-training, since XLM (lample2019crosslingual) did not release any intermediate model checkpoints, we pre-trained both XLM and XLP with the same pre-training hyperparameters and checked the intermediate model performance. In Figure 2(a), we plot the pre-training validation loss of XLM and XLP for the first 350K steps. In multilingual machine translation, we compare the validation loss curves of XLP and the previous language embedding approaches; the result is shown in Figure 2(b). It can be easily seen that our proposed XLP reaches comparable validation loss in fewer steps than the previous approaches.
Cross-Lingual Transfer Gap. The XTREME benchmark (hu2020xtreme) proposed the cross-lingual transfer gap to evaluate multilingual models. The gap is measured as the difference between the performance on the English data and the averaged performance on all other languages. In this way, we can estimate how well a multilingual model transfers its knowledge from English to other languages. As shown in Table 3, in terms of this metric, our proposed XLP significantly outperforms mBERT, XLM100 and XLM15 by 5.7%, 2.7% and 1.3%, respectively, on XNLI.
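The transfer-gap metric is straightforward to compute (a sketch with hypothetical scores, not the paper's numbers):

```python
def transfer_gap(scores, english="en"):
    """English score minus the average over all other languages (XTREME-style)."""
    others = [v for k, v in scores.items() if k != english]
    return scores[english] - sum(others) / len(others)

# Hypothetical accuracies, for illustration only.
print(transfer_gap({"en": 85.0, "fr": 79.0, "sw": 69.0}))  # 11.0
```

A smaller gap indicates that the model's English knowledge transfers more evenly to the other languages.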
To better understand the learned language projections in XLP, we design methods to visualize them. Through the experiments, we empirically verify that the language-specific projections indeed encode the language information and help the model efficiently capture the language-specific word correlations, while language embeddings do not.
First, we investigate whether the language encoding approaches learn language-specific information. The high-level idea is that if a language encoding approach learns language-specific information, the embeddings of a word processed by that approach should differ across languages. To show this, we compare the released XLM15 model with our pre-trained XLP model on 15 languages. We first sample some English words (sub-words) from the multilingual vocabulary. For each word, we process its word embedding with the different language encoding approaches (additive v.s. projection) and obtain language-specific embeddings of the same word. We then calculate the cosine similarity of these embeddings to see whether they are similar, which forms a similarity matrix for each model.
As shown in Figure 3(a), the elements in the similarity matrix of XLM are surprisingly high, while for XLP in Figure 3(b), the element values are quite diverse. This demonstrates that our language projection indeed projects the word embeddings to different semantic spaces as we expect.
Taking one step further, we investigate whether the language encoding can help the model capture the word correlations. We select three topics first and then select three English words in each topic: [happy, glad, sad], [car, plane, bike] and [meat, food, rice]. We process the word embeddings into “English” with different language encoding approaches (additive v.s. projection), and then compute the cosine similarity of the processed words. The word similarities are presented in Figure 4(a) and 4(b). From the two figures, we can see that for XLP, the words in the same topic have strong correlations, while for words in different topics, the correlations are weak. However, in XLM, word pairs seem to have similar correlations. This suggests our proposed XLP captures the correlation of words better.
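The similarity analysis can be sketched as follows (with random stand-in embeddings and a hypothetical projection matrix, purely for illustration):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
d = 16
words = ["happy", "glad", "car"]
emb = {w: rng.normal(size=d) for w in words}  # stand-in shared word embeddings
W_en = rng.normal(size=(d, d)) / np.sqrt(d)   # hypothetical "English" projection

# Pairwise similarity of the words after projecting into the English space.
sims = {(a, b): cosine(W_en @ emb[a], W_en @ emb[b])
        for i, a in enumerate(words) for b in words[i + 1:]}
```

In the paper's actual analysis, the embeddings and projections come from the trained XLM15 and XLP checkpoints rather than random arrays.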
5 Related Work
Before the development of the Transformer model, Google built the first multilingual neural machine translation system (GMNMT) (johnson2017googles) based on LSTM networks (hochreiter1997long) and introduced the language symbol with the attaching approach. Soon, this method was adopted in Transformer-based multilingual translation systems (liu2020multilingual; tang2020multilingual). XLM (lample2019crosslingual) proposed to add the language embedding to the word embedding at each position for multilingual language understanding tasks. huang2019unicoder and chi2019crosslingual follow this language encoding approach and develop further self-supervised objective functions.
There are several works studying the language-specific and language-agnostic parameters in a universal multilingual model. To build a multilingual machine translation system, early works aimed to increase the shared model capacity from the separated bilingual models to enhance the cross-lingual transfer (dong-etal-2015-multi; zoph2016multisource; lee2017fully; firat2016multiway). After the success of the first universal multilingual machine translation system (wu2016googles; johnson2017googles), emergent works (blackwood2018multilingual; V_zquez_2019; escolano2020multilingual) started to investigate which components should be shared between languages and which components should be kept as language-specific in a universal multilingual model.
Recently, an interesting concurrent work (zhang2021share) provided a comprehensive empirical study of language-specific capacity in a universal multilingual machine translation model. This work suggests that using mixed language-specific and language-agnostic parameters in every sub-layer of the Transformer model is a better choice, letting the model learn to control the shared capacity by itself. However, it does not take the language encodings into consideration but focuses on the upper Transformer layers. Complementary to zhang2021share, we investigate the previous language embedding approaches and propose language-specific projections for better language encoding.
In this paper, we revisit the use of language embedding in the multilingual Transformer and identify several problems in the existing formulations. We propose a new approach called Cross-lingual Language Projection (XLP) to address the issues. XLP uses language-specific transformations to project the word embeddings into language-specific semantic space, which achieves the purpose of appropriately encoding “language” in a multilingual Transformer. Extensive experiments demonstrate that the multilingual Transformer models using our proposed XLP consistently outperform those with previous language embedding approaches on multilingual language understanding and machine translation benchmarks.
Appendix A Datasets
Following lample2019crosslingual, we use Wikipedia in 15 languages as the pre-training data corpus, whose size is roughly 42 GB. The dataset statistics are listed in Table 4. We use WikiExtractor (https://github.com/attardi/wikiextractor) to extract raw sentences and perform a couple of consecutive pre-processing steps following lample2019crosslingual: normalizing, lower-casing, and tokenizing the texts with the Moses decoder (https://github.com/moses-smt/mosesdecoder) (koehn-etal-2007-moses) (the Stanford Word Segmenter (https://nlp.stanford.edu/software/segmenter.html) for Chinese and PyThaiNLP (https://github.com/PyThaiNLP/pythainlp) for Thai), and finally applying byte pair encoding (BPE) (https://github.com/glample/fastBPE) (sennrich2016neural) with the same codes (size 80000) and vocabulary (size 95000) as XLM15.
[Table 4 columns: ISO code, Language, Samples (M), Size (GiB).]
XNLI (Cross-lingual Natural Language Inference) benchmark (conneau2018xnli)
is a cross-lingual extension of the NLI task. Given a premise sentence and a hypothesis sentence in a language, the goal of the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The XNLI dataset is constructed by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, making 112,500 annotated pairs in total. For each language, we have 2490 samples for validation and 5010 samples for test.
Following lample2019crosslingual, we evaluate the pre-trained models on the XNLI tasks in two settings. The first setting is called Cross-Lingual Transfer, in which we fine-tune the model on the English training set and evaluate it on the test sets of all languages. The second setting is called Translate-Train, in which we fine-tune and evaluate the model on the dataset of each language, respectively.
A.3 Multilingual Machine Translation
For pre-processing, all the sentences are first tokenized with Moses tokenizer and then segmented into subword symbols using Byte Pair Encoding (BPE). Note that we use the Stanford Word Segmenter to tokenize the sentences in Chinese. We learn the BPE merge operations across all the languages by setting the size of the BPE codes to 30000 and obtain a joint vocabulary with size 38413.
Appendix B Training Configurations
B.1 Multilingual Language Understanding
We use several competitive baselines for comparison: (1) conneau2018xnli: the baseline approach from the XNLI benchmark which is based on the LSTM (hochreiter1997long); (2) artetxe2019massively: a supervised approach which uses 223 million parallel sentences; (3) mBERT (devlin2019bert): the multilingual BERT which is pre-trained with masked language modeling (MLM) on Wikipedia in 102 languages; (4) XLM (lample2019crosslingual): the MLM pre-trained multilingual model which uses language embedding to encode language information. XLM15 (250M parameters) is pre-trained on Wikipedia in 15 languages, while XLM100 (570M parameters) is pre-trained on Wikipedia in 100 languages.
[Table 6 (excerpt): Learning Rate Decay: Inverse Sqrt / Inverse Sqrt; Adam $(\beta_1,\beta_2)$: (0.9, 0.98) / (0.9, 0.999).]
Model Configurations and Training Details.
The overall settings are summarized in Table 6. To compare with XLM15, we build our model as a 12-layer Transformer. For each layer, the dimensions of the hidden representation and the feed-forward layer are set to 1024 and 4096, respectively. The number of heads in the attention module is set to 8. The XLM15 model supports 15 languages, and we accordingly use 15 projection matrices of shape 1024×1024 in XLP.
We use masked language modeling as the pre-training objective. We train the model for 750k steps. The batch size is set to 64 per GPU. Due to the limit of GPU memory, we accumulate gradients every 4 optimization steps. Models are trained on 16 NVIDIA Tesla V100 GPUs with mixed precision. Thus, the effective batch size is 4096, which is the same as for XLM15. The maximum sequence length is 256. The masking probability is set to 0.15, with 80% of the masked positions replaced by [MASK], 10% replaced by randomly sampled words, and the remaining 10% kept unchanged. We use Adam (kingma2017adam) as the optimizer, setting the hyperparameter $\epsilon$ to 1e-8 and $(\beta_1,\beta_2)$ to (0.9, 0.98). The peak learning rate is set to 1e-4 with a 16k-step warm-up stage followed by an inverse square-root learning rate scheduler. The dropout probability and the weight decay parameter are set to 0.1 and 1e-4, respectively.
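The 80/10/10 masking scheme can be sketched as follows (a simplified token-level illustration, not the paper's sub-word implementation):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, seed=0):
    """BERT-style masking: select ~15% of positions; of those, 80% become
    [MASK], 10% become a random word, and 10% are kept unchanged."""
    rng = random.Random(seed)
    out, targets = list(tokens), []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets.append((i, tok))  # position and original token to predict
            r = rng.random()
            if r < 0.8:
                out[i] = "[MASK]"
            elif r < 0.9:
                out[i] = rng.choice(vocab)
            # else: keep the original token
    return out, targets

sent = ["the", "cat", "sat", "on", "the", "mat"] * 10
masked, targets = mask_tokens(sent, vocab=["dog", "run", "blue"])
```

The model is then trained to predict the original token at every selected position, whether or not it was replaced.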
During fine-tuning on the XNLI task, we search over learning rates (from 1e-6 to 8e-6) and batch sizes (8 or 16). For XLP, we fix the language-specific projection weights during fine-tuning. We use two settings to evaluate the pre-trained models: Cross-Lingual Transfer and Translate-Train, as described in Appendix A.2.
B.2 Multilingual Machine Translation
We use the two language embedding approaches as baselines: (1) Language Embedding (attaching): we encode the language information by attaching a language-specific token to the beginning of the sentence, as stated in Section 2.2. (2) Language Embedding (additive): we encode the language information by adding the language embedding to the word embedding at each position, as stated in Section 2.2.
[Table 7: Layers for Encoder / Decoder: 6 / 6; Training Iterations: 180 epochs; Learning Rate Decay: Inverse Sqrt; Tokens per batch: 4096; Adam $(\beta_1,\beta_2)$: (0.9, 0.98); Beam Search Size: 5.]
Model Configurations and Hyperparameters.
The overall settings are summarized in Table 7. Following vaswani2017attention, we use a 6-layer encoder-decoder-based Transformer in the machine translation tasks. The dimension of the hidden representation and the feed-forward layer is set to 512 and 1024 respectively. The number of the heads in the attention module is set to 4.
For the multilingual model training, the mini-batch size is set to 4096 tokens per GPU. We use Adam (kingma2017adam) as the optimizer, setting the hyperparameter $\epsilon$ to 1e-8 and $(\beta_1,\beta_2)$ to (0.9, 0.98). The peak learning rate is set to 5e-4 with a 4k-step warm-up stage followed by an inverse square-root learning rate scheduler. We set the dropout probability to 0.3 and weight decay to 1e-4. Label-smoothed cross-entropy (szegedy2015rethinking) is used as the objective function. The total number of training epochs is set to 180. During inference, we decode with beam search and set the beam size to 5 for all the languages. The length penalty is set to 1.2. We evaluate translation quality by tokenized BLEU with sacreBLEU (https://github.com/mjpost/sacrebleu) (post-2018-call).