Regularized Context Gates on Transformer for Machine Translation

Xintong Li, et al. (08/29/2019)

Context gates are effective at controlling the contributions from the source and target contexts in recurrent neural network (RNN) based neural machine translation (NMT). However, it is challenging to extend them to the advanced Transformer architecture, which is more complicated than an RNN. This paper first provides a method to identify source and target contexts and then introduces a gate mechanism to control the source and target contributions in Transformer. In addition, to further reduce the bias problem in the gate mechanism, this paper proposes a regularization method that guides the learning of the gates with supervision automatically generated using pointwise mutual information. Extensive experiments on 4 translation datasets demonstrate that the proposed model obtains an average gain of 1.0 BLEU over a strong Transformer baseline.


1 Introduction

The essence of modeling translation is learning an effective context from a sentence pair. Statistical machine translation (SMT) models the source context with the source side of a translation model and models the target context with a target-side language model koehn2003statistical; koehn2009statistical; chiang2005hierarchical. These two models are trained independently. In contrast, neural machine translation (NMT) advocates jointly learning the source and target contexts in a unified manner, using an encoder-decoder framework with an attention mechanism, leading to substantial gains over SMT in translation quality sutskever2014sequence; bahdanau2014neural; gehring2017convolutional; vaswani2017attention. Prior work on the attention mechanism (luong2015effective; liu2016neural; mi2016supervised; chen2018syntax; li2018target; elbayad2018pervasive) has shown that a better source context representation helps translation performance.

Figure 1: A running example illustrating the context control problem. Both the original and the context gated Transformer produce an unfaithful translation by wrongly translating "tī qíu" into "play golf" because they refer too much to the target context. By regularizing the context gates, the proposed method corrects the translation of "tī qíu" to "play soccer". The light font denotes the target words to be translated in the future. In the original Transformer, the source and target contexts are added directly without any rebalancing.

However, a standard NMT system is incapable of effectively controlling the contributions from the source and target contexts (he2018layer) to deliver highly adequate translations, as shown in Figure 1. To address this, tu2017context carefully designed context gates to dynamically control the influence of the source and target contexts and observed significant improvements in recurrent neural network (RNN) based NMT. Although Transformer vaswani2017attention delivers significant gains over RNN for translation, roughly one third of its translation errors are still related to the context control problem, as described in Section 3.3. It seems feasible to extend the context gates of RNN based NMT to Transformer, but an obstacle to accomplishing this goal is the complicated architecture of Transformer, in which source and target words are tightly coupled. Thus, it is challenging to put context gates into practice in Transformer.

In this paper, under the Transformer architecture, we first provide a way to define the source and target contexts and then obtain our model by combining both contexts with context gates, which in effect induces a probabilistic model indicating whether the next generated word is contributed from the source or the target sentence (li2019word). In our preliminary experiments, this model only achieves modest gains over Transformer, because the reduction in context selection errors is very limited, as described in Section 3.3. To further address this issue, we propose a probabilistic model whose loss function is derived from external supervision and serves as a regularizer for the context gates. This probabilistic model is jointly trained with the context gates in NMT. As it is too costly to manually annotate this supervision for a large-scale training corpus, we instead propose a simple yet effective method to automatically generate the supervision using pointwise mutual information, inspired by work on word collocation bouma2009normalized. In this way, the resulting NMT model is capable of effectively controlling the contributions from the source and target contexts.

We conduct extensive experiments on 4 benchmark datasets, and the results demonstrate that the proposed gated model obtains an average improvement of 1.0 BLEU point over the corresponding strong Transformer baselines. In addition, we design a novel analysis showing that the improvement in translation performance is indeed caused by relieving the problem of wrongly focusing on the source or target context.

2 Methodology

Given a source sentence $\mathbf{x}$ and a target sentence $\mathbf{y}$, our proposed model is defined by the following conditional probability under the Transformer architecture (throughout this paper, a variable in bold font such as $\mathbf{x}$ denotes a sequence, while a variable in regular font such as $x$ denotes an element, which may be a scalar, vector or matrix):

$$P(\mathbf{y} \mid \mathbf{x}) = \prod_{t=1}^{|\mathbf{y}|} P(y_t \mid \mathbf{y}_{<t}, \mathbf{x}) = \prod_{t=1}^{|\mathbf{y}|} \operatorname{softmax}\big(\mathbf{W}\,\mathbf{c}^{L}_{t}\big)_{y_t} \qquad (1)$$

where $\mathbf{W}$ is the output projection, $\mathbf{y}_{<t}$ denotes the prefix of $\mathbf{y}$ with length $t-1$, and $\mathbf{c}^{L}_{t}$ denotes the $L$-th layer context in a decoder with $L$ layers, which is obtained from the representation of $\mathbf{y}_{<t}$ and $\mathbf{h}$, i.e., the top layer hidden representation of $\mathbf{x}$ from the encoder, similar to the original Transformer. To finish the overall definition of our model in equation 1, we will expand the definition of $\mathbf{c}^{L}_{t}$ based on context gates in the following subsections.
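To make the factorization in equation 1 concrete, the following minimal NumPy sketch shows how a next-token distribution could be computed from the top-layer context $\mathbf{c}^{L}_{t}$; the output projection `W_out` and the toy dimensions are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Toy sizes (assumptions for illustration only).
vocab_size, d_model = 32, 8
rng = np.random.default_rng(0)
W_out = rng.normal(size=(vocab_size, d_model))   # output projection W (assumed)

c_L_t = rng.normal(size=(d_model,))              # top-layer decoder context c^L_t
p_next = softmax(W_out @ c_L_t)                  # P(y_t | y_<t, x) over the vocabulary

# The sentence-level probability is the product of these per-step terms.
print(p_next.shape, p_next.sum())                # (32,) ~1.0
```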

2.1 Context Gated Transformer

To develop context gates for our model, it is necessary to first define the source and target contexts. Unlike in RNN, the source sentence and the target prefix are tightly coupled in our model, so defining the source and target contexts is not trivial.

Suppose the source and target contexts at each layer are denoted by $\mathbf{s}^{l}_{t}$ and $\mathbf{d}^{l}_{t}$. We recursively define them from $\mathbf{c}^{l-1}_{t}$ as follows (for the base case, $\mathbf{c}^{0}_{t}$ is the word embedding of the preceding target word):

$$\mathbf{s}^{l}_{t} = \operatorname{ln} \circ \operatorname{rn} \circ \operatorname{att}\big(\mathbf{c}^{l-1}_{t},\ \mathbf{h},\ \mathbf{h}\big), \qquad \mathbf{d}^{l}_{t} = \operatorname{ln} \circ \operatorname{rn} \circ \operatorname{att}\big(\mathbf{c}^{l-1}_{t},\ \mathbf{c}^{l-1}_{\le t},\ \mathbf{c}^{l-1}_{\le t}\big) \qquad (2)$$

where $\circ$ is functional composition, $\operatorname{att}(\mathbf{q}, \mathbf{k}, \mathbf{v})$ denotes multi-head attention with $\mathbf{q}$ as query, $\mathbf{k}$ as key and $\mathbf{v}$ as value, $\operatorname{rn}$ is a residual connection he2016deep, $\operatorname{ln}$ is layer normalization ba2016layer, and all parameters are omitted for simplicity.

In order to control the contributions from the source and target sides, we define $\mathbf{c}^{l}_{t}$ by introducing a context gate $\mathbf{z}^{l}_{t}$ to combine $\mathbf{s}^{l}_{t}$ and $\mathbf{d}^{l}_{t}$ as follows:

$$\mathbf{c}^{l}_{t} = \operatorname{ln} \circ \operatorname{rn} \circ \operatorname{ff}\big(\mathbf{z}^{l}_{t} \odot \mathbf{s}^{l}_{t} + (1 - \mathbf{z}^{l}_{t}) \odot \mathbf{d}^{l}_{t}\big) \qquad (3)$$

with

$$\mathbf{z}^{l}_{t} = \sigma\big(\operatorname{ff}([\mathbf{s}^{l}_{t} ; \mathbf{d}^{l}_{t}])\big) \qquad (4)$$

where $\operatorname{ff}$ denotes a feedforward neural network, $[\cdot\,;\cdot]$ denotes concatenation, $\sigma$ denotes the sigmoid function, and $\odot$ denotes element-wise multiplication. $\mathbf{z}^{l}_{t}$ is a vector (tu2017context reported that a gating vector is better than a gating scalar). Note that each component of $\mathbf{z}^{l}_{t}$ actually induces a probabilistic model indicating whether the next generated word is mainly contributed from the source sentence (value close to 1) or the target sentence (value close to 0), as shown in Figure 1.

Remark

It is worth mentioning that our proposed model is similar to the standard Transformer and essentially boils down to replacing a residual connection with a highway connection (srivastava2015highway): if we replace both $\mathbf{z}^{l}_{t}$ and $(1 - \mathbf{z}^{l}_{t})$ in equation 3 by the all-ones vector, the proposed model reduces to the standard Transformer.
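To make equations 3 and 4 concrete, here is a minimal NumPy sketch of the gated combination of source and target contexts; the tiny single-layer feedforward networks and the dimensions are simplifying assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_norm(x, eps=1e-6):
    return (x - x.mean()) / (x.std() + eps)

d_model = 8
rng = np.random.default_rng(0)

# Source and target contexts s^l_t and d^l_t (outputs of cross- and self-attention sublayers).
s = rng.normal(size=(d_model,))
d = rng.normal(size=(d_model,))

# Equation 4: gate from a feedforward layer over the concatenated contexts (one linear layer assumed).
W_g = rng.normal(size=(d_model, 2 * d_model))
z = sigmoid(W_g @ np.concatenate([s, d]))        # gating vector in (0, 1)^d_model

# Equation 3: gated combination instead of the plain residual sum s + d.
combined = z * s + (1.0 - z) * d
W_ff = rng.normal(size=(d_model, d_model))
c = layer_norm(combined + W_ff @ combined)       # ln(rn(ff(.))) with a one-layer ff assumed

print(z.round(2), c.shape)
```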

2.2 Regularization of Context Gates

In our preliminary experiments, we found that learning context gates from scratch cannot effectively reduce context selection errors, as described in Section 3.3.

To address this issue, we propose a regularization method that guides the learning of context gates with external supervision $z^{*}_{t}$, a binary value representing whether $y_t$ is contributed from the source sentence ($z^{*}_{t} = 1$) or the target sentence ($z^{*}_{t} = 0$). Formally, the training objective is defined as follows:

$$\mathcal{L} = -\log P(\mathbf{y} \mid \mathbf{x}) \;-\; \lambda \sum_{l=1}^{L} \sum_{t} \mathbf{1}^{\top}\Big( z^{*}_{t} \log \mathbf{z}^{l}_{t} + (1 - z^{*}_{t}) \log\big(\mathbf{1} - \mathbf{z}^{l}_{t}\big) \Big) \qquad (5)$$

where $\mathbf{z}^{l}_{t}$ is the context gate defined in equation 4, the logarithm is applied element-wise, and $\lambda$
is a hyperparameter to be tuned in experiments. Note that we only regularize the gates during training and skip the regularization during inference.
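As a sketch of how the objective in equation 5 could be assembled during training, the snippet below combines the usual negative log-likelihood with a binary cross-entropy regularizer on the gates; the toy tensors, the placeholder NLL value, and the exact summation details are assumptions for illustration.

```python
import numpy as np

def gate_regularizer(gates, z_star, eps=1e-9):
    """Binary cross-entropy between supervision z*_t and every gate component.

    gates:  array of shape (L, T, d) with values in (0, 1), one gate vector per layer and position
    z_star: array of shape (T,) with binary supervision per target position
    """
    z = z_star[None, :, None]                       # broadcast over layers and components
    bce = -(z * np.log(gates + eps) + (1 - z) * np.log(1 - gates + eps))
    return bce.sum()

# Toy example: L=2 layers, T=3 target positions, d=4 gate components.
rng = np.random.default_rng(0)
gates = rng.uniform(0.05, 0.95, size=(2, 3, 4))
z_star = np.array([1, 0, 1])

nll = 12.3                                          # placeholder for -log P(y|x) from the NMT model
lam = 1.0                                           # regularization coefficient (best value in Table 2)
loss = nll + lam * gate_regularizer(gates, z_star)  # equation 5, up to the assumed summation details
print(round(loss, 3))
```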

Because gold values of $z^{*}_{t}$ are inaccessible for each word in the training corpus, we would ideally have to annotate them manually. However, it is too costly for humans to label such a large-scale dataset. Instead, we propose an automatic method to generate these values in practice, described in the next subsection.

2.3 Generating Supervision

To decide whether $y_t$ is contributed from the source sentence ($z^{*}_{t} = 1$) or the target sentence ($z^{*}_{t} = 0$) (li2019word), we first require a metric that measures the correlation between a pair of words ($\langle x_i, y_t \rangle$ or $\langle y_{t'}, y_t \rangle$ with $t' < t$). This is closely related to a well-studied problem, i.e., word collocation liu2009collocation, and we simply employ pointwise mutual information (PMI) to measure the correlation between a word pair, following bouma2009normalized:

$$\operatorname{PMI}(u, v) = \log \frac{p(u, v)}{p(u)\,p(v)} \qquad (6)$$

where the probabilities are estimated from the word counts $C(u)$ and $C(v)$, the co-occurrence count $C(u, v)$ of words $u$ and $v$, and the normalizer $Z$, i.e., the total number of all possible pairs. To obtain the supervision for the context gates, we define two types of PMI according to different co-occurrence statistics, covering the two scenarios below.

PMI in the Bilingual Scenario

For each parallel sentence pair $\langle \mathbf{x}, \mathbf{y} \rangle$ in the training set, the bilingual co-occurrence count $C(x_i, y_t)$ is increased by one if both $x_i \in \mathbf{x}$ and $y_t \in \mathbf{y}$.

PMI in the Monolingual Scenario

In the translation scenario, only the words in the preceding context of a target word should be considered. So for any target sentence $\mathbf{y}$ in the training set, the monolingual co-occurrence count $C(y_{t'}, y_t)$ is increased by one if both $y_{t'} \in \mathbf{y}_{<t}$ and $y_t \in \mathbf{y}$.

Given the two kinds of PMI for a bilingual sentence pair $\langle \mathbf{x}, \mathbf{y} \rangle$, the supervision $z^{*}_{t}$ for each $y_t$ is defined as follows,

$$z^{*}_{t} = \mathbb{1}\Big\{ \max_{i} \operatorname{PMI}_{\text{bi}}(x_i, y_t) \ \ge\ \max_{t' < t} \operatorname{PMI}_{\text{mono}}(y_{t'}, y_t) \Big\} \qquad (7)$$

where $\mathbb{1}\{\cdot\}$ is a binary function valued 1 if the condition is true and 0 otherwise. In equation 7, we employ the max strategy to measure the correlation between $y_t$ and a sentence ($\mathbf{x}$ or $\mathbf{y}_{<t}$). Using the average strategy would be similar, but we did not find gains from it over max in our experiments.
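The following sketch shows how the PMI statistics and the supervision of equation 7 could be generated from a toy corpus; the whitespace tokenization, the tiny corpus, and the smoothing-free count estimates are simplifying assumptions.

```python
import math
from collections import Counter
from itertools import product

def pmi(c_uv, c_u, c_v, z):
    """Pointwise mutual information from raw counts with a shared normalizer z."""
    if c_uv == 0:
        return float("-inf")
    return math.log((c_uv / z) / ((c_u / z) * (c_v / z)))

# Toy parallel corpus: (source tokens, target tokens).
corpus = [(["ta", "xihuan", "ti", "qiu"], ["he", "likes", "playing", "soccer"]),
          (["ta", "ti", "qiu"],           ["he", "plays", "soccer"])]

bi, mono, src_cnt, tgt_cnt = Counter(), Counter(), Counter(), Counter()
for src, tgt in corpus:
    src_cnt.update(src)
    tgt_cnt.update(tgt)
    bi.update(product(src, tgt))                        # bilingual co-occurrence counts
    for t, y in enumerate(tgt):                         # monolingual: word with its preceding words
        mono.update((prev, y) for prev in tgt[:t])

z_bi, z_mono = max(sum(bi.values()), 1), max(sum(mono.values()), 1)

def supervision(src, tgt, t):
    """z*_t = 1 if y_t correlates more with the source sentence than with its target prefix (eq. 7)."""
    y = tgt[t]
    best_bi = max(pmi(bi[(x, y)], src_cnt[x], tgt_cnt[y], z_bi) for x in src)
    best_mono = max((pmi(mono[(p, y)], tgt_cnt[p], tgt_cnt[y], z_mono) for p in tgt[:t]),
                    default=float("-inf"))
    return int(best_bi >= best_mono)

src, tgt = corpus[0]
print([supervision(src, tgt, t) for t in range(len(tgt))])
```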

Models | params (M) | ZHEN MT05 | ZHEN MT06 | ZHEN MT08 | ENDE | DEEN | FREN
RNN based NMT | 84 | 30.6 | 31.1 | 23.2 | - | - | -
tu2017context | 88 | 34.1 | 34.8 | 26.2 | - | - | -
vaswani2017attention | 65 | - | - | - | 27.3 | - | -
ma2018bag | - | 36.8 | 35.9 | 27.6 | - | - | -
zhao2018addressing | - | 43.9 | 44.0 | 33.3 | - | - | -
cheng2018towards | - | 44.0 | 44.4 | 34.9 | - | - | -
Transformer | 74 | 46.9 | 47.4 | 38.3 | 27.4 | 32.2 | 36.8
Context Gates (this work) | 92 | 47.1 | 47.6 | 39.1 | 27.9 | 32.5 | 37.7
Regularized Context Gates (this work) | 92 | 47.7 | 48.3 | 39.7 | 28.1 | 33.0 | 38.3

Table 1: Translation performance (BLEU). The RNN based NMT (bahdanau2014neural) results are those of the baseline model reported in tu2017context. "params" shows the number of parameters (in millions) when training on ZHEN, except for vaswani2017attention, whose count is for ENDE.

3 Experiments

The proposed methods are evaluated on the NIST ZHEN (LDC2000T50, LDC2002L27, LDC2002T01, LDC2002E18, LDC2003E07, LDC2003E14, LDC2003T17, LDC2004T07), WMT14 ENDE (http://www.statmt.org/wmt14/), IWSLT14 DEEN (http://workshop2014.iwslt.org/) and IWSLT17 FREN (http://workshop2017.iwslt.org/) tasks. To make our NMT models capable of open-vocabulary translation, all datasets are preprocessed with Byte Pair Encoding (sennrich2015neural). All proposed methods are implemented on top of Transformer vaswani2017attention, the state-of-the-art NMT system. Case-insensitive BLEU (papineni2002bleu) is used to evaluate the translation quality of ZHEN, DEEN and FREN. For fair comparison with related work, ENDE is evaluated with case-sensitive BLEU. Setup details are described in Appendix A.

λ | 0.1 | 0.5 | 1 | 2 | 10
BLEU | 32.7 | 32.6 | 33.0 | 32.7 | 32.6
  • Results are measured on the DEEN task.

Table 2: Translation performance over different regularization coefficients λ.

3.1 Tuning Regularization Coefficient

At the beginning of our experiments, we tune the regularization coefficient λ on the DEEN task. Table 2 shows that performance is robust to the choice of λ, as translation quality fluctuates only slightly across values. The best performance is achieved with λ = 1, which is the default setting throughout this paper.

3.2 Translation Performance

Table 1 shows the translation quality of our methods in BLEU. Our observations are as follows:

1) The performance of our implementation of Transformer is slightly higher than that of vaswani2017attention, which indicates a fair comparison.

2) The proposed Context Gates achieve a modest improvement over the baseline. As mentioned in Section 2.1, the structure of RNN based NMT is quite different from that of Transformer, so naively introducing the gate mechanism into Transformer without adaptation does not yield gains comparable to those in RNN based NMT.

3) The proposed Regularized Context Gates improve by nearly 1.0 BLEU over the baseline and outperform all existing related work. This indicates that the regularization makes context gates more effective at relieving the context control problem, as discussed below.

3.3 Error Analysis

To explain the success of Regularized Context Gates, we analyze the error rates of translation and context selection. Given a sentence pair $\langle \mathbf{x}, \mathbf{y} \rangle$, let $\hat{y}_t = \arg\max_{v} P(v \mid \mathbf{y}_{<t}, \mathbf{x})$, where $v$ ranges over all tokens in the vocabulary. A forced decoding translation error occurs at position $t$ if $\hat{y}_t \ne y_t$. A context selection error occurs if, in addition, $z^{*}(\hat{y}_t) \ne z^{*}(y_t)$, where $z^{*}$ is defined in equation 7, i.e., the wrongly predicted token is supported by the wrong side of the context. Note that a context selection error must be a translation error, but the opposite is not true. The example shown in Figure 1 also exhibits a context selection error, indicating that the translation error is related to bad context selection.
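As an illustration of how these two error rates could be computed from forced decoding outputs, consider the sketch below; the inputs (reference tokens, model argmax predictions, and the supervision function) are assumed placeholders rather than the paper's actual evaluation code.

```python
from typing import Callable, List, Tuple

def error_rates(refs: List[str],
                preds: List[str],
                z_star: Callable[[str, int], int]) -> Tuple[float, float, float]:
    """Forced decoding error rate (FER), context selection error rate (CER) and CER/FER.

    refs:   reference target tokens y_t
    preds:  argmax tokens under forced decoding, given the gold prefix
    z_star: maps (token, position) to 1 (source-dominated) or 0 (target-dominated), cf. eq. 7
    """
    fer_hits = [r != p for r, p in zip(refs, preds)]
    cer_hits = [err and z_star(r, t) != z_star(p, t)
                for t, (r, p, err) in enumerate(zip(refs, preds, fer_hits))]
    fer = sum(fer_hits) / len(refs)
    cer = sum(cer_hits) / len(refs)
    return fer, cer, (cer / fer if fer else 0.0)

# Toy usage with a hypothetical supervision function.
refs = ["he", "likes", "playing", "soccer"]
preds = ["he", "likes", "playing", "golf"]
z_toy = lambda tok, t: 1 if tok == "soccer" else 0   # hypothetical z* lookup
print(error_rates(refs, preds, z_toy))               # (0.25, 0.25, 1.0)
```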

Models | FER | CER | CER/FER (%)
Transformer | 40.5 | 13.8 | 33.9
Context Gates | 40.5 | 13.7 | 33.7
Regularized Context Gates | 40.0 | 13.4 | 33.4
  • Results are measured on NIST08 of the ZHEN task.

Table 3: Forced decoding translation error rate (FER), context selection error rate (CER) and the proportion of context selection errors over forced decoding translation errors (CER/FER) for the original and context gated Transformer, with and without regularization.

As shown in Table 3, Regularized Context Gates significantly reduce the translation error rate by avoiding context selection errors. Context Gates alone also avoid a few context selection errors but do not achieve a notable improvement in translation performance. It is worth noting that approximately one third of translation errors are related to context selection errors. Regularized Context Gates alleviate this serious problem by effectively rebalancing the source and target context for translation.

4 Conclusions

This paper transplants context gates from RNN based NMT to Transformer to control the source and target context for translation. We find that context gates alone only modestly improve the translation quality of Transformer, because learning context gates freely from scratch is more challenging for Transformer, with its complicated structure, than for RNN. Based on this observation, we propose a regularization method to guide the learning of context gates, together with an effective way to generate supervision from the training data. Experimental results show that regularized context gates significantly improve translation performance across different translation tasks, even though the context control problem is only partially relieved. In the future, we believe further work on alleviating the context control problem has the potential to improve translation performance, as quantified in Table 3.

References

Appendix A Details of Data and Implementation

The training data for the ZHEN task consists of 1.8M sentence pairs. The development set is NIST02 and the test sets are NIST05, 06 and 08. The training data of the ENDE task contains 4.6M sentence pairs. Both the FREN and DEEN tasks contain around 0.2M sentence pairs. For the ZHEN and ENDE tasks, the joint vocabulary is built with 32K BPE merge operations; for the DEEN and FREN tasks it is built with 16K merge operations.

Our implementation of context gates and the regularization is built on top of Transformer as implemented in THUMT (zhang2017thumt). For the ZHEN and ENDE tasks, only sentences of up to 256 tokens are used, and each training batch contains a fixed maximum number of tokens. The dimension of both the word embeddings and the hidden states is 512. Both the encoder and decoder have 6 layers and adopt multi-head attention with 8 heads. For the FREN and DEEN tasks, we use a smaller model with 4 layers and 4 heads, and both the embedding size and the hidden size are 256, again with a fixed maximum number of tokens per batch. For all tasks, the beam size for decoding is 4, and the loss function is optimized with Adam.
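For reference, the hyperparameters stated above can be collected into a configuration sketch like the one below; the dictionary layout and field names are illustrative assumptions, and values not given in the text (batch token budget, maximum length for the smaller models, Adam betas) are left unspecified.

```python
# Hyperparameters as stated in Appendix A; None marks values not specified in this write-up.
CONFIGS = {
    "zhen_ende": {
        "max_sentence_len": 256,
        "batch_tokens": None,      # fixed token budget per batch (value not given here)
        "embed_dim": 512,
        "hidden_dim": 512,
        "encoder_layers": 6,
        "decoder_layers": 6,
        "attention_heads": 8,
    },
    "fren_deen": {
        "max_sentence_len": None,
        "batch_tokens": None,
        "embed_dim": 256,
        "hidden_dim": 256,
        "encoder_layers": 4,
        "decoder_layers": 4,
        "attention_heads": 4,
    },
    "shared": {
        "bpe_merges": {"zhen": 32000, "ende": 32000, "deen": 16000, "fren": 16000},
        "beam_size": 4,
        "optimizer": "adam",       # betas/epsilon not listed in this write-up
        "regularization_lambda": 1.0,
    },
}
```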

Appendix B Statistics of Context Gates

Models | Mean | Variance
Context Gates | 0.38 | 0.10
Regularized Context Gates | 0.51 | 0.13
  • Results are measured on NIST08 of the ZHEN task.

Table 4: Mean and variance of context gates.

Table 4 summarizes the mean and variance of the context gates (every dimension of the context gate vectors) over the NIST08 test set. It shows that context gates learned freely from scratch tend to pay more attention to the target context (0.38 < 0.5), which means the model trusts its language model more than the source context; we call this the context imbalance bias of the freely learned context gates. Specifically, this bias makes the translation unfaithful to some source tokens. As shown in Table 4, Regularized Context Gates exhibit a more balanced behavior (0.51 ≈ 0.5) between the source and target contexts, with similar variance.

Appendix C Regularization in Different Layers

To investigate the sensitivity of the choice of layers for regularization, we regularize the context gates in only one layer at a time. Table 5 shows that there is no significant performance difference among single layers, but all single-layer regularized models are slightly inferior to the model that regularizes the gates in all layers. Moreover, since the regularization introduces nearly no computational overhead, and for design simplicity, we adopt regularizing all layers.

Layers | N/A | 1 | 2 | 3 | 4 | ALL
BLEU | 32.5 | 32.8 | 32.7 | 32.5 | 32.3 | 33.0
  • Results are measured on the DEEN task.

Table 5: Regularizing context gates on different layers. "N/A" indicates that no regularization is added. "ALL" indicates that regularization is added to all layers.

Appendix D Effects on Long Sentences

In tu2017context, context gates alleviate the problem of long sentence translation in the attentional RNN based system bahdanau2014neural. Following tu2017context, we compare translation performance across different sentence lengths. As shown in Figure 2, we find that Context Gates do not improve the translation of long sentences but do translate short sentences better. Fortunately, Regularized Context Gates indeed significantly improve translation for both short and long sentences.

Figure 2: Translation performance on the NIST08 test set with respect to different source sentence lengths. Regularized Context Gates significantly improve the translation of both short and long sentences.