
Context-Aware Learning for Neural Machine Translation

03/12/2019
by Sébastien Jean, et al.
New York University

Interest in larger-context neural machine translation, including document-level and multi-modal translation, has been growing. Multiple works have proposed new network architectures or evaluation schemes, but potentially helpful context is still sometimes ignored by larger-context translation models. In this paper, we propose a novel learning algorithm that explicitly encourages a neural translation model to take into account additional context using a multilevel pair-wise ranking loss. We evaluate the proposed learning algorithm with a transformer-based larger-context translation system on document-level translation. By comparing performance using actual and random contexts, we show that a model trained with the proposed algorithm is more sensitive to the additional context.


1 Introduction

Despite its rapid adoption by academia and industry and its recent success (see, e.g., Hassan et al., 2018), neural machine translation has been found largely incapable of exploiting additional context beyond the current source sentence: larger-context machine translation systems tend to ignore additional context, such as previous sentences and associated images. Much recent effort has gone into building novel network architectures that can better exploit additional context, but without much success (Elliott, 2018; Grönroos et al., 2018; Läubli et al., 2018).

In this paper, we approach the problem of larger-context neural machine translation from the perspective of “learning” instead. We propose to explicitly encourage the model to exploit additional context by assigning a higher log-probability to a translation paired with a correct context than to that paired with an incorrect one. We design this regularization term to be applied at token, sentence and batch levels to cope with the fact that the benefit from additional context may differ from one level to another.

Our experiments on document-level translation using a modified transformer (Voita et al., 2018) reveal that a model trained using the proposed learning algorithm is indeed sensitive to the context, contrary to some previous findings (Elliott, 2018). We also see a small improvement in overall quality (measured in BLEU). Together, these two observations suggest that the proposed approach is a promising direction toward building an effective larger-context neural translation model.

2 Background: Larger-Context Neural Machine Translation

A larger-context neural machine translation system extends the conventional neural machine translation system by incorporating a context C, beyond the source sentence X, when translating into a sentence Y in the target language. In the case of multimodal machine translation, this additional context is an image which the source sentence describes. In the case of document-level machine translation, the additional context may include other sentences in the document in which the source sentence appears. Such a larger-context neural machine translation system consists of a context encoder that encodes the additional context C into a set of vector representations, which are combined with those extracted from the source sentence X by the original source encoder. These vectors are then used by the decoder to compute the conditional distribution over target sequences in the autoregressive paradigm, i.e.,

p(Y \mid X, C; \theta) = \prod_{t=1}^{|Y|} p(y_t \mid y_{<t}, X, C; \theta),

where \theta is a collection of all the parameters in the neural translation model. The encoders and the decoder are often implemented as neural networks, such as recurrent networks with attention (Bahdanau et al., 2015), convolutional networks (Gehring et al., 2017) and self-attention (Vaswani et al., 2017).

Training is often done by maximizing the log-likelihood given a set of training triplets {(X^n, C^n, Y^n)}_{n=1}^{N}. The log-likelihood is defined as

\mathcal{L}(\theta) = \frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{|Y^n|} \log p(y_t^n \mid y_{<t}^n, X^n, C^n; \theta).   (1)

Once training is done, it is standard practice to use beam search to find a translation that approximately maximizes \log p(Y \mid X, C; \theta).
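
To make the factorization and Eq. (1) concrete, here is a minimal PyTorch-style sketch of the sentence-level log-probability and the maximum-likelihood objective; the model interface (model(src=..., ctx=..., tgt=...)), tensor shapes and padding handling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sentence_log_prob(model, src, ctx, tgt_in, tgt_out, pad_id):
    """log p(Y | X, C) = sum_t log p(y_t | y_<t, X, C) for one example."""
    logits = model(src=src, ctx=ctx, tgt=tgt_in)            # (T, vocab); assumed interface
    log_probs = F.log_softmax(logits, dim=-1)
    token_lp = log_probs.gather(-1, tgt_out.unsqueeze(-1)).squeeze(-1)
    mask = (tgt_out != pad_id).float()                       # ignore padding positions
    return (token_lp * mask).sum()

def mle_loss(model, triplets, pad_id):
    """Negative log-likelihood (Eq. 1) over a list of (X, C, Y_in, Y_out) tensors."""
    nll = 0.0
    for src, ctx, tgt_in, tgt_out in triplets:
        nll = nll - sentence_log_prob(model, src, ctx, tgt_in, tgt_out, pad_id)
    return nll / len(triplets)
```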

3 Existing approaches to larger-context neural translation

Existing approaches to larger-context neural machine translation have mostly focused on modifying either the input or the network architecture. Tiedemann and Scherrer (2017) concatenate the previous source sentence to the current source sentence, and Bawden et al. (2018) additionally concatenate the previous target sentence. Grönroos et al. (2018) explore various concatenation strategies when the additional context is an image. Other groups have proposed various modifications to existing neural translation systems (Jean et al., 2017; Wang et al., 2017; Voita et al., 2018; Zhang et al., 2018; Miculicich et al., 2018; Maruf and Haffari, 2018; Tu et al., 2018) for document-level translation, while using the usual maximum likelihood learning. Zheng et al. (2018), on the other hand, introduce a discriminator that forces the network to improve the signal-to-noise ratio in the additional context. In parallel, there have been many proposals of novel network architectures for multi-modal translation (Calixto et al., 2017; Caglayan et al., 2017; Ma et al., 2017; Libovickỳ and Helcl, 2017). In personalized translation, Michel and Neubig (2018) bias the output distribution according to the context. All of these previous efforts are clearly distinguished from our work in that our approach focuses entirely on the learning algorithm and is agnostic to the underlying network architecture.

4 Learning to use the context

In this paper, we focus on “learning” rather than a network architecture. Instead of coming up with a new architecture that facilitates larger-context translation, our goal is to come up with a learning algorithm that can be used with any underlying larger-context neural machine translation system.

4.1 Neutral, useful and harmful context

To do so, we first notice that, by the law of total probability,

p(y_t \mid y_{<t}, X) = \mathbb{E}_{C \sim p(C \mid X)}\left[ p(y_t \mid y_{<t}, X, C) \right].   (2)

As such, over the entire distribution of contexts C given a source X, the additional context is overall “neutral”.

When the context is used, there are two cases. First, the context may be “useful”: the model can assign a higher probability to a correct target token when the context is provided than when it is not, i.e., p(y_t | y_{<t}, X, C) > p(y_t | y_{<t}, X). On the other hand, the additional context can certainly be used harmfully: p(y_t | y_{<t}, X, C) < p(y_t | y_{<t}, X).

Although these “neutral”, “useful” and “harmful” behaviours are defined at the token level, we can easily extend them to various levels by defining the following score functions:

s_tok(t) = \log p(y_t \mid y_{<t}, X, C) - \log p(y_t \mid y_{<t}, X)   (token)
s_sent(Y) = \sum_t s_tok(t) = \log p(Y \mid X, C) - \log p(Y \mid X)   (sent.)
s_data = \frac{1}{N} \sum_{n=1}^{N} s_sent(Y^n)   (data)
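
These scores reduce to simple arithmetic on per-token log-probabilities. The sketch below assumes the log-probabilities have already been computed with the correct context and with the context-marginalized approximation of Section 4.3; the exact normalization (sums for sentences, an average over the data) follows our reading and is an assumption.

```python
import torch

def token_scores(logp_with_ctx, logp_without_ctx, mask):
    """s_tok per position: log p(y_t | y_<t, X, C) - log p(y_t | y_<t, X)."""
    return (logp_with_ctx - logp_without_ctx) * mask

def sentence_score(logp_with_ctx, logp_without_ctx, mask):
    """s_sent: sum of token-level differences, i.e. log p(Y|X,C) - log p(Y|X)."""
    return token_scores(logp_with_ctx, logp_without_ctx, mask).sum()

def data_score(per_sentence_scores):
    """s_data: average of sentence-level scores over the data (or a minibatch)."""
    return torch.stack(per_sentence_scores).mean()
```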

4.2 Context regularization

With these scores defined at three different levels, we propose to regularize learning to encourage a neural translation system to prefer using the context in a useful way. Our regularization term works at all three levels (tokens, sentences and the entire data set) and is based on a margin ranking loss (Collobert et al., 2011):

\mathcal{R}(\theta) = \alpha_d \max(0, \Delta_d - s_data) + \frac{\alpha_s}{N} \sum_{n=1}^{N} \max(0, \Delta_s - s_sent(Y^n)) + \frac{\alpha_t}{N} \sum_{n=1}^{N} \frac{1}{|Y^n|} \sum_{t=1}^{|Y^n|} \max(0, \Delta_t - s_tok(t)),   (3)

where \alpha_d, \alpha_s and \alpha_t are the regularization strengths at the data, sentence and token level, and \Delta_d, \Delta_s and \Delta_t are the corresponding margin values.

The proposed regularization term explicitly encourages the usefulness of the additional context at all the levels. We use the margin ranking loss to only lightly bias the model to use the context in a useful way but not necessarily force it to fully rely on the context, as it is expected that most of the necessary information is already contained in the source and that the additional context only provides a little complementary information.
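
A minimal sketch of such a multi-level margin ranking penalty in the spirit of Eq. (3) follows; the default coefficients and margins are placeholders, since the values used in the paper are not preserved in this copy.

```python
import torch

def context_regularizer(tok_scores, sent_scores, data_score,
                        alpha=(1.0, 1.0, 1.0), delta=(1.0, 1.0, 1.0)):
    """Multi-level margin ranking penalty in the spirit of Eq. (3).

    tok_scores:  1-D tensor of token-level scores s_tok (padding removed).
    sent_scores: 1-D tensor of sentence-level scores s_sent.
    data_score:  0-D tensor holding s_data.
    alpha/delta: (token, sentence, data) strengths and margins; placeholders.
    """
    a_tok, a_sent, a_data = alpha
    d_tok, d_sent, d_data = delta
    # Each hinge term is zero once its score clears the margin, so the model
    # is only lightly pushed toward using the context in a useful way.
    r_tok = torch.clamp(d_tok - tok_scores, min=0.0).mean()
    r_sent = torch.clamp(d_sent - sent_scores, min=0.0).mean()
    r_data = torch.clamp(d_data - data_score, min=0.0)
    return a_tok * r_tok + a_sent * r_sent + a_data * r_data
```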

4.3 Estimating context-less scores

It is not trivial to compute the score when the context is missing based on Eq. (2), as it requires (1) access to the context distribution p(C|X) and (2) an intractable marginalization over all possible contexts C. In this paper, we explore the simplest strategy of approximating p(C|X) with the data distribution of sentences, p(C).

We assume that the context C is distributed independently of the source X, i.e., p(C|X) ≈ p(C), and that it follows the data distribution. This allows us to approximate the expectation in Eq. (2) by uniformly selecting K training contexts at random:

p(y_t \mid y_{<t}, X) \approx \frac{1}{K} \sum_{k=1}^{K} p(y_t \mid y_{<t}, X, \tilde{C}^{(k)}),

where \tilde{C}^{(k)} is the k-th sampled context.
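
A sketch of this Monte Carlo approximation, averaging probabilities (not log-probabilities) over K randomly drawn contexts as in Eq. (2); the model interface and tensor shapes are assumptions.

```python
import torch

def context_marginalized_token_log_probs(model, src, tgt_in, tgt_out, random_ctxs):
    """Approximate log p(y_t | y_<t, X) by averaging probabilities obtained
    with K randomly drawn training contexts, assumed independent of X."""
    per_context_probs = []
    for ctx in random_ctxs:                                  # K sampled contexts
        logits = model(src=src, ctx=ctx, tgt=tgt_in)         # assumed interface
        logp = torch.log_softmax(logits, dim=-1)
        token_lp = logp.gather(-1, tgt_out.unsqueeze(-1)).squeeze(-1)
        per_context_probs.append(token_lp.exp())             # probabilities, not log-probs
    avg_prob = torch.stack(per_context_probs).mean(dim=0)    # Monte Carlo average over C
    return torch.log(avg_prob + 1e-9)                        # back to log space
```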

A better estimation of p(C|X) is certainly possible. One such approach would be to use the larger-context recurrent language model of Wang and Cho (2016). Another possible approach is to use an off-the-shelf retrieval engine to build a non-parametric sampler. We leave the investigation of these alternatives to the future.

4.4 An intrinsic evaluation metric

The conditions for “neutral”, “useful” and “harmful” context also serve as a basis on which we can build an intrinsic evaluation metric for a larger-context neural machine translation system. We propose such a metric by observing that, for a well-trained larger-context translation system, the data-level score s_data is positive on held-out data, while it would be 0 for a larger-context model that completely ignores the additional context. We compute this metric over the validation set using the sample-based approximation scheme from above. Alternatively, we may compute the difference in BLEU (ΔBLEU) between decoding with the correct context and with a random context over the validation or test data. These metrics are complementary to others that evaluate specific discourse phenomena on specially designed test sets (Bawden et al., 2018).
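
The ΔBLEU variant can be computed with any corpus-level BLEU implementation; the sketch below uses sacrebleu, which is an assumption about tooling rather than part of the paper.

```python
import sacrebleu  # any corpus-level BLEU implementation would do

def delta_bleu(hyps_correct_ctx, hyps_random_ctx, refs):
    """BLEU with the correct context minus BLEU with a random context.
    A model that ignores the context yields a difference near zero."""
    bleu_correct = sacrebleu.corpus_bleu(hyps_correct_ctx, [refs]).score
    bleu_random = sacrebleu.corpus_bleu(hyps_random_ctx, [refs]).score
    return bleu_correct - bleu_random
```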

      Context             Context-Aware Reg.   BLEU (normal)    BLEU (context-marginalized)   ΔBLEU
(a)   none                no                   29.16 (29.62)    -                             -
(b)   random              no                   29.23 (29.65)    29.23 (29.65)                 0
(c)   previous sentence   no                   29.34 (29.63)    28.94 (29.23)                 0.40
(d)   previous sentence   yes                  29.91 (30.13)    26.17 (25.82)                 3.74

Table 1: BLEU scores with the correctly paired context (normal) and with an incorrectly paired context (context-marginalized). Context-marginalized BLEU scores are averaged over three randomly selected contexts. BLEU scores on the validation set are given in parentheses. Instead of omitting the context, we give (b) a random context so that the number of parameters matches the larger-context model.

5 Experimental Settings

Data

We use En-Ru parallel data from OpenSubtitles2018 (Lison et al., 2018) and choose the same training subset of 2M examples as Voita et al. (2018). We build a joint BPE subword vocabulary for the source and target languages using 32k merge operations (Sennrich et al., 2016).

Context-less score estimation

We simply shuffle the contexts within each minibatch to create a random context for each example.
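
A minimal sketch of this shuffling step, assuming the contexts of a minibatch are stacked along the first dimension of a tensor.

```python
import torch

def shuffle_contexts(ctx_batch):
    """Pair every example with another example's context from the same
    minibatch, yielding the random contexts used for the context-less
    score estimate."""
    perm = torch.randperm(ctx_batch.size(0))   # random permutation of the batch
    return ctx_batch[perm]
```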

Figure 1: Cumulative BLEU scores on the validation set sorted by the sentence-level score difference according to the larger-context model.

Models

We build a larger-context variant of the base transformer (Vaswani et al., 2017) that takes as input both the current and the previous sentence, similarly to Voita et al. (2018). The current and previous sentences are independently encoded by a shared 6-layer transformer encoder. The final representation of each token in the current sentence is obtained by attending over the final token representations of the previous sentence and combining the outputs from the current and previous sentences nonlinearly. We use a standard transformer decoder and share all the word embedding matrices. See the appendix for a detailed description.

We train each model with Adam, evaluating it every half epoch using greedy decoding and halving the learning rate when the BLEU score on the development set does not improve for five consecutive evaluations, following Denkowski and Neubig (2017). The coefficients and margins of the proposed regularization term (3) were chosen based on the validation BLEU during preliminary experiments. Models are evaluated with a beam size of 5, with scores adjusted according to length (Wu et al., 2016).
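
A sketch of the evaluation-driven learning-rate schedule described above, assuming a PyTorch-style optimizer; the concrete bookkeeping is ours, not the authors'.

```python
def maybe_halve_lr(optimizer, dev_bleu, history, patience=5):
    """Halve the learning rate when dev BLEU has not improved for
    `patience` consecutive evaluations (after Denkowski and Neubig, 2017)."""
    history.append(dev_bleu)
    if len(history) > patience and max(history[-patience:]) <= max(history[:-patience]):
        for group in optimizer.param_groups:   # assumes a PyTorch-style optimizer
            group["lr"] *= 0.5
```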

6 Result and Analysis

In Table 1, we present the translation quality (in BLEU) of the four variants. We make a number of observations. First, the use of the previous sentence (c) does not improve over the baselines (a-b) when the larger-context model is trained only to maximize the log-likelihood (1). We furthermore see that the translation quality of the larger-context model degrades only marginally even when an incorrectly paired previous sentence is given instead (a drop of 0.40 BLEU), implying that this model largely ignores the previous sentence.

Second, we observe that the larger-context model improves upon the baselines, trained either without any additional context (a) or with a purely random context (b), when it is trained with the proposed regularization term (d). The gap between the normal and context-marginalized BLEU scores (3.74) is also much larger than that of the model trained without the regularization term (0.40), suggesting the effectiveness of the proposed regularization term in encouraging the model to focus on the additional context.

In Fig. 1, we contrast the translation quality (measured in BLEU) between the correctly paired (LC) and incorrectly paired (LC+Rand) previous sentences. The sentences in the validation set were sorted according to the sentence-level score difference s_sent, and we report cumulative BLEU scores. The gap is large for those sentences that were deemed by the larger-context model to benefit from the additional context. This match between the score difference (which uses the reference translation) and the actual translation quality further confirms the validity of the proposed approach.

7 Conclusion

We proposed a novel regularization term that encourages a larger-context machine translation model to focus more on the additional context, using a multi-level pair-wise ranking loss. The proposed learning approach is generally applicable to any network architecture. Our empirical evaluation demonstrates that a larger-context translation model trained with the proposed approach indeed becomes more sensitive to the additional context and outperforms a context-less baseline. We believe this work is an encouraging first step toward developing better context-aware learning algorithms for larger-context machine translation. We identify three future directions: (1) a better context distribution p(C|X), (2) efficient evaluation of the context-less scores, and (3) evaluation on other tasks, such as multi-modal translation.

Acknowledgments

SJ thanks NSERC. KC thanks support by AdeptMind, eBay, TenCent, NVIDIA and CIFAR. This work was partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Electronics (Improving Deep Learning using Latent Structure).

References

  • Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR).
  • Bawden et al. (2018) Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1304–1313.
  • Caglayan et al. (2017) Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, Marc Masana, Luis Herranz, and Joost van de Weijer. 2017. Lium-cvc submissions for wmt17 multimodal translation task. In Proceedings of the Second Conference on Machine Translation, pages 432–439.
  • Calixto et al. (2017) Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Doubly-attentive decoder for multi-modal neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1913–1924.
  • Collobert et al. (2011) Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537.
  • Denkowski and Neubig (2017) Michael Denkowski and Graham Neubig. 2017. Stronger baselines for trustable results in neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 18–27.
  • Elliott (2018) Desmond Elliott. 2018. Adversarial evaluation of multimodal machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2974–2978. Association for Computational Linguistics.
  • Gehring et al. (2017) Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In International Conference on Machine Learning, pages 1243–1252.
  • Grönroos et al. (2018) Stig-Arne Grönroos, Benoit Huet, Mikko Kurimo, Jorma Laaksonen, Bernard Merialdo, Phu Pham, Mats Sjöberg, Umut Sulubacak, Jörg Tiedemann, Raphael Troncy, et al. 2018. The memad submission to the wmt18 multimodal translation task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 603–611.
  • Hassan et al. (2018) Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic chinese to english news translation. arXiv preprint arXiv:1803.05567.
  • Jean et al. (2017) Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine translation benefit from larger context? arXiv preprint arXiv:1704.05135.
  • Läubli et al. (2018) Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a case for document-level evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4791–4796.
  • Libovickỳ and Helcl (2017) Jindřich Libovickỳ and Jindřich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 196–202.
  • Lison et al. (2018) Pierre Lison, Jörg Tiedemann, and Milen Kouylekov. 2018. Opensubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018).
  • Ma et al. (2017) Mingbo Ma, Dapeng Li, Kai Zhao, and Liang Huang. 2017. Osu multimodal machine translation system report. In Proceedings of the Second Conference on Machine Translation, pages 465–469.
  • Maruf and Haffari (2018) Sameen Maruf and Gholamreza Haffari. 2018. Document context neural machine translation with memory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1275–1284.
  • Michel and Neubig (2018) Paul Michel and Graham Neubig. 2018. Extreme adaptation for personalized neural machine translation. In The 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia.
  • Miculicich et al. (2018) Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947–2954.
  • Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725.
  • Tiedemann and Scherrer (2017) Jörg Tiedemann and Yves Scherrer. 2017. Neural machine translation with extended context. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 82–92.
  • Tu et al. (2018) Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to remember translation history with a continuous cache. Transactions of the Association of Computational Linguistics, 6:407–420.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
  • Voita et al. (2018) Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1264–1274.
  • Wang et al. (2017) Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2826–2831.
  • Wang and Cho (2016) Tian Wang and Kyunghyun Cho. 2016. Larger-context language modelling with recurrent neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1319–1329.
  • Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
  • Zhang et al. (2018) Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 533–542.
  • Zheng et al. (2018) Zaixiang Zheng, Shujian Huang, Zewei Sun, Rongxiang Weng, Xin-Yu Dai, and Jiajun Chen. 2018. Learning to discriminate noises for incorporating external information in neural machine translation. arXiv preprint arXiv:1810.10317.

Appendix A Larger-Context Transformer

A shared 6-layer transformer encoder is used to independently encode the additional context C and the source sentence X.

Using the source representations as queries, a multi-head attention mechanism attends to the context representations as keys and values. The attention input and output are merged through a gate. (Current gate values are unbounded, but it may be preferable to apply a sigmoid function to restrict the range between 0 and 1.) The final source representation is obtained through a feed-forward module (FF) as used in typical transformer layers.

We use a standard 6-layer transformer decoder, which attends to the final source representation.
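
A PyTorch-style sketch of the combination step described in this appendix. The module name, dimensions and the sigmoid-bounded gate (suggested in the footnote above, whereas the paper's gate is unbounded) are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Combine source and context encoder outputs as described in Appendix A."""

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))

    def forward(self, src_repr, ctx_repr):
        # Queries come from the source sentence; keys and values from the context.
        attended, _ = self.attn(src_repr, ctx_repr, ctx_repr)
        # Sigmoid-bounded gate merging the attention input and output.
        g = torch.sigmoid(self.gate(torch.cat([src_repr, attended], dim=-1)))
        merged = g * src_repr + (1.0 - g) * attended
        # Feed-forward module producing the final source representation.
        return merged + self.ff(merged)
```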