Nematus: a Toolkit for Neural Machine Translation

13 March 2017 · Rico Sennrich et al.

We present Nematus, a toolkit for Neural Machine Translation. The toolkit prioritizes high translation accuracy, usability, and extensibility. Nematus has been used to build top-performing submissions to shared translation tasks at WMT and IWSLT, and has been used to train systems for production environments.




1 Introduction

Neural Machine Translation (NMT) [Bahdanau et al.2015, Sutskever et al.2014] has recently established itself as the new state of the art in machine translation. We present Nematus, a new toolkit for Neural Machine Translation.

Nematus has its roots in the dl4mt-tutorial. We found the codebase of the tutorial to be compact, simple and easy to extend, while also producing high translation quality. These characteristics make it a good starting point for research in NMT. Nematus has been extended to include new functionality based on recent research, and has been used to build top-performing submissions to last year’s shared translation tasks at WMT [Sennrich et al.2016] and IWSLT [Junczys-Dowmunt and Birch2016].

Nematus is implemented in Python, and based on the Theano framework [Theano Development Team2016]. It implements an attentional encoder–decoder architecture similar to that of Bahdanau et al. (2015). Our neural network architecture differs in some aspects from theirs, and we will discuss the differences in more detail. We will also describe additional functionality, aimed at enhancing usability and performance, which has been implemented in Nematus.

2 Neural Network Architecture

Nematus implements an attentional encoder–decoder architecture similar to the one described by Bahdanau et al. (2015), but with several implementation differences. The main differences are as follows:

  • We initialize the decoder hidden state with the mean of the source annotations, rather than the annotation at the last position of the encoder backward RNN.

  • We implement a novel conditional GRU with attention.

  • In the decoder, we use a feedforward hidden layer with non-linearity rather than a maxout layer before the softmax layer.

  • In both encoder and decoder word embedding layers, we do not use additional biases.

  • Compared to the Look, Generate, Update decoder phases in Bahdanau et al. (2015), we implement Look, Update, Generate, which drastically simplifies the decoder implementation (see Table 1).

  • Optionally, we perform recurrent Bayesian dropout [Gal2015].

  • Instead of a single word embedding at each source position, our input representation allows multiple features (or “factors”) at each time step, with the final embedding being the concatenation of the embeddings of each feature [Sennrich and Haddow2016].

  • We allow tying of embedding matrices [Press and Wolf2017, Inan et al.2016].
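The factored-input idea from the list above can be sketched in a few lines of NumPy. The factor inventory (word, subword tag, POS tag) and all dimensions below are invented for illustration and are not Nematus defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical factors: surface word, subword tag, POS tag.
# One vocabulary size and embedding dimension per factor.
feature_vocab_sizes = [100, 10, 20]
feature_dims = [50, 8, 6]

# One embedding matrix per factor.
embeddings = [rng.standard_normal((v, d))
              for v, d in zip(feature_vocab_sizes, feature_dims)]

def embed_factored(feature_ids):
    """Embed one source position given one id per factor; the final
    input embedding is the concatenation of the per-factor embeddings."""
    return np.concatenate([E[i] for E, i in zip(embeddings, feature_ids)])

x = embed_factored([42, 3, 7])   # 50 + 8 + 6 = 64-dimensional input
```

With a single factor this reduces to an ordinary word-embedding lookup, so the factored representation is a strict generalization.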

RNNSearch [Bahdanau et al.2015]              Nematus (DL4MT)
Phase     Output ← Input                     Phase     Output ← Input
Look      c_j ← s_{j-1}, C                   Look      c_j ← s_{j-1}, y_{j-1}, C
Generate  y_j ← s_{j-1}, y_{j-1}, c_j        Update    s_j ← s_{j-1}, y_{j-1}, c_j
Update    s_j ← s_{j-1}, y_j, c_j            Generate  y_j ← s_j, y_{j-1}, c_j

Table 1: Decoder phase differences

We will here describe some differences in more detail:

Given a source sequence $(x_1, \ldots, x_{T_x})$ of length $T_x$ and a target sequence $(y_1, \ldots, y_{T_y})$ of length $T_y$, let $h_i$ be the annotation of the source symbol at position $i$, obtained by concatenating the forward and backward encoder RNN hidden states, $h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$, and let $s_j$ be the decoder hidden state at position $j$.

decoder initialization

Bahdanau et al. (2015) initialize the decoder hidden state $s_0$ with the last backward encoder state:

$$s_0 = \tanh\left(W_{init} \overleftarrow{h}_1\right)$$

with $W_{init}$ as trained parameters (all biases are omitted for simplicity). We use the average annotation instead:

$$s_0 = \tanh\left(W_{init} \frac{\sum_{i=1}^{T_x} h_i}{T_x}\right)$$
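The two initialization strategies can be contrasted in a short NumPy sketch; all sizes and the random weights below are illustrative, not values used by Nematus:

```python
import numpy as np

rng = np.random.default_rng(1)
Tx, enc_dim, dec_dim = 7, 12, 10   # illustrative sizes

# h[i] = concatenation of forward and backward encoder states at position i.
h = rng.standard_normal((Tx, 2 * enc_dim))
W_init = 0.1 * rng.standard_normal((2 * enc_dim, dec_dim))
W_init_bwd = 0.1 * rng.standard_normal((enc_dim, dec_dim))

# Nematus: initialize from the mean of all source annotations.
s0_mean = np.tanh(h.mean(axis=0) @ W_init)

# RNNSearch-style: initialize from the last backward encoder state, i.e. the
# backward half of the annotation at the first source position.
s0_last_bwd = np.tanh(h[0, enc_dim:] @ W_init_bwd)
```

The mean-based initialization gives every source position equal influence on the initial decoder state, rather than privileging the sentence-initial backward state.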

conditional GRU with attention

Nematus implements a novel conditional GRU with attention, cGRU. A cGRU uses its previous hidden state $s_{j-1}$, the whole set of source annotations $C = \{h_1, \ldots, h_{T_x}\}$ and the previously decoded symbol $y_{j-1}$ in order to update its hidden state $s_j$, which is further used to decode symbol $y_j$ at position $j$:

$$s_j = \mathrm{cGRU}\left(s_{j-1}, y_{j-1}, C\right)$$

Our conditional GRU layer with attention mechanism, cGRU, consists of three components: two GRU state transition blocks and an attention mechanism ATT in between. The first transition block, $\mathrm{GRU}_1$, combines the previous decoded symbol $y_{j-1}$ and previous hidden state $s_{j-1}$ in order to generate an intermediate representation $s'_j$ with the following formulations:

$$s'_j = \mathrm{GRU}_1\left(y_{j-1}, s_{j-1}\right) = (1 - z'_j) \odot \hat{s}'_j + z'_j \odot s_{j-1},$$
$$\hat{s}'_j = \tanh\left(W' E[y_{j-1}] + r'_j \odot (U' s_{j-1})\right),$$
$$r'_j = \sigma\left(W'_r E[y_{j-1}] + U'_r s_{j-1}\right),$$
$$z'_j = \sigma\left(W'_z E[y_{j-1}] + U'_z s_{j-1}\right),$$

where $E$ is the target word embedding matrix, $\hat{s}'_j$ is the proposal intermediate representation, and $r'_j$ and $z'_j$ are the reset and update gate activations. In this formulation, $W'$, $U'$, $W'_r$, $U'_r$, $W'_z$, $U'_z$ are trained model parameters; $\sigma$ is the logistic sigmoid activation function.

The attention mechanism, ATT, inputs the entire context set $C$ along with the intermediate hidden state $s'_j$ in order to compute the context vector $c_j$ as follows:

$$c_j = \mathrm{ATT}\left(C, s'_j\right) = \sum_{i=1}^{T_x} \alpha_{ij} h_i,$$
$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{kj})},$$
$$e_{ij} = v_a^\top \tanh\left(U_a s'_j + W_a h_i\right),$$

where $\alpha_{ij}$ is the normalized alignment weight between the source symbol at position $i$ and the target symbol at position $j$, and $v_a$, $U_a$, $W_a$ are the trained model parameters.

Finally, the second transition block, $\mathrm{GRU}_2$, generates $s_j$, the hidden state of the cGRU, by looking at the intermediate representation $s'_j$ and context vector $c_j$ with the following formulations:

$$s_j = \mathrm{GRU}_2\left(s'_j, c_j\right) = (1 - z_j) \odot \hat{s}_j + z_j \odot s'_j,$$
$$\hat{s}_j = \tanh\left(W c_j + r_j \odot (U s'_j)\right),$$
$$r_j = \sigma\left(W_r c_j + U_r s'_j\right),$$
$$z_j = \sigma\left(W_z c_j + U_z s'_j\right),$$

similarly, with $\hat{s}_j$ being the proposal hidden state, and $r_j$ and $z_j$ being the reset and update gate activations with the trained model parameters $W$, $U$, $W_r$, $U_r$, $W_z$, $U_z$.

Note that the two GRU blocks are not individually recurrent, recurrence only occurs at the level of the whole cGRU layer. This way of combining RNN blocks is similar to what is referred in the literature as deep transition RNNs [Pascanu et al.2014, Zilly et al.2016] as opposed to the more common stacked RNNs [Schmidhuber1992, El Hihi and Bengio1995, Graves2013].
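A minimal NumPy sketch of one cGRU step may clarify how the three components fit together. All dimensions and the random weights are illustrative, and biases are omitted, as in the text:

```python
import numpy as np

rng = np.random.default_rng(2)
d, c_dim, e_dim, a_dim = 8, 16, 6, 9  # hidden/context/embedding/attention sizes

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru(x, s_prev, P):
    # Generic GRU block; P holds the six weight matrices, biases omitted.
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ s_prev)           # reset gate
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ s_prev)           # update gate
    s_hat = np.tanh(P["W"] @ x + r * (P["U"] @ s_prev))   # proposal state
    return (1.0 - z) * s_hat + z * s_prev

def gru_params(in_dim, hid_dim):
    return {k: 0.1 * rng.standard_normal((hid_dim, in_dim if k.startswith("W") else hid_dim))
            for k in ("W", "U", "Wr", "Ur", "Wz", "Uz")}

P1 = gru_params(e_dim, d)   # GRU_1: reads the previous target embedding
P2 = gru_params(c_dim, d)   # GRU_2: reads the context vector
Ua = 0.1 * rng.standard_normal((a_dim, d))
Wa = 0.1 * rng.standard_normal((a_dim, c_dim))
va = 0.1 * rng.standard_normal(a_dim)

def cgru_step(s_prev, y_prev_emb, C):
    """One cGRU step: GRU_1 -> attention -> GRU_2.
    Neither GRU block is individually recurrent; only the whole layer is."""
    s_mid = gru(y_prev_emb, s_prev, P1)               # intermediate representation
    scores = np.tanh(C @ Wa.T + s_mid @ Ua.T) @ va    # one score per source position
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                              # normalized alignment weights
    c = alpha @ C                                     # context vector
    return gru(c, s_mid, P2), c                       # new decoder hidden state

C = rng.standard_normal((5, c_dim))                   # annotations for 5 source positions
s, c = cgru_step(np.zeros(d), rng.standard_normal(e_dim), C)
```

Note how the attention scores are computed from the intermediate representation produced by the first block, not from the previous hidden state; this is exactly the Look, Update, Generate reordering described above.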

deep output

Given $s_j$, $y_{j-1}$, and $c_j$, the output probability $p(y_j \mid s_j, y_{j-1}, c_j)$ is computed by a softmax activation, using an intermediate representation $t_j$:

$$p(y_j \mid s_j, y_{j-1}, c_j) = \mathrm{softmax}\left(t_j W_o\right),$$
$$t_j = \tanh\left(s_j W_{t_1} + E[y_{j-1}] W_{t_2} + c_j W_{t_3}\right),$$

where $W_{t_1}$, $W_{t_2}$, $W_{t_3}$, $W_o$ are the trained model parameters.
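The deep-output layer is small enough to sketch directly; the dimensions, the toy vocabulary size, and the random weights are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
d, c_dim, e_dim, t_dim, V = 8, 16, 6, 10, 50  # V = toy target vocabulary size

W1 = 0.1 * rng.standard_normal((t_dim, d))      # reads the hidden state
W2 = 0.1 * rng.standard_normal((t_dim, e_dim))  # reads the previous target embedding
W3 = 0.1 * rng.standard_normal((t_dim, c_dim))  # reads the context vector
Wo = 0.1 * rng.standard_normal((V, t_dim))      # output projection

def output_distribution(s, y_prev_emb, c):
    """Deep output: intermediate representation, then softmax over the vocabulary."""
    t = np.tanh(W1 @ s + W2 @ y_prev_emb + W3 @ c)
    logits = Wo @ t
    p = np.exp(logits - logits.max())             # numerically stable softmax
    return p / p.sum()

p = output_distribution(rng.standard_normal(d),
                        rng.standard_normal(e_dim),
                        rng.standard_normal(c_dim))
```

Conditioning the softmax on all three inputs, rather than on the hidden state alone, gives the output layer a direct view of the previous word and the attended source context.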

3 Training Algorithms

By default, the training objective in Nematus is cross-entropy minimization on a parallel training corpus. Training is performed via stochastic gradient descent, or one of its variants with adaptive learning rate (Adadelta [Zeiler2012], RmsProp [Tieleman and Hinton2012], Adam [Kingma and Ba2014]).
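These optimizers are standard; as an illustration of the adaptive-learning-rate idea, here is the textbook Adam update in NumPy. The hyperparameter defaults are those from Kingma and Ba (2014), not necessarily Nematus's settings, and the quadratic objective is a stand-in for the cross-entropy loss:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its
    square, with bias correction; returns new parameters and optimizer state."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)        # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)        # bias-corrected second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(theta) = theta^2 starting from theta = 5.
theta = np.array([5.0])
m, v = np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
```

Because the step size is normalized by the second-moment estimate, progress is roughly uniform regardless of gradient scale, which is what makes such variants attractive for NMT training.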

Additionally, Nematus supports minimum risk training (MRT) [Shen et al.2016] to optimize towards an arbitrary, sentence-level loss function. Various MT metrics are supported as loss function, including smoothed sentence-level Bleu [Chen and Cherry2014], METEOR [Denkowski and Lavie2011], BEER [Stanojevic and Sima’an2014], and any interpolation of implemented metrics.
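MRT needs a metric that is meaningful on single sentences, where plain BLEU often degenerates to zero. The sketch below uses add-one smoothing of higher-order n-gram counts, one of the schemes compared by Chen and Cherry (2014); it is an illustration, not Nematus's exact implementation:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_sentence_bleu(hyp, ref, max_n=4):
    """Sentence-level BLEU with add-one smoothing of n-gram counts for n > 1."""
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        match = sum(min(c, r[g]) for g, c in h.items())  # clipped n-gram matches
        total = sum(h.values())
        if n == 1:
            if match == 0:
                return 0.0
            log_prec += math.log(match / total)
        else:
            log_prec += math.log((match + 1) / (total + 1))  # add-one smoothing
    brevity = min(1.0, math.exp(1 - len(ref) / len(hyp)))    # brevity penalty
    return brevity * math.exp(log_prec / max_n)

perfect = smoothed_sentence_bleu("the cat sat on the mat".split(),
                                 "the cat sat on the mat".split())
partial = smoothed_sentence_bleu("the cat".split(), "the dog".split())
```

A smoothed score like this stays non-zero for near-miss hypotheses, giving MRT a usable gradient signal at the sentence level.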

To stabilize training, Nematus supports early stopping based on cross entropy, or an arbitrary loss function defined by the user.

4 Usability Features

Figure 1: Search graph visualisation for DE→EN translation of "Hallo Welt!" with beam size 3.

In addition to the main algorithms to train and decode with an NMT model, Nematus includes features aimed towards facilitating experimentation with the models, and their visualisation. Various model parameters are configurable via a command-line interface, and we provide extensive documentation of options, and sample set-ups for training systems.

Nematus provides support for applying single models, as well as using multiple models in an ensemble – the latter is possible even if the model architectures differ, as long as the output vocabulary is the same. At each time step, the probability distribution of the ensemble is the geometric average of the individual models’ probability distributions. The toolkit includes scripts for beam search decoding, parallel corpus scoring and n-best-list rescoring.
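The geometric averaging used for ensembling can be sketched in a few lines; the toy vocabulary and the two hand-written distributions are invented for illustration:

```python
import numpy as np

def ensemble_distribution(model_probs):
    """Combine per-model next-word distributions over a shared output vocabulary
    by geometric average: exp(mean(log p)), renormalized."""
    log_p = np.log(np.stack(model_probs) + 1e-12)  # epsilon guards against log(0)
    p = np.exp(log_p.mean(axis=0))
    return p / p.sum()

# Two hypothetical models scoring a toy 4-word vocabulary:
p = ensemble_distribution([np.array([0.70, 0.10, 0.10, 0.10]),
                           np.array([0.25, 0.25, 0.25, 0.25])])
```

Because the combination happens per time step on the output distributions, the member models only need to share an output vocabulary; their internal architectures may differ, as noted above.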

Nematus includes utilities to visualise the attention weights for a given sentence pair, and to visualise the beam search graph. An example of the latter is shown in Figure 1. Our demonstration will cover how to train a model using the command-line interface, and will show various functionalities of Nematus, including decoding and visualisation, with pre-trained models. Pre-trained models for 8 translation directions are available for download.

5 Conclusion

We have presented Nematus, a toolkit for Neural Machine Translation. We have described implementation differences to the architecture of Bahdanau et al. (2015); due to the empirically strong performance of Nematus, we consider these to be of wider interest.

We hope that researchers will find Nematus an accessible and well documented toolkit to support their research. The toolkit is by no means limited to research, and has been used to train MT systems that are currently in production [WIPO2016].

Nematus is available under a permissive BSD license.


This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements 645452 (QT21), 644333 (TraMOOC), 644402 (HimL) and 688139 (SUMMA).


  • [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Representations (ICLR).
  • [Chen and Cherry2014] Boxing Chen and Colin Cherry. 2014. A Systematic Comparison of Smoothing Techniques for Sentence-Level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362–367, Baltimore, Maryland, USA.
  • [Denkowski and Lavie2011] Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85–91, Edinburgh, Scotland.
  • [El Hihi and Bengio1995] Salah El Hihi and Yoshua Bengio. 1995. Hierarchical Recurrent Neural Networks for Long-Term Dependencies. In Advances in Neural Information Processing Systems (NIPS).
  • [Gal2015] Yarin Gal. 2015. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. ArXiv e-prints.
  • [Graves2013] Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.
  • [Inan et al.2016] Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling. CoRR, abs/1611.01462.
  • [Junczys-Dowmunt and Birch2016] Marcin Junczys-Dowmunt and Alexandra Birch. 2016. The University of Edinburgh’s systems submission to the MT task at IWSLT. In The International Workshop on Spoken Language Translation (IWSLT), Seattle, USA.
  • [Kingma and Ba2014] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • [Pascanu et al.2014] Razvan Pascanu, Çağlar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. 2014. How to Construct Deep Recurrent Neural Networks. In International Conference on Learning Representations 2014 (Conference Track).
  • [Press and Wolf2017] Ofir Press and Lior Wolf. 2017. Using the Output Embedding to Improve Language Models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Valencia, Spain.
  • [Schmidhuber1992] Jürgen Schmidhuber. 1992. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242.
  • [Sennrich and Haddow2016] Rico Sennrich and Barry Haddow. 2016. Linguistic Input Features Improve Neural Machine Translation. In Proceedings of the First Conference on Machine Translation, Volume 1: Research Papers, pages 83–91, Berlin, Germany.
  • [Sennrich et al.2016] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh Neural Machine Translation Systems for WMT 16. In Proceedings of the First Conference on Machine Translation, Volume 2: Shared Task Papers, pages 368–373, Berlin, Germany.
  • [Shen et al.2016] Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum Risk Training for Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany.
  • [Stanojevic and Sima’an2014] Milos Stanojevic and Khalil Sima’an. 2014. BEER: BEtter Evaluation as Ranking. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 414–419, Baltimore, Maryland, USA.
  • [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104–3112, Montreal, Quebec, Canada.
  • [Theano Development Team2016] Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688.
  • [Tieleman and Hinton2012] Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5: RmsProp. COURSERA: Neural Networks for Machine Learning.
  • [WIPO2016] WIPO. 2016. WIPO Develops Cutting-Edge Translation Tool For Patent Documents, Oct.
  • [Zeiler2012] Matthew D Zeiler. 2012. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
  • [Zilly et al.2016] Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. 2016. Recurrent highway networks. arXiv preprint arXiv:1607.03474.