We present Nematus, a toolkit for Neural Machine Translation. The toolkit prioritizes high translation accuracy, usability, and extensibility. Nematus has been used to build top-performing submissions to shared translation tasks at WMT and IWSLT, and has been used to train systems for production environments.
Neural Machine Translation (NMT) [Bahdanau et al.2015, Sutskever et al.2014] has recently established itself as the new state of the art in machine translation. We present Nematus, a new toolkit for Neural Machine Translation, available at https://github.com/rsennrich/nematus.
Nematus has its roots in the dl4mt-tutorial (https://github.com/nyu-dl/dl4mt-tutorial). We found the codebase of the tutorial to be compact, simple and easy to extend, while also producing high translation quality. These characteristics make it a good starting point for research in NMT. Nematus has been extended with new functionality based on recent research, and has been used to build top-performing submissions to last year's shared translation tasks at WMT [Sennrich et al.2016] and IWSLT [Junczys-Dowmunt and Birch2016].
Nematus is implemented in Python, and based on the Theano framework [Theano Development Team2016]. It implements an attentional encoder–decoder architecture similar to that of Bahdanau et al. [2015]. Our neural network architecture differs from theirs in several aspects, and we discuss these differences in more detail below. We also describe additional functionality, aimed at enhancing usability and performance, which has been implemented in Nematus.
Nematus implements an attentional encoder–decoder architecture similar to the one described by Bahdanau et al. [2015], but with several implementation differences. The main differences are as follows:
We initialize the decoder hidden state with the mean of the source annotations, rather than with the annotation at the last position of the encoder backward RNN.
We implement a novel conditional GRU with attention.
In the decoder, we use a feedforward hidden layer with non-linearity rather than a maxout before the softmax layer.
In both encoder and decoder word embedding layers, we do not use additional biases.
Compared to the Look, Generate, Update decoder phases in Bahdanau et al. [2015], we implement Look, Update, Generate, which drastically simplifies the decoder implementation (see Table 1).
Optionally, we perform recurrent Bayesian dropout [Gal2015].
Instead of a single word embedding at each source position, our input representations allow multiple features (or “factors”) at each time step, with the final embedding being the concatenation of the embeddings of each feature [Sennrich and Haddow2016]; a short sketch of this follows Table 1 below.
We allow tying of embedding matrices [Press and Wolf2017, Inan et al.2016].
Table 1: Decoder phases in RNNSearch and Nematus (DL4MT).

| RNNSearch [Bahdanau et al.2015]: Phase | Output ← Input | Nematus (DL4MT): Phase | Output ← Input |
|---|---|---|---|
| Look | $c_j \leftarrow s_{j-1}, C$ | Look | $c_j \leftarrow s_{j-1}, y_{j-1}, C$ |
| Generate | $y_j \leftarrow s_{j-1}, y_{j-1}, c_j$ | Update | $s_j \leftarrow s_{j-1}, y_{j-1}, c_j$ |
| Update | $s_j \leftarrow s_{j-1}, y_j, c_j$ | Generate | $y_j \leftarrow s_j, y_{j-1}, c_j$ |
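As an illustration of the factored input representation described above, the following NumPy sketch shows how per-factor embeddings could be concatenated into a single input vector. It is a minimal sketch under assumed factor types, vocabulary sizes and embedding dimensions; none of these values are Nematus defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical factors: subword unit, part-of-speech tag, capitalization feature.
# Vocabulary sizes and embedding dimensions are illustrative assumptions only.
factor_vocab_sizes = [20000, 50, 4]
factor_embed_dims = [400, 60, 12]

# One embedding matrix per factor.
embeddings = [rng.normal(size=(v, d))
              for v, d in zip(factor_vocab_sizes, factor_embed_dims)]

def embed_source_position(factor_ids):
    """Concatenate the embeddings of all factors at one source position."""
    vectors = [E[i] for E, i in zip(embeddings, factor_ids)]
    return np.concatenate(vectors)

# Example: a source token represented by (subword id, POS id, capitalization id).
x = embed_source_position([1234, 7, 2])
print(x.shape)  # (472,) -- the concatenation of 400 + 60 + 12 dimensions
```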
Below, we describe some of these differences in more detail:
Given a source sequence $(x_1, \ldots, x_{T_x})$ of length $T_x$ and a target sequence $(y_1, \ldots, y_{T_y})$ of length $T_y$, let $h_i$ be the annotation of the source symbol at position $i$, obtained by concatenating the forward and backward encoder RNN hidden states, $h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$, and let $s_j$ be the decoder hidden state at position $j$.
Bahdanau et al. [2015] initialize the decoder hidden state $s_0$ with the last backward encoder state:

$$s_0 = \tanh\left(W_{init} \overleftarrow{h_1}\right),$$

with $W_{init}$ as trained parameters (all biases are omitted for simplicity). We use the average annotation instead:

$$s_0 = \tanh\left(W_{init} \frac{1}{T_x}\sum_{i=1}^{T_x} h_i\right).$$
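For concreteness, here is a minimal NumPy sketch of this mean-annotation initialization, with toy dimensions and a random matrix standing in for the trained parameter $W_{init}$ (an illustration, not the Nematus code):

```python
import numpy as np

rng = np.random.default_rng(0)
T_x, enc_dim, dec_dim = 7, 8, 6     # toy sizes: source length, annotation dim, decoder dim

# h[i] is the annotation at source position i (concatenated fwd/bwd encoder states).
h = rng.normal(size=(T_x, enc_dim))
W_init = rng.normal(size=(dec_dim, enc_dim)) * 0.1   # trained in Nematus, random here

# Initialize the decoder state from the mean of all annotations,
# rather than from only the last backward encoder state.
s_0 = np.tanh(W_init @ h.mean(axis=0))
print(s_0.shape)  # (6,)
```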
Nematus implements a novel conditional GRU with attention, cGRU. A cGRU uses its previous hidden state $s_{j-1}$, the whole set of source annotations $C = \{h_1, \ldots, h_{T_x}\}$, and the previously decoded symbol $y_{j-1}$ in order to update its hidden state $s_j$, which is further used to decode symbol $y_j$ at position $j$:

$$s_j = \text{cGRU}(s_{j-1}, y_{j-1}, C).$$
Our conditional GRU layer with attention mechanism, cGRU, consists of three components: two GRU state transition blocks and an attention mechanism ATT in between. The first transition block, $\text{GRU}_1$, combines the previous decoded symbol $y_{j-1}$ and the previous hidden state $s_{j-1}$ in order to generate an intermediate representation $s'_j$ with the following formulations:

$$s'_j = \text{GRU}_1(y_{j-1}, s_{j-1}) = (1 - z'_j) \odot \hat{s}'_j + z'_j \odot s_{j-1},$$
$$\hat{s}'_j = \tanh\left(W' E[y_{j-1}] + r'_j \odot (U' s_{j-1})\right),$$
$$r'_j = \sigma\left(W'_r E[y_{j-1}] + U'_r s_{j-1}\right),$$
$$z'_j = \sigma\left(W'_z E[y_{j-1}] + U'_z s_{j-1}\right),$$
where $E$ is the target word embedding matrix, $\hat{s}'_j$ is the proposal intermediate representation, and $r'_j$ and $z'_j$ are the reset and update gate activations. In this formulation, $W'$, $U'$, $W'_r$, $U'_r$, $W'_z$, $U'_z$ are trained model parameters; $\sigma$ is the logistic sigmoid activation function.
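The GRU transition block can be written out directly from these equations. The following NumPy sketch is an illustrative re-implementation with toy dimensions and random parameters, not the Theano code in Nematus; `x` stands for the block input, i.e. the embedding $E[y_{j-1}]$ for $\text{GRU}_1$ (and, as described below, the context vector $c_j$ for $\text{GRU}_2$):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_block(x, s_prev, W, U, W_r, U_r, W_z, U_z):
    """One GRU state transition following the cGRU equations (biases omitted,
    as in the text). Returns the new (or intermediate) hidden state."""
    r = sigmoid(W_r @ x + U_r @ s_prev)           # reset gate
    z = sigmoid(W_z @ x + U_z @ s_prev)           # update gate
    s_hat = np.tanh(W @ x + r * (U @ s_prev))     # proposal state
    return (1.0 - z) * s_hat + z * s_prev

# Toy usage for GRU_1: x is the embedding of the previously decoded symbol.
rng = np.random.default_rng(0)
d_emb, d = 5, 6                                   # toy embedding and state sizes
params = [rng.normal(size=(d, d_emb)) * 0.1 if i % 2 == 0
          else rng.normal(size=(d, d)) * 0.1
          for i in range(6)]                      # W, U, W_r, U_r, W_z, U_z
s_prev = np.zeros(d)
y_prev_emb = rng.normal(size=d_emb)
s_intermediate = gru_block(y_prev_emb, s_prev, *params)
print(s_intermediate.shape)  # (6,)
```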
The attention mechanism, ATT, inputs the entire context set $C$ along with the intermediate hidden state $s'_j$ in order to compute the context vector $c_j$ as follows:

$$c_j = \text{ATT}(C, s'_j) = \sum_{i=1}^{T_x} \alpha_{ij} h_i,$$
$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{kj})},$$
$$e_{ij} = v_a^\top \tanh\left(U_a s'_j + W_a h_i\right),$$

where $\alpha_{ij}$ is the normalized alignment weight between the source symbol at position $i$ and the target symbol at position $j$, and $v_a$, $U_a$, $W_a$ are the trained model parameters.
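A corresponding NumPy sketch of the ATT component, again a toy re-implementation with random parameters rather than the actual Nematus code:

```python
import numpy as np

def attention(C, s_inter, U_a, W_a, v_a):
    """Compute alignment weights alpha_ij and the context vector c_j
    from the source annotations C and the intermediate state s'_j."""
    # e_ij = v_a^T tanh(U_a s'_j + W_a h_i), computed for all positions i at once.
    e = np.tanh(s_inter @ U_a.T + C @ W_a.T) @ v_a   # shape: (T_x,)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                             # normalized alignment weights
    return alpha @ C, alpha                          # context vector c_j and weights

# Toy usage.
rng = np.random.default_rng(0)
T_x, enc_dim, d, att_dim = 7, 8, 6, 5                # toy dimensions
C = rng.normal(size=(T_x, enc_dim))                  # annotations h_1 .. h_Tx
s_inter = rng.normal(size=d)                         # intermediate state s'_j
U_a = rng.normal(size=(att_dim, d)) * 0.1
W_a = rng.normal(size=(att_dim, enc_dim)) * 0.1
v_a = rng.normal(size=att_dim)
c_j, alpha = attention(C, s_inter, U_a, W_a, v_a)
print(c_j.shape, round(alpha.sum(), 6))              # (8,) 1.0
```

$\text{GRU}_2$ then applies the same transition form as `gru_block` above, with the context vector $c_j$ taking the place of the embedded symbol; this is what yields the Look, Update, Generate ordering of Table 1.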
Finally, the second transition block, $\text{GRU}_2$, generates $s_j$, the hidden state of the cGRU, by looking at the intermediate representation $s'_j$ and the context vector $c_j$ with the following formulations:

$$s_j = \text{GRU}_2(s'_j, c_j) = (1 - z_j) \odot \hat{s}_j + z_j \odot s'_j,$$
$$\hat{s}_j = \tanh\left(W c_j + r_j \odot (U s'_j)\right),$$
$$r_j = \sigma\left(W_r c_j + U_r s'_j\right),$$
$$z_j = \sigma\left(W_z c_j + U_z s'_j\right),$$

where, similarly, $\hat{s}_j$ is the proposal hidden state, and $r_j$ and $z_j$ are the reset and update gate activations, with the trained model parameters $W$, $U$, $W_r$, $U_r$, $W_z$, $U_z$.
Note that the two GRU blocks are not individually recurrent; recurrence only occurs at the level of the whole cGRU layer. This way of combining RNN blocks is similar to what is referred to in the literature as deep transition RNNs [Pascanu et al.2014, Zilly et al.2016], as opposed to the more common stacked RNNs [Schmidhuber1992, El Hihi and Bengio1995, Graves2013].
Given $s_j$, $y_{j-1}$, and $c_j$, the output probability $p(y_j \mid s_j, y_{j-1}, c_j)$ is computed by a softmax activation, using an intermediate representation $t_j$:

$$p(y_j \mid s_j, y_{j-1}, c_j) = \text{softmax}\left(W_o t_j\right),$$
$$t_j = \tanh\left(W_{t_1} s_j + W_{t_2} E[y_{j-1}] + W_{t_3} c_j\right),$$

where $W_{t_1}$, $W_{t_2}$, $W_{t_3}$, $W_o$ are the trained model parameters.
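The deep output layer can likewise be sketched in NumPy, with toy dimensions and random matrices standing in for the trained parameters:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def output_distribution(s_j, y_prev_emb, c_j, W_t1, W_t2, W_t3, W_o):
    """Compute p(y_j | s_j, y_{j-1}, c_j) via the intermediate representation t_j."""
    t_j = np.tanh(W_t1 @ s_j + W_t2 @ y_prev_emb + W_t3 @ c_j)
    return softmax(W_o @ t_j)

# Toy usage; V is the size of the target vocabulary.
rng = np.random.default_rng(0)
d, d_emb, enc_dim, d_t, V = 6, 5, 8, 4, 10
p = output_distribution(
    rng.normal(size=d), rng.normal(size=d_emb), rng.normal(size=enc_dim),
    rng.normal(size=(d_t, d)), rng.normal(size=(d_t, d_emb)),
    rng.normal(size=(d_t, enc_dim)), rng.normal(size=(V, d_t)),
)
print(p.shape, round(p.sum(), 6))  # (10,) 1.0
```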
By default, the training objective in Nematus is cross-entropy minimization on a parallel training corpus. Training is performed via stochastic gradient descent, or one of its variants with adaptive learning rate (Adadelta [Zeiler2012], RMSProp [Tieleman and Hinton2012], Adam [Kingma and Ba2014]).

Additionally, Nematus supports minimum risk training (MRT) [Shen et al.2016] to optimize towards an arbitrary, sentence-level loss function. Various MT metrics are supported as loss functions, including smoothed sentence-level BLEU [Chen and Cherry2014], METEOR [Denkowski and Lavie2011], BEER [Stanojevic and Sima’an2014], and any interpolation of implemented metrics.
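As a rough illustration of the MRT objective, the expected loss can be approximated over a set of sampled candidate translations, as in Shen et al. [2016]: the model distribution is restricted to the sample, sharpened, and renormalized. The sketch below uses made-up log-probabilities and losses and a hypothetical sharpness parameter `alpha`; it is not the Nematus implementation.

```python
import numpy as np

def expected_risk(log_probs, losses, alpha=0.005):
    """Expected sentence-level loss over sampled candidates (MRT-style):
    Q(y) is proportional to P(y)^alpha, restricted to the sample and renormalized."""
    q = np.exp(alpha * (log_probs - log_probs.max()))
    q /= q.sum()                       # renormalized candidate distribution Q
    return float(q @ losses)           # expectation of the loss under Q

# Made-up example: 4 sampled candidates with model log-probabilities and
# a sentence-level loss such as 1 - smoothed sentence BLEU.
log_probs = np.array([-10.2, -11.5, -13.0, -14.1])
losses = np.array([0.35, 0.30, 0.55, 0.60])
print(expected_risk(log_probs, losses))
```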
To stabilize training, Nematus supports early stopping based on cross-entropy, or on an arbitrary loss function defined by the user.
In addition to the main algorithms to train and decode with an NMT model, Nematus includes features aimed towards facilitating experimentation with the models, and their visualisation. Various model parameters are configurable via a command-line interface, and we provide extensive documentation of options, and sample set-ups for training systems.
Nematus provides support for applying single models, as well as using multiple models in an ensemble – the latter is possible even if the model architectures differ, as long as the output vocabulary is the same. At each time step, the probability distribution of the ensemble is the geometric average of the individual models’ probability distributions. The toolkit includes scripts for beam search decoding, parallel corpus scoring and n-best-list rescoring.
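The geometric averaging of ensemble members amounts to averaging the models' log-probabilities and renormalizing, as in this minimal NumPy illustration (not the Nematus decoder itself):

```python
import numpy as np

def ensemble_step(model_probs):
    """Combine per-model output distributions for one time step by a geometric mean."""
    log_p = np.log(np.stack(model_probs))   # shape: (n_models, vocab_size)
    avg = log_p.mean(axis=0)                # arithmetic mean in log space
    p = np.exp(avg - avg.max())
    return p / p.sum()                      # renormalized geometric average

# Made-up example: two models over a four-word vocabulary.
p1 = np.array([0.7, 0.1, 0.1, 0.1])
p2 = np.array([0.4, 0.4, 0.1, 0.1])
print(ensemble_step([p1, p2]))
```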
Nematus includes utilities to visualise the attention weights for a given sentence pair, and to visualise the beam search graph. An example of the latter is shown in Figure 1. Our demonstration will cover how to train a model using the command-line interface, and will show various functionalities of Nematus, including decoding and visualisation, with pre-trained models (pre-trained models for 8 translation directions are available at http://statmt.org/rsennrich/wmt16_systems/).
We have presented Nematus, a toolkit for Neural Machine Translation. We have described implementation differences from the architecture of Bahdanau et al. [2015]; due to the empirically strong performance of Nematus, we consider these to be of wider interest.
We hope that researchers will find Nematus an accessible and well documented toolkit to support their research. The toolkit is by no means limited to research, and has been used to train MT systems that are currently in production [WIPO2016].
Nematus is available under a permissive BSD license.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements 645452 (QT21), 644333 (TraMOOC), 644402 (HimL) and 688139 (SUMMA).
El Hihi and Bengio. 1995. Hierarchical Recurrent Neural Networks for Long-Term Dependencies. In NIPS, volume 409.