Neural Machine Translation in Linear Time

by Nal Kalchbrenner, et al.

We present a novel neural network for processing sequences. The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence. The two network parts are connected by stacking the decoder on top of the encoder and preserving the temporal resolution of the sequences. To address the differing lengths of the source and the target, we introduce an efficient mechanism by which the decoder is dynamically unfolded over the representation of the encoder. The ByteNet uses dilation in the convolutional layers to increase its receptive field. The resulting network has two core properties: it runs in time that is linear in the length of the sequences and it sidesteps the need for excessive memorization. The ByteNet decoder attains state-of-the-art performance on character-level language modelling and outperforms the previous best results obtained with recurrent networks. The ByteNet also achieves state-of-the-art performance on character-to-character machine translation on the English-to-German WMT translation task, surpassing comparable neural translation models that are based on recurrent networks with attentional pooling and run in quadratic time. We find that the latent alignment structure contained in the representations reflects the expected alignment between the tokens.




1 Introduction

In neural language modelling, a neural network estimates a distribution over sequences of words or characters that belong to a given language (Bengio et al., 2003). In neural machine translation, the network estimates a distribution over sequences in the target language conditioned on a given sequence in the source language. In the latter case the network can be thought of as composed of two sub-networks: a source network that processes the source sequence into a representation, and a target network that uses the representation of the source to generate the target sequence (Kalchbrenner and Blunsom, 2013).

Recurrent neural networks (RNN) are powerful sequence models (Hochreiter and Schmidhuber, 1997) and are widely used in language modelling (Mikolov et al., 2010), yet they have a potential drawback. RNNs have an inherently serial structure that prevents them from being run in parallel along the sequence length. Forward and backward signals in an RNN also need to traverse the full distance of the serial path to reach from one point to another in the sequence. The larger the distance, the harder it is to learn dependencies between the points (Hochreiter et al., 2001).

A number of neural architectures have been proposed for modelling translation (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014; Kalchbrenner et al., 2016a; Kaiser and Bengio, 2016). These networks either have running time that is super linear in the length of the source and target sequences, or they process the source sequence into a constant size representation, burdening the model with a memorization step. Both of these drawbacks grow more severe as the length of the sequences increases.

We present a neural translation model, the ByteNet, and a neural language model, the ByteNet Decoder, that aim at addressing these drawbacks. The ByteNet uses convolutional neural networks with dilation for both the source network and the target network. The ByteNet connects the source and target networks via stacking and unfolds the target network dynamically to generate variable length output sequences. We view the ByteNet as an instance of a wider family of sequence-mapping architectures that stack the sub-networks and use dynamic unfolding. The sub-networks themselves may be convolutional or recurrent. The ByteNet with recurrent sub-networks may be viewed as a strict generalization of the RNN Enc-Dec network (Sutskever et al., 2014; Cho et al., 2014) (Sect. 4). The ByteNet Decoder has the same architecture as the target network in the ByteNet. In contrast to neural language models based on RNNs (Mikolov et al., 2010) or on feed-forward networks (Bengio et al., 2003; Arisoy et al., 2012), the ByteNet Decoder is based on a novel convolutional structure designed to capture a very long range of past inputs.

The ByteNet has a number of beneficial computational and learning properties. From a computational perspective, the network has a running time that is linear in the length of the source and target sequences (up to a constant c ≈ log d, where d is the size of the desired dependency field). The computation in the source network during training and decoding and in the target network during training can also be run efficiently in parallel along the strings – by definition this is not possible for a target network during decoding (Sect. 2). From a learning perspective, the representation of the source string in the ByteNet is resolution preserving; the representation sidesteps the need for memorization and allows for maximal bandwidth between the source and target networks. In addition, the distance traversed by forward and backward signals between any input and output tokens in the networks corresponds to the fixed depth of the networks and is largely independent of the distance between the tokens. Dependencies over large distances are connected by short paths and can be learnt more easily.

We deploy ByteNets on raw sequences of characters. We evaluate the ByteNet Decoder on the Hutter Prize Wikipedia task; the model achieves 1.33 bits/character showing that the convolutional language model is able to outperform the previous best results obtained with recurrent neural networks. Furthermore, we evaluate the ByteNet on raw character-level machine translation on the English-German WMT benchmark. The ByteNet achieves a score of 18.9 and 21.7 BLEU points on, respectively, the 2014 and the 2015 test sets; these results approach the best results obtained with other neural translation models that have quadratic running time (Chung et al., 2016b; Wu et al., 2016a). We use gradient-based visualization (Simonyan et al., 2013) to reveal the latent structure that arises between the source and target sequences in the ByteNet. We find the structure to mirror the expected word alignments between the source and target sequences.

Figure 1: The architecture of the ByteNet. The target network (blue) is stacked on top of the source network (red). The target network generates the variable-length target sequence using dynamic unfolding. The ByteNet Decoder is the target network of the ByteNet.

2 Neural Translation Model

Given a string s from a source language, a neural translation model estimates a distribution p(t|s) over strings t of a target language. The distribution indicates the probability of a string t being a translation of s. A product of conditionals over the tokens in the target leads to a tractable formulation of the distribution:

$$p(t \mid s) = \prod_{i=0}^{N} p(t_i \mid t_{<i}, s) \qquad (1)$$
Each conditional factor expresses complex and long-range dependencies among the source and target tokens. The strings are usually sentences of the respective languages; the tokens are words or, as in the present case, characters. The network that models is composed of two sub-networks, a source network that processes the source string into a representation and a target network that uses the source representation to generate the target string (Kalchbrenner and Blunsom, 2013). The target network functions as a language model for the target language.
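The factorization above can be sketched in a few lines of Python. The per-token conditional `cond_log_prob` is a placeholder standing in for the target network, not the model itself:

```python
import math

def sequence_log_prob(target, cond_log_prob, source):
    """Score a target string under the factorization
    p(t|s) = prod_i p(t_i | t_0..t_{i-1}, s).

    cond_log_prob(prefix, token, source) models the per-token
    conditional; here it is a stand-in for the target network.
    """
    total = 0.0
    for i, token in enumerate(target):
        total += cond_log_prob(target[:i], token, source)
    return total

# Toy conditional: uniform over a 4-symbol alphabet.
uniform = lambda prefix, token, source: math.log(0.25)
lp = sequence_log_prob("abc", uniform, "xyz")
```

Because each factor conditions only on the preceding target tokens and the source, decoding can proceed left to right, one token at a time.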

A neural translation model has some basic properties. The target network is autoregressive in the target tokens and the network is sensitive to the ordering of the tokens in the source and target strings. It is also useful for the model to be able to assign a non-zero probability to any string in the target language and retain an open vocabulary.

2.1 Desiderata

Beyond these basic properties the definition of a neural translation model does not determine a unique neural architecture, so we aim at identifying some desiderata. (i) The running time of the network should be linear in the length of the source and target strings. This is more pressing the longer the strings or when using characters as tokens. The use of operations that run in parallel along the sequence length can also be beneficial for reducing computation time. (ii) The size of the source representation should be linear in the length of the source string, i.e. it should be resolution preserving, and not have constant size. This is to avoid burdening the model with an additional memorization step before translation. In more general terms, the size of a representation should be proportional to the amount of information it represents or predicts. A related desideratum concerns the path traversed by forward and backward signals in the network between a (source or target) input token and a predicted output token. Shorter paths whose length is decoupled from the sequence distance between the two tokens have the potential to better propagate the signals (Hochreiter et al., 2001) and to let the network learn long-range dependencies more easily.

Figure 2: Dynamic unfolding in the ByteNet architecture. At each step the target network is conditioned on the source representation for that step, or simply on no representation for steps beyond the source length. The decoding ends when the target network produces an end-of-sequence (EOS) symbol.

3 ByteNet

We aim at building neural language and translation models that capture the desiderata set out in Sect. 2.1. The proposed ByteNet architecture is composed of a target network that is stacked on a source network and generates variable-length outputs via dynamic unfolding. The target network, referred to as the ByteNet Decoder, is a language model that is formed of one-dimensional convolutional layers that use dilation (Sect. 3.3) and are masked (Sect. 3.2). The source network processes the source string into a representation and is formed of one-dimensional convolutional layers that use dilation but are not masked. Figure 1 depicts the two networks and their combination in the ByteNet.

3.1 Dynamic Unfolding

To accommodate source and target sequences of different lengths, the ByteNet uses dynamic unfolding. The source network builds a representation that has the same width as the source sequence. At each step the target network takes as input the corresponding column of the source representation until the target network produces the end-of-sequence symbol. The source representation is zero-padded on the fly: if the target network produces symbols beyond the length of the source sequence, the corresponding conditioning column is set to zero. In the latter case the predictions of the target network are conditioned on source and target representations from previous steps. Figure 2 represents the dynamic unfolding process.
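The unfolding loop can be sketched as follows; `step_fn` is a placeholder for the target network, and the token values are illustrative:

```python
EOS = 0  # illustrative end-of-sequence symbol

def dynamic_unfold(source_columns, step_fn, max_steps=100):
    """Generate target tokens with dynamic unfolding: at step t the
    target network (here the placeholder step_fn) is conditioned on
    column t of the source representation; past the source length the
    conditioning column is zero-padded on the fly. Generation stops
    when the end-of-sequence symbol is produced.
    """
    width = len(source_columns[0])
    target = []
    for t in range(max_steps):
        column = source_columns[t] if t < len(source_columns) else [0.0] * width
        token = step_fn(column, target)
        if token == EOS:
            break
        target.append(token)
    return target
```

Note that the target may grow longer than the source; beyond the source length the model relies on the zero column and its own past predictions.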

3.2 Masked One-dimensional Convolutions

Given a target string t = t_0, …, t_n, the target network embeds each of the first n tokens t_0, …, t_{n−1} via a look-up table (the tokens t_1, …, t_n serve as targets for the predictions). The resulting embeddings are concatenated into a tensor of size n × 2d, where d is the number of inner channels in the network. The target network applies masked one-dimensional convolutions (van den Oord et al., 2016b) to the embedding tensor that have a masked kernel of size k. The masking ensures that information from future tokens does not affect the prediction of the current token. The operation can be implemented either by zeroing out some of the weights of a wider kernel of size 2k − 1 or by padding the output map.
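The masking idea can be illustrated with a minimal sketch in plain Python (not the paper's implementation): left-padding the input with k − 1 zeros is equivalent to zeroing out the "future" half of a wider kernel.

```python
def masked_conv1d(x, w):
    """Masked (causal) 1-D convolution: output position i depends only
    on inputs at positions <= i. The k-tap kernel w is applied after
    left-padding the input with k - 1 zeros, which matches zeroing
    the future weights of a wider kernel of size 2k - 1.
    """
    k = len(w)
    padded = [0.0] * (k - 1) + list(x)
    return [sum(w[j] * padded[i + j] for j in range(k)) for i in range(len(x))]
```

A quick check of causality: perturbing a later input must leave earlier outputs unchanged.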

Figure 3: Left: Residual block with ReLUs (He et al., 2015) adapted for decoders. Right: Residual Multiplicative Block adapted for decoders and corresponding expansion of the MU (Kalchbrenner et al., 2016b).

3.3 Dilation

The masked convolutions use dilation to increase the receptive field of the target network (Chen et al., 2014; Yu and Koltun, 2015). Dilation makes the receptive field grow exponentially in terms of the depth of the networks, as opposed to linearly. We use a dilation scheme whereby the dilation rates are doubled every layer up to a maximum rate (for our experiments 16). The scheme is repeated multiple times in the network, always starting from a dilation rate of 1 (van den Oord et al., 2016a; Kalchbrenner et al., 2016b).
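The exponential growth can be checked with the standard receptive-field formula for stacked causal convolutions (an assumption used here for illustration, not taken from the paper):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of causal convolutions: each layer
    with dilation d widens the field by (kernel_size - 1) * d.
    """
    return 1 + sum((kernel_size - 1) * d for d in dilations)

doubling = [1, 2, 4, 8, 16] * 2          # the doubling scheme, repeated twice
undilated = [1] * len(doubling)          # same depth, no dilation
wide = receptive_field(3, doubling)      # field grows exponentially with depth
narrow = receptive_field(3, undilated)   # field grows only linearly
```

With ten layers of kernel size 3, the dilated stack covers 125 positions while the undilated stack covers only 21.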

3.4 Residual Blocks

Each layer is wrapped in a residual block that contains additional convolutional layers with filters of size 1 (He et al., 2015). We adopt two variants of the residual blocks, one with ReLUs, which is used in the machine translation experiments, and one with Multiplicative Units (Kalchbrenner et al., 2016b), which is used in the language modelling experiments. Figure 3 diagrams the two variants of the blocks.
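The ReLU variant of the block can be sketched as follows; the names and the per-position matrix multiplies (equivalent to size-1 convolutions) are illustrative, and normalization layers are omitted for brevity:

```python
def relu(v):
    return [max(0.0, a) for a in v]

def matmul_rows(rows, w):
    # Per-position matrix multiply == convolution with filter size 1.
    return [[sum(r[i] * w[i][j] for i in range(len(r))) for j in range(len(w[0]))]
            for r in rows]

def residual_relu_block(x, w_in, w_out, dilated_conv):
    """Sketch of the ReLU residual-block variant: a size-1 convolution
    reduces the channels, a masked dilated convolution (a stand-in
    callable here) operates in the middle, a second size-1 convolution
    restores the channels, and the block input is added back.
    """
    h = [relu(r) for r in matmul_rows(x, w_in)]   # 1x1 conv, channel reduction
    h = [relu(r) for r in dilated_conv(h)]        # masked dilated conv (placeholder)
    h = matmul_rows(h, w_out)                     # 1x1 conv, channel restoration
    return [[a + b for a, b in zip(xr, hr)] for xr, hr in zip(x, h)]
```

The residual connection means the block computes a correction on top of the identity, which is what keeps signal paths through a deep stack short.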

3.5 Sub-Batch Normalization

We introduce a modification to Batch Normalization (BN) (Ioffe and Szegedy, 2015) in order to make it applicable to target networks and decoders. Standard BN computes the mean and variance of the activations of a given convolutional layer along the batch, height, and width dimensions. In a decoder, the standard BN operation at training time would average activations along all the tokens in the input target sequence, and the BN output for each target token would incorporate the information about the tokens that follow it. This breaks the conditioning structure of Eq. 1, since the succeeding tokens are yet to be predicted.

To circumvent this issue, we present Sub-Batch Normalization (SubBN). It is a variant of BN, where a batch of training samples is split into two parts: the main batch and the auxiliary batch. For each layer, the mean and variance of its activations are computed over the auxiliary batch, but are used for the batch normalization of the main batch. At the same time, the loss is computed only on the predictions of the main batch, ignoring the predictions from the auxiliary batch.
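For a single feature, SubBN can be sketched as below (the learned scale and shift of standard BN are omitted, and the function names are illustrative):

```python
def sub_batch_norm(main, aux, eps=1e-5):
    """Sub-Batch Normalization sketch: the mean and variance are
    computed over the auxiliary batch only and then used to normalize
    the main batch, so the normalized value of a main-batch token
    never depends on other tokens of the main batch.
    """
    n = len(aux)
    mean = sum(aux) / n
    var = sum((a - mean) ** 2 for a in aux) / n
    return [(m - mean) / (var + eps) ** 0.5 for m in main]
```

Because the statistics come only from the auxiliary batch, changing one main-batch activation leaves the normalization of the others untouched, which is exactly what preserves the autoregressive conditioning.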

3.6 Bag of Character n-Grams

The tokens that we adopt correspond to characters in the input sequences. An efficient way to increase the capacity of the models is to use input embeddings not just for single tokens, but also for n-grams of adjacent tokens. At each position we sum the embeddings of the respective n-grams for 1 ≤ n ≤ 5 component-wise into a single vector. Although the portion of seen n-grams decreases as the value of n increases – a cutoff threshold is chosen for each n – all characters (n-grams for n = 1) are seen during training. This fallback structure provided by the bag of character n-grams guarantees that at any position the input given to the network is always well defined. The length of the sequences corresponds to the number of characters and does not change when using bags of n-grams.
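A minimal sketch of the bag-of-n-grams lookup (the exact alignment of the n-grams to a position is an assumption here; this version sums the n-grams ending at each position):

```python
def bag_of_ngram_embedding(text, tables, max_n=3):
    """At each position, sum the embeddings of the n-grams ending at
    that position, for n = 1..max_n. Unseen n-grams contribute
    nothing, but the single-character (n = 1) embedding is always
    defined, so every position has a well-defined input.
    tables: dict mapping n-gram string -> embedding vector (list).
    """
    dim = len(next(iter(tables.values())))
    out = []
    for i in range(len(text)):
        vec = [0.0] * dim
        for n in range(1, max_n + 1):
            if i + 1 < n:
                continue  # n-gram would start before the string
            emb = tables.get(text[i - n + 1:i + 1])
            if emb is not None:
                vec = [a + b for a, b in zip(vec, emb)]
        out.append(vec)
    return out
```

The output has one vector per character, so the sequence length is unchanged, matching the resolution-preserving design.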

Figure 4: Recurrent ByteNet variants of the ByteNet architecture. Left: Recurrent ByteNet with convolutional source network and recurrent target network. Right: Recurrent ByteNet with bidirectional recurrent source network and recurrent target network. The latter architecture is a strict generalization of the RNN Enc-Dec network.

4 Model Comparison

In this section we analyze the properties of various previously and currently introduced neural translation models. For the sake of a more complete analysis, we also consider two recurrent variants in the ByteNet family of architectures, which we do not evaluate in the experiments.

4.1 Recurrent ByteNets

The ByteNet is composed of two stacked source and target networks where the top network dynamically adapts to the output length. This way of combining source and target networks is not tied to the networks being strictly convolutional. We may consider two variants of the ByteNet that use recurrent networks for one or both of the sub-networks (see Figure 4). The first variant replaces the convolutional target network with a recurrent one that is similarly stacked and dynamically unfolded. The second variant also replaces the convolutional source network with a recurrent network, namely a bidirectional RNN. The target RNN is placed on top of the bidirectional source RNN. We can see that the RNN Enc-Dec network (Sutskever et al., 2014; Cho et al., 2014) is a Recurrent ByteNet where all connections between source and target – except for the first one – have been severed. The Recurrent ByteNet is thus a generalization of the RNN Enc-Dec and, modulo the type of sequential architecture, so is the ByteNet.

4.2 Comparison of Properties

In our comparison we consider the following neural translation models: the Recurrent Continuous Translation Model (RCTM) 1 and 2 (Kalchbrenner and Blunsom, 2013); the RNN Enc-Dec (Sutskever et al., 2014; Cho et al., 2014); the RNN Enc-Dec Att with the attentional pooling mechanism (Bahdanau et al., 2014) of which there are a few variations (Luong et al., 2015; Chung et al., 2016a); the Grid LSTM translation model (Kalchbrenner et al., 2016a) that uses a multi-dimensional architecture; the Extended Neural GPU model (Kaiser and Bengio, 2016) that has a convolutional RNN architecture; the ByteNet and the two Recurrent ByteNet variants.

The two grounds of comparison are the desiderata (i) and (ii) set out in Sect. 2.1. We separate the computation time desideratum (i) into three columns. The first column indicates the time complexity of the network as a function of the length of the sequences and is denoted by Time. The other two columns, Net_S and Net_T, indicate, respectively, whether the source and the target network use a convolutional structure (CNN) or a recurrent one (RNN); a CNN structure has the advantage that it can be run in parallel along the length of the sequence. We also break the learning desideratum (ii) into three columns. The first is denoted by RP and indicates whether the source representation in the network is resolution preserving. The second column, Path_S, corresponds to the length in layer steps of the shortest path between a source token and any output target token. Similarly, the third column, Path_T, corresponds to the length of the shortest path between an input target token and any output target token. Shorter paths lead to better forward and backward signal propagation.

Table 1 summarizes the properties of the models. The ByteNet, the Recurrent ByteNets and the RNN Enc-Dec are the only networks that have linear running time (up to the constant c). The RNN Enc-Dec, however, does not preserve the source sequence resolution, a feature that aggravates learning for long sequences such as those in character-level machine translation (Luong and Manning, 2016). The RCTM 2, the RNN Enc-Dec Att, the Grid LSTM and the Extended Neural GPU do preserve the resolution, but at a cost of a quadratic running time. The ByteNet stands out also for its Path properties. The dilated structure of the convolutions connects any two source or target tokens in the sequences by way of a small number of network layers corresponding to the depth of the source or target networks. For character sequences, where learning long-range dependencies is important, paths that are sub-linear in the distance are advantageous.

Model                 Net_S   Net_T   Time              RP    Path_S          Path_T
RNN Enc-Dec           RNN     RNN     |S| + |T|         no    |S| + |T|       |T|
RNN Enc-Dec Att       RNN     RNN     |S||T|            yes   1               |T|
Extended Neural GPU   cRNN    cRNN    |S||S| + |S||T|   yes   |S|             |T|
Recurrent ByteNet     RNN     RNN     |S| + |T|         yes   max(|S|, |T|)   |T|
Recurrent ByteNet     CNN     RNN     c|S| + |T|        yes   c               |T|
ByteNet               CNN     CNN     c|S| + c|T|       yes   c               c
Table 1: Properties of various previously and presently introduced neural translation models. The ByteNet models have both linear running time and are resolution preserving.

5 Character Prediction

We first evaluate the ByteNet Decoder separately on a character-level language modelling benchmark. We use the Hutter Prize version of the Wikipedia dataset and follow the standard split where the first 90 million bytes are used for training, the next 5 million bytes are used for validation and the last 5 million bytes are used for testing (Chung et al., 2015). The total number of characters in the vocabulary is 205.

The ByteNet Decoder that we use for the result has 25 residual blocks split into five sets of five blocks each; for the five blocks in each set the dilation rates are, respectively, 1, 2, 4, 8 and 16. The masked kernel has size 3. This gives a receptive field of 315 characters. The number of hidden units is 892. For this task we use residual multiplicative blocks and Sub-BN (Fig. 3 Right); we do not use bags of character n-grams for the inputs. For the optimization we use Adam (Kingma and Ba, 2014) with a learning rate of 0.0003 and a weight decay term of 0.0001. We do not reduce the learning rate during training. At each step we sample a batch of sequences of 515 characters each, use the first 315 characters as context and predict only the latter 200 characters.

Table 2 lists recent results of various neural sequence models on the Wikipedia dataset. All the results except for the ByteNet result are obtained using some variant of the LSTM recurrent neural network (Hochreiter and Schmidhuber, 1997). The ByteNet Decoder achieves 1.33 bits/character on the test set.

Model Test
Stacked LSTM (Graves, 2013) 1.67
GF-LSTM (Chung et al., 2015) 1.58
Grid-LSTM (Kalchbrenner et al., 2016a) 1.47
Layer-normalized LSTM (Chung et al., 2016a) 1.46
MI-LSTM (Wu et al., 2016b) 1.44
Recurrent Highway Networks (Srivastava et al., 2015) 1.42
Recurrent Memory Array Structures (Rocki, 2016) 1.40
HM-LSTM (Chung et al., 2016a) 1.40
Layer Norm HyperLSTM (Ha et al., 2016) 1.38
Large Layer Norm HyperLSTM (Ha et al., 2016) 1.34
ByteNet Decoder 1.33
Table 2: Negative log-likelihood results in bits/byte on the Hutter Prize Wikipedia benchmark.
Model WMT Test ’14 WMT Test ’15
Phrase Based MT
RNN Enc-Dec
RNN Enc-Dec + reverse
RNN Enc-Dec Att
RNN Enc-Dec Att + deep (Zhou et al., 2016) 20.6
RNN Enc-Dec Att + local p + unk replace
RNN Enc-Dec Att + BPE in + BPE out

RNN Enc-Dec Att + BPE in + char out
GNMT + char in + char out (Wu et al., 2016a)
ByteNet 18.9 21.7
Table 3: BLEU scores on En-De WMT NewsTest 2014 and 2015 test sets. The ByteNet is character-level. The other models are word-level unless otherwise noted. Result (1) is from (Freitag et al., 2014), result (2) is from (Williams et al., 2015), results (3) are from (Luong et al., 2015) and results (4) are from (Chung et al., 2016b)
Figure 5: Lengths of sentences in characters and their correlation coefficient for the En-De and the En-Ru WMT NewsTest-2013 validation data. The correlation coefficient is similarly high for all other language pairs that we inspected.
At the same time, around 3000 demonstrators attempted to reach the official residency of
Prime Minister Nawaz Sharif.
Gleichzeitig versuchten rund 3000 Demonstranten, zur Residenz von Premierminister
Nawaz Sharif zu gelangen.
Gleichzeitig haben etwa 3000 Demonstranten versucht, die offizielle Residenz des
Premierministers Nawaz Sharif zu erreichen.
Just try it: Laura, Lena, Lisa, Marie, Bettina, Emma and manager Lisa Neitzel
(from left to right) are looking forward to new members.
Einfach ausprobieren: Laura, Lena, Lisa, Marie, Bettina, Emma und Leiterin Lisa Neitzel
(von links) freuen sich auf Mitstreiter.
Probieren Sie es aus: Laura, Lena, Lisa, Marie, Bettina, Emma und Manager Lisa Neitzel
(von links nach rechts) freuen sich auf neue Mitglieder.
He could have said, “I love you,” but it was too soon for that.
Er hätte sagen können “ich liebe dich”, aber dafür war es noch zu früh.
Er hätte sagen können: “I love you”, aber es war zu früh.
Table 4: Raw output translations generated from the ByteNet that highlight interesting reordering and transliteration phenomena. For each group, the first row is the English source, the second row is the ground truth German target, and the third row is the ByteNet translation.

6 Character-Level Machine Translation

We evaluate the full ByteNet on the WMT English to German translation task. We use NewsTest 2013 for development and NewsTest 2014 and 2015 for testing. The English and German strings are encoded as sequences of characters; no explicit segmentation into words or morphemes is applied to the strings. The outputs of the network are strings of characters in the target language. There are about 140 characters in each of the languages.

The ByteNet used in the experiments has 15 residual blocks in the source network and 15 residual blocks in the target network. As in the ByteNet Decoder, the residual blocks are arranged in sets of five with corresponding dilation rates of 1, 2, 4, 8 and 16. For this task we use residual blocks with ReLUs and Sub-BN (Fig. 3 Left). The number of hidden units is 892. The size of the kernel in the source network is 3, as is the size of the masked kernel in the target network. We use bags of character n-grams as additional embeddings at the source and target inputs: for n > 1 we prune all n-grams that occur fewer than 500 times. For the optimization we use Adam with a learning rate of 0.0003.

Each sentence is padded with special characters to the nearest greater multiple of 25. Each pair of sentences is mapped to a bucket based on the pair of padded lengths for efficient batching during training. We find that Sub-BN learns bucket-specific statistics that cannot easily be merged across buckets. We circumvent this issue by simply searching over possible target intervals as a first step during decoding with a beam search; each hypothesis uses Sub-BN statistics that are specific to a target length interval. The hypotheses are ranked according to the average likelihood of each character.
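The padding and bucketing step can be sketched as follows (reading "nearest greater multiple of 25" as the ceiling multiple, with lengths that are already a multiple left unchanged, which is an assumption):

```python
def padded_length(n, multiple=25):
    """Pad a sentence length up to the next multiple of `multiple`
    (ceiling; lengths already at a multiple are left unchanged)."""
    return -(-n // multiple) * multiple  # ceiling division

def bucket_key(src_len, tgt_len, multiple=25):
    """Sentence pairs are bucketed by their pair of padded lengths,
    so all pairs in a bucket can be batched without re-padding."""
    return (padded_length(src_len, multiple), padded_length(tgt_len, multiple))
```

Bucketing by the pair of padded lengths keeps batches rectangular while wasting at most 24 padding characters per sentence on each side.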

Table 3 contains the results of the experiments. We note that the lengths of the translations generated by the ByteNet are especially close to the lengths of the reference translations and do not tend to be too short; the brevity penalty in the BLEU scores is 0.995 and 1.0 for the two test sets, respectively. We also note that the ByteNet architecture seems particularly apt for machine translation. The correlation coefficient between the lengths of sentences from different languages is often very high (Fig. 5), an aspect that is compatible with the resolution preserving property of the architecture.

Table 4 contains some of the unaltered generated translations from the ByteNet that highlight reordering and other phenomena such as transliteration. The character-level aspect of the model makes post-processing unnecessary in principle. We further visualize the sensitivity of the ByteNet’s predictions to specific source and target inputs. Figure 6 represents a heatmap of the magnitude of the gradients of source and target inputs with respect to the generated outputs. For visual clarity, we sum the gradients for all the characters that make up each word and normalize the values along each column. In contrast with the attentional pooling mechanism (Bahdanau et al., 2014), this general technique allows us to inspect not just dependencies of the outputs on the source inputs, but also dependencies of the outputs on previous target inputs, or on any other neural network layers.

Figure 6: Magnitude of gradients of the predicted outputs with respect to the source and target inputs. The gradients are summed for all the characters in a given word. In the bottom heatmap the magnitudes are nonzero on the diagonal, since the prediction of a target character depends highly on the preceding target character in the same word.

7 Conclusion

We have introduced the ByteNet, a neural translation model that has linear running time, decouples translation from memorization and has short signal propagation paths for tokens in sequences. We have shown that the ByteNet Decoder is a state-of-the-art character-level language model based on a convolutional neural network that significantly outperforms recurrent language models. We have also shown that the ByteNet generalizes the RNN Enc-Dec architecture and achieves promising results for raw character-level machine translation while maintaining linear running time complexity. We have revealed the latent structure learnt by the ByteNet and found it to mirror the expected alignment between the tokens in the sentences.