Universal Vector Neural Machine Translation With Effective Attention

06/09/2020 ∙ by Satish Mylapore, et al. ∙ Southern Methodist University

Neural Machine Translation (NMT) leverages one or more trained neural networks for the translation of phrases. Sutskever introduced a sequence-to-sequence encoder-decoder model that became the standard for NMT-based systems. Attention mechanisms were later introduced to address issues with the translation of long sentences and to improve overall accuracy. In this paper, we propose a singular model for Neural Machine Translation based on encoder-decoder models. Most translation models are trained as one model for one translation direction. First, we introduce a neutral/universal model representation that can be used to predict more than one language depending on the source and a provided target. Second, we introduce an attention model that adds an overall learning vector to the multiplicative model. With these two changes, the novel universal model reduces the number of models needed for multiple-language translation applications.




1 Introduction

Neural Machine Translation (NMT) [15] is a significant recent development in large scale translation [14, 20]. The traditional translation model introduced by Koehn et al. 2003 [17] was trained as a single large model with components that are trained separately, requiring many resources and much effort. Today, most industry players have adopted a neural network based machine translation system derived from the Recurrent Neural Network (RNN) encoder-decoder model introduced by Cho et al. 2014 [3]. For machine translation, the encoder is used with the source language to encode the sentence input into a vector representation for the decoder. The decoder uses the encoded sequence to begin predicting the target sequence. This model was advanced by the introduction of different types of RNNs such as the LSTM (Long Short-Term Memory) [12, 25, 28, 30], the GRU (Gated Recurrent Unit) [3], and the Bi-RNN (Bidirectional RNN) [27], which were introduced to address the vanishing gradient problem [23] encountered during the training of simple recurrent neural networks.

Gated recurrent networks did not fully resolve the central problem of the encoder-decoder network [24]: the ability to learn and maintain information from the encoder for longer sentences. This is where attention mechanisms were introduced: Graves et al. 2014 based attention on the cosine similarity of the sentences [8], Bahdanau et al. 2014 concatenated the encoder and decoder information [1], and Luong et al. 2015 used the dot product of the encoder and decoder information to score the attention on the target sequence [20, 19]. The introduction of attention mechanisms increased the scalability of machine translation at the cost of performance during training.

The latest development in the machine translation space is the introduction of the Transformer model by Vaswani et al. 2017 [32]. The Transformer model relies on self-attention and dispenses with recurrent networks entirely. It applies self-attention in both the encoder and the decoder, and the encoding of the source sequences is done in parallel, which reduces training time significantly. The decoder prediction is auto-regressive, meaning it predicts one word at a time conditioned on its previous outputs. Vaswani claims that the results of the Transformer model show a significant improvement in prediction accuracy when compared to other recent models in the NMT space, demonstrated on a German translation task [32].

The Transformer model is still in the incubation and adoption stage in current industry practice, due in part to its restricted (fixed-length) context during translation. Furthermore, at present all RNN encoder-decoder based machine translation models still use a single model architecture per translation task. For example, if a task requires translation from Spanish to English, one model is trained; another model would be trained to translate from English to Spanish. One model corresponds to one translation task, hence separate models are required. In this research, we seek to build a singular model to translate multiple languages. For the purpose of this research we consider English-Spanish and Spanish-English translation using the same model.

All machine translation mechanisms to date use language specific encoders for each source language [10]. This paper details a novel method of hosting multiple neural machine translation tasks within the same model, as follows. Section 2 covers related work on the fundamental concepts of the sequence-to-sequence Recurrent Neural Network based Encoder-Decoder model and the additive attention model by Bahdanau, and wraps up with the Dual Learning method introduced by Microsoft. Section 3 outlines the architecture for the universal vector model and discusses each layer. Section 4 discusses the training method for the universal model, while Section 5 explains the dataset and how it is used for training. Section 6 is an overview of the BLEU score. The translation results of the Universal Vector are explained in Section 7, then Section 8 presents the analysis of the BLEU score, loss results, and attention model performance. Section 9 goes over limitations and potential future steps, with Section 10 discussing previously considered experiments. Finally, the paper concludes with some parting thoughts on the development of this novel model in Section 11.

2 Related Work

This section will go over the associated work related to building the Universal Vector Neural Machine Translation model. First, the Recurrent Neural Network based Encoder-Decoder models proposed by Sutskever et al. and Cho et al. will be discussed. Next, the attention mechanism first proposed by Bahdanau et al. will be detailed. Finally, the Dual Learning model training approach is explained.

2.1 Recurrent Neural Network Based Encoder-Decoder Models

Many NMTs are built upon the fundamental Recurrent Neural Network (RNN) based Encoder-Decoder model as proposed by Sutskever et al. (2014) and by Cho et al. (2014) [4, 3, 5, 31]. This model uses two networks, an encoder and a decoder, to learn sequences of information and make predictions. In this model a sequence of input x = (x_1, ..., x_T) is provided to the encoder, an RNN. An RNN allows outputs of iterations through a network to be passed on as input to future iterations [6, 18, 26, 29]. The input is processed word by word (x_t) over multiple iterations. Each iteration calculates a hidden state based on the current word in a phrase (x_t) and the hidden state of the previous iteration (h_{t-1}). This is represented at a high level in Equation 1 below, with a non-linear function f calculating hidden states at each position [1].

h_t = f(x_t, h_{t-1})    (1)

Once all hidden states have been calculated, a function q returns a single fixed-length context vector c with each hidden state as input, as in Equation 2 below. c represents the full summary of the output of the encoder network [1].

c = q({h_1, ..., h_T})    (2)

The output of the encoder, c, is then fed into the decoder, which is another trained RNN. The decoder emits the prediction y_t at each iteration t, where these conditional outputs come together as a probability distribution as in Equation 3 below [1].

p(y) = ∏_{t=1}^{T} p(y_t | {y_1, ..., y_{t-1}}, c) = ∏_{t=1}^{T} g(y_{t-1}, s_t, c)    (3)

Here g is another non-linear function that takes in the previously predicted word (y_{t-1}), the hidden state of the current iteration of the decoder network (s_t), and the context vector from above (c). y represents a predicted target sequence of words for a given input sequence of words with conditional probability p(y) [22]. This is the basis of the Encoder-Decoder model that has been used heavily in neural machine translation.
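As a concrete toy illustration of Equations 1-3, the sketch below implements the encoder recurrence, a context function q that simply returns the final hidden state, and one decoder step producing a probability distribution. All weights, dimensions, and the choice of tanh are illustrative assumptions, not the trained model described in this paper.

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.3):
    # Eq. 1: h_t = f(x_t, h_{t-1}), with f chosen as tanh here
    return [math.tanh(w_x * x + w_h * h_i) for h_i in h]

def encode(xs, hidden_dim=2):
    h, states = [0.0] * hidden_dim, []
    for x in xs:                      # process the source word by word
        h = rnn_step(x, h)
        states.append(h)
    return states

def context(states):
    # Eq. 2: c = q({h_1..h_T}); here q returns the final hidden state
    return states[-1]

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    total = sum(es)
    return [e / total for e in es]

def decode_step(y_prev, s, c, vocab_size=3):
    # Eq. 3: p(y_t | y_<t, c) = g(y_{t-1}, s_t, c); g is a toy linear map + softmax
    s = rnn_step(y_prev, [si + ci for si, ci in zip(s, c)])
    logits = [s[0] * (k + 1) - s[1] for k in range(vocab_size)]
    return softmax(logits), s

src = [0.1, 0.7, 0.3]                 # embedded source words x_1..x_3
c = context(encode(src))
probs, _ = decode_step(0.0, c, c)
print(round(sum(probs), 6))           # a valid distribution sums to 1.0
```

In a real model, f and g are learned networks and q may combine all hidden states; the control flow, however, mirrors the equations above.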

2.2 Attention Mechanism

Attention mechanisms have gained visibility recently as they are able to improve the performance of translation by helping the encoder and decoder align, providing guidance on which parts of a long sentence will be most useful in predicting the next word [1, 19, 32, 33]. In recent years many attention models have been introduced, such as that of Bahdanau et al. [1], which concatenates forward and backward information from the source (referred to as "concat" in Luong et al., 2015 [19] and as "additive attention" in Vaswani et al., 2017 [32]). This model changes the fundamental RNN Encoder-Decoder described above in a variety of ways.

The encoder is built using a bi-directional recurrent neural network that contains two models. Each model computes hidden states in either direction from a given input x_j. This yields two hidden states, h→_j and h←_j. These two hidden states are concatenated together to form a vector h_j, as in Equation 4 below, that represents the whole sentence emanating out from a given input word; these vectors are referred to as annotations [1].

h_j = [h→_j ; h←_j]    (4)

Due to the recency bias of RNNs, the words immediately surrounding a given input (x_j) will be better represented in the input word's annotation (h_j). This is reflected when calculating attention, which begins with a replacement for the fixed-length context vector mentioned in Section 2.1. A new context vector c_i is calculated for every output word y_i. This begins with a scoring function e_ij, which represents the importance of the hidden state output s_{i-1} from the previous iteration of the decoder to a given annotation h_j, as in Equation 5 below [1]. A higher score represents higher importance.

e_ij = a(s_{i-1}, h_j)    (5)

e_ij is then fed into a softmax function, found below in Equation 6, which returns weights α_ij that sum to one and represent the weight of each annotation with respect to the given position i of y [1].

α_ij = exp(e_ij) / Σ_{k=1}^{T} exp(e_ik)    (6)

Finally, the context vector c_i unique to each word y_i output by the decoder is calculated with the summation found in Equation 7 below.

c_i = Σ_{j=1}^{T} α_ij h_j    (7)

Vector c_i is used in the calculation of the hidden states of the decoder, found in Equation 8.

s_i = f(s_{i-1}, y_{i-1}, c_i)    (8)

s_i, the previously predicted word y_{i-1}, and c_i are then used in calculating the output of each iteration of the decoder at step i, as in Equation 9 below. The output is a vector of probabilities over each possible word that could be predicted at position i. The context vector weighs input positions j that scored higher importance from e_ij in Equation 5 more heavily than others, which represents attention. This is in contrast to taking the whole vector of input words into account at every i-th position of y [1].

p(y_i | y_1, ..., y_{i-1}, x) = g(y_{i-1}, s_i, c_i)    (9)
This is an early implementation of attention proposed by Bahdanau et al. 2014 [1]. Many other forms of attention have been proposed since. Luong et al. refer to Bahdanau's attention mechanism as "global attention." In turn, Luong et al. proposed a "local attention" method that focuses on smaller portions of context instead of applying attention weights over the entire source text [19]. The new attention mechanism proposed in this paper combines the two.
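The attention computation in Equations 5-7 can be sketched as follows. Here the alignment function a is simplified to a dot product (Bahdanau's paper uses a small feed-forward network), and the 2-dimensional annotations are toy values.

```python
import math

def score(s_prev, h_j):
    # Eq. 5: e_ij = a(s_{i-1}, h_j); a simplified here to a dot product
    return sum(a * b for a, b in zip(s_prev, h_j))

def attention_weights(s_prev, annotations):
    # Eq. 6: softmax over the scores yields weights that sum to 1
    e = [score(s_prev, h) for h in annotations]
    m = max(e)
    exps = [math.exp(v - m) for v in e]
    z = sum(exps)
    return [v / z for v in exps]

def context_vector(s_prev, annotations):
    # Eq. 7: c_i = sum_j alpha_ij * h_j
    alphas = attention_weights(s_prev, annotations)
    dim = len(annotations[0])
    return [sum(a * h[d] for a, h in zip(alphas, annotations)) for d in range(dim)]

annotations = [[0.2, 0.9], [0.8, 0.1], [0.5, 0.5]]   # toy annotations h_1..h_3
s_prev = [1.0, 0.0]                                   # previous decoder state
alphas = attention_weights(s_prev, annotations)
c_i = context_vector(s_prev, annotations)
print(round(sum(alphas), 6))   # 1.0
```

Note that the annotation most similar to the decoder state receives the largest weight, which is exactly the "focus" behavior the heat maps in Section 8.3 visualize.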

2.3 Dual Learning

In a paper from Microsoft Research [9], the team proposed a dual learning mechanism to handle the complexities of training data labeling. The dual learning mechanism considers two agents: one agent for the forward translation model (source to target language) and a second agent for the dual translation (target to source language). These models use two different corpora for training, which are not parallel data sets. This enables reinforcement learning for the convergence of source and target language. The inputs considered in the Microsoft Research paper are "Monolingual corpora D_A and D_B, initial translation models Θ_AB and Θ_BA, language models LM_A and LM_B, hyper-parameter α, beam search size K, learning rates γ_1,t, γ_2,t." [9] The experiment used to test dual training uses two separate models, one for each translation direction.

In this paper, there are two contributions based on RNN Encoder-Decoder based machine translation. First, a neutral/universal vector representation for machine translation is introduced. Then a modified attention mechanism based on the global attention mechanism proposed by Luong et al. [19] and Bahdanau et al. [1] is discussed. Finally, testing of the proposed neutral vector representation with the modified attention mechanism is examined and the results are presented.

3 Model Architecture

The architecture of the model is built on top of the basic sequence to sequence model and modified to translate more than one language. A high level architecture diagram is found in Fig. 1 below. This model contains two networks, an encoder and a decoder, with embedded inputs and outputs for each. It also contains the modified attention mechanism and a Fully Connected Layer. In the current structure, the source text is inserted into the Input Embedding layer which contains the Encoder RNN. There are multiple Input Embedding Layers to handle different source texts such as Spanish, English, German, etc. From the Input Embedding layer, the results (context vectors) are fed into the Target Embedding layer which contains the Decoder RNN along with the modified Attention layer. As is the case with the Encoder portion of the system, there are multiple Target Embedding layers for multiple target languages. Lastly, the output from the Target Embedding layer is passed into the Target Fully Connected layer. The result is a vector of probabilities for words in the target language. From this vector, the predicted phrase is converted from a numeric vector representation to words in a natural language.

Figure 1: Model architecture detailing the encoder and decoder networks and their inputs.
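The routing described above can be sketched as follows: one shared encoder/decoder core with per-language input embedding and output layers selected by language code. Everything here (the layer stand-ins and the deterministic toy embedding) is a hypothetical illustration of the data flow, not the trained model.

```python
def toy_embedding(layer_name):
    # deterministic stand-in for a trained, language-specific embedding layer
    return lambda tokens: [sum(ord(ch) for ch in layer_name + t) % 97 / 97.0
                           for t in tokens]

LANGS = ("en", "es")
input_embeddings = {l: toy_embedding("in-" + l) for l in LANGS}    # per source language
output_layers = {l: (lambda vec, l=l: (l, vec)) for l in LANGS}    # per target language

def translate(tokens, src, tgt):
    embedded = input_embeddings[src](tokens)       # language-specific input embedding
    context = sum(embedded) / len(embedded)        # stand-in for the shared encoder
    decoded = [context] * len(tokens)              # stand-in for the shared decoder
    return output_layers[tgt](decoded)             # language-specific output layer

lang, vec = translate(["this", "is", "my", "life"], src="en", tgt="es")
print(lang, len(vec))
```

The design point is that only the embedding and output layers are keyed by language; the encoder, decoder, and attention weights in the middle are shared across all translation directions.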

3.1 Embedding Layers

The model starts with an encoder layer to generate a vector that is fed to the decoder to generate predictions in a target language. Embedding vectors for the encoder are built as layers, which are considered the source input to the encoding layer, as in Equation 10 below.

E = {E_1, E_2, ..., E_n}    (10)

Here, E is the embedding vector and each number n in E_n represents a different language used as a source for translation. This is used as the first layer in the encoder network. Similarly, an embedding layer for the decoder is built as in Equation 11 below.

D = {D_1, D_2, ..., D_n}    (11)

D is the embedding vector and each number n in D_n represents a different language used as the target prediction.

3.2 Attention Layer

The modified attention mechanism considers a context vector created by the encoder. This vector is created from all the hidden vectors of the hidden states during the encoding phase and carries a representation of each word from the source. The attention score used to predict each target word is calculated as the dot product of the hidden value of each prediction and the encoded output [32]. This scoring mechanism is based on the global attention method proposed by Luong et al. [19]. Learning weights W are introduced into the dot product score, which is calculated using Equation 12 below. The purpose of this is to learn the overall weights of the dot product score.

score(h_t, h̄_s) = h_t^T W h̄_s    (12)

The context vector is computed by taking the dot product with the encoder output. This adds global alignment to the context vector, which is used to estimate the score of the next prediction.

The attention mechanism is used to align decoder predictions of the target vector. The attention weights for each target language are defined as in Equation 13 below, where α_l is the attention weight and l is the target language.

α = {α_1, α_2, ..., α_l}    (13)
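A minimal sketch of the weighted dot-product score, assuming the form score(h_t, h̄_s) = h_tᵀ W h̄_s described above, with a fixed toy matrix W standing in for the learned overall weights:

```python
def mat_vec(W, v):
    # multiply a matrix (list of rows) by a vector
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def weighted_dot_score(h_t, W, h_s):
    # assumed form of Eq. 12: score = h_t^T W h_s
    return sum(a * b for a, b in zip(h_t, mat_vec(W, h_s)))

W = [[1.0, 0.2],        # toy "learned" weights, not trained values
     [0.2, 1.0]]
h_t, h_s = [0.5, -0.5], [0.3, 0.7]
s = weighted_dot_score(h_t, W, h_s)
print(round(s, 2))   # -0.16
```

When W is the identity matrix this reduces to Luong's plain dot-product score, so the learned weights can be seen as a trainable generalization of that baseline.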
3.3 Fully Connected Layer

The last layer is a fully connected layer, which is sized to the vocabulary of the target language, as seen in Equation 14 below, where FC_l is a connected layer and l is each target language. The purpose of the fully connected layer is to act as a classifier for each targeted translated text.

FC = {FC_1, FC_2, ..., FC_l}    (14)
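A toy version of such a per-language output layer: a linear map from the decoder state to the target vocabulary followed by a softmax. The vocabulary and weights below are hypothetical stand-ins for a trained FC_l.

```python
import math

def fully_connected_softmax(state, weights, biases):
    # linear layer sized to the target vocabulary, then softmax
    logits = [sum(w * s for w, s in zip(row, state)) + b
              for row, b in zip(weights, biases)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

vocab_es = ["esta", "es", "mi", "vida"]                 # toy Spanish vocabulary
W = [[0.1, 0.9], [0.9, 0.2], [0.4, 0.4], [0.6, 0.3]]    # |vocab| x state_dim
b = [0.0, 0.0, 0.0, 0.0]
probs = fully_connected_softmax([0.5, 0.5], W, b)
print(vocab_es[probs.index(max(probs))])   # es
```

One such layer per target language lets the shared decoder state be classified into whichever vocabulary the requested target demands.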
4 Model Training

The model training process considers training for each set of translations in a sequence. For this experiment, the Spanish and English languages are considered for training, with Gated Recurrent Units (GRUs) as the recurrent unit to address long-term dependencies [12, 2, 13, 11]. Here W_1, W_2, W_3, and W_4 all act as attention weight matrices, E_es is the embedding layer for Spanish and E_en is the embedding layer for English, and FC_es is the fully connected layer for Spanish and FC_en is the fully connected layer for English. The weight matrices for each gate in the GRU are represented as W_z and U_z for the update gates, W_r and U_r for the relevance (reset) gates, and W_h and U_h for the context gates; h and h̄ represent the hidden vectors. The weights are initialized using the Glorot Uniform Initializer [7]. The Adam optimization algorithm is used with a learning rate and a decayed learning rate. Loss is measured using a discrete classification methodology that leverages the sparse softmax cross-entropy with logits loss. Spanish-English is used as the parallel dataset, where each example of language is trained in parallel.
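The loss named above, sparse softmax cross-entropy with logits, takes an integer class label and a logits vector and computes -log softmax(logits)[label]. A hand-rolled sketch with illustrative values:

```python
import math

def sparse_softmax_cross_entropy(logits, label):
    # numerically stable: loss = log(sum(exp(logits))) - logits[label]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[label]     # equals -log p(label)

loss = sparse_softmax_cross_entropy([2.0, 0.5, -1.0], label=0)
print(round(loss, 4))   # ~0.2413: the correct class already dominates the logits
```

"Sparse" refers to the label being a single integer index rather than a one-hot vector, which avoids materializing a vocabulary-sized target for every prediction.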

A pseudo algorithm of the training process is given below.

Require: Parallel dataset with phrases in both Spanish (es) and English (en)
1: repeat
2:     for all phrase pairs (x_es, x_en) in the dataset do
3:         Block 1: Encode example x_es and compute the encoder GRU layer using W_z, U_z, W_r, U_r
4:         Decode to predict y_en (English) using the encoder output and E_en
5:         Compute the attention score and context vector for the alignment model
6:         Compute W_1, W_2, W_3, W_4 for each prediction
7:         Compute FC_en
8:         Compute loss using the sparse softmax cross-entropy with logits loss
9:         Block 2: Encode example x_en and compute the encoder GRU layer using W_z, U_z, W_r, U_r
10:        Decode to predict y_es (Spanish) using the encoder output and E_es
11:        Compute the attention score and context vector for the alignment model
12:        Compute W_1, W_2, W_3, W_4 for each prediction
13:        Compute FC_es
14:        Compute loss using the sparse softmax cross-entropy with logits loss
15:        Compute total loss
16:        Optimize using Adam optimization with a learning rate and a decayed learning rate
17:    end for
18: until all phrases have been processed
Algorithm 1: Model Training Process

This process is repeated for all the examples. Here, we keep the encoder and decoder the same for all the languages that are trained for prediction. If another translation is added, then the blocks are repeated for each language.

5 Dataset

Parallel datasets for Spanish and English are used for training of the Universal Vector model. Data is taken from Many Things, an online resource for English as a Second Language students. (The primary source of the dataset used in this study, along with many more language pairings, can be found at http://www.manythings.org/anki/.) We used a copy hosted by the TensorFlow team at http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip [16]. It contains 122,936 pairs of phrases in English and a corresponding Spanish translation.

5.1 Training

The universal vector model is trained using a modified version of the Dual Training method proposed by Xia et al. [9]. The model is trained in sequence for each training example, first Spanish to English and then English to Spanish, for every iteration over the dataset mentioned above. Sample phrases in both English and Spanish were used to test the predictive ability of the network in both directions. The model was trained at 20, 30, and 40 epochs to gauge the effectiveness of the model as the amount of training increases.

6 BLEU Score

A Bilingual Evaluation Understudy (BLEU) score was used as a metric to determine the effectiveness of our NMT. BLEU was developed as a replacement for human-based validation of machine translation, which had become an expensive bottleneck due to the need for language expertise. The formula to calculate the score is language independent, does not need to be trained, and is able to mimic human evaluation. The function takes in the translated sentence and one or more reference sentences against which it will be compared. Groups of words, or n-grams, in the translated sentence to be evaluated are matched with n-grams in the reference sentences.

The first step in the scoring process is to calculate a precision score from the number of matching n-grams between the evaluated sentence and the reference sentences. This count, clipped so that an n-gram is not credited more times than it appears in any reference, is divided by the total count of n-grams in the candidate translation, as in Equation 15 below [21].

p_n = (Σ clipped matched n-grams) / (Σ candidate n-grams)    (15)

Another consideration when determining a score for a translation is the length of the output. There are many ways to say the same thing in most languages, but using too many words can introduce ambiguity and using too few words may not provide enough nuance.

Penalties are in place to ensure sentences of proper length score better. The precision score equation has a built-in penalty for candidate sentences that are too long, as more n-grams will increase the denominator and lead to a smaller score. For translations that are too short, a penalty is introduced in the form of a Brevity Penalty (BP), as in Equation 16 [21] below. r is the length of the reference sentence that is closest in length to the translated sentence being evaluated, and c is the length of the candidate sentence. If the candidate is at least as long as the reference, the BP is 1 and no penalty is assessed; otherwise a penalty is assessed according to an exponentiation of 1 - r/c.

BP = 1 if c > r;  BP = e^(1 - r/c) if c ≤ r    (16)

The overall BLEU score for a candidate sentence is the product of the brevity penalty and the exponential of the weighted sum of the logs of the precision scores, where each positive weight w_n is based on the number N of n-gram sizes such that Σ w_n = 1. The overall score is found using Equation 17 below. Equation 18 is a log-domain form of the equation that provides values that are more easily ranked among other candidate translated sentences.

BLEU = BP · exp(Σ_{n=1}^{N} w_n log p_n)    (17)

log BLEU = min(1 - r/c, 0) + Σ_{n=1}^{N} w_n log p_n    (18)
The NLTK BLEU score package is used for evaluation of the model. (Documentation for the BLEU score functionality can be found at https://www.nltk.org/_modules/nltk/translate/bleu_score.html.)
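For reference, the core of the computation in Equations 15-17 can be written by hand. This toy version assumes uniform weights w_n = 1/N and a single reference, and is a sketch rather than NLTK's smoothed implementation.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, reference, n):
    # Eq. 15: clipped n-gram matches over candidate n-gram count
    cand, ref = ngram_counts(candidate, n), ngram_counts(reference, n)
    matched = sum(min(c, ref[g]) for g, c in cand.items())
    return matched / max(sum(cand.values()), 1)

def bleu(candidate, reference, max_n=2):
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / max(c, 1))          # Eq. 16
    ps = [modified_precision(candidate, reference, n) for n in range(1, max_n + 1)]
    if min(ps) == 0:
        return 0.0                                              # log p_n undefined at 0
    return bp * math.exp(sum(math.log(p) / max_n for p in ps))  # Eq. 17

ref = "they abandoned their country".split()
print(bleu("they abandoned their country".split(), ref))   # 1.0 (exact match)
print(bleu("they left".split(), ref) < 1.0)                # True (short, no bigram match)
```

The hard zero when no bigram matches is exactly the behavior that makes raw BLEU uninformative on the very short sentences discussed in Sections 7 and 8.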

7 Results

The following section will cover the translation results obtained from the Universal Vector model. It will discuss the translations from English to Spanish and Spanish to English.

7.1 Translations

Example phrases in each language were fed to the model. Two example pairs of phrases are found in Table 1 below.

English | Spanish
They abandoned their country | Ellos abandonaron su país
This is my life | Esta es mi vida
Table 1: Example phrases used for testing

The results of the English to Spanish task can be found in Table 2 below. In the case of our model, a BLEU score cannot capture the accuracy since it is based on matching n-grams; the sentences were too short to yield anything larger than matching bigrams, which is too little for the scoring algorithm. The result of the first phrase perfectly matched the reference sentence found in Table 1. The output of the second phrase switched the gender of the Spanish word for "this" from "esta" to "esto". Without more context before a phrase, the model is not able to consistently determine the genders of specific words.

English Input | Spanish Output
They abandoned their country | Ellos abandonaron su país
This is my life | Esto es mi vida
Table 2: English input and Spanish output

When English and Spanish are flipped, the model provides similar results. The resulting English outputs can be found below in Table 3. Small differences are present, again the kind of gender and possessive slips that short sentences are expected to yield without proper context for pronouns.

Spanish Input | English Output
Ellos abandonaron su país | They abandoned his country
Esta es mi vida | This is my life
Table 3: Spanish input and English output

Sentences longer than four or five words yielded very poor results. This is due to the small dataset and low number of training iterations compared with other papers in the NMT space, such as most of those cited here. With a larger dataset and more training time the model would better handle longer phrases.

8 Model Analysis

The following section covers the analysis; the subsections that follow discuss the BLEU score, the loss analysis, and the attention model, respectively.

8.1 BLEU Score Analysis

Applying the BLEU score to the Universal Vector Model resulted in unfavorable scores. Table 4 shows the results of the BLEU score from Spanish to English and English to Spanish. BLEU score calculations are provided as part of this work to show the minimum capability of this model to translate more than one language using a single universal model. The scores from this work should not be compared with other translation models like BERT and other Transformer based models [32, 34]. There are two main reasons for this. First, the tested sentences were short in length. Second, the short sentences did not meet the minimal n-gram length of 2 for proper scoring. The use of longer sentences could have solved these issues; however, the model had difficulty translating longer sentences at the level of training we were able to accomplish in the time given (60 epochs).

Sentence | Direction | BLEU Score
esto es mi vida. | English to Spanish | 8.3882e-155
this is my life. | Spanish to English | 6.8681e-78
Table 4: BLEU Score Results

8.2 Loss Analysis

Figure 2: Loss analysis by epoch during training of the model.

Since the BLEU score could not properly capture model accuracy for testing, more attention was placed on minimizing loss. The loss explains how well the model is performing by minimizing error; a lower loss correlates with a better performing model. Figure 2 shows the performance of the universal vector model by loss and training time over each epoch. The results of the model are shown between 40 and 60 epochs to show where the loss curve flattens. The figure shows that the loss gradually declined between epochs 41 and 48 and began to stabilize thereafter.

At the 60th epoch, a loss of 0.6963 was obtained, which was sufficient to translate short sentences. The model struggled with training performance with respect to time between epochs 40 and 50. The exponential jump in time could potentially be due to the model struggling to reach a local minimum during optimization. Overall, the loss obtained is sufficient to translate short sentences and shows that the universal vector model can translate words with minimal error.

8.3 Attention Model Analysis

Heat maps were created to visualize how the attention mechanism directed the focus of the decoder when predicting the corresponding text in a translation. The diagram has each word in the source language across the top and each word in the predicted sentence in the target language on the left axis. Fig. 3 below was generated when the Spanish phrase "Esta es mi vida" was fed into the model. On the left is the output of the model, which is a prediction of the English translation. As a visual reminder, the heat map does not necessarily show how words are correlated from source to target; instead, it gives insight into the parts of the input that the attention model focuses on when translating. For example, the yellow box in the upper left shows heavy focus on the Spanish word "esta" when the model predicts the English word "this". From there, the heavy areas of focus follow a diagonal line down and to the right. This means that as the decoder moves on to predict words later in the sentence, the focus is directed to later parts of the source sentence, which is generally good. Longer sentences would show more defined and more varied areas of heat as they become more complicated. Overall, the maps generated from the small sentence sizes the model can handle show potential that the modified attention mechanism is working as intended.

Figure 3: Heat map showing areas of focus from Spanish to English.

9 Limitations and Future Expansion

For further model experimentation on translation of more than two languages, a parallel dataset containing a triad of language phrases is required. While the architecture and model in this experiment were created to handle more than two languages, we only consider using a single model for two languages. As of today, most parallel datasets available are bilingual. In the future, a parallel dataset with three or more languages will be used to train and modify the current universal vector representation model. Furthermore, larger datasets will be used with more training iterations, akin to other papers in the NMT space. A more standardized test, such as those provided by the annual Workshop on Machine Translation, can then be used on the model's translated text.

10 Previously Considered Experiments

Connected Learning was the first attempt at a novel proposal. At the time of the initial research, there were no other papers proposing the methods that made up this new idea. This method would allow the weights to learn the source and target language as a format. First the model is trained in the direction Source → Target, then immediately trained again in the direction Target → Source, and finally the weights are retrained from Source → Target.

In connected learning, training is done on the source sequence of vectors X and the target sequence of vectors Y. For each of the sequence pairs of vectors, the source and target are swapped twice, utilizing the hidden output as the input when swapped. For example, if vector sequence X represents Spanish and vector sequence Y is English, the model would first generate the context vector and predictions for Y, use that context vector when combining the hidden state of the recurrent network, and then provide vector sequence Y as the source to generate values for X.

The belief was that the weights in the contextual information would have all the target information, however the model could not converge to a local optimum point where it was aligned to both source and target information.

11 Conclusion

In this paper, the idea of a "Universal Vector" is proposed as a new facet of NMT that can be used to translate between multiple languages in the same vector space. Models are usually built to translate in one direction. Some work has been done using both directions between a source and target language for reinforcement learning of training sets. However, the "Universal Vector" model is a singular model that can be trained in both directions (source to target and target to source) for more than one pair of languages.

The ”Universal Vector” model detailed in this paper was built to test the proposition by modifying an RNN based Encoder-Decoder model. Existing attention mechanisms were also modified and used to create context vectors that increased performance in predicting the next translated text for overall target phrase translation. Multiple fully connected layers are added, one for each target language, to facilitate translations into multiple target languages.

The model is trained with parallel English and Spanish datasets. Phrases from both languages are trained from English to Spanish and Spanish to English within a recurrent network using Dual Training based methods. It was tested with many examples of both Spanish and English phrases. The attention mechanism was evaluated by viewing heat maps of where the model selectively focused on input text for its corresponding translated text.

While the results are promising, more time and resources would improve them. With more computing power the model can be trained on more words and more languages in a reasonable amount of time. In the future, better accepted benchmarks in translation, such as those provided by the annual Workshop on Machine Translation, can be used. While limited in scope, these results point to the potential for greater accuracy in using a singular model for translating between multiple languages.


  • [1] D. Bahdanau, K. Cho, and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. Cited by: §1, §2.1, §2.1, §2.1, §2.2, §2.2, §2.2, §2.2, §2.2, §2.2, §2.3.
  • [2] Y. Bengio, P. Simard, and P. Frasconi (1994) Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5 (2), pp. 157–166. Cited by: §4.
  • [3] K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio (2014) On the properties of neural machine translation: encoder-decoder approaches. Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. External Links: Link, Document Cited by: §1, §2.1.
  • [4] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). External Links: Link, Document Cited by: §1, §2.1.
  • [5] M. L. Forcada and R. P. Ñeco (1997) Recursive hetero-associative memories for translation. In Proceedings of the International Work-Conference on Artificial and Natural Neural Networks: Biological and Artificial Computation: From Neuroscience to Technology, IWANN ’97, Berlin, Heidelberg, pp. 453–462. External Links: ISBN 3540630473 Cited by: §2.1.
  • [6] F. A. Gers, J. Schmidhuber, and F. Cummins (1999) Learning to forget: continual prediction with lstm. In 1999 Ninth International Conference on Artificial Neural Networks ICANN 99. (Conf. Publ. No. 470), Vol. 2, pp. 850–855 vol.2. Cited by: §2.1.
  • [7] X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Y. W. Teh and M. Titterington (Eds.), Proceedings of Machine Learning Research, Vol. 9, Chia Laguna Resort, Sardinia, Italy, pp. 249–256. External Links: Link Cited by: §4.
  • [8] A. Graves, G. Wayne, and I. Danihelka (2014) Neural turing machines. CoRR abs/1410.5401. External Links: Link, 1410.5401 Cited by: §1.
  • [9] D. He, Y. Xia, T. Qin, L. Wang, N. Yu, T. Liu, and W. Ma (2016) Dual learning for machine translation. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, USA, pp. 820–828. External Links: ISBN 978-1-5108-3881-9, Link Cited by: §2.3, §5.1.
  • [10] K. M. Hermann and P. Blunsom (2014-04) Multilingual distributed representations without word alignment. In Proceedings of ICLR, External Links: Link Cited by: §1.
  • [11] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber (2001) Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. Cited by: §4.
  • [12] S. Hochreiter and J. Schmidhuber (1997-12) Long short-term memory. Neural computation 9, pp. 1735–80. External Links: Document Cited by: §1, §4.
  • [13] S. Hochreiter (1991-04) Untersuchungen zu dynamischen neuronalen Netzen. Cited by: §4.
  • [14] S. Jean, K. Cho, R. Memisevic, and Y. Bengio (2015) On using very large target vocabulary for neural machine translation. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). External Links: Link, Document Cited by: §1.
  • [15] N. Kalchbrenner and P. Blunsom (2013-10) Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA, pp. 1700–1709. External Links: Link Cited by: §1.
  • [16] C. Kelly (2020-03) Tab-delimited Bilingual Sentence Pairs. (English). External Links: Link Cited by: §5.
  • [17] P. Koehn, F. J. Och, and D. Marcu (2003) Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pp. 127–133. External Links: Link Cited by: §1.
  • [18] Z. C. Lipton (2015) A critical review of recurrent neural networks for sequence learning. CoRR abs/1506.00019. External Links: Link, 1506.00019 Cited by: §2.1.
  • [19] T. Luong, H. Pham, and C. D. Manning (2015) Effective approaches to attention-based neural machine translation. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. External Links: Link, Document Cited by: §1, §2.2, §2.2, §2.3, §3.2.
  • [20] T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba (2014) Addressing the rare word problem in neural machine translation. CoRR abs/1410.8206. External Links: Link, 1410.8206 Cited by: §1, §1.
  • [21] K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002-07) Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, pp. 311–318. External Links: Link, Document Cited by: §6, §6.
  • [22] R. Pascanu, C. Gulcehre, K. Cho, and Y. Bengio (2014) How to construct deep recurrent neural networks. In Proceedings of the Second International Conference on Learning Representations (ICLR 2014), (English (US)). Cited by: §2.1.
  • [23] R. Pascanu, T. Mikolov, and Y. Bengio (2013) On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28, ICML’13, pp. III–1310–III–1318. Cited by: §1.
  • [24] J. Pouget-Abadie, D. Bahdanau, B. van Merriënboer, K. Cho, and Y. Bengio (2014-10) Overcoming the curse of sentence length for neural machine translation using automatic segmentation. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, pp. 78–85. External Links: Link, Document Cited by: §1.
  • [25] D. E. Rumelhart, G. E. Hinton, and R. J. Williams (1988) Learning representations by back-propagating errors. In Neurocomputing: Foundations of Research, pp. 696–699. External Links: ISBN 0262010976 Cited by: §1.
  • [26] H. Salehinejad, J. Baarbe, S. Sankar, J. Barfett, E. Colak, and S. Valaee (2018) Recent advances in recurrent neural networks. CoRR abs/1801.01078. External Links: Link, 1801.01078 Cited by: §2.1.
  • [27] M. Schuster and K. K. Paliwal (1997) Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45 (11), pp. 2673–2681. Cited by: §1.
  • [28] H. Schwenk (2013-01) CSLM - a modular open-source continuous space language modeling toolkit. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 1198–1202. Cited by: §1.
  • [29] A. Sherstinsky (2018) Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. CoRR abs/1808.03314. External Links: Link, 1808.03314 Cited by: §2.1.
  • [30] M. Sundermeyer, R. Schlüter, and H. Ney (2012-09) LSTM neural networks for language modeling. Cited by: §1.
  • [31] I. Sutskever, O. Vinyals, and Q. V. Le (2014) Sequence to sequence learning with neural networks. In NIPS, Cited by: §2.1.
  • [32] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin (2017) Attention is all you need. In NIPS, Cited by: §1, §2.2, §3.2, §8.1.
  • [33] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, Ł. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean (2016) Google’s neural machine translation system: bridging the gap between human and machine translation. CoRR abs/1609.08144. External Links: Link Cited by: §2.2.
  • [34] J. Zhu, Y. Xia, L. Wu, D. He, T. Qin, W. Zhou, H. Li, and T. Liu (2020) Incorporating bert into neural machine translation. External Links: 2002.06823 Cited by: §8.1.