Neural networks for sequence-to-sequence modeling typically consist of an encoder and a decoder coupled via an attention mechanism. Whereas the very first deep models used stacked recurrent neural networks (RNNs) (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015) in the encoder and decoder, the recent Transformer model (Vaswani et al., 2017) constitutes the current state-of-the-art approach, owing to its better context modeling via multi-head self- and cross-attention.
Given an encoder-decoder architecture and its hyper-parameters, such as the number of encoder and decoder layers, vocabulary sizes (in the case of text-based models), and hidden layers, the parameters of the model, i.e., matrices and biases for non-linear transformations, are optimized by iteratively updating them so that the loss on the training data is minimized. The hyper-parameters can also be tuned, for instance, by maximizing the automatic evaluation score on the development data. However, in general, it is highly unlikely (or impossible) that a single optimized model satisfies diverse cost-benefit demands at the same time. For instance, in practical low-latency scenarios, one may accept some performance drop in exchange for speed. However, a model used with only a subset of its optimized parameters might perform badly. A single optimized model also cannot guarantee the best performance for each individual input. Although this is practically important, it has drawn only a little attention. An existing solution for this problem is to host multiple models simultaneously for flexible choice. However, this approach is not very practical, because it requires an unreasonably large quantity of resources. Furthermore, there are no well-established methods for selecting an appropriate model for each individual input prior to decoding.
As a more effective solution, we consider training a single model that subsumes multiple models and can be used for decoding with different hyper-parameter settings depending on the input or on the latency requirements. In this paper, we focus on the number of layers as an important hyper-parameter that impacts both speed and quality of decoding, and propose a multi-layer softmaxing method, which trains multi-layer neural models by referring to the outputs of all layers during training. Conceptually, as illustrated in Figure 1, this method involves tying (sharing) the parameters of multiple models with different numbers of layers, and it is not specific to particular types of multi-layer neural models. On top of the above method, we consider exploiting the model further. To save decoding time, we design and evaluate mechanisms to choose an appropriate number of layers depending on the input, prior to decoding. For further model compression, we leverage other orthogonal types of parameter-tying approaches, such as those reviewed in Section 2.
Despite the generality of our proposed method, in this paper, we focus on encoder-decoder models with N encoder and M decoder layers, and compress N×M models (rather than casting the encoder-decoder model into a single-column model with N+M layers) by updating the model with a total of N×M losses, computed by softmaxing the output of each of the M decoder layers, where each decoder layer attends to the output of each of the N encoder layers. The number of parameters of the resultant encoder-decoder model is equivalent to that of the most complex subsumed model with N encoder and M decoder layers. Yet, we can now perform faster decoding using fewer layers, given that the shallower layers are better trained.
To evaluate our proposed method, we take the case study of neural machine translation (NMT) (Cho et al., 2014; Bahdanau et al., 2015), using the Transformer model (Vaswani et al., 2017), and demonstrate that a single model with 6 encoder and 6 decoder layers trained by our method can be used for flexible decoding with fewer encoder and decoder layers without appreciable quality loss. We evaluate our proposed method on the WMT18 English-to-German translation task, and give a cost-benefit analysis of translation quality vs. decoding speed.
The rest of the paper is organized as follows. Section 2 briefly reviews related work for compressing neural models. Section 3 covers our method that ties multiple models by softmaxing all encoder-decoder layer combinations. Section 4 describes our efforts towards designing and evaluating a mechanism for dynamically selecting encoder-decoder layer combinations prior to decoding. Section 5 describes two orthogonal extensions to our model aiming at further model compression and speeding-up of decoding. The paper ends with Section 6 containing conclusion and future work.
2 Related Work
There are studies that exploit multiple layers simultaneously. Wang et al. (2018) fused hidden representations of multiple layers in order to improve the translation quality. Belinkov et al. (2017) and Dou et al. (2018) attempted to identify which layer generates useful representations for different natural language processing tasks. Unlike them, we make all layers of the encoder and decoder usable for decoding with any encoder-decoder layer combination. In practical scenarios, we can save significant amounts of time by choosing shallower encoder and decoder layers for inference.
Our method ties the parameters of multiple models, which is orthogonal to work that ties parameters between layers (Dabre and Fujita, 2019) and/or between the encoder and decoder within a single model (Xia et al., 2019; Dabre and Fujita, 2019). Parameter tying leads to compact models, but such models usually suffer from drops in inference quality. In this paper, we counter such drops with knowledge distillation (Hinton et al., 2015; Kim and Rush, 2016; Freitag et al., 2017). This approach utilizes smoothed data or smoothed training signals instead of the actual training data: a model with a large number of parameters and high performance provides smoothed distributions, which are then used as labels for training small models instead of one-hot vectors.
As one of the aims of this work is model size reduction, it is related to a growing body of work on reducing computational requirements. Pruning of pre-trained models (See et al., 2016) makes it possible to discard around 80% of the smallest weights of a model without deterioration in inference quality, provided the model is re-trained with appropriate hyper-parameters after pruning. Currently, most deep learning implementations use 32-bit floating point representations, but 16-bit floating point representations (Gupta et al., 2015; Ott et al., 2018) or aggressive binarization (Courbariaux et al., 2017) can be alternatives. Compact models are usually faster to decode; studies on quantization (Lin et al., 2016) and average attention networks (Xiong et al., 2018) address this topic.
To the best of our knowledge, none of the above work has attempted to combine multi-model parameter tying, knowledge distillation, and dynamic layer selection for obtaining and exploiting highly-compressed and flexible deep neural models.
3 Multi-Layer Softmaxing
3.1 Proposed Method
Figure 1 gives a simple overview of the concept of multi-layer softmaxing, taking a generic model as an example. The rightmost 4-layer model takes an input and passes it through 4 layers (we make no assumptions about the nature of the layers)
before a softmax layer to predict the output. Typically, one would apply softmax to the 4th layer only, compute loss, and then back-propagate gradients in order to update parameters. Instead, we propose to apply softmax to each layer, aggregate the computed losses, and then perform back-propagation. This enables us to choose any layer combination during decoding instead of only the topmost layer.
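As a concrete illustration, the following NumPy sketch applies this objective to a toy 4-layer stack. The layer shapes, the shared output projection, and the plain averaging of per-layer losses are our own illustrative assumptions, not details of any particular implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, label):
    # Negative log-likelihood of the correct class.
    return -np.log(probs[label] + 1e-12)

def multi_layer_loss(x, layers, out_proj, label):
    """Apply the softmax to the output of EVERY layer and average
    the resulting losses, instead of using only the topmost layer."""
    losses = []
    h = x
    for W in layers:                   # each toy layer: linear transform + tanh
        h = np.tanh(W @ h)
        probs = softmax(out_proj @ h)  # shared output projection + softmax
        losses.append(cross_entropy(probs, label))
    return sum(losses) / len(losses)   # aggregate (here: simple average)

rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 8)) for _ in range(4)]  # a 4-layer toy model
out_proj = rng.normal(size=(5, 8))                    # 5 output classes
loss = multi_layer_loss(rng.normal(size=8), layers, out_proj, label=2)
```

Back-propagating this averaged loss updates every layer with a training signal from its own softmax, which is what makes each intermediate layer usable on its own at decoding time.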
Extending this to a multi-layer encoder-decoder model is straightforward. In encoder-decoder models, the encoder comprises an embedding layer for the input (source language for NMT) and stacked transformation layers. The decoder consists of an embedding layer and a softmax layer for generating the output (target language for NMT), along with stacked transformation layers. Let x be the input to the N-layer encoder, y the anticipated output of the M-layer decoder as well as the input to the decoder (for training), and ŷ the output predicted by the decoder. Algorithm 1 shows the pseudo-code for our proposed method. Line 3 represents the process done by the i-th encoder layer, and line 5 does the same for the j-th decoder layer. In simple words, we compute a loss using the output of each of the M decoder layers, which in turn is computed using the output of each of the N encoder layers. In line 8, the losses are aggregated (we averaged the multiple losses in our experiment, but there are a number of options, such as weighted averaging) before back-propagation. Henceforth, we will refer to this as the Tied-Multi model.
For comparison, the vanilla model is the special case where the input x is passed through all N encoder layers, the M-layer decoder attends only to the final encoder output, and a single loss is computed by softmaxing only the output of the topmost decoder layer.
Table 1: BLEU scores and decoding times (sec) of the 36 vanilla models and the single tied-multi model.
3.2 Experimental Setup
We evaluated the following two types of models on both translation quality and decoding speed.
- Vanilla models:
36 vanilla models with 1 to 6 encoder and 1 to 6 decoder layers, each trained referring only to the output of the last layer for computing the loss.
- Tied-Multi model:
A single tied-multi model with 6 encoder and 6 decoder layers, trained by our multi-layer softmaxing.
We experimented with the WMT18 English-to-German (EnDe) translation task. We used all the parallel corpora available for WMT18 (http://www.statmt.org/wmt18/translation-task.html), except the ParaCrawl corpus, consisting of 5.58M sentence pairs, as the training data, and the 2,998 sentences in newstest2018 as the test data. (We excluded ParaCrawl following the instruction on the WMT18 website, which reports that the "BLEU score dropped by 1.0" for this task.) The English and German sentences were pre-processed using the tokenizer.perl and truecase.perl scripts in Moses (https://github.com/moses-smt/mosesdecoder). The true-case models for English and German were trained on 10M sentences randomly extracted from the monolingual data made available for the WMT18 translation task, using the train-truecaser.perl script available in Moses.
Our multi-layer softmaxing method was implemented on top of an open-source toolkit for the Transformer model (Vaswani et al., 2017), the version 1.6 branch of tensor2tensor (https://github.com/tensorflow/tensor2tensor). For training, we used the default model settings corresponding to transformer_base_single_gpu in the implementation, except for what follows. We used a shared sub-word vocabulary of 32k determined using the internal sub-word segmenter of tensor2tensor, and trained the models for 300k iterations. We trained the vanilla models on 1 GPU and our tied-multi model on 2 GPUs with the batch size halved, to ensure that both types of models see the same amount of training data. We averaged the last 10 checkpoints, saved after every 1k updates, decoded the test sentences with a fixed beam size of 4 and length penalty of 0.6, and post-processed the decoded results using the detokenizer.perl and detruecase.perl scripts in Moses. (One can realize faster decoding by narrowing the beam width; this approach is orthogonal to ours, and in this paper we do not argue which is superior.)
We evaluated our models using the BLEU metric (Papineni et al., 2002) implemented in sacreBLEU (Post, 2018) (https://github.com/mjpost/sacreBLEU; signature: BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt18+tok.13a+version.1.3.7). We also present the time (in seconds) consumed to translate the test set, which includes the times for model instantiation, loading the checkpoint, sub-word splitting and indexing, decoding, and sub-word de-indexing and merging, whereas the time for detokenization is not taken into account.
Note that we did not use any development data, for two reasons. First, we train all models for the same number of iterations, which enables a fair comparison, since it ensures that each model sees roughly the same number of training examples. Second, we use checkpoint averaging before decoding, which, unlike early stopping, does not require a development set.
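Checkpoint averaging itself reduces to an element-wise mean over saved parameter sets. The sketch below assumes checkpoints stored as simple name-to-array dictionaries, which is a simplification of tensor2tensor's actual checkpoint format.

```python
import numpy as np

def average_checkpoints(checkpoints):
    """Average the parameters of several checkpoints element-wise.
    Each checkpoint is assumed to be a dict mapping parameter names
    to NumPy arrays of identical shapes."""
    names = checkpoints[0].keys()
    return {n: sum(c[n] for c in checkpoints) / len(checkpoints)
            for n in names}
```

In practice one would load the last 10 saved checkpoints, average them with such a routine, and decode with the averaged parameters.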
Table 1 summarizes the BLEU scores and the decoding times of all the models, exhibiting the cost-benefit property of our tied-multi model in comparison with the results of the corresponding 36 vanilla models.
Even though the objective function for the tied-multi model is substantially more complex than that for the vanilla model, when decoding with 6 encoder and 6 decoder layers, it achieved a BLEU score of 35.0, approaching the best BLEU score of 35.7 given by the vanilla model with 6 encoder and 4 decoder layers. Note that when using a single encoder layer and/or a single decoder layer, the vanilla models gave significantly higher BLEU scores than the tied-multi model. However, when the number of layers is increased, there is no significant difference between the two types of models.
Regarding the cost-benefit property of our tied-multi model, two points must be noted:
- BLEU score and decoding time increase only slightly when we use more encoder layers.
- The bulk of the decoding time is consumed by the decoder, since it works in an auto-regressive manner. We can substantially cut down decoding time by using fewer decoder layers, although this does lead to somewhat sub-optimal translation quality.
One may argue that training a single vanilla model with the optimal numbers of encoder and decoder layers is enough. However, as discussed in Section 1, it is impossible to know a priori which combination is the best. More importantly, a single vanilla model cannot satisfy diverse cost-benefit demands and cannot guarantee the best translation for every input (see Section 4.1). Recall that we aim at a flexible model, and that all the results in Table 1 have been obtained using a single tied-multi model, albeit with different numbers of encoder and decoder layers for decoding.
We conducted an analysis from the perspective of training times and model sizes, in comparison with vanilla models.
Given that all our models were trained for the same number of iterations, we compared the training times of the vanilla and tied-multi models. As a reference, we use the vanilla model with 6 encoder and 6 decoder layers. The total training time for all the 36 vanilla models was 25.5 times that of the reference model (we measured the elapsed time for a fair comparison, assuming that all vanilla models were trained on a single GPU one after another, even though one may be able to use multiple GPUs to train the 36 vanilla models in parallel). In contrast, the training time for our tied-multi model was about 9.5 times that of the same reference model. As such, training a tied-multi model was much more computationally efficient than independently training all 36 vanilla models with different numbers of layers.
The number of parameters of our tied-multi model is exactly the same as that of the vanilla model with 6 encoder and 6 decoder layers. If we train a set of vanilla models with different numbers of encoder and decoder layers, we end up with significantly more parameters. For instance, in the case of the 36 models in our experiment, we have 25.2 times more parameters: a total of 4,607M for the 36 vanilla models against 183M for our tied-multi model. In Section 5, we discuss the possibility of further model compression.
4 Dynamic Layer Selection
To better understand the nature of our proposed method, we analyzed the distribution of oracle translations within 36 translations generated by each of the vanilla and our tied-multi models.
Having confirmed that a single encoder-decoder layer combination cannot guarantee the best translation for every input, we tackled an advanced problem: designing a mechanism for dynamically selecting one layer combination prior to decoding. This is the crucial difference from two post-decoding processes: translation quality estimation (Specia et al., 2010) and n-best-list re-ranking (Kumar and Byrne, 2004).
4.1 Decoding Behavior of Tied-Multi Model
Let (i, j) be an encoder-decoder layer combination of a given model with N encoder and M decoder layers. The oracle layer combination for an input sentence was determined by measuring the quality of the translation derived from each layer combination. We used a reference-based metric, chrF (Popović, 2016), since it has been particularly designed for sentence-level translation evaluation and was shown to have relatively high correlation with human judgments of translation quality at the sentence level for the English–German pair (Ma et al., 2018). In cases where multiple combinations had the highest score, we chose the fastest one, following the overall trend of decoding time (Table 1): since the decoder dominates decoding time, we considered a combination (i, j) faster than another combination (i', j') if j < j', or if j = j' and i < i'.
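This tie-breaking rule can be sketched as follows. The lexicographic comparison on (decoder layers, encoder layers) is our reading of the trend in Table 1, under the assumption that decoder depth dominates decoding time; the function names are ours.

```python
def faster(c1, c2):
    """Return True if layer combination c1 = (enc, dec) is expected to
    decode faster than c2. Decoder depth dominates decoding time (the
    decoder is auto-regressive), so compare decoder layers first."""
    (i1, j1), (i2, j2) = c1, c2
    return (j1, i1) < (j2, i2)

def pick_oracle(scored):
    """Among combinations tied at the best sentence-level score, pick
    the fastest one. `scored` maps (enc_layers, dec_layers) -> score."""
    best = max(scored.values())
    ties = [c for c, s in scored.items() if s == best]
    return min(ties, key=lambda c: (c[1], c[0]))
```

For example, under this ordering (6, 1) counts as faster than (1, 2), because one decoder layer outweighs five extra encoder layers in decoding cost.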
Figure 2 presents the distribution of oracle layer combinations for the vanilla and our tied-multi models. A comparison between the two distributions revealed that, unlike for the vanilla models, the shallower layer combinations of our tied-multi model often generate better translations than the deeper ones, despite the lower corpus-level BLEU scores. This sharp bias towards shallower layer combinations suggests a potential reduction of decoding time by dynamically selecting the layer combination per input sentence prior to decoding, ideally without a performance drop.
where x_i is the i-th input sentence (1 ≤ i ≤ n), t_ik is the translation for x_i derived from the k-th layer combination (1 ≤ k ≤ N×M) among all possible combinations, f is the model with parameters θ, and L is a loss function. Assuming that the independence of target labels (layer combinations) for a given input sentence allows for ties, the model is able to predict multiple layer combinations for the same input sentence.
The model f with parameters θ implemented in our experiment is a multi-head self-attention neural network inspired by Vaswani et al. (2017). The numbers of layers and attention heads are optimized during a hyper-parameter search, while the feed-forward layer dimensionality is fixed to 2,048. Input sequences of tokens are mapped to their corresponding embeddings, initialized by the embedding table of the tied-multi NMT model. Similarly to BERT (Devlin et al., 2019), a specific token is appended to each input sequence before being fed to the classifier. During the forward pass, this token is finally fed to the output linear layer for sentence classification. The output linear layer has one unit per possible layer combination, i.e., N×M = 36 in our setting.
The parameters θ are optimized with respect to the loss function L_BCE, implemented as a weighted binary cross-entropy (BCE) function, detailed in (3) and averaged over the batch, where c_ik is the reference class of the i-th input sentence x_i, ĉ_ik is the output of the network after the sigmoid layer given x_i, and w_k is the weight given to the k-th class based on prior knowledge of the class distribution. During our experiments, we found that the classifier tends to favor recall to the detriment of precision. To tackle this issue, we introduce another loss, L_F1, using a differentiable approximation of the macro F1 score, implemented following (4).
The final loss function is the linear interpolation of L_F1, averaged over the classes, and L_BCE, with an interpolation parameter λ: L = λ·L_BCE + (1−λ)·L_F1. We tune λ and the class weights w_k during the classifier hyper-parameter search based on the validation loss.
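This composite loss can be sketched in NumPy as follows (forward computation only). The soft-count form of the F1 approximation and the uniform class weights are assumptions for illustration; the actual (3) and (4) may differ in detail.

```python
import numpy as np

def weighted_bce(y_true, y_pred, w):
    # Weighted binary cross-entropy, summed over classes and
    # averaged over the batch; w[k] weights the k-th class.
    eps = 1e-7
    per_class = -(w * (y_true * np.log(y_pred + eps)
                       + (1 - y_true) * np.log(1 - y_pred + eps)))
    return per_class.sum(axis=1).mean()

def soft_macro_f1(y_true, y_pred):
    # Differentiable macro-F1 approximation using "soft" counts of
    # true positives, false positives, and false negatives.
    eps = 1e-7
    tp = (y_pred * y_true).sum(axis=0)
    fp = (y_pred * (1 - y_true)).sum(axis=0)
    fn = ((1 - y_pred) * y_true).sum(axis=0)
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return f1.mean()

def classifier_loss(y_true, y_pred, w, lam):
    # Linear interpolation of the BCE loss and an F1-based loss
    # (1 - soft macro F1), controlled by lam.
    return (lam * weighted_bce(y_true, y_pred, w)
            + (1 - lam) * (1 - soft_macro_f1(y_true, y_pred)))
```

Because the F1 term is computed from soft counts, it stays differentiable and directly penalizes the recall-heavy behavior that plain BCE tends to produce.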
The layer combination classifier was trained on a subset of the tied-multi NMT model's training data presented in Section 3.2, containing 5.00M sentences, whereas the remaining sentences compose a validation set and a test set containing approximately 200k sentences each. These two latter subsets were used for the hyper-parameter search and the evaluation of the classifier, respectively. To allow for comparison and reproducibility, the final evaluation of the proposed approach in terms of translation quality and decoding speed was conducted on the official WMT development (newstest2017, 3,004 sentences) and test (newstest2018, 2,998 sentences) sets; the latter is the one also used in Section 3.2.
The training, development, and test sets were translated with each layer combination of the tied-multi NMT model. Each source sentence was thus aligned with 36 translations whose quality was measured by the chrF metric. Because several combinations can yield the best score, the obtained dataset is labeled with multiple classes (36 layer combinations) and multiple labels (ties with regard to the metric). During inference, ties were broken by selecting the layer combination with the highest value given by the sigmoid function, backing off to the deepest layer combination (6, 6) if no output value reached 0.5. This tie-breaking method differs from the oracle layer selection presented in Equation (1) and Figure 2, which prioritizes the shallowest layer combinations. In this experiment, decoding time was measured by processing one sentence at a time instead of batch decoding; the former is slower than the latter, but leads to more precise measurements. The decoding times were 954s and 2,773s when using the (1,1) and (6,6) layer combinations, respectively. By selecting the fastest encoder-decoder layer combinations according to an oracle, the decoding times went down to 1,918s and 1,812s for the individual and tied-multi models, respectively. However, our objective is to be faster than the default setting, that is, the one where one would choose the (6,6) combination.
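The inference-time back-off rule described above can be sketched as follows; the layout of `combos` and the function name are ours, while the 0.5 threshold and the (6, 6) fallback follow the description.

```python
import numpy as np

def select_combination(sigmoid_out, combos, threshold=0.5, fallback=(6, 6)):
    """Pick the layer combination with the highest sigmoid activation;
    back off to the deepest combination if no activation reaches the
    threshold. `combos[k]` is the (enc, dec) pair for output unit k."""
    best = int(np.argmax(sigmoid_out))
    if sigmoid_out[best] >= threshold:
        return combos[best]
    return fallback
```

The source sentence is then decoded once, with the tied-multi model restricted to the selected numbers of encoder and decoder layers.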
|Classifier|Fine-tuned|Decoding time (sec)|BLEU|
|Baseline (tied (6,6))||2,773|35.0|
|(#1) 8 layers, 8 heads|✓|2,736|35.0|
|(#2) 2 layers, 4 heads|✓|2,686|34.8|
|(#3) 2 layers, 4 heads||2,645|34.7|
|(#4) 4 layers, 2 heads||2,563|34.3|
Several classifiers were trained and evaluated on the WMT test set, with or without fine-tuning on the WMT development set. Table 2 presents the results in terms of corpus-level BLEU and decoding speed. Some classifiers maintain the translation quality (top rows), whereas others show quality degradation but a further gain in decoding speed (bottom rows). The classification results show that gains in decoding speed are possible with an a priori decision about which encoder-decoder combination to select, based on the information contained in the source sentence only. However, no BLEU gain has so far been observed, demonstrating a trade-off between decoding speed and translation quality. Our best configuration for decoding speed (#4) saved 210s but led to a 0.7-point BLEU degradation. On the other hand, when preserving the translation quality compared to the baseline, configuration (#1) saved only 37s. The oracle layer combination can achieve substantial gains both in terms of BLEU (7.1 points) and decoding speed (961s). These oracle results motivate possible future work on layer combination prediction for the tied-multi NMT model.
5 Further Model Compression
We examined the combination of our multi-layer softmaxing approach with another parameter-tying method in neural networks, called recurrent stacking (RS) (Dabre and Fujita, 2019), complemented by sequence-level knowledge distillation (Kim and Rush, 2016), a specific type of knowledge distillation (Hinton et al., 2015). We demonstrate that these existing techniques help reduce the number of parameters in our model even further.
Table 3: BLEU scores of the tied-multi model and the tied-multi RS model, with and without distillation.
5.1 Distillation into a Recurrently Stacked Model
In Section 2, we discussed several model compression methods orthogonal to multi-layer softmaxing. Having already compressed multiple models into one with our approach, we consider further compressing it using RS. However, models that use RS layers tend to suffer from performance drops due to the large reduction in the number of parameters. As a way of compensating for the performance drop, we apply sequence-level knowledge distillation.
First, we decode all source sentences in the training data with the parent model to obtain a pseudo-parallel corpus containing distillation target sequences. By forward-translating the data, we create soft targets that make learning easier for the child model, which is hence able to mimic the behavior of the parent model. Then, an RS child model is trained with multi-layer softmaxing on the generated pseudo-parallel corpus. Among the variety of distillation techniques, we chose the simplest one to show the impact that distillation can have in our setting, leaving an extensive exploration of more complex methods for the future.
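The data-generation step of sequence-level distillation can be sketched as follows; `parent_translate` is a placeholder for beam-search decoding with the trained parent NMT model.

```python
def build_distillation_corpus(sources, parent_translate):
    """Sequence-level knowledge distillation (Kim and Rush, 2016):
    re-label the source side of the training data with the parent
    model's translations; the child is then trained on these pairs
    instead of the original references."""
    return [(src, parent_translate(src)) for src in sources]
```

The child model never sees the original target side; it only learns to reproduce the parent's smoother, more systematic outputs.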
We conducted an experiment to show that RS and sequence distillation can lead to an extremely compressed tied-multi model which no longer suffers from performance drops. We compared the following four variations of our tied-multi model trained with multi-layer softmaxing.
- Tied-multi model:
A model that does not share the parameters across layers, trained on the original parallel corpus.
- Distilled tied-multi model:
The same model as above but trained on the pseudo-parallel corpus.
- Tied-multi RS model:
A tied-multi model that uses RS layers, trained with the original parallel corpus.
- Distilled tied-multi RS model:
The same model as above but trained on the pseudo-parallel corpus.
Note that we incurred a much higher cost for training the distilled models than for training models directly on the original parallel corpus. First, we trained 5 vanilla models with 6 encoder and 6 decoder layers, because the performance of distilled models is affected by the quality of the parent model, and NMT models vary vastly in performance (around 2.0 BLEU) depending on parameter initialization. We then decoded the entire training data (5.58M sentences) with the one achieving the highest BLEU score on newstest2017 (used in Section 4.3), in order to generate the pseudo-parallel corpus for sequence distillation. (An ensemble of multiple models (Freitag et al., 2017) is commonly used for distillation, but we used a single model to save decoding time.) Nevertheless, we consider that we can fairly compare the performance of the above four models, since each was trained only once with a random parameter initialization, without seeing the test set.
Table 3 gives the BLEU scores for all models. Comparing the top-left and top-right blocks of the table, we can see that the BLEU scores of the RS models are higher than those of their non-RS counterparts when using fewer than 3 decoder layers. This shows the benefit of RS layers despite the large parameter reduction. However, the reduction in parameters negatively affects performance (by up to 1.3 BLEU points) when decoding with more decoder layers, confirming the expected limitation of RS.
Comparing the scores in the top and bottom halves of the table, we can see that distillation dramatically boosts the performance of the shallower encoder and decoder layers. For instance, without distillation, the tied-multi model gave a BLEU of 23.2 when decoding with 1 encoder and 1 decoder layer, but the same layer combination reaches 30.1 BLEU through distillation. Given that RS further improves performance when using lower layers, the BLEU score increases to 31.2. As such, distillation enables decoding using fewer layers without substantial drops in performance. Furthermore, the BLEU scores did not vary significantly when layers deeper than 3 were used, meaning that we might as well train shallower models using distillation. The performance of our final model, i.e., the distilled tied-multi RS model (bottom-right), was lower (by up to 1.5 BLEU points) than that of its non-RS counterpart, similarly to the non-distilled case. However, given that it outperforms our original tied-multi model (top-left) in all the encoder-decoder layer combinations, we conclude that we can obtain a substantially compressed model with better performance.
We now analyze the model size and decoding speed resulting from applying RS and knowledge distillation. Note that RS has no effect on training time, because the computational complexity is the same.
Table 4 gives the sizes of the various models that we have trained and their ratios with respect to the tied-multi model. Training vanilla and RS models with 36 different encoder-decoder layer combinations required 25.2 and 14.3 times the number of parameters of a single tied-multi model, respectively. Although RS alone led to some parameter reduction, combining RS with our tied-multi model resulted in a further compressed single model: 0.40 times the size of the single tied-multi model without RS. This model has 63.2 times and 36.0 times fewer parameters than all the individual vanilla and RS models, respectively. Given that knowledge distillation can reduce the performance drops due to RS (see Table 3), we believe that combining it with this approach is an effective way to compress a large number of models into one.
|Model|#Parameters|Ratio|
|36 vanilla models|4,608M|25.16|
|Single tied-multi model|183M|1.00|
|36 RS models|2,623M|14.33|
|Single tied-multi RS model|73M|0.40|
Although we do not report the scores, we observed that greedy decoding is faster than beam decoding but suffers from significantly reduced scores, of around 2.0 BLEU. With our distilled models, however, greedy decoding reduced the scores by only 0.5 BLEU compared to beam decoding. This happens because we used translations generated by beam decoding as target sentences for knowledge distillation, which can loosely distill beam-search behavior into greedy decoding behavior (Kim and Rush, 2016). For instance, greedy decoding with the distilled tied-multi RS model with 2 encoder and 2 decoder layers resulted in a BLEU score of 35.0 in 66.1s. In comparison, beam decoding with the tied-multi model without RS and distillation, with 5 encoder and 6 decoder layers, led to a BLEU score of 35.1 in 261.5s (Table 1), showing that comparable translation quality is obtained in roughly a quarter of the decoding time when using RS and distillation. Even though we merely intended to minimize performance drops alongside model compression, we obtained an unexpected benefit in terms of faster decoding through greedy search without a significant loss in translation quality.
6 Conclusion and Future Work
In this paper, we have proposed a novel procedure for training encoder-decoder models, where the softmax function is applied to the output of each of the M decoder layers, derived using the output of each of the N encoder layers. This compresses N×M models into a single model that can be used for decoding with a variable number of encoder (1 to N) and decoder (1 to M) layers. This model can be used in different latency scenarios and hence is highly versatile. We have made a cost-benefit analysis of our method, taking NMT as a case study of encoder-decoder models. We have proposed and evaluated two orthogonal extensions and shown that we can (a) dynamically choose layer combinations for slightly faster decoding and (b) further compress models using recurrent stacking with knowledge distillation.
For further speed up in decoding as well as model compression, we plan to combine our approach with other techniques, such as those mentioned in Section 2. Although we have only tested our idea for NMT, it should be applicable to other tasks based on deep neural networks.
- Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, USA.
- Belinkov et al. (2017) Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017. Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10, Taipei, Taiwan.
- Cho et al. (2014) Kyunghyun Cho, Bart van Merriënboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724–1734, Doha, Qatar.
- Courbariaux et al. (2017) Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2017. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR, abs/1602.02830.
- Dabre and Fujita (2019) Raj Dabre and Atsushi Fujita. 2019. Recurrent stacking of layers for compact neural machine translation models. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6292–6299, Honolulu, USA.
- Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional Transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, USA.
- Dou et al. (2018) Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, and Tong Zhang. 2018. Exploiting deep representations for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4253–4262, Brussels, Belgium.
- Freitag et al. (2017) Markus Freitag, Yaser Al-Onaizan, and Baskaran Sankaran. 2017. Ensemble distillation for neural machine translation. CoRR, abs/1702.01802.
- Gupta et al. (2015) Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. 2015. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning, pages 1737–1746, Lille, France.
- Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531.
- Kim and Rush (2016) Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327, Austin, USA.
- Kumar and Byrne (2004) Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 169–176, Boston, USA.
- Lin et al. (2016) Darryl D. Lin, Sachin S. Talathi, and V. Sreekanth Annapureddy. 2016. Fixed point quantization of deep convolutional networks. In Proceedings of the 33rd International Conference on International Conference on Machine Learning, pages 2849–2858, New York, USA.
- Ma et al. (2018) Qingsong Ma, Ondřej Bojar, and Yvette Graham. 2018. Results of the WMT18 metrics shared task. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 682–701, Brussels, Belgium.
- Ott et al. (2018) Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Brussels, Belgium.
- Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318, Philadelphia, USA.
- Popović (2016) Maja Popović. 2016. chrF deconstructed: β parameters and n-gram weights. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 499–504, Berlin, Germany.
- Post (2018) Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–191, Brussels, Belgium.
- See et al. (2016) Abigail See, Minh-Thang Luong, and Christopher D. Manning. 2016. Compression of neural machine translation models via pruning. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 291–301, Berlin, Germany.
- Specia et al. (2010) Lucia Specia, Dhwaj Raj, and Marco Turchi. 2010. Machine translation evaluation versus quality estimation. Machine Translation, 24(1):39–50.
- Sutskever et al. (2013) Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. 2013. On the importance of initialization and momentum in deep learning. In Proceedings of the International Conference on Machine Learning, pages 1139–1147, Atlanta, USA.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th Neural Information Processing Systems Conference, pages 3104–3112, Montréal, Canada.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 30th Neural Information Processing Systems Conference, pages 5998–6008, Long Beach, USA.
- Wang et al. (2018) Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li, and Jingbo Zhu. 2018. Multi-layer representation fusion for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3015–3026, Santa Fe, USA.
- Xia et al. (2019) Yingce Xia, Tianyu He, Xu Tan, Fei Tian, Di He, and Tao Qin. 2019. Tied Transformers: Neural machine translation with shared encoder and decoder. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5466–5473, Honolulu, USA.
- Xiong et al. (2018) Deyi Xiong, Biao Zhang, and Jinsong Su. 2018. Accelerating neural Transformer via an average attention network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Long Papers, pages 1789–1798, Melbourne, Australia.