Knowledge distillation describes the idea of improving a student network by matching its predictions to those of a stronger teacher network. There are two common ways of using knowledge distillation for Neural Machine Translation (NMT). First, the student can be a model with fewer layers and/or hidden units; the main purpose is to reduce the model size of the NMT system without a significant loss in translation quality. Secondly, without changing the model architecture, one can obtain reasonable gains by combining several models of the same architecture into an ensemble, at the cost of much slower decoding.
We show that the performance of a teacher composed of an ensemble of 6 models can be reached by a student consisting of a single model, leading to significantly faster decoding and a smaller memory footprint. We also investigate a teacher network that produces the oracle Bleu translations from the final decoder beam, and demonstrate how to improve an NMT system even when the student network has the same architecture and dimensions as the teacher network. In our knowledge distillation approach, we translate the full training data with the teacher model and use the translations as additional training data for the student network. This kind of knowledge transfer needs no source code modification and can be reproduced with any NMT network architecture.
Training an NMT system on several million parallel sentences is already slow. When applying knowledge distillation, a second training phase on at least the same amount of data is needed. We show how to use the knowledge of the teacher model to filter the training data, and that this filtering not only makes the training faster, but also improves the translation quality.
To summarize, the main contributions are:
- We apply knowledge distillation with an ensemble and an oracle Bleu teacher model.
- We demonstrate how to successfully use knowledge distillation when the student network has the same architecture and dimensions as the teacher network.
- We introduce a simple and easily reproducible approach.
- We filter the training data with the knowledge of the teacher model.
- We compare different parameter initializations for the student network.
2 Knowledge Distillation
The idea of knowledge distillation is to match the predictions of a student network to those of a teacher network. In this work, we collect the predictions of the teacher network by translating the full training data with it. By doing this, we produce a new reference for the training data, which the student network can use to imitate the teacher network. There are two ways of using this forward translation. First, we can train the student network only on the original source sentences and the teacher translations. Secondly, we can add the teacher translations to the original training data; this has the side effect of doubling the final training data size for the student network.
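The two data-construction variants above can be sketched as follows. This is a minimal illustration, assuming a hypothetical `teacher.translate` method; in practice the teacher is any trained NMT decoder.

```python
# Sketch of data-level distillation: forward-translate the training data
# with the teacher and build the student's training corpus.
# `teacher` is any object with a hypothetical translate(src) -> str method.

def build_student_data(source_sents, reference_sents, teacher, combine=True):
    """combine=False -> train only on (source, teacher translation);
    combine=True  -> additionally keep the original (source, reference)
                     pairs, doubling the corpus size."""
    forward = [teacher.translate(src) for src in source_sents]
    pairs = list(zip(source_sents, forward))
    if combine:
        pairs += list(zip(source_sents, reference_sents))
    return pairs
```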
3 Teacher Networks
Ensemble Teacher Model
An ensemble of different NMT models can improve the translation performance of an NMT system. The idea is to train several models in parallel and combine their predictions by averaging the probabilities of the individual models at each time step during decoding. In this work, we use an ensemble of 6 models as a teacher model. All 6 individual systems are trained on the same parallel data and use the same optimization method. The only difference is the random initialization of the parameters.
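The ensemble combination described above can be sketched in a few lines: at each decoding time step, the per-model output distributions over the vocabulary are averaged before the beam search picks the next tokens. The probability vectors here stand in for the softmax outputs of the individual models.

```python
import numpy as np

# Minimal sketch of ensemble decoding: average the output distributions
# of K models at one time step. Each entry of model_probs is assumed to
# be a probability vector of shape (vocab_size,).

def ensemble_step(model_probs):
    """Return the averaged distribution used by the beam search."""
    return np.mean(np.stack(model_probs, axis=0), axis=0)
```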
Oracle Bleu Teacher Model
We use a left-to-right beam-search decoder to generate translations that aim to maximize the conditional probability of a given model. The search stops once a fixed number of hypotheses ending with an end-of-sequence symbol has been found, and the translation with the highest log-probability is picked from the final candidate list. In our distillation approach, we produce the forward translation of our parallel data. Since we know the reference translation of every sentence, we instead choose the sentence with the highest sentence-level Bleu from the final candidate list. We use the sentence-level Bleu proposed by [Lin and Och2004], which adds 1 to both the matched and the total n-gram counts.
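The oracle selection can be sketched as follows. This is a simplified rendering of the add-1 smoothed sentence-level Bleu (adding 1 to both matched and total n-gram counts, following the idea in [Lin and Och2004]) together with the selection from the final beam; details such as clipping conventions may differ from the exact scorer used in the experiments.

```python
import math
from collections import Counter

def sentence_bleu(hyp, ref, max_n=4):
    """Add-1 smoothed sentence-level Bleu on whitespace tokens."""
    hyp, ref = hyp.split(), ref.split()
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        matched = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(len(hyp) - n + 1, 0)
        # add 1 to both the matched and the total n-gram counts
        log_prec += math.log((matched + 1) / (total + 1))
    bp = min(0.0, 1.0 - len(ref) / len(hyp))  # brevity penalty
    return math.exp(bp + log_prec / max_n)

def oracle_hypothesis(beam, reference):
    """Pick the hypothesis with the highest sentence-level Bleu
    from the final candidate list instead of the most probable one."""
    return max(beam, key=lambda h: sentence_bleu(h, reference))
```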
4 Data Filtering
In machine translation, bilingual sentence pairs that serve as training data are mostly crawled from the web and contain many nonparallel sentence pairs. Furthermore, one source sentence can have several correct translations that differ in choice and order of words. The training of the network gets complicated, if the training corpus contains noisy sentence pairs or sentences with several correct translations. In our knowledge distillation approach, we translate the full parallel data with our teacher model. This gives us the option to score each translation with the original reference. We remove sentences with high Ter scores [Snover et al.2006] from our training data. By removing noisy or unreachable sentence pairs, the training algorithm is able to learn a stronger network.
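The filtering step can be sketched as below. The experiments score with Ter [Snover et al.2006]; as a stand-in, this sketch uses plain word-level edit distance normalized by the reference length (real Ter additionally allows block shifts, so a proper Ter implementation would be substituted here).

```python
# Sketch of teacher-based data filtering: score each teacher translation
# against its reference and drop pairs whose edit rate is too high.
# edit_rate is a simplified proxy for Ter (no block shifts).

def edit_rate(hyp, ref):
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(h)][len(r)] / max(len(r), 1)

def filter_pairs(pairs, teacher_translations, threshold=0.8):
    """Keep (source, reference) pairs whose teacher translation scores
    below the threshold against the reference."""
    return [p for p, t in zip(pairs, teacher_translations)
            if edit_rate(t, p[1]) < threshold]
```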
5 Experiments
We run our experiments on the German→English WMT 2016 translation task [Bojar et al.2016] (3.9M parallel sentences) and use newstest2014 as validation set and newstest2015 as test set. We use our in-house attention-based NMT implementation, which is similar to [Bahdanau et al.2014]. Instead of words, we use sub-word units extracted by byte pair encoding [Sennrich et al.2015], which shrinks the vocabulary to 40k sub-word symbols for both source and target. We use an embedding dimension of 620 and fix the RNN GRU layers to 1000 cells each. For the training procedure, we use SGD to update the model parameters with a mini-batch size of 64. Starting from the 4th epoch, we reduce the learning rate by half every epoch. The training data is shuffled after each epoch, and we use a beam size of 5 for all translations. We stop the training of each setup (including the baseline) when the validation score has not improved in the last 3 epochs. All setups are run twice: first, we train the student network from scratch with random parameter initialization; secondly, we continue training from the final parameters of the baseline model.
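The learning-rate schedule described above (constant, then halved every epoch from the 4th on) can be written as a small function. The base learning rate here is a hypothetical value; the experiments do not state one.

```python
# Sketch of the schedule: keep the learning rate constant for the first
# three epochs, then halve it every epoch starting from the 4th.
# base_lr = 0.1 is an assumed placeholder value.

def learning_rate(epoch, base_lr=0.1):
    """epoch is 1-based; halving starts at epoch 4."""
    if epoch < 4:
        return base_lr
    return base_lr * 0.5 ** (epoch - 3)
```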
Single Teacher Model
Instead of using a stronger teacher model, we use the same model as both student and teacher network. The forward translation can stabilize the student network and sharpen its decisions. Results are given in Table 1. Using only the forward translation does not improve the model. When combining the reference and the forward translation, we improve the model by 1.4 points in both Bleu and Ter. Pruning the training data and using only sentence pairs with a Ter score below 0.8 yields similar translation quality while reducing the training data by 12%, leading to faster training.
Ensemble Teacher Model
The results for using an ensemble of 6 models as a teacher model are summarized in Table 2. Using only the forward translation improves the single system by 1.4 points in Bleu and 1.9 points in Ter. When using both the original reference and the forward translation, we get an additional improvement of 0.3 points in Bleu. When pruning the parallel data and using only sentences with a Ter less than 0.8, we can improve the single system by 2 points in Bleu and 2.2 points in Ter.
Oracle Bleu Teacher Model
The teacher model is the same ensemble model as before, but instead of choosing the hypotheses with the highest log probability, it chooses the sentence with the highest sentence level Bleu from the final candidate list. The empirical results obtained with the oracle Bleu teacher model are summarized in Table 3. By using only the forward translation of the teacher network, we gain improvements of 1.1 points in Bleu and 1.2 points in Ter. By combining both the forward translation and the reference, we obtain improvements of 1.5 points in both Bleu and Ter. However, the results are slightly worse compared to the results obtained with an ensemble teacher network.
Reducing Model Size
We use the ensemble teacher network to teach a student network with lower dimensions. Empirical results are given in Table 4. We reduced the original word embedding (Wemb) size from 620 to 150 and the original hidden layer size (hlayer) from 1000 to 300 without losing any translation quality compared to the single model. In fact, the performance is even better by 0.4 points in Bleu and 0.6 points in Ter.
Table 1: Results with a single teacher model (Bleu↑ / Ter↓).

| setup | training data | newstest2014 Bleu | newstest2014 Ter | newstest2015 Bleu | newstest2015 Ter |
|---|---|---|---|---|---|
| baseline single | original (4M) | 27.3 | 54.6 | 27.4 | 53.7 |
| distillation all | trans baseline (4M) | 27.5 | 54.2 | 27.7 | 53.3 |
| distillation all | trans baseline + original (8M) | 28.6 | 53.3 | 28.8 | 52.3 |
| distillation Ter 0.8 | reference (3.5M) | 27.0 | 55.0 | 27.4 | 53.8 |
| distillation Ter 0.8 | trans baseline (3.5M) | 27.5 | 54.1 | 27.7 | 53.2 |
| distillation Ter 0.8 | trans baseline + original (7M) | 28.6 | 53.0 | 28.5 | 52.4 |
Table 2: Results with the ensemble teacher model (Bleu↑ / Ter↓).

| setup | training data | newstest2014 Bleu | newstest2014 Ter | newstest2015 Bleu | newstest2015 Ter |
|---|---|---|---|---|---|
| ensemble of 6 (teacher) | original (4M) | 29.8 | 51.7 | 29.8 | 51.2 |
| distillation | trans ens (4M) | 28.6 | 52.9 | 28.4 | 52.4 |
| distillation | trans ens + original (8M) | 29.1 | 52.6 | 29.0 | 52.0 |
| distillation Ter 0.8 | trans ens + original (7M) | 29.2 | 52.3 | 29.2 | 51.7 |
Table 3: Results with the oracle Bleu teacher model (Bleu↑ / Ter↓).

| setup | training data | newstest2014 Bleu | newstest2014 Ter | newstest2015 Bleu | newstest2015 Ter |
|---|---|---|---|---|---|
| oracle Bleu (teacher) | original (4M) | 34.5 | 46.9 | 33.7 | 46.4 |
| distillation | oracle Bleu trans (4M) | 28.5 | 53.1 | 28.5 | 52.5 |
| distillation | oracle Bleu trans + original (8M) | 28.9 | 52.8 | 28.9 | 52.2 |
Table 4: Reducing the model size of the student network (Bleu↑ / Ter↓).

| setup | parallel data | model size (hlayer, Wemb) | newstest2014 Bleu | newstest2014 Ter | newstest2015 Bleu | newstest2015 Ter |
|---|---|---|---|---|---|---|
| baseline single | original (4M) | 1000, 620 | 27.3 | 54.6 | 27.4 | 53.7 |
| ensemble of 6 (teacher) | original (4M) | 1000, 620 | 29.8 | 51.7 | 29.8 | 51.2 |
| distillation | trans ens + original (8M) | 1000, 620 | 29.1 | 52.6 | 29.0 | 52.0 |
7 Related Work
[Buciluǎ et al.2006] show how to compress the function that is learned by a complex ensemble model into a much smaller, faster model that has comparable performance. Results on eight test problems show that, on average, the loss in performance due to compression is usually negligible.
[Ba and Caruana2014] demonstrate that shallow feed-forward nets can learn the complex functions previously learned by deep nets with knowledge distillation. On the TIMIT phoneme recognition and CIFAR-10 image recognition tasks, shallow nets can be trained that perform similarly to deeper convolutional models.
[Hinton et al.2015] present knowledge distillation for image classification (MNIST) and acoustic modelling. They show that nearly all of the improvement that is achieved by training an ensemble of deep neural nets can be distilled into a single neural net of the same size.
[Kim and Rush2016] use knowledge distillation for NMT to reduce the model size of their neural network. Their best student model runs 10 times faster with little loss in performance. Even though their work is quite similar and was in fact the motivation for our work, there are several differences:
- We run experiments with an ensemble teacher model, whereas Kim and Rush only reduced the dimensions of a single teacher network.
- Kim and Rush run experiments based on a combination of oracle Bleu and forward translation. We instead successfully showed how to use only the oracle translation for the teacher model.
- We utilized both the forward translation and the original reference in our experiments, which leads to reasonable improvements compared to using only the forward translation.
- In addition, we used the information from the forward translation to prune the training data, which not only speeds up the training, but also improves the performance.
- We showed how to successfully use knowledge distillation to benefit even from a teacher network that has the same architecture and dimensions as the student network.
- We further investigated whether the parameters of the student model should be randomly initialized or whether training should continue from the final parameters of a baseline network trained on the given parallel data only.
8 Conclusion
In this work, we applied knowledge distillation to several kinds of teacher networks. First, we demonstrated how to benefit from a teacher network that has the same architecture as the student network: by combining the forward translation and the original reference, we obtained an improvement of 1.4 points in Bleu. Using an ensemble of 6 single models as teacher model further improves the translation quality of the student network. We showed how to prune the parallel data based on the Ter scores of the forward translations. The combination of an ensemble teacher network and pruning all sentences with a Ter score higher than 0.8 leads to our best setup, which improves the baseline by 2 points in Bleu and 2.2 points in Ter. Using a teacher model based on the oracle Bleu translations also improves the translation quality, but the results are slightly worse than with the ensemble teacher model. Furthermore, we showed how to use the ensemble teacher model to significantly reduce the size of the student network while still gaining in translation quality.
- [Ba and Caruana2014] Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Advances in neural information processing systems, pages 2654–2662.
- [Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
- [Bojar et al.2016] Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation (wmt16). Proceedings of WMT.
- [Buciluǎ et al.2006] Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’06, pages 535–541, New York, NY, USA. ACM.
- [Hinton et al.2015] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
- [Kim and Rush2016] Yoon Kim and Alexander M Rush. 2016. Sequence-level knowledge distillation. arXiv preprint arXiv:1606.07947.
- [Lin and Och2004] Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 605. Association for Computational Linguistics.
- [Sennrich et al.2015] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
- [Snover et al.2006] Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine translation in the Americas, volume 200.