Sequence to sequence (seq2seq) learning models Sutskever et al. (2014) using an encoder-decoder framework were a popular choice for machine translation before the recent popularity of transformer models Vaswani et al. (2017). Several recent works Bahdanau et al. (2014); Luong et al. (2015); Zhao and Zhang (2018); Shankar et al. (2018) aim to improve the translation performance of seq2seq models by formulating new attention mechanisms that capture the similarity between encoder and decoder states. Recent works Zhang et al. (2017); van der Wees et al. (2017); Wang et al. (2018) suggest that, apart from attention mechanisms, data ordering patterns also affect the performance of neural machine translation. In this work we empirically analyze the performance improvements obtained from different orderings of the training data on the English-Vietnamese translation task.
Curriculum learning Bengio et al. (2009) proposes that ordering the training samples from easier-to-learn to harder-to-learn examples can help train better models and achieve faster convergence. We take insights from curriculum learning and evaluate approaches that rank the training data by complexity using several different metrics.
We empirically observe that a pre-fixed data ordering based on sorted perplexity scores from a pre-trained model outperforms the default approach of randomly shuffling the data every epoch, yielding a 1.7 BLEU score improvement. We also analyze the effect of computing these ranking metrics with different pre-trained models and conclude that a shallow architecture suffices to achieve performance gains by providing an efficient data ordering pattern.
The rest of the paper first describes related work, then presents the proposed data ordering patterns and experimental results, and closes with future work extensions and the conclusions of this study.
2 Related Work
Zhang et al. (2017) propose a data boosting and bootstrap paradigm using a probabilistic approach that assigns higher weights to training examples that had lower perplexities in the previous epoch. Similarly, van der Wees et al. (2017) and Wang et al. (2018) improve the training efficiency of NMT by dynamically selecting different subsets of the training data in different epochs, using domain relevance and the difference between the training costs of two iterations, respectively.
Zhang et al. (2018) propose to split the training samples into a predefined number of bins based on varied difficulty metrics such as maximum and average word frequency rank. Platanios et al. (2019) use a difficulty- and competence-based metric to achieve faster convergence and better performance than uniformly sampling training examples.
Our approach differs from these recent works mainly in that the model can access the entire training data in each epoch, whereas the other techniques partition the training set and present a different portion of it to the model each epoch.
3 Data Ordering Patterns
Neural machine translation with seq2seq models trains on mini-batches of data and typically requires multiple passes over the training data to reach convergence. The default baseline strategy is to randomly shuffle the training data every epoch.
Traditional curriculum learning approaches propose breaking the training data into groups based on complexity and presenting these groups sequentially, from lower to higher complexity. Our approach merely re-orders the training data, and the model can access the whole training set in every epoch, in contrast to curriculum learning, where it accesses specific portions of the training data in different training epochs.
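The contrast between the default per-epoch shuffle and our fixed-order replay can be sketched as follows (a minimal illustration; the function and parameter names are ours, not those of any NMT toolkit):

```python
import random

def batches(data, batch_size, shuffle_each_epoch, epochs, seed=0):
    """Yield mini-batches per epoch, either reshuffling every epoch
    (the default baseline) or replaying one fixed order throughout."""
    rng = random.Random(seed)
    order = list(data)
    for _ in range(epochs):
        if shuffle_each_epoch:
            rng.shuffle(order)  # baseline: a new random order each epoch
        # fixed-order variant: the same sequence is replayed every epoch,
        # so the model still sees the entire training set each pass
        for i in range(0, len(order), batch_size):
            yield order[i:i + batch_size]
```

In both regimes every epoch covers the full training set; only the order in which the mini-batches arrive differs.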
Each training data point is a pair of sentences: one in the source language and its corresponding translation in the target language. We propose 4 different ordering strategies for the training data, where the order is fixed before training starts and mini-batches are chosen sequentially from this ordered data. No random shuffling of the training data is carried out between epochs when using these patterns. We do not alter the sentences within a pair in any of our strategies; we only re-order the pairs. The following data patterns are proposed:
Random Shuffle: Randomly shuffle the training data points. This amounts to shuffling the training data only once, before training starts.
Sequence Length Order: Sort the training data by the length of the sentences in the source or target language. This yields two orderings for each of the source and target sentences: ascending and descending length order.
Perplexity based Order: Sort the training data by the perplexity score that a pre-trained model assigns to each training pair. If the cross-entropy of a sentence pair is denoted by $H$, its perplexity is defined as $e^{H}$. This results in 2 orderings: sorted in ascending and descending order of perplexity scores.
BLEU based Order: Sort the training data by the BLEU score Papineni et al. (2002) that a pre-trained model achieves on each training pair. This results in 2 orderings: sorted in ascending and descending order of BLEU scores.
For the perplexity and BLEU score based ordering, we use a pre-trained model on the same training corpus. We experiment with using different pre-trained models to analyze the impact on the performance. The experiments and results are presented in the next section.
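A minimal sketch of how these pre-fixed orderings can be produced (the pre-trained scoring model is a stand-in, and `perplexity` uses the standard $e^H$ definition from above):

```python
import math

def perplexity(cross_entropy):
    # PPL = e^H, where H is the pre-trained model's cross-entropy
    # for the sentence pair
    return math.exp(cross_entropy)

def order_by_score(pairs, score_fn, ascending=True):
    """Sort training pairs once, before training starts, by a per-pair
    score (e.g. perplexity or sentence-level BLEU from a pre-trained
    model). Mini-batches are then drawn sequentially from this order."""
    return sorted(pairs, key=score_fn, reverse=not ascending)
```

The same helper covers all four metric-based orderings: pass a perplexity or BLEU scoring function and flip `ascending` for the descending variants.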
Table 1: Test perplexity and BLEU for the different data ordering patterns.

| Data Ordering Pattern | Epochs | Test PPL | Test BLEU |
| --- | --- | --- | --- |
| Random Shuffle every epoch | 32 | 16.55 | 18.1 |
| Random Shuffle once | 30 | 18.78 | 18.0 |
| Ascending Sequence Length Order for source language | 15 | 24.02 | 16.6 |
| Descending Sequence Length Order for source language | 31 | 20.31 | 17.1 |
| Ascending Sequence Length Order for target language | 23 | 29.64 | 15.6 |
| Descending Sequence Length Order for target language | 29 | 23.66 | 16.8 |
| Ascending PPL Order (from pre-trained base model) | 32 | 14.69 | 19.8 |
| Descending PPL Order (from pre-trained base model) | 31 | 15.28 | 19.1 |
| Ascending BLEU Order (from pre-trained base model) | 31 | 15.46 | 18.9 |
| Descending BLEU Order (from pre-trained base model) | 29 | 15.78 | 18.6 |
4 Experiments and Results
4.1 Training Details and Hyper-parameters
We use an encoder-decoder architecture with Bahdanau attention Bahdanau et al. (2014), with two layers of 512-unit LSTMs in both the encoder and decoder and a dropout of 0.2. We refer to this as the base model when presenting the results. We also experiment with a smaller encoder-decoder architecture using one layer of 128-unit LSTM encoder and two layers of 128-unit LSTM decoder, without attention and with a 0.2 dropout probability, and refer to this as the small model. The models were trained using the Adam optimizer Kingma and Ba (2014) with a learning rate of . Training was performed on a 12GB Titan-X GPU using a batch size of 128. We use the BLEU score on the test data as the metric to evaluate performance.
For our experiments we used the standard IWSLT 2015 English-Vietnamese data set Cettolo et al. (2015), which has around 133k training sentence pairs. We sample 60k training pairs from this data by filtering out duplicates and sentences with sequence lengths above 60 or below 5, to allow a consistent evaluation across the different data shuffling patterns. We use the original validation and test data splits for the experiments.
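The filtering step can be sketched as below (a hypothetical helper; the paper does not specify whether the length bounds apply to the source side, the target side, or both, so here we assume the source side):

```python
def filter_corpus(pairs, min_len=5, max_len=60):
    """Drop duplicate sentence pairs and pairs whose source length
    (in whitespace tokens) falls outside [min_len, max_len]."""
    seen, kept = set(), []
    for src, tgt in pairs:
        if (src, tgt) in seen:
            continue  # exact duplicate pair: keep only the first copy
        if min_len <= len(src.split()) <= max_len:
            seen.add((src, tgt))
            kept.append((src, tgt))
    return kept
```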
Since our experiments aim to empirically contrast the performance of the different data ordering strategies, we do not use an ensemble of seq2seq models, and we use greedy decoding instead of beam decoding (which adds to the memory consumption and training time of the model). Hence our BLEU scores are typically lower than the state-of-the-art performance of 26.1 on this task.
We present the experimental results using different data ordering patterns in Table 1. Randomly shuffling the data once before training achieves a BLEU score comparable to the default technique of shuffling the training data every epoch, but the per-epoch shuffle achieves a considerably lower test perplexity than a single shuffle. A simple curriculum-learning approach of sorting the source language sentences by length performs considerably worse than a random ordering. Sorting by target sentence length performs better than sorting by source sentence length, but is still not comparable to random shuffling. This is possibly because the optimizer gets stuck at a local optimum and converges early rather than finding the global optimum, as evidenced by the small number of epochs required for convergence.
Using data ordering patterns sorted on metrics like perplexity and BLEU outperforms the default approach of randomly shuffling the data. Perplexity and BLEU are the most commonly used estimators of how hard a sentence pair is for a translation model to translate correctly: a sentence pair for which a pre-trained model yields lower perplexity, or a higher BLEU score, was easier for that model to translate than one with higher perplexity or a lower BLEU score. From the table we empirically observe that the ascending sorted ordering performs best and improves the BLEU score by 1.7 points over the default setting. We conjecture this is because the model first accesses less complex examples followed by more complex examples, in line with the idea of curriculum learning.
An interesting observation is that a descending sorted training schedule based on perplexity or BLEU also outperforms the default setting of random shuffling, even though the model accesses training samples from more complex to less complex. This is somewhat at odds with the curriculum learning approach of providing the model with training examples in increasing order of complexity.
4.4 Comparison across models
We evaluate the performance of the perplexity- and BLEU-sorted data orderings from 2 different pre-trained models and show empirically that a smaller trained model provides performance gains comparable to a larger trained model for these 2 data ordering strategies. The base and small models are described in section 4.1. The results are presented in Table 2.
From these results, we infer that even a smaller trained model can give a good estimate of the complexity of the training sentence pairs through perplexity and BLEU metrics. This also shows that the performance improvements due to these specific data orderings generalize across different models. We again note that perplexity-based ascending order is the best performing approach.
Table 2: Test BLEU using perplexity orderings from different pre-trained models.

| Pre-Trained Model | Data Pattern | Epochs | BLEU |
| --- | --- | --- | --- |
| Small Model | Asc PPL | 31 | 19.7 |
| Base Model | Asc PPL | 32 | 19.8 |
5 Ongoing and Future Work
Ongoing work includes verifying this conjecture on the seq2seq framework for other machine translation datasets like English-German and English-French datasets. An extension of this work is to empirically verify if using a transformer based architecture can still provide similar gains from using a perplexity sorted ordering of the training data from a pre-trained model.
Results from section 4.4 show that a small model can be used to rank the training data points, and this ranking can then be used to improve the performance of a model trained on the same data. This suggests a promising future direction: an NMT pipeline in which a low-resource model, which can be trained quickly and cheaply, ranks the training data points, and a larger model exploits this ranked order to produce performance improvements for machine translation.
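The pipeline above can be sketched in a few lines (both callables are hypothetical stand-ins for real NMT components: the scorer returns a per-pair perplexity from the cheap model, and the trainer consumes the ranked data):

```python
def rank_then_train(pairs, score_with_small_model, train_large_model):
    """Two-stage pipeline sketch: a cheap pre-trained model supplies
    per-pair complexity scores (e.g. perplexity), and the large model
    is then trained on the ascending (easy-to-hard) fixed order."""
    ranked = sorted(pairs, key=score_with_small_model)
    return train_large_model(ranked)
```

The ranking cost is paid once, by the small model, while the expensive model only ever sees the pre-sorted data.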
From this study, we conclude that data ordering patterns can have an effect on the model performance for neural machine translation. While simple heuristics like sentence length actually lead to a drop in model performance, heuristics specific to evaluating NMT performance like perplexity and BLEU score can be good measures to rank the data for training and get performance gains over the default approach of randomly sampling data points for training.
References
- D. Bahdanau, K. Cho, and Y. Bengio (2014). Neural machine translation by jointly learning to align and translate. arXiv:1409.0473; accepted at ICLR 2015.
- Y. Bengio, J. Louradour, R. Collobert, and J. Weston (2009). Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09), New York, NY, USA, pp. 41–48.
- M. Cettolo, J. Niehues, S. Stüker, L. Bentivogli, R. Cattoni, and M. Federico (2015). The IWSLT 2015 evaluation campaign.
- V. Cirik, E. Hovy, and L.-P. Morency (2016). Visualizing and understanding curriculum learning for long short-term memory networks. CoRR abs/1611.06204.
- A. Graves, M. G. Bellemare, J. Menick, R. Munos, and K. Kavukcuoglu (2017). Automated curriculum learning for neural networks. In Proceedings of the 34th International Conference on Machine Learning, PMLR 70, Sydney, Australia, pp. 1311–1320.
- F. Hieber, T. Domhan, M. Denkowski, D. Vilar, A. Sokolov, A. Clifton, and M. Post (2017). Sockeye: a toolkit for neural machine translation. CoRR abs/1712.05690.
- D. P. Kingma and J. Ba (2014). Adam: a method for stochastic optimization. CoRR abs/1412.6980.
- M.-T. Luong, H. Pham, and C. D. Manning (2015). Effective approaches to attention-based neural machine translation. CoRR abs/1508.04025.
- K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02), Stroudsburg, PA, USA, pp. 311–318.
- R. Sennrich et al. (2017). Nematus: a toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Valencia, Spain, pp. 65–68.
- S. Shankar, S. Garg, and S. Sarawagi (2018). Surprisingly easy hard-attention for sequence to sequence learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 640–645.
- I. Sutskever, O. Vinyals, and Q. V. Le (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pp. 3104–3112.
- Y. Tsvetkov, M. Faruqui, W. Ling, B. MacWhinney, and C. Dyer (2016). Learning the curriculum with Bayesian optimization for task-specific word representation learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, pp. 130–139.
- M. van der Wees, A. Bisazza, and C. Monz (2017). Dynamic data selection for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, pp. 1400–1410.
- A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017). Attention is all you need. In Advances in Neural Information Processing Systems 30, pp. 5998–6008.
- R. Wang, M. Utiyama, and E. Sumita (2018). Dynamic sentence sampling for efficient training of neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Melbourne, Australia, pp. 298–304.
- D. Zhang, J. Kim, J. Crego, and J. Senellart (2017). Boosting neural machine translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Taipei, Taiwan, pp. 271–276.
- S. Zhao and Z. Zhang (2018). Attention-via-attention neural machine translation. In AAAI.