A Recipe for Arabic-English Neural Machine Translation

Abdullah Alrajeh, et al. · August 18, 2018

In this paper, we present a recipe for building a good Arabic-English neural machine translation system. We compare neural systems with traditional phrase-based systems using various parallel corpora, including UN, ISI and Ummah. We also investigate the importance of special preprocessing of the Arabic script. The presented results are based on test sets from NIST MT 2005 and 2012. The best neural system produces a gain of +13 BLEU points over an equivalent simple phrase-based system on the NIST MT12 test set. Unexpectedly, we find that tuning a model trained on the whole data using a small high-quality corpus like Ummah gives a substantial improvement (+3 BLEU points). We also find that training a neural system with a small Arabic-English corpus is competitive with a traditional phrase-based system.


1 Introduction

Neural networks first showed impressive results as part of a statistical machine translation (SMT) system in the work of Devlin et al. (2014). Since then, research has shifted towards an end-to-end approach. Neural machine translation (NMT) has now become the dominant approach in the field, achieving state-of-the-art results in many translation tasks (Bojar et al., 2017).

Junczys-Dowmunt et al. (2016) investigate 30 translation directions using the UN corpus (around 335M words). Their experiments, based on test sets from the same corpus, show that NMT is superior to the traditional approach (i.e. phrase-based SMT). One of the investigated tasks is translation between Arabic and English, though without special preprocessing for Arabic. A large improvement (around 3 BLEU points) over phrase-based SMT is observed in both directions.

Almahairi et al. (2016) compared a neural system against a phrase-based one on an Arabic-English translation task and found them comparable on the NIST 2005 test set. They also observed that NMT is superior to SMT on an out-of-domain test set. In all cases, preprocessing of the Arabic script improved translation quality.

In this paper, we further investigate Arabic-to-English translation using several corpora including Ummah, ISI, UN and many others. We compare the performance of NMT against phrase-based SMT. In our experiments, we applied Arabic preprocessing, which includes normalization and tokenization, to see its impact on both NMT and SMT systems. Our results are based on the NIST MT sets for the years 2005, 2006 and 2012.

In the next section, we give a brief introduction to neural machine translation. Section 3 lists the parallel corpora used and gives some statistics. Section 4 presents our SMT and NMT experiments, followed by the conclusion.

2 Neural Machine Translation

Kalchbrenner and Blunsom (2013) set the foundation of neural machine translation by proposing an end-to-end encoder-decoder approach. A convolutional neural network (CNN) encodes the source sentence, and a recurrent neural network (RNN) then generates its translation.

Long sentences pose a challenge for RNNs, especially when there is long-distance reordering. Sutskever et al. (2014) develop sequence-to-sequence models that use RNNs for both encoding and decoding, replacing standard RNN units with Long Short-Term Memory (LSTM) units to capture long-term dependencies.

Cho et al. (2014) introduce the Gated Recurrent Unit (GRU), which is simpler than the LSTM.

In these early works, a source sentence is encoded into a fixed-length vector, a bottleneck that kept NMT from being competitive with SMT, particularly on long sentences.

Bahdanau et al. (2015) introduce the powerful attention mechanism that allows the decoder to focus on different words while translating.

These advancements, and others such as byte pair encoding (Sennrich et al., 2016b) to achieve open-vocabulary NMT, pave the way for new state-of-the-art translation systems.

Mathematically, the probability of a translation sentence $y = (y_1, \dots, y_T)$ given an input sentence $x = (x_1, \dots, x_S)$ is computed as follows:

$$p(y \mid x) = \prod_{t=1}^{T} p(y_t \mid y_{<t}, x) \qquad (1)$$
$$p(y_t \mid y_{<t}, x) = g(y_{t-1}, s_t, c_t) \qquad (2)$$

where $s_t$ is a hidden state in the RNN decoder while $c_t$ is the context vector computed from all hidden states $h_1, \dots, h_S$ in the RNN encoder as follows:

$$c_t = \sum_{j=1}^{S} \alpha_{tj} h_j \qquad (3)$$
$$\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{S} \exp(e_{tk})}, \qquad e_{tj} = a(s_{t-1}, h_j) \qquad (4)$$

where $h_j$ is the encoder hidden state of the input word $x_j$, $\alpha_{tj}$ is its weight, and $a$ is an alignment model that scores the importance of the input word $x_j$ in translating the output word $y_t$. This mechanism allows the decoder to pay attention only to the relevant input words. Note that the function producing the next hidden state in the encoder and decoder can be defined as an LSTM or a GRU.

Usually, an input sentence is encoded by a forward RNN, but a backward RNN, which reads the sentence in reverse order, has been found to improve performance (Sutskever et al., 2014). A bidirectional RNN has also been successful (Bahdanau et al., 2015). It reads the sentence in both directions and then concatenates the forward and backward hidden states as follows:

$$h_j = \left[ \overrightarrow{h}_j ; \overleftarrow{h}_j \right] \qquad (5)$$
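The attention computation in Equations (3) and (4) fits in a few lines. The following NumPy sketch is illustrative only: the alignment model $a$ is taken to be a small additive (feed-forward) scorer and all dimensions and parameter names are made up; it is not the exact parameterization used by our toolkit.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(s_prev, H, Wa, Ua, va):
    """One decoder step of additive attention (illustrative).

    s_prev : previous decoder state, shape (d,)
    H      : encoder states h_1..h_S stacked, shape (S, d)
    Wa, Ua, va : parameters of the alignment model a(s_{t-1}, h_j)
    Returns the context vector c_t and the weights alpha_t.
    """
    # e_{tj} = va^T tanh(Wa s_{t-1} + Ua h_j)  -- additive form of Eq. (4)
    scores = np.tanh(s_prev @ Wa + H @ Ua) @ va   # shape (S,)
    alpha = softmax(scores)                       # normalized attention weights
    context = alpha @ H                           # c_t = sum_j alpha_{tj} h_j  -- Eq. (3)
    return context, alpha

# Toy usage with random values (dimensions are arbitrary).
rng = np.random.default_rng(0)
S, d = 7, 16
H = rng.normal(size=(S, d))
s_prev = rng.normal(size=d)
Wa, Ua, va = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
c_t, alpha_t = attention_context(s_prev, H, Wa, Ua, va)
print(alpha_t.round(3), c_t.shape)
```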

3 Corpora

There are many parallel corpora available for building Arabic-English translation systems. The UN corpus (https://conferences.unite.un.org/uncorpus) is an obvious choice for many researchers and will be used in our experiments. It is composed of parliamentary documents of the United Nations since 1990 for Arabic, Chinese, English, French, Russian, and Spanish (Ziemski et al., 2016).

We also selected 11 LDC (http://ldc.upenn.edu/) corpora. These include Ummah and ISI, with catalogue numbers LDC2004T18 and LDC2007T08, respectively. The Ummah corpus contains news stories, while ISI was extracted automatically from the Arabic Gigaword and English Gigaword. The rest are mostly from the GALE project, with catalogue numbers LDC2004T17, LDC2005T05, LDC2008T09, LDC2009T09, LDC2013T10, LDC2013T14, LDC2015T05, LDC2015T07 and LDC2015T19.

Besides that, we used all Arabic-English corpora available on the OPUS website (http://opus.nlpl.eu/) (Tiedemann, 2012; Rafalovitch and Dale, 2009; Wołk and Marasek, 2014). We exclude MultiUN because we already have a larger version. OpenSubtitles and Tanzil are also excluded due to their low quality.

Table 1 shows statistics for all corpora. The total number of English words is close to half a billion.

No. Corpus Sentences Ar words En words
1 Ummah 80k 2.3m 2.9m
2 ISI 1.1m 28.9m 30.8m
3 LDC2004T17 19k 441k 581k
4 LDC2005T05 5k 106k 135k
5 LDC2008T09 3k 55k 68k
6 LDC2009T09 10k 145k 198k
7 LDC2013T10 8k 182k 240k
8 LDC2013T14 5k 89k 124k
9 LDC2015T05 18k 285k 379k
10 LDC2015T07 20k 330k 440k
11 LDC2015T19 6k 156k 210k
12 OPUS 639k 13.8m 13.8m
13 UN 185m 398m 448m
Table 1: Statistics of all Arabic-English corpora (m: million, k: thousand).

4 Experiments

We present Arabic-English SMT and NMT results based on the NIST MT sets for the years 2005, 2006 and 2012 (see Table 2). As is common in machine translation, we evaluate translation performance with the BLEU score (Papineni et al., 2002).

Set Ar sentences En sentences Ar words En words
MT06 (dev) 1797 1797 42k 54k
MT05 (test) 1056 4224 26k 130k
MT12 (test) 1378 5512 35k 191k
Table 2: Statistics of NIST MT sets.
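The paper does not tie itself to a particular BLEU implementation; purely as an illustration, the snippet below scores a system output against the four NIST reference translations with sacrebleu (the file names are hypothetical, one segment per line).

```python
import sacrebleu

# Hypothetical file names: one system output and four reference streams,
# all with the same number of lines (one segment per line).
with open("mt12.hyp", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]

references = []
for i in range(4):  # NIST sets provide four English references per segment
    with open(f"mt12.ref{i}", encoding="utf-8") as f:
        references.append([line.strip() for line in f])

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # corpus-level BLEU
```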

The systems are trained on different datasets ranging from small to very large. Training corpora in Table 1 are grouped into 4 sets:

  • Set A: Ummah corpus

  • Set B: Ummah, ISI and LDC2004T17

  • Set C: all corpora except UN

  • Set D: all corpora

The reasons for this setup are as follows. Low-resource MT is a known challenge for NMT (Koehn and Knowles, 2017), and we would like to see whether this is the case for Arabic-English (Set A). Almahairi et al. (2016) report the first result on Arabic NMT, so Set B is chosen for comparison with their work. Finally, the UN corpus might add no benefit (Devlin et al., 2014) since it is not in the news domain (Sets C and D).

Preprocessing   In our experiments, we applied Arabic preprocessing, which includes normalization and tokenization (ATB scheme), to see its impact on both SMT and NMT systems. We used Farasa (Abdelali et al., 2016), a fast Arabic segmenter. The maximum sentence length is 100.
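Farasa handles the segmentation itself; the normalization step is simple enough to illustrate. The sketch below shows the kind of character-level normalization commonly applied to Arabic text (unifying alef and ya variants, dropping diacritics and tatweel). It is an illustrative approximation, not the exact rule set of our pipeline.

```python
import re

# Common Arabic normalization rules (illustrative, not the exact Farasa behaviour):
ALEF_VARIANTS = re.compile("[\u0622\u0623\u0625]")   # آ أ إ  -> ا
ALEF_MAQSURA  = re.compile("\u0649")                  # ى      -> ي
DIACRITICS    = re.compile("[\u064B-\u0652\u0670]")   # short vowels, sukun, dagger alef
TATWEEL       = re.compile("\u0640")                  # ـ (kashida)

def normalize_arabic(text: str) -> str:
    text = ALEF_VARIANTS.sub("\u0627", text)  # unify alef forms
    text = ALEF_MAQSURA.sub("\u064A", text)   # alef maqsura -> ya
    text = DIACRITICS.sub("", text)           # strip diacritics
    text = TATWEEL.sub("", text)              # strip elongation character
    return text

print(normalize_arabic("إِلَى الأَمَامِ"))
```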

Phrase-based MT   We use the Moses toolkit (Koehn et al., 2007) with its default settings. The language model is a 5-gram model built from the English side with interpolation and Kneser-Ney smoothing (Kneser and Ney, 1995), estimated with KenLM (Heafield et al., 2013). Word alignments are extracted with fast_align (Dyer et al., 2013). We tune the system with MERT (Och, 2003). The chosen option for the reordering model is msd-bidirectional-fe.
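Moses queries the language model internally during decoding; purely to illustrate the 5-gram model's role, the model produced by KenLM can also be loaded through its Python bindings (the file name below is hypothetical).

```python
import kenlm

# Hypothetical path to the 5-gram model estimated with KenLM.
lm = kenlm.Model("en.5gram.binlm")

print(lm.order)                                    # 5
print(lm.score("the meeting was adjourned .",      # log10 probability of the sentence
               bos=True, eos=True))
print(lm.perplexity("the meeting was adjourned ."))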

Neural MT   We use Marian, an efficient and fast NMT system written in C++ (Junczys-Dowmunt et al., 2018). The system implements several models; we choose the s2s option, which is equivalent to the Nematus models (Sennrich et al., 2017), i.e. an RNN encoder-decoder with an attention mechanism. The basic training script provided with the toolkit is used. To achieve an open vocabulary, we apply byte pair encoding (BPE) (Sennrich et al., 2016b), setting the maximum size of the joint Arabic-English vocabulary to 90,000.
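BPE learns a fixed number of merge operations over the training text and then splits rare words into known subword units. The minimal sketch below follows the word-frequency algorithm of Sennrich et al. (2016b); in the actual experiments the standard subword implementations are used, and the 90,000 figure only refers to the joint vocabulary cap, not to this toy merge budget.

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge operations (simplified from Sennrich et al., 2016b).

    word_freqs: dict mapping words to their corpus frequency.
    Returns the list of learned merges, most frequent first.
    """
    # Represent each word as a tuple of symbols plus an end-of-word marker.
    vocab = {tuple(w) + ("</w>",): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge everywhere in the vocabulary.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

# Toy example; real experiments learn merges over the joint Arabic-English text.
print(learn_bpe({"lower": 5, "lowest": 2, "newer": 6, "wider": 3}, num_merges=10))
```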

Table 3 reports BLEU scores of the phrase-based SMT systems trained on the various datasets. Clearly, preprocessing of the Arabic side is important; a substantial gain is observed when the training data is small (Set A). Note that adding the UN corpus to the training data improves the BLEU score, as in Set D.

Set System MT05 MT12 avg
A baseline 39.49 22.50 31.00
+ ar preprocessing 42.17 31.87 37.02
B baseline 49.71 34.25 41.98
+ ar preprocessing 51.65 37.06 44.35
C baseline 51.32 38.12 44.72
+ ar preprocessing 52.76 40.80 46.78
D baseline 52.57 40.02 46.30
+ ar preprocessing 53.45 41.11 47.28
Table 3: BLEU scores of Arabic-English SMT.

Table 4 presents the NMT systems' performance in BLEU. Compared to Table 3, NMT is superior to SMT in all cases. The best NMT system produces a gain of +13 BLEU points on the NIST MT12 test set. Unexpectedly, NMT is similar to or better than SMT even with a small dataset (Set A contains less than 3 million English words). Note that the gap in BLEU between NMT and SMT increases with more training data. Arabic preprocessing improves performance as in Table 3, which indicates that BPE alone is not sufficient. We find that tuning a model trained on the whole data using a high-quality corpus like Ummah (Set A) gives a substantial improvement. Finally, an ensemble of the 5 best independently trained models boosts the score by +1.5 BLEU.

Set System MT05 MT12 avg
A baseline 41.62 19.47 30.55
+ ar preprocessing 44.15 31.86 38.01
B baseline 53.27 38.19 45.73
+ ar preprocessing 54.69 40.07 47.38
C baseline 57.02 45.73 51.38
+ ar preprocessing 58.35 47.04 52.70
D baseline 57.31 45.81 51.56
+ ar preprocessing 58.43 47.74 53.09
+ tuning 61.26 52.53 56.90
+ ensemble of 5 62.98 54.27 58.63
Table 4: BLEU scores of Arabic-English NMT.
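The ensemble in Table 4 combines the predictions of several independently trained models during decoding: at each step, the models' output distributions over the target vocabulary are averaged before expanding the beam. The sketch below only illustrates that averaging step; the model count, vocabulary size and scores are made up, and in practice the decoder performs this internally when given several model files.

```python
import numpy as np

def ensemble_next_token_logprobs(per_model_logits):
    """Average the per-step output distributions of several models.

    per_model_logits: array of shape (n_models, vocab_size) with the raw
    scores each model assigns to the next target token.
    Returns log probabilities of the ensembled distribution.
    """
    logits = np.asarray(per_model_logits, dtype=float)
    # Log-softmax each model's scores (stabilized by subtracting the row max) ...
    logits = logits - logits.max(axis=1, keepdims=True)
    logprobs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # ... then average the probabilities across models and return log space.
    return np.log(np.exp(logprobs).mean(axis=0))

# Toy decoding step with three models and a five-word vocabulary (values made up).
step_scores = np.array([[2.0, 0.1, 0.3, 0.0, 1.2],
                        [1.8, 0.2, 0.1, 0.4, 1.5],
                        [2.2, 0.0, 0.2, 0.1, 1.0]])
print(ensemble_next_token_logprobs(step_scores).round(3))
```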

Training on the whole data on a single GPU took 4 days (NVIDIA GTX 1080 Ti, 4.20 GHz CPU, SSD storage). The disk size of the best model is just 645 MB, which is very compact compared to the SMT phrase table alone at 8.5 GB.

During the experiments, other models such as the Transformer (Vaswani et al., 2017) were also tried, but no improvement was gained; the same holds for varying the joint vocabulary size.

5 Conclusion

We present Arabic to English machine translation using various training datasets. We compare neural systems with traditional ones (i.e. phrase-based SMT). We also investigate the importance of special preprocessing of the Arabic script. The systems are tested on NIST MT 2005 and 2012.

From the experiments, we draw the following conclusions. In both NMT and SMT systems, Arabic preprocessing improves translation quality, as found by Almahairi et al. (2016). Although the UN corpus is not in the news domain, a gain is observed for both systems. Neural MT is superior to phrase-based MT in all cases, and NMT is able to perform very well even given a small corpus. Finally, tuning a model trained on the whole data using a small high-quality corpus (i.e. Ummah) gives a substantial improvement. The best NMT system produces a gain of +13 BLEU points on the NIST MT12 test set.

There are techniques we have not considered in this work that might further improve translation quality, such as back-translation (Sennrich et al., 2016a).

References

  • Abdelali et al. (2016) Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for arabic. In Proceedings of the Demonstrations Session, NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 11–16.
  • Almahairi et al. (2016) Amjad Almahairi, Kyunghyun Cho, Nizar Habash, and Aaron C. Courville. 2016. First result on Arabic neural machine translation. Computing Research Repository, arXiv:1606.02680. Version 1.
  • Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR).
  • Bojar et al. (2017) Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (wmt17). In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 169–214, Copenhagen, Denmark. Association for Computational Linguistics.
  • Cho et al. (2014) Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics.
  • Devlin et al. (2014) Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1370–1380, Baltimore, Maryland. Association for Computational Linguistics.
  • Dyer et al. (2013) Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics.
  • Heafield et al. (2013) Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified kneser-ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 690–696, Sofia, Bulgaria. Association for Computational Linguistics.
  • Junczys-Dowmunt et al. (2016) Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? A case study on 30 translation directions. In Proceedings of the 13th International Workshop on Spoken Language Translation.
  • Junczys-Dowmunt et al. (2018) Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in c++. Computing Research Repository, arXiv:1804.00344. Version 3.
  • Kalchbrenner and Blunsom (2013) Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709, Seattle, Washington, USA. Association for Computational Linguistics.
  • Kneser and Ney (1995) Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, volume 1.
  • Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Christopher J. Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
  • Koehn and Knowles (2017) Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39, Vancouver. Association for Computational Linguistics.
  • Och (2003) Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
  • Rafalovitch and Dale (2009) Alexandre Rafalovitch and Robert Dale. 2009. United nations general assembly resolutions: A six-language parallel corpus. In Proceedings of the MT Summit XII, pages 292–299.
  • Sennrich et al. (2017) Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 65–68, Valencia, Spain. Association for Computational Linguistics.
  • Sennrich et al. (2016a) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics.
  • Sennrich et al. (2016b) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
  • Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc.
  • Tiedemann (2012) Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey. European Language Resources Association (ELRA).
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc.
  • Wołk and Marasek (2014) Krzysztof Wołk and Krzysztof Marasek. 2014. Building subject-aligned comparable corpora and mining it for truly parallel sentence pairs. Procedia Technology, 18:126 – 132. International workshop on Innovations in Information and Communication Science and Technology, IICST 2014, 3-5 September 2014, Warsaw, Poland.
  • Ziemski et al. (2016) Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The united nations parallel corpus v1.0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA).