A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation

06/15/2016 · by Amrita Saha, et al. · IBM, NYU

Interlingua based Machine Translation (MT) aims to encode multiple languages into a common linguistic representation and then decode sentences in multiple target languages from this representation. In this work we explore this idea in the context of neural encoder decoder architectures, albeit on a smaller scale and without MT as the end goal. Specifically, we consider the case of three languages or modalities X, Z and Y wherein we are interested in generating sequences in Y starting from information available in X. However, there is no parallel training data available between X and Y, but training data is available between X & Z and Z & Y (as is often the case in many real world applications). Z thus acts as a pivot/bridge. An obvious solution, which is perhaps less elegant but works very well in practice, is to train a two stage model which first converts from X to Z and then from Z to Y. Instead we explore an interlingua inspired solution which jointly learns to (i) encode X and Z to a common representation and (ii) decode Y from this common representation. We evaluate our model on two tasks: (i) bridge transliteration and (ii) bridge captioning. We report promising results in both these applications and believe that this is a right step towards truly interlingua inspired encoder decoder architectures.



1 Introduction

Interlingua based MT [Nirenburg1994, Dorr et al.2010] relies on the principle that every language in the world can be mapped to a common linguistic representation. Further, given this representation, it should be possible to decode a target sentence in any language. This implies that given n languages we just need n decoders and n encoders to translate between all these language pairs. Note that even though we take inspiration from interlingua based MT, the focus of this work is not on MT. We believe that this idea is not just limited to translation but could be applicable to any kind of conversion involving multiple source and target languages and/or modalities (for example, transliteration, multilingual image captioning, multilingual image Question Answering, etc.). Even though this idea has had limited success, it is still fascinating and considered by many as the holy grail of multilingual multimodal processing.

It is interesting to consider the implications of this idea when viewed in the statistical context. For example, current state of the art statistical systems for MT [Koehn et al.2003, Chiang2005, Luong et al.2015b], transliteration [Finch et al.2015, Shao et al.2015, Nicolai et al.2015], image captioning [Vinyals et al.2015b, Xu et al.2015], etc. require parallel data between the source and target views (where a view could be a language or some other modality like image). Thus, given n views, we require a parallel dataset between every pair of views to build systems which can convert from any source view to any target view. Obviously, this does not scale well in practice because it is hard to find parallel data between all views. For example, publicly available parallel datasets for transliteration [Zhang et al.2012] cater to 20 languages. Similarly, publicly available image caption datasets are available only for English (http://mscoco.org/dataset/#download) and German (http://www.statmt.org/wmt16/multimodal-task.html).

This problem of resource scarcity could be alleviated if we could learn only n statistical encoders and n statistical decoders wherein (i) the encoded representation is common across languages and (ii) the decoders can decode from this common representation (akin to interlingua based conversion). As a small step in this direction, we consider a scaled down version of this generic conversion problem. Specifically, we consider the case where we have three views X, Y and Z, but parallel data is available only between X & Z and Z & Y (instead of all three pairs of parallel datasets). At test time, we are interested in generating natural language sequences in Y starting from information available in X. We refer to this as the bridge setup, as the language Z here can be considered a bridge/pivot between X and Y.

An obvious solution to the above problem is to train a two-stage system which first converts from X to Z and then from Z to Y. While this solution may work very well in practice (as our experiments indeed suggest), it is perhaps less elegant and becomes tedious as the number of views increases, for example, when converting through a chain of four views. Instead, we suggest a neural network based model which simultaneously learns the following: (i) a common representation for X and Z and (ii) decoding Y from this common representation. In other words, instead of training two independent models using the datasets between X & Z and Z & Y, the model jointly learns from the two datasets. The resulting common representation learned for X and Z can be viewed as a vectorial analogue of the linguistic representation sought by interlingua based approaches. Of course, by no means do we suggest that this vectorial representation is a substitute for the rich linguistic representation, but it is easier to learn from parallel data (as opposed to a linguistic representation which requires hand-crafted resources).

Note that our work should not be confused with the recent work of [Firat et al.2016], [Zoph and Knight2016] and [Elliott et al.2015]. The last two works in fact require 3-way parallel data between X, Z and Y and learn to decode sequences in Y given both X and Z. For example, at test time, [Elliott et al.2015] generate captions in German given both (i) the image and (ii) its corresponding English caption. This is indeed very different from the problem addressed in this paper. Similarly, even though [Firat et al.2016] learn a single encoder per language and a single decoder per language, they do not learn shared representations for multiple languages (only the attention mechanism is shared). Further, in all their experiments they require parallel data between the two languages of interest. Specifically, they do not consider the case of generating sentences in Y given a sentence in X when no parallel data is available between X and Y.

We present an empirical comparison of jointly trained models which explicitly aim for shared encoder representations with two-stage architectures. We consider two downstream applications: (i) bridge transliteration and (ii) bridge caption generation. We use the standard NEWS 2012 dataset [Zhang et al.2012] for transliteration. We consider transliteration between 12 language pairs (X-Y) using English as the bridge (Z). Bridge caption generation is a new task introduced in this paper where the aim is to generate French captions for an image when no Image-French (X-Y) parallel data is available for training. Instead, training data is available between Image-English (X-Z) and English-French (Z-Y). In both these tasks we report promising results. In fact, in our multilingual transliteration experiments we are able to beat the strong two-stage baseline in many cases. These results show potential for further research in interlingua inspired neural network architectures. We do acknowledge that a successful interlingua based statistical solution requiring only n encoders and n decoders is a much harder task, and our work is only a small step in that direction.

2 Related Work

Encoder decoder based architectures for sequence to sequence generation were initially proposed in [Cho et al.2014, Sutskever et al.2014] in the context of Machine Translation (MT) and have also been successfully used for generating captions for images [Vinyals et al.2015b]. However, such sequence to sequence models are often difficult to train as they aim to encode the entire source sequence using a fixed encoder representation. [Bahdanau et al.2014] introduced attention based models wherein a different representation is fed to the decoder at each time step by focusing the attention on different parts of the input sequence. Such attention based models have been more successful than vanilla encoder-decoder models and have been used successfully for MT [Bahdanau et al.2014], parsing [Vinyals et al.2015a], speech recognition [Chorowski et al.2015], and image captioning [Xu et al.2015], among other applications. All the above mentioned works focus only on the case where there is one source and one target. The source can be an image, text, or a speech signal, but the target is always a text sequence.

Encoder decoder models in a multi-source, single target setting have been explored by [Elliott et al.2015] and [Zoph and Knight2016]. Specifically, [Elliott et al.2015] try to generate a German caption from an image and its corresponding English caption. Similarly, [Zoph and Knight2016] focus on the problem of generating English translations given the same sentence in both French and German. We would like to highlight that both these models require three-way parallel data, while we focus on situations where such data is not available. Single source, multi-target and multi-source, single target settings have been considered in [Luong et al.2015a]. Recent work by [Firat et al.2016] explores multi-source to multi-target encoder decoder models in the context of Machine Translation. However, [Firat et al.2016] focus on multi-task learning with a shared attention mechanism, and the goal is to improve MT performance for a pair of languages for which parallel data is available. This is clearly different from the goal of this paper, which is to design encoder decoder models for a pair of languages for which no parallel data is available but data is available between each of these languages and a bridge language.

Of course, in general the idea of pivot/bridge/interlingua based conversion is not new and has been used previously in several non-neural network settings. For example, [Khapra et al.2010] use a bridge or pivot language to do machine transliteration. Similarly, [Wu and Wang2007, Zhu et al.2014] do pivot based machine translation. Lastly, we would also like to mention the work on interlingua based Machine Translation [Nirenburg1994, Dorr et al.2010], which is clearly the inspiration for our work even though our focus is not on MT.

The main theme explored in this paper is to learn a common representation for two views with the end goal of generating a target sequence in a third view. The idea of learning common representations for multiple views has been explored well in the past [Klementiev et al.2012, Chandar et al.2014, Hermann and Blunsom2014, Chandar et al.2016, Rajendran et al.2015]. For example, [Andrew et al.2013] propose Deep CCA for learning a common representation for two views. [Chandar et al.2014, Chandar et al.2016] propose correlational neural networks for common representation learning, and [Rajendran et al.2015] propose bridge correlational networks for multilingual multimodal representation learning. From the point of view of representation learning, the work of [Rajendran et al.2015] is very similar to ours, except that it focuses only on representation learning and does not consider the end goal of generating sequences in a target language.

3 Models

As mentioned earlier, one of the aims of this work is to compare a jointly trained model with a two stage model. We first briefly describe such a two stage encoder decoder architecture and then describe our model which is a correlation based jointly trained encoder decoder model.

3.1 A two stage encoder-decoder model

A two stage encoder-decoder is a straight-forward extension of sequence to sequence models [Cho et al.2014, Sutskever et al.2014] to the bridge setup. Given parallel data between X & Z and Z & Y, a two stage model learns a generative model for each of the two pairs independently. For the purpose of this work, the source can be an image or text, but the target is always natural language text. For encoding an image, we simply take its feature representation obtained from one of the fully connected layers of a convolutional neural network and pass it through a feed-forward layer. On the other hand, for encoding a source text sequence, we use a recurrent neural network. The decoder is always a recurrent neural network which generates the text sequence one token at a time.

Figure 1: Two stage encoder-decoder model. Dashed lines denote how the model is used during training and solid lines denote the test time usage. The two encoder-decoders are trained independently but used jointly during testing.

Let the two training sets be D1 = {(x_i, z_i)}_{i=1}^{N1} and D2 = {(z_j, y_j)}_{j=1}^{N2}, where x_i ∈ X, z_i, z_j ∈ Z and y_j ∈ Y. Given (x_i, z_i) ∈ D1, the first encoder-decoder learns to encode x_i and decode the corresponding z_i from this encoded representation. The second encoder-decoder is trained independently of the first and uses (z_j, y_j) ∈ D2 to encode z_j and decode the corresponding y_j from this encoded representation. These independent training processes are indicated by the dashed arrows in Figure 1. At test time, the two stages are run sequentially. In other words, given x, we first encode it and decode z from it using the first encoder-decoder model. This decoded z is then fed to the second encoder-decoder model to generate y. This sequential test process is indicated by the solid arrows in Figure 1.

While this two stage encoder-decoder model is a trivial extension of a single encoder-decoder model, it serves as a very strong baseline as we will see later in the experiments section.
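The test-time control flow above can be sketched as a toy pipeline, with dictionaries standing in for the two independently trained encoder-decoder models (all names and entries here are illustrative, not the paper's implementation):

```python
# Toy sketch of the two-stage bridge pipeline (Section 3.1).
# stage1 stands in for the X -> Z model (e.g., Hindi -> English) and
# stage2 for the Z -> Y model (e.g., English -> Kannada). Real systems
# would be seq2seq networks; lookup tables keep the control flow visible.

stage1 = {"namaste": "namaste_en"}     # X -> Z model (hypothetical entries)
stage2 = {"namaste_en": "namaste_ka"}  # Z -> Y model (hypothetical entries)

def two_stage_decode(x, model_xz, model_zy):
    """Decode y from x by pivoting through z: run the two stages sequentially."""
    z = model_xz[x]      # stage 1: encode x, decode the pivot z
    return model_zy[z]   # stage 2: encode z, decode the target y

print(two_stage_decode("namaste", stage1, stage2))  # prints namaste_ka
```

Note that any error made by the first stage is propagated to the second, which is one of the drawbacks the jointly trained model in Section 3.2 tries to avoid.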

3.2 A correlation based joint encoder-decoder model

While the above model works well in practice, it becomes cumbersome when more views are involved (for example, when converting through a chain of four views). We desire a more elegant solution which could scale even when more views are involved (although for the purpose of this work, we restrict ourselves to 3 views only). To this end, we propose a joint model which uses the parallel data D1 (as defined above) to learn one encoder each for X and Z such that the representations of x_i and z_i are correlated. In addition, and simultaneously, the model uses D2 and learns to decode y_j from z_j. Note that this joint training has the advantage that the encoder for Z benefits from instances in both D1 and D2.

Having given an intuitive explanation of the model, we now formally define the objective functions used during training. Given D1, the model tries to maximize the correlation between the encoded representations of x_i and z_i as defined below:

J_corr(θ) = −λ corr( s(h_X(X)), s(h_Z(Z)) )    (1)

where h_X(x_i) is the representation computed by the encoder for X and h_Z(z_i) is the representation computed by the encoder for Z. As mentioned earlier, these encoders could be RNN encoders (in the case of text) or simple feedforward encoders (in the case of images). s is a standardization function which adjusts the hidden representations h_X(x_i) and h_Z(z_i) so that they have zero mean and unit variance. Further, λ is a scaling hyper-parameter and corr is the correlation function as defined below:

corr(H_X, H_Z) = Σ_{i=1}^{N1} (h_X(x_i) − h̄_X)(h_Z(z_i) − h̄_Z) / sqrt( Σ_{i=1}^{N1} (h_X(x_i) − h̄_X)² · Σ_{i=1}^{N1} (h_Z(z_i) − h̄_Z)² )    (2)

where h̄_X and h̄_Z denote the means of the respective encoded representations. We would like to emphasize that s ensures that the representations already have zero mean and unit variance, and hence no separate standardization is required while computing the correlation.
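A minimal NumPy sketch of the standardization function s and the batch correlation of Equation 2 (function and variable names are ours; this is an illustrative reimplementation, not the authors' code):

```python
import numpy as np

def s(H, eps=1e-8):
    """Standardize a (batch, dim) matrix of hidden representations so each
    dimension has zero mean and unit variance across the batch (role of s)."""
    return (H - H.mean(axis=0)) / (H.std(axis=0) + eps)

def corr(H1, H2, eps=1e-8):
    """Sum of per-dimension correlations between two (batch, dim) matrices
    of encoded representations, as in Eq. 2."""
    H1c = H1 - H1.mean(axis=0)
    H2c = H2 - H2.mean(axis=0)
    num = (H1c * H2c).sum(axis=0)
    den = np.sqrt((H1c ** 2).sum(axis=0) * (H2c ** 2).sum(axis=0)) + eps
    return (num / den).sum()

# Perfectly correlated views give a correlation equal to the number of
# hidden dimensions, which is the value Eq. 1 pushes towards.
np.random.seed(0)
H = np.random.randn(32, 4)
print(corr(s(H), s(H)))  # close to 4.0
```

Since s already centers and scales each dimension, the mean-subtraction inside corr becomes a no-op for standardized inputs, which is exactly the simplification the paper points out.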

Figure 2: Correlated encoder-decoder model. Dashed lines denote how the model is used during training and solid lines denote the test time usage. During training, both encoders are trained to produce correlated representations, and the decoder for Y is trained from the encoder for Z. During test time, only the encoder for X and the decoder for Y are used.

In addition to the above loss function, given D2, the model minimizes the following cross entropy loss:

J_ce(θ) = − Σ_{j=1}^{N2} log P(y_j | z_j)    (3)

where

P(y_j | z_j) = Π_{t=1}^{ℓ_j} P(y_j^t | y_j^1, …, y_j^{t−1}, h_Z(z_j))    (4)

where ℓ_j is the number of tokens in y_j.
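The loss in Equations 3 and 4 is the usual per-token factorization of a sequence's likelihood; a minimal sketch, with a toy decoder that emits fixed probabilities for the gold tokens (the helper name is ours):

```python
import math

def sequence_nll(token_probs):
    """Cross-entropy loss for one target sequence (Eqs. 3-4):
    -log P(y | z) = -sum_t log P(y^t | y^{<t}, z).
    token_probs[t] is the decoder's probability of the gold token at step t."""
    return -sum(math.log(p) for p in token_probs)

# A decoder that is uniform over a vocabulary of size V assigns p = 1/V to
# every gold token, so a sequence of length l has loss l * log(V).
V, l = 50, 4
loss = sequence_nll([1.0 / V] * l)
print(loss)  # = 4 * log(50) ≈ 15.65
```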

The dotted lines in Figure 2 show the joint training process where the model simultaneously learns to compute correlated representations for x_i and z_i and to decode y_j from z_j. The testing process is shown by the solid lines, wherein the model computes a hidden representation for the input x and then decodes y from it directly, without transitioning through Z.

While training, we alternately pick mini-batches from D1 and D2 and use the corresponding objective function. Means and variances for the representations computed by the two encoders are updated at the end of every epoch based on the hidden representations of all instances in the training data. During the first epoch we assume the mean and variance to be 0 and 1 respectively. Note that λ rescales the value of the correlation loss term so that it is in the same range as the value of the cross-entropy loss term.
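The alternating scheme can be sketched as follows; the strict interleaving and the handling of unequal dataset sizes are our assumptions, since the text only states that mini-batches are picked alternately:

```python
from itertools import zip_longest

def alternate_batches(d1_batches, d2_batches):
    """Build a training schedule that alternates between the correlation
    objective (Eq. 1, on X-Z batches from D1) and the cross-entropy
    objective (Eq. 3, on Z-Y batches from D2). When one dataset runs out,
    the remaining batches of the other are still consumed."""
    schedule = []
    for b1, b2 in zip_longest(d1_batches, d2_batches):
        if b1 is not None:
            schedule.append(("corr", b1))  # maximize Eq. 1 on an X-Z batch
        if b2 is not None:
            schedule.append(("xent", b2))  # minimize Eq. 3 on a Z-Y batch
    return schedule

print(alternate_batches([0, 1], [0, 1, 2]))
# [('corr', 0), ('xent', 0), ('corr', 1), ('xent', 1), ('xent', 2)]
```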

4 Experiment 1: Bridge Transliteration

We consider the task of transliteration between two languages X and Y when no direct parallel data is available between them but parallel data is available between X & Z and Z & Y. In the following subsections we describe the datasets used for our task, the hyper-parameters considered for our experiments, and the results.

Language Pair Train-Set Validation-Set Test-Set
En-Hi 19918 500 1000
En-Ka 16556 500 1000
En-Ma 8500 500 1000
En-Ta 16857 500 1000
Table 1: Train, Validation and Test Splits of the NEWS 2012 Transliteration Corpus for the 4 Indian languages (Hindi, Kannada, Tamil, Marathi)

Two Stage PBSMT
src\tgt  Hi    Ka    Ta    Ma
Hi       -     36.3  33.2  33.6
Ka       41.3  -     32.1  26.2
Ta       30.5  25.8  -     19.2
Ma       46.9  33.7  30.9  -

Two Stage Encoder Decoder
src\tgt  Hi    Ka    Ta    Ma
Hi       -     42.1  43.4  34.8
Ka       46.2  -     42.9  30.7
Ta       37.5  34.8  -     23.8
Ma       45.8  34.9  31.7  -

Correlational Encoder Decoder
src\tgt  Hi    Ka    Ta    Ma
Hi       -     43.1  40.6  40.9
Ka       47.5  -     40.2  27.9
Ta       33.6  27.7  -     17.0
Ma       59.0  37.1  34.5  -

Table 2: Transliteration accuracy on the 12 language pairs involving Hindi, Kannada, Tamil and Marathi for the three comparative methods (Two Stage PBSMT, Two Stage Encoder Decoder, and the proposed Correlational Encoder Decoder). An underlined number in this table signifies that for that specific language pair the corresponding system performs better than the Two Stage PBSMT model, and the best performing system for each language pair is represented in bold font.

4.1 Datasets

We consider transliteration between 4 languages, viz., Hindi, Kannada, Tamil and Marathi, resulting in 12 language pairs. However, we do not use any direct parallel data between any of these languages. Instead we use the standard datasets available between English and each of these languages, which were released as part of the NEWS 2012 shared task [Zhang et al.2012]. To be explicit, for the task of transliterating from Hindi to Kannada, we construct D1 from the English-Hindi dataset and D2 from the English-Kannada dataset. Table 1 summarizes the sizes of these parallel datasets. Fortunately, the English portion of the test set was common across all the 4 language pairs mentioned in Table 1. This allowed us to easily create test sets for all the 12 language pairs. For example, if h is the transliteration of an English word e in the English-Hindi test set and k is the transliteration of the same English word in the English-Kannada test set, then we add (h, k) as a transliteration pair to our Hindi-Kannada test set. In this way, we created test sets containing 1000 words for each of the 12 language pairs.
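The test-set construction amounts to a join on the shared English side of the two En-* test sets; a minimal sketch with toy romanized entries (illustrative, not actual NEWS 2012 data):

```python
# Toy En-Hindi and En-Kannada test dictionaries, keyed by the shared
# English word. The romanized target strings are made up for illustration.
en_hi = {"delhi": "dillii", "kerala": "keral"}
en_ka = {"delhi": "dehali", "mumbai": "mumbayi"}

def bridge_test_pairs(en_x, en_y):
    """Pair the two target transliterations of every English word that
    appears in both En-X and En-Y test sets, yielding X-Y test pairs."""
    return [(en_x[e], en_y[e]) for e in sorted(en_x) if e in en_y]

print(bridge_test_pairs(en_hi, en_ka))  # [('dillii', 'dehali')]
```

Only English words present in both test sets contribute a pair; in the actual corpus the English side was fully shared, so each of the 12 pairs ends up with 1000 test words.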

4.2 Hyperparameters

For the two stage encoder decoder model, we considered the following hyperparameters: character embedding size {1024, 2048}, RNN hidden unit size {1024, 2048}, initial learning rate {0.01, 0.001} and batch size {32, 64}. The numbers in brackets indicate the distinct values that we considered for each hyperparameter. Note that the embedding size and RNN size are always kept equal. All these parameters were tuned independently for the two stages using their respective validation sets. For the correlated encoder decoder model, in addition to the above hyperparameters we also had λ ∈ [0.1, 1.0] as a hyperparameter. Here, we tuned the hyperparameters based on the performance on the validation set available between Z & Y (since the correlated encoder decoder can also decode y from z). Note that we do not use any parallel data between X & Y for tuning the hyperparameters, because the general assumption is that no parallel data is available between X & Y. We used Adam [Kingma and Ba2014] as the optimizer for all our experiments.

4.3 Results

We compare our model with the following systems:

1. Two Stage PBSMT: Here, we train two PBSMT [Koehn et al.2003] based transliteration systems using D1 and D2. This is an additional baseline to see how well an encoder decoder architecture compares to a conventional PBSMT based system. We used Moses [Koehn et al.2007] for building our PBSMT systems. The decoder parameters were tuned using the validation sets. The language model was trained on the target portion of the parallel corpus.

2. Two Stage Encoder Decoder: Here, we train two encoder decoder based transliteration systems using D1 and D2 as described in Section 3.1.

Table 2 summarizes the accuracy (% of correct transliterations) of the three systems in the bridge setup. We observe that in 6 out of the 12 language pairs our correlational model does better than the two stage encoder decoder model. Further, it does better than the two stage PBSMT baseline in 11 out of the 12 language pairs. This is very encouraging, especially because such two stage approaches are considered to be very strong baselines for these tasks [Khapra et al.2010]. In general, the encoder decoder based approaches do better than PBSMT based systems. This is indeed the case even when we compare the performance of the PBSMT based system and the Encoder Decoder based system independently on the two stages (Table 3).

Source-Target Pair   PBSMT   Encoder-Decoder
En-Hi                51.7    61.6
En-Ka                45.3    53.7
En-Ta                50.0    57.7
En-Ma                30.2    38.0
Hi-En                51.1    57.3
Ka-En                47.9    54.5
Ta-En                41.4    46.2
Ma-En                35.0    31.1
Table 3: Transliteration accuracy (%) of the PBSMT system and the Encoder-Decoder model on the 4 Indian languages (Hindi, Kannada, Tamil, Marathi) when transliterating from English and to English
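The accuracy reported in Tables 2 and 3 is the percentage of exact-match transliterations; a minimal sketch, simplified to a single reference per source word (NEWS-style evaluation can allow multiple references per word):

```python
def transliteration_accuracy(predictions, references):
    """Percentage of test words whose top-1 prediction exactly matches
    the reference transliteration (single-reference simplification)."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

# Toy example: one of two predictions matches its reference exactly.
print(transliteration_accuracy(["dilli", "keral"], ["dilli", "kerala"]))  # 50.0
```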

5 Experiment 2: Bridge Captioning

We now introduce the task of bridge caption generation. The purpose of introducing this task is two-fold. Firstly, we feel that it is important to put things in perspective and demonstrate that while interlingua inspired encoder decoder architectures are a step in the right direction, much more work is needed when dealing with different modalities in a bridge setup. Secondly, we think that this is an important task which has not received any attention in the past. We would like to formally define it and report some initial baselines to motivate further research in this area. The formal task definition is as follows: generate captions for images in a language Y (say, French) when no parallel data is available between images and Y, but parallel data is available between Image-Z (D1) and between Z-Y (D2), where Z is another language (say, English). In the following subsections we describe the datasets used for this task, the hyperparameters considered for our experiments, and the results.

Systems BLEU-4 BLEU-3 BLEU-2 BLEU-1 ROUGE-L CIDEr
Pseudo Im-Fr 15.5 24.2 37.4 56.5 38.3 41.2
Two Stage 16.6 25.7 39.0 58.3 39.5 49.1
Correlational Encoder Decoder 12.6 19.3 31.1 50.5 34.3 29.8
Table 4: Image Captioning performance in generating French caption for a given image for the three methods: Pseudo Im-Fr, Two Stage and our Correlational Encoder Decoder based model.

5.1 Datasets

Even though we do not have direct training data between Image-French, we need some test data to evaluate our model. For this, we use the Image-French test set recently released by [Rajendran et al.2015]. To create this data, they first merged the 80K images from the standard train split and 40K images from the standard valid split of the MSCOCO data (http://mscoco.org/dataset/#download). They then randomly split the merged 120K images into train (118K), validation (1K) and test (1K) sets. They then collected French translations for all 5 captions of each image in the test set using crowdsourcing. CrowdFlower (https://make.crowdflower.com) was used as the crowdsourcing platform, and they solicited one French and one German translation for each of the 5000 captions from native speakers. Note that [Rajendran et al.2015] report results for cross modal search and do not address the problem of crosslingual image captioning.

In our model, for D1 (Image-English) we use the same train (118K), validation (1K) and test (1K) splits as defined in [Rajendran et al.2015] and explained above. Choosing D2 (English-French) was a bit more tricky. Initially we considered the corpus released as part of WMT'12 [Callison-Burch et al.2012], which contains roughly 44M English-French parallel sentences from various sources including news and parliamentary proceedings. However, our initial small scale experiments showed that this does not work well because there is a clear mismatch between the vocabulary of this corpus and the vocabulary we need for generating captions. The vocabulary is also much larger (at least an order of magnitude larger than what we need for image captioning), which hampers training. Further, the average length and structure of these sentences is very different from that of captions. Domain shift in MT is itself a challenging problem (not to mention the added complexity of a multimodal bridge setup), and it was unrealistic to expect our model to work in the presence of these orthogonal complexities.

To isolate these issues and evaluate our model in a controlled environment, we needed a parallel corpus with characteristics very similar to those of captions. Since we did not have such a corpus at our disposal, we decided to follow [Rajendran et al.2015] and use a pseudo parallel corpus between English-French. Specifically, we take the English captions from the MSCOCO data and translate them to French using the publicly available translation system provided by IBM (http://www.ibm.com/smarterplanet/us/en/ibmwatson). Note that our model still does not see direct parallel data between Image and French during training. We acknowledge that this is not the ideal thing to do, but it is good enough for a proof-of-concept evaluation of our model and to understand its potential. We, of course, account for the liberty taken here by comparing with equally strong baselines, as discussed later in the results section.

5.2 Hyperparameters

Our model has the following hyperparameters: embedding size, batch size, hidden representation size, and learning rate. Based on experiments involving direct Image-to-English caption generation we observed that the following values work well: embedding size = 512, batch size = 80, RNN hidden unit size = 512, and learning rate = 4e-4 with Adam [Kingma and Ba2014] as the optimizer. We retained these hyper-parameters and did not tune them again for the bridge setup. We tuned the value of λ by evaluating the correlation loss on the Image-English validation set. Again, we do not use any Image-French data for tuning any hyperparameters.

MSCOCO Images: (six sample test images, omitted here)

Correlational Encoder Decoder: Un homme est surfer sur une vague dans l'océan | Un skateur est en train de décoller sur un skateboard | Une plaque avec un sandwich et un verre de bière | Une girafe est debout dans la poussière près d'une arborescence | Un bus de transport en commun dans une rue de la ville | Un salon avec un canapé , une table et un téléviseur
Two-Stage: Un homme circonscription une vague sur une planche de surf | Un homme circonscription un skateboard sur une rampe en bois | Une plaque de nourriture sur une table avec un verre de vin | Une girafe debout dans un champ avec des arbres en arrièreplan | Un bus double sandwich au volant dune rue | Un salon avec un canapé fauteuil et une télévision
Pseudo Im-Fr: Un internaute dans une combinaison isothermique est circonscription une vague | Un jeune garçon circonscription un skateboard dans un parc | Une plaque de nourriture sur une table en bois | Une girafe debout à côté d'un autre girafe dans une zone | Un bus ville faire baisser une rue de la ville | Un salon avec un canapé , une table et un canapé

Table 5: Example captions generated by the three methods on a sample set of MSCOCO test images (one caption per image, separated by |)

5.3 Results

We now present the results of our experiments where we compare with the following strong baselines.

1. Two Stage: Here we use a Show & Tell model [Vinyals et al.2015b] trained using D1 to generate an English caption for the image. We then translate this caption into French using IBM's translation system as described above.

2. Pseudo Im-Fr : Here we train an Image-to-French Show & Tell model [Vinyals et al.2015b] by pairing the images in the MSCOCO dataset with their pseudo French captions generated by translating the English captions into French (using IBM’s translation system).

We observe that our model is unable to beat the two strong baselines described above but still comes close to their performance (Table 4). This reinforces our belief in this line of research; hopefully more powerful models (perhaps attention based) could eventually surpass these two baselines.

As a qualitative evaluation of our model, Table 5 shows the captions generated by our model. It is exciting that even in a complex multimodal bridge setup the model is able to capture correlations between Images and English sentences and further decode relevant French captions from a given image.

6 Conclusion

In this paper, we considered the problem of pivot based sequence generation. Specifically, we were interested in generating sequences in a target language starting from information in a source view, where no direct training data is available between the source and target views but training data is available between each of these views and a pivot view. To this end, we took inspiration from interlingua based MT and proposed a neural network based model which explicitly maximizes the correlation between the source and pivot views and simultaneously learns to decode target sequences from this correlated representation. We evaluated our model on the task of bridge transliteration and showed that it outperforms a strong two-stage baseline for many language pairs. Finally, we introduced the task of bridge caption generation and reported promising initial results. We hope this new task will fuel further research in this area.

As future work, we would like to go beyond simple encoder decoder based correlational models. For example, we would like to apply the idea of correlation to attention based encoder decoder models. The ideas expressed here can also be applied to other tasks such as bridge translation, bridge Image QA, etc. However, for these tasks, additional issues such as larger vocabulary sizes, complex sentence structures, and non-monotonic alignments between source and target language pairs need to be addressed. The model proposed here is just a beginning, and much more work is needed to cater to these complex tasks.

References

  • [Andrew et al.2013] Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. 2013. Deep canonical correlation analysis. ICML.
  • [Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
  • [Callison-Burch et al.2012] Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. In Seventh Workshop on Statistical Machine Translation, WMT, pages 10–51, Montréal, Canada.
  • [Chandar et al.2014] Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M Khapra, Balaraman Ravindran, Vikas Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Proceedings of NIPS.
  • [Chandar et al.2016] Sarath Chandar, Mitesh M. Khapra, Hugo Larochelle, and Balaraman Ravindran. 2016. Correlational neural networks. Neural Computation, 28(2):257 – 285.
  • [Chiang2005] David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 25-30 June 2005, University of Michigan, USA.
  • [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1724–1734.
  • [Chorowski et al.2015] Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 577–585.
  • [Dorr et al.2010] Bonnie J. Dorr, Rebecca J. Passonneau, David Farwell, Rebecca Green, Nizar Habash, Stephen Helmreich, Eduard H. Hovy, Lori S. Levin, Keith J. Miller, Teruko Mitamura, Owen Rambow, and Advaith Siddharthan. 2010. Interlingual annotation of parallel text corpora: a new framework for annotation and evaluation. Natural Language Engineering, 16(3):197–243.
  • [Elliott et al.2015] Desmond Elliott, Stella Frank, and Eva Hasler. 2015. Multi-language image description with neural sequence models. CoRR, abs/1510.04709.
  • [Finch et al.2015] Andrew Finch, Lemao Liu, Xiaolin Wang, and Eiichiro Sumita. 2015. Neural network transduction models in transliteration generation. Proceedings of NEWS 2015 The Fifth Named Entities Workshop, page 61.
  • [Firat et al.2016] Orhan Firat, KyungHyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. CoRR, abs/1601.01073.
  • [Hermann and Blunsom2014] Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual models for compositional distributed semantics. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 58–68.
  • [Khapra et al.2010] Mitesh M. Khapra, A. Kumaran, and Pushpak Bhattacharyya. 2010. Everybody loves a rich cousin: An empirical study of transliteration through bridge languages. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 2-4, 2010, Los Angeles, California, USA, pages 420–428.
  • [Kingma and Ba2014] Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
  • [Klementiev et al.2012] Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of the International Conference on Computational Linguistics (COLING).
  • [Koehn et al.2003] Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In HLT-NAACL.
  • [Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 177–180, Stroudsburg, PA, USA. Association for Computational Linguistics.
  • [Luong et al.2015a] Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015a. Multi-task sequence to sequence learning. CoRR, abs/1511.06114.
  • [Luong et al.2015b] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015b. Effective approaches to attention-based neural machine translation. CoRR, abs/1508.04025.
  • [Nicolai et al.2015] Garrett Nicolai, Bradley Hauer, Mohammad Salameh, Adam St Arnaud, Ying Xu, Lei Yao, and Grzegorz Kondrak. 2015. Multiple system combination for transliteration. Proceedings of NEWS 2015 The Fifth Named Entities Workshop, page 72.
  • [Nirenburg1994] Sergei Nirenburg. 1994. Pangloss: A machine translation project. In Human Language Technology, Proceedings of a Workshop held at Plainsboro, New Jersey, USA, March 8-11, 1994.
  • [Rajendran et al.2015] Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, and Balaraman Ravindran. 2015. Bridge correlational neural networks for multilingual multimodal representation learning. CoRR, abs/1510.03519.
  • [Shao et al.2015] Yan Shao, Jörg Tiedemann, and Joakim Nivre. 2015. Boosting English-Chinese machine transliteration via high-quality alignment and multilingual resources. Proceedings of NEWS 2015 The Fifth Named Entities Workshop, page 56.
  • [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112.
  • [Vinyals et al.2015a] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2015a. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2773–2781.
  • [Vinyals et al.2015b] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015b. Show and tell: A neural image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3156–3164.
  • [Wu and Wang2007] Hua Wu and Haifeng Wang. 2007. Pivot language approach for phrase-based statistical machine translation. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic.
  • [Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2048–2057.
  • [Zhang et al.2012] Min Zhang, Haizhou Li, A. Kumaran, and Ming Liu. 2012. Report of news 2012 machine transliteration shared task. In Proceedings of the 4th Named Entity Workshop, NEWS ’12, pages 10–20, Stroudsburg, PA, USA. Association for Computational Linguistics.
  • [Zhu et al.2014] Xiaoning Zhu, Zhongjun He, Hua Wu, Conghui Zhu, Haifeng Wang, and Tiejun Zhao. 2014. Improving pivot-based statistical machine translation by pivoting the co-occurrence count of phrase pairs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1665–1675.
  • [Zoph and Knight2016] Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. CoRR, abs/1601.00710.