Neural machine translation (NMT) [Kalchbrenner and Blunsom2013, Sutskever, Vinyals, and Le2014, Bahdanau, Cho, and Bengio2015], which directly models the translation process in an end-to-end way, has achieved state-of-the-art translation performance on resource-rich language pairs such as English-French and German-English [Johnson et al.2016, Gehring et al.2017, Vaswani et al.2017]. This success is mainly attributed to the quality and scale of the parallel corpora available to train NMT systems. However, preparing such parallel corpora remains a major obstacle in specific domains and for resource-scarce language pairs. Zoph et al. [Zoph et al.2016] show that NMT tends to obtain much worse translation quality than statistical machine translation (SMT) under small-data conditions.
As a result, developing methods to achieve neural machine translation without direct source-target parallel corpora has attracted increasing attention in the community recently. These methods utilize a third language [Firat et al.2016, Johnson et al.2016, Chen et al.2017, Zheng, Cheng, and Liu2017, Cheng et al.2017] or modality [Nakayama and Nishida2017] as a pivot to enable zero-resource source-to-target translation. Although promising results have been obtained, pivoting with a third language still demands large-scale parallel source-pivot and pivot-target corpora. On the other hand, large amounts of monolingual text documents with rich multimodal content are available on the web, e.g., text with photos or videos posted to social networking sites and blogs. How to utilize this monolingual multimodal content to build zero-resource NMT systems remains an open question.
Multimodal content, especially images, has been widely explored in the context of NMT recently. Most of this work focuses on using an image in addition to the text query to reinforce translation performance [Caglayan et al.2016, Hitschler, Schamoni, and Riezler2016, Calixto, Liu, and Campbell2017]. This task is called multimodal neural machine translation and has become a subtask in WMT16 (http://www.statmt.org/wmt16/) and WMT17 (http://www.statmt.org/wmt17/). In contrast, there exists limited work on bridging languages using multimodal content only. Gella et al. [Gella et al.2017] propose to learn fixed-length multimodal multilingual representations for matching images and sentences in different languages in the same space with the image as a pivot. Nakayama and Nishida [Nakayama and Nishida2017] suggest putting a decoder on top of the fixed-length modality-agnostic representation to generate a translation in the target language. Although this approach enables zero-resource translation, the use of a fixed-length vector is a bottleneck for translation performance [Bahdanau, Cho, and Bengio2015].
In this work, we introduce a multi-agent communication game within a multimodal environment [Lazaridou, Pham, and Baroni2016, Havrylov and Titov2017] to achieve direct modeling of zero-resource source-to-target NMT. We have two agents in the game: a captioner which describes an image in the source language and a translator which translates a source-language sentence to a target-language sentence. The translator is our ultimate training target. The two agents collaborate with each other to accomplish the task of describing an image in the target language, with messages in the source language exchanged between the agents. Both agents receive rewards from the communication game and collectively learn to maximize the expected reward. Experiments on German-to-English and English-to-German translation tasks over the IAPR-TC12 and Multi30K datasets demonstrate that the proposed approach yields substantial gains over the baseline methods.
Given a source-language sentence $\mathbf{x}$ and a target-language sentence $\mathbf{y}$, an NMT model aims to build a single neural network $P(\mathbf{y}|\mathbf{x};\boldsymbol{\theta}_{x\to y})$ that translates $\mathbf{x}$ into $\mathbf{y}$, where $\boldsymbol{\theta}_{x\to y}$ is a set of model parameters. For resource-rich language pairs, there exists a source-target parallel corpus $\mathcal{D}_{x,y} = \{\langle \mathbf{x}^{(s)}, \mathbf{y}^{(s)} \rangle\}_{s=1}^{S}$ to train the NMT model. The model parameters can be learned with standard maximum likelihood estimation on the parallel corpus:

$$\hat{\boldsymbol{\theta}}_{x\to y} = \operatorname*{argmax}_{\boldsymbol{\theta}_{x\to y}} \sum_{s=1}^{S} \log P(\mathbf{y}^{(s)}|\mathbf{x}^{(s)};\boldsymbol{\theta}_{x\to y})$$
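As a sanity check on this objective, the following toy sketch (a hypothetical word-level corpus, not our actual model) shows that for a simple categorical translation model the MLE solution is just the relative frequencies, and that perturbing them lowers the log-likelihood:

```python
import math
from collections import Counter

# Toy parallel corpus of (source, target) word pairs (hypothetical data).
corpus = [("Hund", "dog"), ("Hund", "dog"), ("Hund", "hound"),
          ("Katze", "cat"), ("Katze", "cat")]

def mle(pairs):
    """MLE for a categorical model P(y|x): relative frequencies."""
    pair_counts = Counter(pairs)
    src_counts = Counter(x for x, _ in pairs)
    return {(x, y): c / src_counts[x] for (x, y), c in pair_counts.items()}

def log_likelihood(pairs, p):
    """Log-likelihood of the corpus under conditional probabilities p."""
    return sum(math.log(p[pair]) for pair in pairs)

probs = mle(corpus)  # e.g. P(dog|Hund) = 2/3, P(hound|Hund) = 1/3
```

Any other conditional distribution over the same support yields a strictly lower log-likelihood, which is exactly what the argmax above formalizes.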
Unfortunately, parallel corpora are usually not readily available for low-resource language pairs or domains. On the other hand, monolingual multimodal content (images with text descriptions) exists in both the source and target languages. It is possible to bridge the source and target languages with this multimodal information [Nakayama and Nishida2017], since an image is a universal representation across languages.
One way to ground a natural language in visual images is image captioning, which generates a natural-language description for an input image with a CNN-RNN architecture [Xu et al.2015, Karpathy and Fei-Fei2015]. Below, we call a pair of a text description and its counterpart image a “document” and use $\mathbf{z}$ to denote an image. Given documents in the target language $\mathcal{D}_y = \{\langle \mathbf{z}^{(n)}, \mathbf{y}^{(n)} \rangle\}_{n=1}^{N}$, an image captioning model $P(\mathbf{y}|\mathbf{z};\boldsymbol{\theta}_{z\to y})$ can be built, which “translates” an image $\mathbf{z}$ to a sentence $\mathbf{y}$ in the target language. The model parameters can be learned by maximizing the log-likelihood of the monolingual multimodal documents:

$$\hat{\boldsymbol{\theta}}_{z\to y} = \operatorname*{argmax}_{\boldsymbol{\theta}_{z\to y}} \sum_{n=1}^{N} \log P(\mathbf{y}^{(n)}|\mathbf{z}^{(n)};\boldsymbol{\theta}_{z\to y})$$
Inspired by the idea of pivot-based translation [Cheng et al.2017, Zheng, Cheng, and Liu2017], another way to achieve image-to-target translation is to use a second language (the source language) as a pivot. Image-to-target translation can thus be divided into two steps: the image is first translated to a source sentence using the image-to-source captioning model, and that sentence is then translated to a target sentence using the source-to-target translation model. We use two agents to represent the image-to-source captioning model and the source-to-target translation model. The image-to-target translation procedure can be simulated by a two-agent communication game, where the agents cooperate with each other to play the game and collectively learn their model parameters based on the feedback. Below we formally define the game, which is a general learning framework for training a zero-resource machine translation model with monolingual multimodal documents only.
Two-agent Communication Game
Given monolingual documents $\mathcal{D}_x = \{\langle \mathbf{z}^{(m)}, \mathbf{x}^{(m)} \rangle\}_{m=1}^{M}$ in the source language and $\mathcal{D}_y = \{\langle \mathbf{z}^{(n)}, \mathbf{y}^{(n)} \rangle\}_{n=1}^{N}$ in the target language, our aim is to learn a model which translates a source sentence $\mathbf{x}$ to a target sentence $\mathbf{y}$. Importantly, $\mathcal{D}_x$ and $\mathcal{D}_y$ do not overlap: they do not share any images at all. Our model consists of a captioner $P(\mathbf{x}|\mathbf{z};\boldsymbol{\theta}_{z\to x})$ which translates an image $\mathbf{z}$ to a source sentence $\mathbf{x}$, and a translator $P(\mathbf{y}|\mathbf{x};\boldsymbol{\theta}_{x\to y})$ which translates a source sentence to a target sentence, where $\boldsymbol{\theta}_{z\to x}$ and $\boldsymbol{\theta}_{x\to y}$ are model parameters. To make the task a true zero-resource scenario, we assume that no parallel corpora are available even in the validation set. That is to say, we only have monolingual documents $\mathcal{D}_x$ and $\mathcal{D}_y$ for validation.
In the following we describe two models: (i) the PRE. (pre-training) model, which only trains the translator in the two-agent game, as the captioner can be pre-trained with $\mathcal{D}_x$; and (ii) the JOINT model, which jointly optimizes the captioner and the translator through reinforcement learning in the communication game.
As illustrated in Fig. 1, we propose a simple communication game with two agents, the captioner and the translator. Sampling a monolingual document $\langle \mathbf{z}^{(n)}, \mathbf{y}^{(n)} \rangle$ from $\mathcal{D}_y$, the game proceeds as follows:
1. The captioner is shown the image $\mathbf{z}^{(n)}$ and is told to describe it with a source-language sentence $\mathbf{x}$.
2. The translator is shown the middle sentence $\mathbf{x}$ generated by the captioner, without the image information, and is told to translate $\mathbf{x}$ to a target-language sentence.
3. The environment evaluates the consistency of the translated target sentence and the gold-standard target-language sentence $\mathbf{y}^{(n)}$, and both agents receive a reward.
The captioner and the translator must work together to achieve a good reward. The captioner should learn how to provide an accurate image description in the source language, and the translator should be good at translating a source sentence to a target sentence. This game can be played for an arbitrary number of rounds, and the captioner and the translator are trained through this reinforcement procedure (e.g., by means of policy gradient methods). In this way, we develop a general learning framework for training a zero-resource machine translation model (the translator) with monolingual multimodal documents only, through a multi-agent communication game. As we do not assume specific architectures for the captioner and the translator, our proposed learning framework is transparent to architectures and can be applied to any end-to-end image captioning and NMT systems.
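One round of the game can be sketched with stub lookup-table agents; all sentences and probabilities below are hypothetical illustrations, not learned models:

```python
import math

# Stub agents as lookup tables (hypothetical toy data).
captioner = {"img_dog": "ein hund rennt"}            # image -> source sentence
translator = {"ein hund rennt": {"a dog runs": 0.7,  # source -> P(target)
                                 "a cat runs": 0.3}}

def play_round(image, gold_target):
    """One round: caption the image, translate the caption, score vs. gold."""
    src = captioner[image]           # the captioner describes the image
    dist = translator[src]           # the translator sees only the sentence
    # Environment feedback: log-probability of the gold target description.
    reward = math.log(dist.get(gold_target, 1e-9))
    return src, reward

src, reward = play_round("img_dog", "a dog runs")
```

In training, the reward would be back-propagated to both agents; here it simply scores how well the exchanged source sentence supports reconstructing the gold target description.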
For a game beginning with a monolingual document $\langle \mathbf{z}^{(n)}, \mathbf{y}^{(n)} \rangle$, we use $\mathbf{x}$ to denote the source sentence exchanged between the agents. The goal of training is to find the parameters of the agents that maximize the expected reward:

$$J(\boldsymbol{\theta}_{z\to x}, \boldsymbol{\theta}_{x\to y}) = \sum_{n=1}^{N} \mathbb{E}_{\mathbf{x} \sim P(\mathbf{x}|\mathbf{z}^{(n)};\,\boldsymbol{\theta}_{z\to x})}\big[r(\mathbf{x}, \mathbf{y}^{(n)})\big]$$
We follow He et al. [He et al.2016a] and define the reward as the log probability that the translator generates $\mathbf{y}^{(n)}$ from $\mathbf{x}$:

$$r(\mathbf{x}, \mathbf{y}^{(n)}) = \log P(\mathbf{y}^{(n)}|\mathbf{x};\boldsymbol{\theta}_{x\to y})$$
As a result, the expected reward in the multi-agent communication game can be re-written as:

$$J(\boldsymbol{\theta}_{z\to x}, \boldsymbol{\theta}_{x\to y}) = \sum_{n=1}^{N} \mathbb{E}_{\mathbf{x} \sim P(\mathbf{x}|\mathbf{z}^{(n)};\,\boldsymbol{\theta}_{z\to x})}\big[\log P(\mathbf{y}^{(n)}|\mathbf{x};\boldsymbol{\theta}_{x\to y})\big]$$
In training, we optimize the parameters of the captioner and translator through policy gradient methods for expected reward maximization:

$$\hat{\boldsymbol{\theta}}_{z\to x}, \hat{\boldsymbol{\theta}}_{x\to y} = \operatorname*{argmax}_{\boldsymbol{\theta}_{z\to x},\, \boldsymbol{\theta}_{x\to y}} J(\boldsymbol{\theta}_{z\to x}, \boldsymbol{\theta}_{x\to y})$$
We compute the gradient of $J$ with respect to the parameters $\boldsymbol{\theta}_{z\to x}$ and $\boldsymbol{\theta}_{x\to y}$. According to the policy gradient theorem [Sutton et al.1999], it is easy to verify that:

$$\nabla_{\boldsymbol{\theta}_{z\to x}} J = \sum_{n=1}^{N} \mathbb{E}\big[\log P(\mathbf{y}^{(n)}|\mathbf{x};\boldsymbol{\theta}_{x\to y}) \, \nabla_{\boldsymbol{\theta}_{z\to x}} \log P(\mathbf{x}|\mathbf{z}^{(n)};\boldsymbol{\theta}_{z\to x})\big]$$

$$\nabla_{\boldsymbol{\theta}_{x\to y}} J = \sum_{n=1}^{N} \mathbb{E}\big[\nabla_{\boldsymbol{\theta}_{x\to y}} \log P(\mathbf{y}^{(n)}|\mathbf{x};\boldsymbol{\theta}_{x\to y})\big]$$

in which the expectations are taken over $\mathbf{x} \sim P(\mathbf{x}|\mathbf{z}^{(n)};\boldsymbol{\theta}_{z\to x})$.
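The score-function (REINFORCE) form of the captioner gradient can be checked numerically on a toy categorical "captioner" over a handful of candidate sentences; the logits, candidates, and rewards below are hypothetical:

```python
import math
import random

# Toy captioner: a categorical distribution over three candidate source
# sentences, parameterized by logits; rewards are fixed translator log-probs.
logits = [0.5, -0.2, 0.1]
rewards = [math.log(0.7), math.log(0.2), math.log(0.5)]

def softmax(ls):
    m = max(ls)
    exps = [math.exp(l - m) for l in ls]
    z = sum(exps)
    return [e / z for e in exps]

def expected_reward(ls):
    return sum(p * r for p, r in zip(softmax(ls), rewards))

def analytic_grad(ls):
    """Exact gradient of E[r] w.r.t. logit k: p_k * (r_k - E[r])."""
    p, J = softmax(ls), expected_reward(ls)
    return [pk * (rk - J) for pk, rk in zip(p, rewards)]

def reinforce_grad(ls, n=50000, seed=0):
    """Monte-Carlo score-function estimate: mean of r * grad log p."""
    rng = random.Random(seed)
    p = softmax(ls)
    g = [0.0] * len(ls)
    for _ in range(n):
        i = rng.choices(range(len(ls)), weights=p)[0]
        for k in range(len(ls)):
            # grad_k log p_i = 1{k == i} - p_k for a softmax policy
            g[k] += rewards[i] * ((1.0 if k == i else 0.0) - p[k])
    return [gk / n for gk in g]
```

The Monte-Carlo estimate converges to the analytic gradient, which is the property the policy gradient theorem guarantees.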
Since computing the expectations exactly is intractable, we adopt beam search for gradient estimation. Compared with random sampling, beam search helps avoid the very large variance and sometimes unreasonable results brought by image captioning [Ranzato et al.2015]. Specifically, we run beam search with the captioner to generate the top-$K$ high-probability middle outputs in the source language, and use the averaged value over these middle outputs to approximate the true gradient. Algorithm 1 shows the detailed learning algorithm.
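The top-$K$ approximation of the expectation can be illustrated on a toy distribution: instead of sampling, we renormalize the captioner's probabilities over the $K$ most probable candidates (candidates and scores below are hypothetical):

```python
import math

# Hypothetical captioner distribution over candidate source sentences
# and translator log-prob rewards for each candidate.
candidates = {"a dog runs": 0.6, "a dog walks": 0.25, "dog": 0.1, "run": 0.05}
reward = {"a dog runs": math.log(0.8), "a dog walks": math.log(0.3),
          "dog": math.log(0.05), "run": math.log(0.01)}

def expected_reward_exact(p, r):
    """Exact expectation over the full candidate set."""
    return sum(p[c] * r[c] for c in p)

def expected_reward_topk(p, r, k):
    """Approximate E[r] over the K most probable candidates, renormalized."""
    top = sorted(p, key=p.get, reverse=True)[:k]
    z = sum(p[c] for c in top)  # renormalize over the beam
    return sum(p[c] / z * r[c] for c in top)
```

Restricting the expectation to the beam discards low-probability, low-quality captions, which is the variance-reduction effect of beam search noted above.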
Since the captioner has a very large action space for generating the source description $\mathbf{x}$, it is extremely difficult to learn with an initial random policy. Specifically, the search space for $\mathbf{x}$ is of size $|V_x|^{L}$, where $|V_x|$ is the number of words in the source vocabulary (more than 1,000 in our experiments) and $L$ is the length of the sentence (around 10 to 20 in our experiments). Thus, we pre-train the captioner with maximum likelihood estimation on the monolingual dataset $\mathcal{D}_x$:

$$\hat{\boldsymbol{\theta}}_{z\to x} = \operatorname*{argmax}_{\boldsymbol{\theta}_{z\to x}} \sum_{m=1}^{M} \log P(\mathbf{x}^{(m)}|\mathbf{z}^{(m)};\boldsymbol{\theta}_{z\to x})$$
We initialize the captioner with the pre-trained image captioning model and randomly initialize the translator. To prevent the randomly initialized translator from harming the pre-trained captioner, we fix $\boldsymbol{\theta}_{z\to x}$ for the initial few epochs and only optimize the translator. Then we adopt two different training approaches:
The PRE. (pre-training) model: We keep the captioner fixed and only train the translator in the two-agent game. The training objective is:

$$\hat{\boldsymbol{\theta}}_{x\to y} = \operatorname*{argmax}_{\boldsymbol{\theta}_{x\to y}} J(\hat{\boldsymbol{\theta}}_{z\to x}, \boldsymbol{\theta}_{x\to y})$$
The JOINT model: We jointly optimize the captioner and the translator through reinforcement learning in the communication game. To encourage the captioner to correctly describe the image in the source language, we draw half of the documents in each mini-batch from the monolingual dataset $\mathcal{D}_x$ to constrain the parameters of the captioner. We train to maximize the weighted sum of the reward based on documents from $\mathcal{D}_y$ and the log-likelihood of the image-to-source captioning model on documents from $\mathcal{D}_x$. The objective becomes:

$$J'(\boldsymbol{\theta}_{z\to x}, \boldsymbol{\theta}_{x\to y}) = J(\boldsymbol{\theta}_{z\to x}, \boldsymbol{\theta}_{x\to y}) + \lambda \sum_{m=1}^{M} \log P(\mathbf{x}^{(m)}|\mathbf{z}^{(m)};\boldsymbol{\theta}_{z\to x})$$

where $\lambda$ is a hyper-parameter balancing the two terms.
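The half-and-half mini-batch mixing described above can be sketched as follows; this is a minimal illustration, with `docs_x` and `docs_y` standing in for the two monolingual document sets:

```python
import random

def make_minibatch(docs_x, docs_y, batch_size, rng=None):
    """Draw half a mini-batch from D_x (captioning log-likelihood term)
    and half from D_y (communication-game reward term)."""
    rng = rng or random.Random(0)
    half = batch_size // 2
    return rng.sample(docs_x, half), rng.sample(docs_y, batch_size - half)

batch_x, batch_y = make_minibatch(list(range(10)), list(range(100, 120)), 8)
```

Each half then feeds its own term of the weighted-sum objective.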
Since we do not have access to parallel sentences even at validation time, we need a criterion for model and hyper-parameter selection for the PRE. and JOINT methods. We validate the model on the monolingual multimodal documents $\mathcal{D}_y$.
Given model parameters $\boldsymbol{\theta}_{z\to x}$ and $\boldsymbol{\theta}_{x\to y}$ at some iteration and a monolingual document $\langle \mathbf{z}, \mathbf{y} \rangle$ from $\mathcal{D}_y$, we output the image's description in the target language with the following two-step decision rule using beam search:

$$\hat{\mathbf{x}} = \operatorname*{argmax}_{\mathbf{x}} P(\mathbf{x}|\mathbf{z};\boldsymbol{\theta}_{z\to x}), \qquad \hat{\mathbf{y}} = \operatorname*{argmax}_{\mathbf{y}} P(\mathbf{y}|\hat{\mathbf{x}};\boldsymbol{\theta}_{x\to y})$$
With the generated $\hat{\mathbf{y}}$ as the hypothesis and the gold-standard description $\mathbf{y}$ as the reference, we use the BLEU score as the validation criterion; that is, we choose the translation model with the highest BLEU score. Figure 2 shows a typical example of the correlation between this measure and the translation performance of the translator on the test set.
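The checkpoint-selection procedure can be sketched as follows, using a deliberately simplified unigram BLEU with brevity penalty (not the full BLEU metric used in our experiments); the checkpoint names and decoders are hypothetical stubs:

```python
import math
from collections import Counter

def bleu1(hypothesis, reference):
    """Simplified unigram BLEU with brevity penalty (illustration only)."""
    hyp, ref = hypothesis.split(), reference.split()
    if not hyp:
        return 0.0
    overlap = sum((Counter(hyp) & Counter(ref)).values())  # clipped counts
    precision = overlap / len(hyp)
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * precision

def select_checkpoint(checkpoints, images, references):
    """Pick the checkpoint whose image-to-target decodes score highest."""
    def score(decode):
        return sum(bleu1(decode(z), r) for z, r in zip(images, references))
    return max(checkpoints, key=lambda name: score(checkpoints[name]))

# Hypothetical checkpoints: each maps an image to a target-language hypothesis.
checkpoints = {"epoch1": lambda z: "a dog", "epoch2": lambda z: "a dog runs"}
best = select_checkpoint(checkpoints, ["img_0"], ["a dog runs"])
```

No parallel sentences are consulted: only images and their target-language descriptions enter the criterion.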
[Table 1: Dataset statistics. Num. of images: IAPR-TC12: 20,000; Multi30K: 31,014.]
We evaluate our model on two publicly available multilingual image-description datasets, following [Nakayama and Nishida2017]. The IAPR-TC12 dataset [Grubinger et al.2006], which consists of English image descriptions and the corresponding German translations, has a total of 20K images. Each image has multiple descriptions, and each description corresponds to a different aspect of the image. Since the first sentence is likely to describe the most salient objects [Grubinger et al.2006], we use only the first description of each image. We randomly split the dataset into training, validation and test sets with 18K, 1K and 1K images respectively. The recently published Multi30K dataset [Elliott et al.2016], a multilingual extension of the Flickr30k corpus [Young et al.2014], has 29K, 1K and 1K images in the training, validation and test splits respectively, with English and German image descriptions [Elliott et al.2016]. There are two types of multilingual annotation in the dataset: (i) a corpus of one English description per image and its German translation; and (ii) a corpus of 5 independently collected English and German descriptions per image. Since the corpus of independently collected English and German descriptions better fits the noisy multimodal content on the web, we adopt it in our experiments. Note that although these descriptions describe the same image, they are not translations of each other.
For preprocessing, we use the scripts in the Moses SMT Toolkit [Koehn et al.2007] to normalise and tokenize the English and German descriptions. For the IAPR-TC12 dataset, we construct the vocabulary from words appearing more than 5 times in the training split and replace the remaining words with the UNK symbol. For the Multi30K dataset, we adopt a joint byte pair encoding (BPE) [Sennrich, Haddow, and Birch2016] with 10K merge operations on the English and German descriptions to reduce the vocabulary size. To comply with the zero-resource setting, we randomly split the images in the training and validation datasets into two parts of equal size. One part constitutes the image-English split and the other the image-German split. Unnecessary modalities for each split (e.g., German descriptions for the image-English split) are ignored. Note that the two splits have no overlapping images, and we have no direct English-German parallel corpus. Tables 1 and 2 summarize the data statistics.
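The merge-learning step of BPE [Sennrich, Haddow, and Birch2016] can be sketched as follows; this is a minimal toy re-implementation on a tiny word list, not the actual subword tooling used in our experiments:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merge operations from a list of words (toy version)."""
    # Represent each word as a tuple of symbols plus an end-of-word marker.
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for sym, freq in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        new_vocab = Counter()
        for sym, freq in vocab.items():   # apply the merge everywhere
            out, i = [], 0
            while i < len(sym):
                if i < len(sym) - 1 and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1])
                    i += 2
                else:
                    out.append(sym[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

merges = learn_bpe(["low", "low", "lower", "newest", "newest"], 2)
```

Applying the learned merge list to both languages jointly is what keeps the English and German subword vocabularies consistent.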
We extract image features with a ResNet [He et al.2016b] pre-trained on ImageNet without finetuning. We use the (14,14,1024) feature map of the res4fx (end of Block-4) layer after ReLU. For baseline methods that do not support the attention mechanism, we extract 2048-dimensional features after the pool5 layer. We follow the architecture in [Xu et al.2015] for image captioning and the standard RNNSearch architecture [Bahdanau, Cho, and Bengio2015] for translation. We leverage dl4mt (https://github.com/nyu-dl) and arctic-captions (https://github.com/kelvinxu/arctic-captions) for all our experiments. The beam size is 2 for the middle image-to-source caption generation. During validation and testing, we set the beam size to 5 for both the captioner and the translator. All models are quantitatively evaluated with BLEU [Papineni et al.2002]. For the Multi30K dataset, each image is paired with 5 English descriptions and 5 German descriptions in the test set. We follow the setting in [Caglayan et al.2016]: the NMT model generates a target description for each of the 5 source sentences, and we pick the one with the highest probability as the translation result. The evaluation is performed against the corresponding five target descriptions in the testing phase.
We compare our approach with four baseline methods:
Random: For a source-language sentence, we randomly select a document from $\mathcal{D}_y$ and output its caption as the translation result.
TFIDF: For a source sentence, we first retrieve the nearest document in $\mathcal{D}_x$ in terms of cosine similarity of TFIDF text features. Then, for the coupled image of that document, we retrieve the most similar document in $\mathcal{D}_y$ in terms of cosine similarity of pool5 feature vectors. We output the coupled target sentence of the retrieved document as our translation.
TS model [Chen et al.2017]: This method is originally designed for leveraging a third language as a pivot to enable zero-resource NMT. We follow the teacher-student framework and replace the pivot language with the image. The teacher model is an image-to-target captioning model and the student model is the zero-resource source-to-target translation model.
3-way model [Nakayama and Nishida2017]: This method learns fixed-length modality-agnostic multilingual representations with the image as a pivot and puts a decoder on top of them to generate a translation in the target language.
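The text-retrieval step of the TFIDF baseline can be sketched as follows; this is a minimal pure-Python TF-IDF with cosine similarity, and the documents are hypothetical:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a sparse TF-IDF vector (dict) for each document."""
    n_docs = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter()                       # document frequency per token
    for toks in tokenized:
        df.update(set(toks))
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)               # raw term frequency
        vecs.append({t: tf[t] * math.log(n_docs / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["two dogs play in the snow",
        "a man rides a bike",
        "a dog runs in the park"]
vecs = tfidf_vectors(docs)
```

In the actual baseline, the nearest source-language document retrieved this way supplies an image, which then bridges to the target side via pool5-feature similarity.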
Comparison with Baselines
Table 3 shows the translation performance of our proposed zero-resource NMT on the IAPR-TC12 and Multi30K datasets in comparison with the baselines. Note that the TS, PRE. and JOINT models all utilize the attention mechanism, while the 3-way model does not support attention. It is clear from the table that in all cases our proposed JOINT model outperforms all the other baselines. Even with a fixed captioner, the PRE. model outperforms the baselines in most cases.
Specifically, the JOINT model outperforms the 3-way model by +4.7 BLEU on German-to-English translation and +5.6 BLEU on English-to-German translation on the IAPR-TC12 dataset, and by +3.7 BLEU on German-to-English translation and +6.5 BLEU on English-to-German translation on the Multi30K dataset. The performance gap can be explained by the fact that the 3-way model attempts to encode a whole input sentence or image into a single fixed-length vector, making it difficult for the neural network to cope with long sentences or images with a lot of clutter. In contrast, the JOINT model adopts the attention mechanism for both the captioner and the translator. Rather than compressing an entire image or source sentence into a static representation, attention allows the model to automatically attend to the parts of a source sentence or image that are relevant to predicting a target-side word. This explanation is in line with the observation that all three models with the attention mechanism outperform the 3-way model.
Although the TS model also adopts the attention mechanism, its performance is bounded by the performance of the teacher model, namely the image captioning model. During the learning process of the student model, the teacher model is kept fixed all the time and the student tries to mimic the decoding behavior of the teacher in the teacher-student framework. Descriptions of the same image can vary considerably across languages, as indicated in [van Miltenburg, Elliott, and Vossen2017]. Thus, the teacher model's captioning result in the target language is not necessarily a translation of the image's coupled source-language description. By contrast, the JOINT model jointly optimizes the captioner and the translator to win the two-agent communication game. The two agents are complementary and are trained jointly to maximize the expected reward. This mechanism helps to address the problem of cross-linguistic differences in image description [van Miltenburg, Elliott, and Vossen2017], resulting in a better zero-resource NMT model.
Figure 3 shows translation examples produced by the zero-resource NMT models trained with the PRE. and JOINT methods. We also show the corresponding image for reference. The first example is from the Multi30K dataset, while the second is from the IAPR-TC12 dataset. These examples suggest that the JOINT model has successfully learned how to translate even in the zero-resource setting.
Effect of Joint Training
We also compare the performance of the captioner with (JOINT) and without (PRE.) joint training. Table 4 shows the results. Cheng et al. [Cheng et al.2017] demonstrate that in pivot-based NMT, where the pivot is a third language, the source-pivot and pivot-target translation models can be improved simultaneously through joint training. In our setting, image-to-source captioning does not necessarily improve with joint training: it gets slightly worse on three out of four translation tasks. With a third language as a pivot, the source-to-pivot and pivot-to-target models are symmetrical, as both are NMT models. In contrast, in our setting the captioner and the translator are no longer symmetrical, since NMT is much easier to improve than image captioning. We suspect that the translator's performance dominates the multi-agent game.
Comparison with Oracle
Figure 4 compares the JOINT model with ORACLE, which uses direct source-target parallel or comparable corpora, on the IAPR-TC12 and Multi30K datasets.
For the Multi30K dataset, each image is annotated with 5 German sentences and 5 English sentences. The training dataset for ORACLE can be constructed either by the cross product of the 5 source and 5 target descriptions, which results in a total of 25 description pairs per image, or by taking only the 5 pairwise descriptions. We follow [Caglayan et al.2016] and use the pairwise approach.
For the IAPR-TC12 dataset, the JOINT model is roughly comparable with ORACLE when the number of parallel sentences is limited to about 15% of the size of our monolingual corpus. For the Multi30K dataset, the JOINT model obtains results comparable with ORACLE trained on a number of comparable sentence pairs equal to about 40% of the size of our monolingual corpus. It obtains 75% of the BLEU score on German-to-English translation and 65% of the BLEU score on English-to-German translation achieved by NMT trained with the full corpora. Although there is still a significant performance gap compared to ORACLE trained with parallel corpora, this is encouraging since our approach only uses monolingual multimodal documents.
Training NMT models without source-target parallel corpora by leveraging a third language or the image modality has attracted intensive attention in recent years. Utilizing a third language as a pivot has already achieved promising translation quality for zero-resource NMT. Firat et al. [Firat et al.2016] pre-train a multi-way multilingual model and then fine-tune the attention mechanism with pseudo-parallel data generated by the model to improve zero-resource translation. Johnson et al. [Johnson et al.2016] adopt a universal encoder-decoder network in multilingual scenarios to naturally enable zero-resource translation. In addition to the above multilingual methods, several authors propose to train the zero-resource source-to-target translation model directly. Chen et al. [Chen et al.2017] propose a teacher-student framework under the assumption that parallel sentences have close probabilities of generating a sentence in a third language. Zheng, Cheng, and Liu [Zheng, Cheng, and Liu2017] maximize the expected likelihood to train the intended source-to-target model. However, all these methods assume that source-pivot and pivot-target parallel corpora are available. Another line of work bridges zero-resource language pairs via images. Nakayama and Nishida [Nakayama and Nishida2017] train multimodal encoders to learn modality-agnostic multilingual representations of fixed length using the image as a pivot. On top of the fixed-length representation, they build a decoder to output a translation in the target language. Although performance is limited by the fixed-length representation, their work shows that zero-resource neural machine translation with an image pivot is possible.
Multimodal neural machine translation, which introduces the image modality into NMT as an additional information source to reinforce translation performance, has received much attention in the community recently. A number of studies have shown that the image modality can benefit neural machine translation, presumably by resolving alignment ambiguities that cannot be solved by text alone [Hitschler, Schamoni, and Riezler2016, Calixto, Liu, and Campbell2017]. Note that this setting is much easier than ours, because multilingual descriptions of the same images are available in the training dataset and an image is part of the query in both the training and testing phases.
In this work, we propose a multi-agent communication game to tackle the challenging task of training a zero-resource NMT system from just monolingual multimodal data. In contrast to previous studies that learn a modality-agnostic multilingual representation, we successfully deploy the attention mechanism in the target zero-resource NMT model by encouraging the agents to cooperate with each other to win an image-to-target translation game. Experiments on German-to-English and English-to-German translation over the IAPR-TC12 and Multi30K datasets show that our proposed multi-agent learning mechanism can significantly outperform the state-of-the-art methods.
In the future, we plan to explore whether machine translation can perform satisfactorily with automatically crawled noisy multimodal data from the web. Since our current method is intrinsically limited to the domain where texts can be grounded to visual content, it is also interesting to explore how to further extend the learned translation model to handle generic documents.
This work is partially supported by the National Key R&D Program of China (No. 2017YFB0202204), National Natural Science Foundation of China (No. 61522204, No. 61331013) and the HKU Artificial Intelligence to Advance Well-being and Society (AI-WiSe) Lab.
- [Bahdanau, Cho, and Bengio2015] Bahdanau, D.; Cho, K.; and Bengio, Y. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.
- [Caglayan et al.2016] Caglayan, O.; Aransa, W.; Wang, Y.; Masana, M.; García-Martínez, M.; Bougares, F.; Barrault, L.; and van de Weijer, J. 2016. Does multimodality help human and machine for translation and image captioning? In WMT.
- [Calixto, Liu, and Campbell2017] Calixto, I.; Liu, Q.; and Campbell, N. 2017. Doubly-attentive decoder for multi-modal neural machine translation. In ACL.
- [Chen et al.2017] Chen, Y.; Liu, Y.; Cheng, Y.; and Li, V. O. K. 2017. A teacher-student framework for zero-resource neural machine translation. In ACL.
- [Cheng et al.2017] Cheng, Y.; Yang, Q.; Liu, Y.; Sun, M.; and Xu, W. 2017. Joint training for pivot-based neural machine translation. In IJCAI.
- [Elliott et al.2016] Elliott, D.; Frank, S.; Sima’an, K.; and Specia, L. 2016. Multi30k: Multilingual english-german image descriptions. In Proceedings of the 5th Workshop on Vision and Language, 70–74.
- [Firat et al.2016] Firat, O.; Sankaran, B.; Al-Onaizan, Y.; Yarman-Vural, F. T.; and Cho, K. 2016. Zero-resource translation with multi-lingual neural machine translation. In EMNLP.
- [Gehring et al.2017] Gehring, J.; Auli, M.; Grangier, D.; Yarats, D.; and Dauphin, Y. 2017. Convolutional sequence to sequence learning. In ICML.
- [Gella et al.2017] Gella, S.; Sennrich, R.; Keller, F.; and Lapata, M. 2017. Image pivoting for learning multilingual multimodal representations. CoRR abs/1707.07601.
- [Grubinger et al.2006] Grubinger, M.; Clough, P.; Müller, H.; and Deselaers, T. 2006. The iapr tc-12 benchmark: A new evaluation resource for visual information systems. In LREC, 13–23.
- [Havrylov and Titov2017] Havrylov, S., and Titov, I. 2017. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. CoRR abs/1705.11192.
- [He et al.2016a] He, D.; Xia, Y.; Qin, T.; Wang, L.; Yu, N.; Liu, T.-Y.; and Ma, W.-Y. 2016a. Dual learning for machine translation. In NIPS.
- [He et al.2016b] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016b. Deep residual learning for image recognition. In CVPR, 770–778.
- [Hitschler, Schamoni, and Riezler2016] Hitschler, J.; Schamoni, S.; and Riezler, S. 2016. Multimodal pivots for image caption translation. In ACL.
- [Johnson et al.2016] Johnson, M.; Schuster, M.; Le, Q. V.; Krikun, M.; Wu, Y.; Chen, Z.; Thorat, N.; Viégas, F. B.; Wattenberg, M.; Corrado, G. S.; Hughes, M.; and Dean, J. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. CoRR abs/1611.04558.
- [Kalchbrenner and Blunsom2013] Kalchbrenner, N., and Blunsom, P. 2013. Recurrent continuous translation models. In EMNLP.
- [Karpathy and Fei-Fei2015] Karpathy, A., and Fei-Fei, L. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3128–3137.
- [Koehn et al.2007] Koehn, P.; Hoang, H.; Birch, A.; Callison-Burch, C.; Federico, M.; Bertoldi, N.; Cowan, B.; Shen, W.; Moran, C.; Zens, R.; Dyer, C.; Bojar, O.; Constantin, A.; and Herbst, E. 2007. Moses: Open source toolkit for statistical machine translation. In ACL.
- [Lazaridou, Pham, and Baroni2016] Lazaridou, A.; Pham, N. T.; and Baroni, M. 2016. Towards multi-agent communication-based language learning. CoRR abs/1605.07133.
- [Nakayama and Nishida2017] Nakayama, H., and Nishida, N. 2017. Zero-resource machine translation by multimodal encoder–decoder network with multimedia pivot. Machine Translation 31:49–64.
- [Papineni et al.2002] Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, 311–318. Association for Computational Linguistics.
- [Ranzato et al.2015] Ranzato, M.; Chopra, S.; Auli, M.; and Zaremba, W. 2015. Sequence level training with recurrent neural networks. CoRR abs/1511.06732.
- [Sennrich, Haddow, and Birch2016] Sennrich, R.; Haddow, B.; and Birch, A. 2016. Neural machine translation of rare words with subword units. In ACL.
- [Sutskever, Vinyals, and Le2014] Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In NIPS.
- [Sutton et al.1999] Sutton, R. S.; McAllester, D. A.; Singh, S. P.; and Mansour, Y. 1999. Policy gradient methods for reinforcement learning with function approximation. In NIPS.
- [van Miltenburg, Elliott, and Vossen2017] van Miltenburg, E.; Elliott, D.; and Vossen, P. T. J. M. 2017. Cross-linguistic differences and similarities in image descriptions. CoRR abs/1707.01736.
- [Vaswani et al.2017] Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. CoRR abs/1706.03762.
- [Xu et al.2015] Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A. C.; Salakhutdinov, R.; Zemel, R. S.; and Bengio, Y. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML.
- [Young et al.2014] Young, P.; Lai, A.; Hodosh, M.; and Hockenmaier, J. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics 2:67–78.
- [Zheng, Cheng, and Liu2017] Zheng, H.; Cheng, Y.; and Liu, Y. 2017. Maximum expected likelihood estimation for zero-resource neural machine translation. In IJCAI.
- [Zoph et al.2016] Zoph, B.; Yuret, D.; May, J.; and Knight, K. 2016. Transfer learning for low-resource neural machine translation. In EMNLP.