Adversarial reconstruction for Multi-modal Machine Translation

10/07/2019 · by Jean-Benoit Delbrouck, et al.

Even with the growing interest in problems at the intersection of Computer Vision and Natural Language, grounding (i.e. identifying) the components of a structured description in an image still remains a challenging task. This contribution proposes a model that learns grounding by reconstructing the visual features for the Multi-modal Machine Translation task. Previous works have partially investigated standard approaches, such as regression methods, to approximate the reconstruction of a visual input. In this paper, we propose a different and novel approach which learns grounding by adversarial feedback. To do so, we modulate our network following recent promising adversarial architectures and evaluate how the adversarial response from a visual reconstruction as an auxiliary task helps the model in its learning. We report the highest scores in terms of BLEU and METEOR metrics on the different datasets.


1 Introduction

Problems combining vision and natural language processing are viewed as difficult tasks. They require grasping and expressing low- to high-level aspects of local and global areas in an image, as well as their relationships. Visual attention-based neural decoder models Xu et al. (2015); Karpathy and Li (2015) have been widely adopted to solve such tasks. The attention focuses only on part of an image and integrates this spatial information into the multi-modal model pipeline. The model, which usually consists of a Recurrent Neural Network (RNN), encodes the linguistic inputs and is trained to modulate, merge and use both visual and linguistic information in order to maximize a task score. For instance, in Multi-modal Machine Translation (MMT), the model is required to translate an image description into another language.


The integration of visual input in MMT has always been the primary focus of research in the field. Regional and global features were investigated first Huang et al. (2016); convolutional features of higher dimensions (such as the res4f layer from ResNet) Calixto et al. (2017); Delbrouck and Dupont (2017b) were then used because they carry more visual information. Recently, Caglayan et al. (2017) found that light architectures with fewer parameters are more suitable for learning the MMT task. Because of the limited number of training parameters, global features must be used. A trade-off arises: models with a larger attention mechanism could take advantage of richer visual input, but the additional training parameters seem to impair translation quality.

To tackle this problem, we take a state-of-the-art MMT model and add a conditional generator whose aim is to reconstruct the global visual input used during the translation process, using only the model's terminal state. We also want this reconstruction to be evaluated adversarially. This approach has four purposes:

  • We constrain the model to closely represent the semantic meaning of the sentence by reconstructing the visual input. We believe this grounds the visual information into the training process and enables better generalization;

  • We leave the whole translation model pipeline unchanged: no learning parameters are added for translation. The generator module is trained end-to-end during training but is unused during inference;

  • Because we use light global features, the reconstruction process is very fast and requires few learning parameters;

  • By using an adversarial approach, we want our generator to approximate the true data distribution of images. We believe that propagating the generator's gradients back into the translation model enables better generalization to unseen images on the different test sets.

This reconstruction problem has two parts. First, we add the reconstruction module on top of our primary MMT task and investigate different architectures for the generator. Secondly, we treat the reconstruction as an adversarial problem: we modulate our network following recent promising adversarial architectures and evaluate how the adversarial response helps the translation pipeline in its learning. We demonstrate their efficiency by showing strong generalization on the different MMT test sets.

2 Related work

In the modality reconstruction field, the closest work to ours is that of Rohrbach et al. (2016), who propose an approach that learns to visually localize phrases by relying on phrases associated with bounding boxes in an image. Nevertheless, our works differ in two ways. First, their reconstruction is linguistic: they aim to reconstruct the sentence from a visual attention, whereas we reconstruct the visual input. Secondly, their visual data are annotated with bounding boxes carrying linguistic information, while our approach doesn't require any preprocessing.

When reconstructing its input, a model can be seen as an auto-encoder Hinton and Salakhutdinov (2006), which aims to compress or encode a modality $x$ into a representation $z$ with an encoder and then decode (or reconstruct) from $z$ an approximation $\hat{x}$ with a decoder. The difference is that our latent variable $z$ (or compressed representation) is the final representation of an MMT model: the input is modulated by the multi-modal model before being decoded (or reconstructed). Because our latent variable will be adversarially evaluated, our model is also close to an adversarial auto-encoder (AAE) Makhzani et al. (2016).

Adversarial approaches for multi-modal tasks have been investigated in image captioning Feng et al. (2018) and visual question answering Ilievski and Feng (2017). In those works, the task goal is fully adversarial, which differs from our approach: our translation model remains a classification task and uses the widely adopted negative log-likelihood loss. Only the reconstruction module is treated adversarially.

Finally, reconstruction (or imagination, as the authors call it) has been investigated with regression techniques Elliott and Kádár (2017). A major difference, besides our adversarial approach, is their choice not to use any visual information during inference. The image is only used as a training input for the reconstruction module, not for the translation module. We believe doing so could penalize the model if the information needed for translation really is in the image. As previously stated, using visual input for translation might impair overall translation quality, but we force our model to use visual attention during inference, as it is the very foundation of the multi-modal translation task.

3 Background

In this section, we describe the concepts involved in our experiments. We start by describing how visual reconstruction as an auxiliary task is built on top of our MMT model. We then explain the two adversarial settings involved in our experiments: a generative adversarial network and an adversarial auto-encoder.

3.1 Visual reconstruction

We denote the MMT model as $Q$ and its inputs as $x^t$ and $x^v$ for the linguistic and visual data respectively. The model learns to output the translation $\hat{y}$ of $x^t$ as formulated hereafter:

$\hat{y},\, h = Q(x^t, x^v)$   (1)

where $h$ is defined as the model's final state (or last hidden state). A generator $G$ takes $h$ as input and approximates a visual reconstruction $\hat{v}$:

$\hat{v} = G(h)$   (2)

From equations 1 and 2, we compute the total loss of model $Q$ and generator $G$:

$\mathcal{L} = \mathcal{L}_Q + \lambda_{rec}\, \mathcal{L}_G$   (3)

Factor $\lambda_{rec}$ indicates the weight of the reconstruction loss.

The notation used in this sub-section 3.1 is reused in the following sub-sections 3.2 and 3.3 for clarity.
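To make the joint objective of equation 3 concrete, the following PyTorch-style sketch combines the translation negative log-likelihood with a non-adversarial (MSE) reconstruction term. The model interface, the generator architecture and the 2048-dimensional pool5 feature size are assumptions for illustration, not the exact implementation used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureGenerator(nn.Module):
    """Illustrative generator G: maps the last hidden state h to visual features."""
    def __init__(self, hidden_size=512, feat_size=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, 1024), nn.ReLU(),
            nn.Linear(1024, feat_size),
        )

    def forward(self, h):
        return self.net(h)

def joint_loss(logits, targets, h, x_v, generator, lambda_rec=0.2, pad_idx=0):
    """Equation 3: translation NLL plus a weighted visual reconstruction loss."""
    l_q = F.cross_entropy(logits.view(-1, logits.size(-1)),
                          targets.view(-1), ignore_index=pad_idx)
    # Plain MSE reconstruction; the adversarial variants below replace this term.
    l_g = F.mse_loss(generator(h), x_v)
    return l_q + lambda_rec * l_g
```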

3.2 Generative adversarial network (GAN)

A generative adversarial network Goodfellow et al. (2014) is a model whose main focus is to generate new data based on source data. It is made of two networks: a generator $G$ that constructs synthetic data from noise samples, and a discriminator $D$ that distinguishes samples produced by the generator from samples of the true data distribution. Intuitively, the goal of the generator is to fool the discriminator by synthesizing data close to the data distribution. This leads to a competition between both networks called the min-max objective:

$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]$   (4)

where $x$ is an example from the true data, $G(z)$ a sample from the generator, and the variable $z$ is Gaussian noise.

To stabilize training and tackle the vanishing gradient problem, Gulrajani et al. (2017) introduce a gradient penalty in the objective:

$\mathcal{L}_D = \mathbb{E}_{\tilde{x} \sim p_g}[D(\tilde{x})] - \mathbb{E}_{x \sim p_{data}}[D(x)] + \lambda_{gp}\, \mathbb{E}_{\hat{x}}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big]$   (5)

with $\hat{x} = \epsilon x + (1 - \epsilon)\tilde{x}$, where $\epsilon$ is a random number sampled from the uniform distribution $U[0,1]$ and $\lambda_{gp}$ is the penalty factor. This method produces more stable gradients and the critic can match more complex distributions.

This equation refers to the Wasserstein GAN with gradient penalty (WGAN-GP, Gulrajani et al. (2017)) that will be used in our experiments in section 4.
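As an illustration of equation 5, a gradient penalty for a critic over feature vectors can be sketched as follows (PyTorch-style; the critic D is any network mapping a feature vector to a scalar, and the names are ours):

```python
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """(||grad_x_hat D(x_hat)||_2 - 1)^2 on random interpolates, as in equation 5."""
    eps = torch.rand(real.size(0), 1, device=real.device)   # epsilon ~ U[0, 1]
    x_hat = (eps * real + (1.0 - eps) * fake).detach().requires_grad_(True)
    d_hat = D(x_hat)
    grads = torch.autograd.grad(d_hat.sum(), x_hat, create_graph=True)[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(D, real, fake):
    """WGAN-GP critic objective to minimize: E[D(fake)] - E[D(real)] + penalty."""
    return D(fake).mean() - D(real).mean() + gradient_penalty(D, real, fake)
```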

3.3 Adversarial Auto-encoders (AAE)

Auto-encoders are made of two parts: an encoder that receives the input $x$ and creates a latent or hidden representation $z$ of it, and a generator that takes this intermediate representation and tries to reconstruct the input as $\hat{x}$. A common choice of loss is the mean squared error between the input and the reconstructed input:

$\mathcal{L}_{rec} = \lVert x - \hat{x} \rVert_2^2$   (6)

Variational auto-encoders impose a constraint on how to construct the hidden representation. The encoder cannot use the entire latent space freely but has to restrict the hidden codes $z$ it produces to be likely under the prior distribution $p(z)$. This can be seen as a type of regularization on the amount of information that can be stored in the latent code. The benefit is that the system can now be used as a generative model: to create a new sample that comes from the data distribution $p(x)$, we sample $z$ from $p(z)$ and run this sample through the generator. To enforce this property, a second term is added to the loss function in the form of a Kullback-Leibler (KL) divergence between the two distributions:

$\mathcal{L} = \mathcal{L}_{rec} + \mathrm{KL}\big(q(z \mid x) \,\Vert\, p(z)\big)$   (7)

where $q(z \mid x)$ is the encoder of our network and $p(z)$ is the prior distribution imposed on the latent code.

Adversarial auto-encoders Makhzani et al. (2016) avoid the KL divergence by using adversarial learning. In this architecture, a new discriminative network $D$ is trained to predict whether a sample comes from the latent code produced by the encoder or from the prior distribution $p(z)$ imposed on the latent code. The loss of the encoder is now composed of the reconstruction loss plus the loss given by the discriminator network.

We can now use the loss incurred by the adversarial network on the encoder, instead of a KL divergence, for it to learn how to produce samples according to the distribution $p(z)$. The loss of the discriminator $D$ is:

$\mathcal{L}_D = -\mathbb{E}_{z \sim p(z)}[\log D(z)] - \mathbb{E}_{x}[\log(1 - D(q(z \mid x)))]$   (8)

where $q(z \mid x)$ is generated by the encoder and $z$ is a sample from the true prior $p(z)$ (usually a Gaussian distribution). Following the min-max game, the loss of the encoder is:

$\mathcal{L}_E = -\mathbb{E}_{x}[\log D(q(z \mid x))]$   (9)

As seen in the previous sub-section, we can make this AAE Wasserstein (WAAE, Bousquet et al. (2018)) by using the Wasserstein distance between the two probability distributions and by introducing a regularizer penalizing the discrepancy between the prior distribution and the distribution induced by the encoder.
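A minimal sketch of the adversarial regularization of equations 8 and 9, in their non-Wasserstein (log-loss) form, assuming the encoder has produced a batch of latent codes z_fake and that the prior is a standard Gaussian; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def aae_discriminator_loss(D, z_fake):
    """Equation 8: D should score prior samples as real and encoder codes as fake."""
    z_real = torch.randn_like(z_fake)                 # sample from the prior p(z)
    real_logits, fake_logits = D(z_real), D(z_fake.detach())
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

def aae_encoder_loss(D, z_fake):
    """Equation 9: the encoder tries to make its codes indistinguishable from p(z)."""
    fake_logits = D(z_fake)
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```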

4 MMT Experiments

In this section, we describe the two visual reconstruction experiments on model $Q$ evaluated in section 6.

4.1 Q-WGAN

In the original algorithm, the generator $G$ receives as input a sample that is usually drawn from Gaussian noise. In the case of MMT, the noise is concatenated with the model $Q$'s last hidden state $h$ so that the generator reconstructs the features according to the translated sentence. Generator $G$ then becomes a conditional generative network Mirza and Osindero (2014) and outputs the reconstructed features $\hat{v}$. This reconstruction is evaluated by the discriminator $D$. This setting is illustrated in figure 1. The goal of the noise is to make the generator non-deterministic so that it is harder for the discriminator to distinguish between the real and the fake sample. Stochasticity can be induced by dropout as well Isola et al. (2017) and will be used in our model. The full procedure can be found in Algorithm 1.


Figure 1: Training flow of Q-WGAN. Model Q omitted for clarity.
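A possible sketch of such a conditional generator, which concatenates Gaussian noise to the last hidden state and uses dropout as an additional source of stochasticity; the layer sizes and the 128-dimensional noise (the best setting in Table 3) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Reconstructs visual features conditioned on the MMT last hidden state h."""
    def __init__(self, hidden_size=512, noise_size=128, feat_size=2048, p_drop=0.5):
        super().__init__()
        self.noise_size = noise_size
        self.net = nn.Sequential(
            nn.Linear(hidden_size + noise_size, 1024),
            nn.ReLU(),
            nn.Dropout(p_drop),      # dropout keeps the mapping stochastic
            nn.Linear(1024, feat_size),
        )

    def forward(self, h):
        noise = torch.randn(h.size(0), self.noise_size, device=h.device)
        return self.net(torch.cat([h, noise], dim=-1))
```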

4.2 Q-WAAE

In this experiment, the encoder is actually the multi-modal translation model $Q$. The latent variable is the last hidden state $h$ of the model $Q$. The discriminator $D$ has to discriminate between the latent code $h$ and the "real" latent code sampled from a Gaussian distribution. Alongside the adversarial loss, a generator $G$ reconstructs the features $\hat{v}$ from input $h$. Figure 2 depicts the reconstruction. The full procedure can be found in Algorithm 2.


Figure 2: Training flow of Q-WAAE. The last hidden state $h$ is the input for the generator $G$.

5 Settings

In this section, we describe the model and the data-set used.

5.1 Training

To be consistent with the state of the art, we follow the settings used in the previous works we compare our model to in the results section. The full description of the model can be found in Appendix A. RNN layer size, attention size, dropout, model ensembling and training settings are left unchanged for a fair comparison.

We train $Q$ and $G$ jointly with the Adam optimizer Kingma and Ba (2014), with a learning rate of 4e-4 and gradient clipping set to 1. The visual input $x^v$ consists of the image features from the last pooling layer (pool5) of ResNet-50 He et al. (2016), which are of dimension 2048. We use a batch size of 32. For both tasks, we stop training if the task score doesn't improve for more than 5 epochs. The models reported are ensembles of 5 models.


Finally, the gradient penalty coefficient $\lambda_{gp}$ is set to 10 for all experiments. For Q-WAAE, the coefficient is set to 5. The adversarial and reconstruction coefficients $\lambda_{adv}$ and $\lambda_{rec}$ are detailed in the results section 6. The discriminator is trained with Adam with a learning rate of 2e-4, $\beta_1$ = 0.5 and $\beta_2$ = 0.9. The architectures of $G$ and $D$ are available in Appendix B. We found that the use of spectral normalization Miyato et al. (2018) and batch normalization didn't improve the translation scores.
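The optimization setup described above could be wired up roughly as follows (PyTorch-style sketch; the modules are passed in and the function names are ours):

```python
import torch

def build_optimizers(mmt_model, generator, discriminator):
    """Adam with lr 4e-4 for Q and G, and lr 2e-4 with betas (0.5, 0.9) for D."""
    opt_qg = torch.optim.Adam(
        list(mmt_model.parameters()) + list(generator.parameters()), lr=4e-4)
    opt_d = torch.optim.Adam(
        discriminator.parameters(), lr=2e-4, betas=(0.5, 0.9))
    return opt_qg, opt_d

def clip_and_step(optimizer, modules, max_norm=1.0):
    """Clip gradients to 1 before each update, as in the training settings."""
    for m in modules:
        torch.nn.utils.clip_grad_norm_(m.parameters(), max_norm)
    optimizer.step()
    optimizer.zero_grad()
```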

Algorithm 1 Q-WGAN: Wasserstein GAN with gradient penalty
Require: adversarial coefficient $\lambda_{adv}$, gradient penalty coefficient $\lambda_{gp}$, number of critic iterations $n_{critic}$ per iteration
Initialize the parameters of the MMT model $Q$, generator $G$ and features discriminator $D$.
while Q not converged do
 Sample $(x^t, x^v, y)$ from the training set
 Output translations $\hat{y}$ from $Q(x^t, x^v)$
 Get last states $h$ from $Q$
 for $i = 1, \dots, n_{critic}$ do
  Sample noise $n$ from $\mathcal{N}(0, I)$
  Sample random number $\epsilon$ from $U[0, 1]$
  $\hat{v} \leftarrow G(h, n)$
  $\tilde{v} \leftarrow \epsilon\, x^v + (1 - \epsilon)\, \hat{v}$
  Update $D$ by ascending: $D(x^v) - D(\hat{v}) - \lambda_{gp}\,(\lVert \nabla_{\tilde{v}} D(\tilde{v}) \rVert_2 - 1)^2$
 Update $G$ and $Q$ by descending the adversarial loss: $-\lambda_{adv}\, D(G(h, n))$
 Update $Q$ by descending the translation loss $\mathcal{L}_Q$
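The generator and translation updates at the end of Algorithm 1 might look as follows in code (a sketch under the same assumptions as before: the MMT model returns logits and its last state, and the conditional generator draws its own noise internally; the critic update itself follows the WGAN-GP objective of section 3.2):

```python
import torch
import torch.nn.functional as F

def wgan_generator_step(mmt_model, generator, critic, opt_qg, batch, lambda_adv=0.2):
    x_txt, x_vis, y = batch                      # source tokens, pool5 features, targets
    logits, h = mmt_model(x_txt, x_vis)          # translations and last hidden state h

    # Conditional reconstruction (noise is concatenated inside the generator).
    v_fake = generator(h)

    # Descend the adversarial loss -D(v_fake) together with the translation NLL.
    l_adv = -critic(v_fake).mean()
    l_trans = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))
    loss = l_trans + lambda_adv * l_adv
    opt_qg.zero_grad()
    loss.backward()
    opt_qg.step()
    return loss.item()
```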

Algorithm 2 Q-WAAE: Wasserstein Auto-Encoder with gradient penalty
Require: adversarial coefficient $\lambda_{adv}$, reconstruction coefficient $\lambda_{rec}$, gradient penalty coefficient $\lambda_{gp}$
Initialize the parameters of the MMT model $Q$, generator $G$ and latent discriminator $D$. Use the mean squared error as $\mathcal{L}_{rec}$.
while Q not converged do
 Sample $(x^t, x^v, y)$ from the training set
 Output translations $\hat{y}$ from $Q(x^t, x^v)$
 Get last states $h$ from $Q$
 Sample "true" state $z$ from $\mathcal{N}(0, I)$
 Sample random number $\epsilon$ from $U[0, 1]$
 Update $D$ by ascending: $D(z) - D(h) - \lambda_{gp}\,(\lVert \nabla_{\tilde{z}} D(\tilde{z}) \rVert_2 - 1)^2$ with $\tilde{z} = \epsilon z + (1 - \epsilon) h$
 Update $G$ and $Q$ by descending the reconstruction and adversarial loss: $\lambda_{rec}\, \mathcal{L}_{rec}(x^v, G(h)) - \lambda_{adv}\, D(h)$
 Update $Q$ by descending the translation loss $\mathcal{L}_Q$
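For comparison, one full Q-WAAE training step from Algorithm 2 could be sketched as follows (PyTorch-style; the model interface and the coefficient values are assumptions, while the loss structure follows the algorithm):

```python
import torch
import torch.nn.functional as F

def waae_step(mmt_model, generator, discriminator, opt_qg, opt_d, batch,
              lambda_adv=0.2, lambda_rec=0.2, lambda_gp=10.0):
    x_txt, x_vis, y = batch
    logits, h = mmt_model(x_txt, x_vis)          # translations and last hidden state h

    # --- Latent critic update: tell prior samples z apart from hidden states h. ---
    z_true = torch.randn_like(h)                 # "true" state sampled from the prior
    eps = torch.rand(h.size(0), 1, device=h.device)
    z_hat = (eps * z_true + (1 - eps) * h).detach().requires_grad_(True)
    grads = torch.autograd.grad(discriminator(z_hat).sum(), z_hat, create_graph=True)[0]
    gp = lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()
    d_loss = discriminator(h.detach()).mean() - discriminator(z_true).mean() + gp
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Q and G update: translation NLL + MSE reconstruction + adversarial term. ---
    l_trans = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))
    l_rec = F.mse_loss(generator(h), x_vis)
    l_adv = -discriminator(h).mean()             # make h look like a prior sample
    total = l_trans + lambda_rec * l_rec + lambda_adv * l_adv
    opt_qg.zero_grad()
    total.backward()
    opt_qg.step()
    return total.item()
```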

5.2 Dataset

We use the Multi30K dataset (Elliott et al., 2016). For each image, one of the English descriptions was selected and manually translated into German by a professional translator. As training and development data, 29,000 and 1,014 triples are used respectively. We use the three available test sets to score our models. The Flickr Test2016 and Flickr Test2017 sets contain 1,000 image-caption pairs each, and the ambiguous MSCOCO test set contains 461 pairs. Recently, a fourth test set, Flickr Test2018, has been used for the online competition on CodaLab (https://competitions.codalab.org/competitions/19917#results). It consists of 1,071 sentences and is released without the German and French gold translations.

6 Results

We now report the results of the two configurations introduced in section 4 on the Multi-modal Machine Translation (MMT) task. All reported experiments were run on a single NVIDIA GTX 1080 GPU.

Test sets                              Test 2016 Flickr              Test 2017 Flickr
                                       BLEU          METEOR          BLEU           METEOR
FAA Caglayan et al. (2018)             -             -               31.60          52.50
DeepGru Delbrouck and Dupont (2018)    40.34         59.58           32.57          53.60
Baseline                               40.00         59.20           32.20          53.10
Q-WGAN                                 40.38 (+0.38) 60.03 (+0.83)   33.70 (+1.50)  54.50 (+1.40)
Q-WAAE                                 40.66 (+0.66) 60.06 (+0.86)   34.06 (+1.86)  54.94 (+1.84)

Test sets                              COCO-ambiguous                Test 2018 Flickr
                                       BLEU          METEOR          BLEU           METEOR
FAA Caglayan et al. (2018)             -             -               31.39          51.43
DeepGru Delbrouck and Dupont (2018)    29.21         49.45           31.10          51.64
Baseline                               28.50         48.80           -              -
Q-WGAN                                 31.08 (+2.58) 50.43 (+1.63)   31.80          52.15
Q-WAAE                                 31.41 (+2.91) 50.95 (+2.15)   31.91          52.37

Table 1: Results on the en→de MMT task. Test 2018 results (anonymized) can be checked on the official leaderboard (https://competitions.codalab.org/competitions/19917#results) in the "german" tab. Score differences are computed against the baseline.

6.1 Quantitative evaluation

First and foremost, we notice that the most successful model is Q-WAAE, as it surpasses the baseline and previous works on every dataset. It is also the best officially reported score as a constrained submission (using only data provided by the challenge) on the Test 2018 data-set. The submission surpasses the previous best METEOR score, from DeepGru, by 0.73 METEOR and the previous best BLEU score, from FAA, by 0.52 points. More importantly, the Q-WAAE model significantly improves the state of the art on the COCO-ambiguous data-set, a test set specifically designed to include 56 unique ambiguous verbs in 461 descriptions (+2.91 BLEU and +2.15 METEOR over the baseline).

          0.2       0.5       0.8
0.2       50.95     50.08     49.33
0.5       49.79     49.62     49.16
0.8       49.70     49.16     48.02

Table 2: Q-WAAE: impact on the METEOR metric of the reconstruction and adversarial loss coefficients on the ambiguous COCO data-set (rows and columns each vary one of the two coefficients over {0.2, 0.5, 0.8}).

To get the best results with Q-WAAE, we tried different combinations of the coefficient factors on the adversarial and reconstruction losses, as shown in Table 2. The results show that if the auxiliary loss (adversarial and/or reconstruction) is given too much weight compared to the translation loss, the translation quality is impaired.

Q-WGAN also shows improvements over the baseline and obtains results similar to Q-WAAE. Nonetheless, a small discrepancy is noticeable on COCO-ambiguous. We believe that the main advantage of the Q-WAAE loss is the presence of a direct mean squared error reconstruction loss alongside the adversarial loss. We also noticed that the Q-WGAN model is very sensitive to the dimension of the noise concatenated to the hidden state given as input to the generator, as shown in Table 3.

Noise dimension     64        128       256       512
METEOR              50.35     50.43     49.71     49.48

Table 3: Q-WGAN: impact of the dimension of the noise concatenated to the hidden state of size 512.

One can argue that because the generator is conditioned on the hidden state, which is of high dimension, it is already very hard for the generator to become deterministic. A large noise dimension could therefore harm the generator instead of helping it fool the discriminator.

6.2 Quality evaluation

To understand the success of Q-WAAE on the ambiguous COCO data-set, we perform an ablation study of the model. We first discard the adversarial discriminator so that we only train the reconstruction module with the MSE loss (+ MSE). We also discard the use of the visual features $x^v$ in the translation model, for both the ablated model and Q-WAAE (no $x^v$). The results of the ablation study can be found in Table 4.

Test set: COCO-ambiguous             BLEU      METEOR
Baseline                             28.50     48.80
Baseline + MSE + no x^v              29.43     49.60
Baseline + MSE                       29.91     49.24
Q-WAAE + no x^v                      30.57     50.15
Q-WAAE                               31.41     50.95

Table 4: Ablation study of the Q-WAAE model. "+ MSE" denotes the reconstruction module trained with the MSE loss only (no adversarial discriminator) and "no x^v" denotes removing the visual input from the translation pipeline.

A first observation is that the reconstruction module does improve the baseline, but the Baseline + MSE + no $x^v$ model (no visual input in the translation pipeline) obtains a better METEOR score than the Baseline + MSE model. This means that using a visual attention model in the translation pipeline harms the overall translation quality, as already found in previous work. In contrast, Q-WAAE performs better than Q-WAAE + no $x^v$, which shows a successful integration of the visual input, as should be expected for the MMT task. Using adversarial feedback does provide stronger training and better generalization over the different data-sets.

6.3 Improvement examples


Figure 3: An ambiguous COCO example where Q-WAAE finds the right translation for the verb

To further investigate the quality of the Q-WAAE model, we pick two examples to illustrate the improvements.

In figure 3, the baseline translates "pointing a camera" to "zeigt auf ein camera", which could be translated back as "to point at a camera". This is incorrect since the image shows the cameraman pointing a camera at the speaker. Moreover, the German verb "zeigen" also means to show or to demonstrate, which is not ideal in this example. Our model translates "pointing" to "richtet", meaning "pointing" with the idea of aiming, which is more suitable. Q-WAAE also does not use wrong prepositions. The baseline sentence scores a BLEU of 0, while the sentence from our model scores a BLEU of 44.83.


Figure 4: An ambiguous COCO example where Q-WAAE finds the right translation for the object

The second figure aims to show that Q-WAAE not only manages to correctly translate ambiguous verbs but also handles more complex examples. In Figure 4, the Q-WAAE model produces the perfect translation (a BLEU score of 100), whereas the baseline model outputs a translation closer to "a woman winding up for softball", missing the second verb (BLEU score of 22.60).

6.4 Other data-set

We also trained Q-WAAE on another language pair of the Multi30K dataset, namely the en→fr pair. Again, the model surpasses the baseline on the COCO-ambiguous and Test 2018 test sets.

Test sets en→fr                      BLEU      METEOR
COCO-ambiguous
  DeepGru                            46.16     65.79
  Q-WAAE                             47.00     66.50
Test 2017
  DeepGru                            55.13     71.52
  FAA                                52.80     69.60
  Q-WAAE                             56.54     72.32
Test 2018
  FAA                                39.48     59.85
  Q-WAAE                             40.09     60.54

Table 5: Results on the en→fr Multi30K dataset. Test 2018 results can be found online at the aforementioned CodaLab link in the "french" tab.

7 Conclusion

We demonstrated that recent advances in adversarial generative modeling can successfully ground visual information for multi-modal translation using visual and linguistic inputs. We showed that the use of visual information by the model still remains a challenging task. The work presented in this paper modulates the last hidden state at the end of the translation model; it would be interesting to investigate adversarial approaches further upstream in the pipeline, such as in the visual feature extraction (as previously investigated in Delbrouck and Dupont (2017a)).

References

  • O. Bousquet, S. Gelly, and B. Scholkopf (2018) Wasserstein auto-encoders.
  • O. Caglayan, W. Aransa, A. Bardet, M. García-Martínez, F. Bougares, L. Barrault, M. Masana, L. Herranz, and J. Van de Weijer (2017) LIUM-CVC submissions for WMT17 multimodal translation task. arXiv preprint arXiv:1707.04481.
  • O. Caglayan, A. Bardet, F. Bougares, L. Barrault, K. Wang, M. Masana, L. Herranz, and J. van de Weijer (2018) LIUM-CVC submissions for WMT18 multimodal translation task. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, Brussels, Belgium, pp. 603–608.
  • I. Calixto, Q. Liu, and N. Campbell (2017) Doubly-attentive decoder for multi-modal neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1913–1924.
  • J. Delbrouck and S. Dupont (2017a) Modulating and attending the source image during encoding improves multimodal translation. arXiv preprint arXiv:1712.03449.
  • J. Delbrouck and S. Dupont (2017b) Multimodal compact bilinear pooling for multimodal neural machine translation. arXiv preprint arXiv:1703.08084.
  • J. Delbrouck and S. Dupont (2018) UMONS submission for WMT18 multimodal translation task. In Proceedings of the Third Conference on Machine Translation, Brussels, Belgium.
  • D. Elliott, S. Frank, K. Sima'an, and L. Specia (2016) Multi30K: multilingual English-German image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pp. 70–74.
  • D. Elliott and Á. Kádár (2017) Imagination improves multimodal translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 130–141.
  • Y. Feng, L. Ma, W. Liu, and J. Luo (2018) Unsupervised image captioning. arXiv preprint arXiv:1811.10787.
  • I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672–2680.
  • I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville (2017) Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems 30, pp. 5767–5777.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • G. E. Hinton and R. R. Salakhutdinov (2006) Reducing the dimensionality of data with neural networks. Science 313 (5786), pp. 504–507.
  • P. Huang, F. Liu, S. Shiang, J. Oh, and C. Dyer (2016) Attention-based multimodal neural machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pp. 639–645.
  • I. Ilievski and J. Feng (2017) Generative attention model with adversarial self-learning for visual question answering. In Proceedings of the Thematic Workshops of ACM Multimedia 2017, pp. 415–423.
  • P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134.
  • A. Karpathy and F. Li (2015) Deep visual-semantic alignments for generating image descriptions. In CVPR, pp. 3128–3137.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Published as a conference paper at ICLR 2015.
  • A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow (2016) Adversarial autoencoders. In International Conference on Learning Representations.
  • M. Mirza and S. Osindero (2014) Conditional generative adversarial nets. CoRR abs/1411.1784.
  • T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks.
  • A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele (2016) Grounding of textual phrases in images by reconstruction. In European Conference on Computer Vision, pp. 817–834.
  • R. Sennrich, B. Haddow, and A. Birch (2016) Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715–1725.
  • K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio (2015) Show, attend and tell: neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, PMLR 37, Lille, France, pp. 2048–2057.

Appendix A Model

Given a source sentence $x^t$ and visual features $x^v$, an attention-based encoder-decoder model outputs the translated sentence $\hat{y}$. If we denote the model parameters by $\theta$, then $\theta$ is learned by maximizing the likelihood of the observed sequence $y$, or in other words by minimizing the cross-entropy loss. The objective function is given by:

$\mathcal{L}_Q(\theta) = -\sum_{t=1}^{T} \log p(y_t \mid y_{<t}, x^t, x^v; \theta)$   (10)

Three main components are involved: an encoder, a decoder and an attention model.

Encoder  The encoder is a bidirectional GRU that creates a set of annotations $A = \{a_1, \dots, a_N\}$. A word has an embedding of size 256 and each GRU has 512 units, so the annotations are of size 1024.

Decoder  The decoder is a conditional GRU (cGRU). The following equations describe a cGRU cell:

$s'_t = \mathrm{GRU}_1(E_Y(y_{t-1}), s_{t-1}), \quad c_t = \mathrm{ATT}(A, x^v, s'_t), \quad s_t = \mathrm{GRU}_2(c_t, s'_t)$   (11)

where both GRUs have 512 units and ATT is the attention module defined hereafter:

$e_{t,i} = U_a \tanh(W_a a_i + W_s s'_t)$   (12)
$\alpha_{t,i} = \mathrm{softmax}(e_{t,i})$   (13)
$c^{txt}_t = \sum_i \alpha_{t,i}\, a_i$   (14)
$c^{vis}_t = \tanh(W_v x^v)$   (15)
$c_t = W_c\, [\,c^{txt}_t ; c^{vis}_t\,]$   (16)

Matrices $W_a$ and $W_s$ map their respective inputs to size 1024. $W_v$ transforms the visual features to size 1024 and $W_c$ transforms both attention vectors back to size 512 to be compatible with the cGRU size.
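A hedged code sketch of such a multi-modal attention step, with the projection sizes described above (1024 for the projections, 512 for the fused output); the exact formulation of the original model may differ.

```python
import torch
import torch.nn as nn

class MultimodalAttention(nn.Module):
    """Attends over textual annotations, projects global visual features, fuses both."""
    def __init__(self, ann_size=1024, vis_size=2048, dec_size=512, att_size=1024):
        super().__init__()
        self.w_ann = nn.Linear(ann_size, att_size)      # maps annotations to 1024
        self.w_dec = nn.Linear(dec_size, att_size)      # maps decoder state to 1024
        self.w_vis = nn.Linear(vis_size, att_size)      # maps visual features to 1024
        self.u_att = nn.Linear(att_size, 1)
        self.w_out = nn.Linear(att_size * 2, dec_size)  # maps both contexts back to 512

    def forward(self, annotations, x_vis, s):
        # annotations: (B, N, 1024), x_vis: (B, 2048), s: (B, 512)
        scores = self.u_att(torch.tanh(self.w_ann(annotations)
                                       + self.w_dec(s).unsqueeze(1)))   # (B, N, 1)
        alpha = torch.softmax(scores, dim=1)
        c_txt = (alpha * annotations).sum(dim=1)                        # (B, 1024)
        c_vis = torch.tanh(self.w_vis(x_vis))                           # (B, 1024)
        return self.w_out(torch.cat([c_txt, c_vis], dim=-1))            # (B, 512)
```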

Finally, a bottleneck function projects the cGRU output into probabilities over the target vocabulary. It is defined as:

$b_t = \tanh(W_b s_t)$   (17)
$p(y_t \mid y_{<t}, x^t, x^v) = \mathrm{softmax}(W_o b_t)$   (18)

where $W_b$ maps the hidden state to size 256 and $W_o$ maps the bottleneck result to the vocabulary size.

Dropout of 0.3 is used on the embeddings and annotations, and dropout of 0.5 on the bottleneck output $b_t$.

To marginally reduce our vocabulary size, we use the byte pair encoding (BPE) algorithm on the training set to convert space-separated tokens into sub-words Sennrich et al. (2016). With 10K merge operations, the resulting vocabulary sizes for each language pair are 5,204 and 7,067 tokens for English and German, and 5,835 and 6,577 tokens for English and French.

Appendix B Generator and discriminator

Q-WAAE  Generator G is defined as follows:

where is of size .

Discriminator D is defined as follows :

where is of size .

Q-WGAN  Generator G is defined as follows:

where is of size .

Discriminator D is defined as follows (its input is either the real or the generated features):

(19)
(20)
(21)

where is of size , of size is of size and of size is of size