1 Introduction

Conditioning on multimodal information is one of the predominant methods of grounding the representations learned in deep learning models (Chrupała et al., 2015; Lazaridou et al., 2015), i.e., relating word or sentence representations to non-linguistic real-world entities such as objects in photographs. In the context of multimodal machine translation (MT), models using a multimodal auxiliary loss have been shown to outperform their text-only counterparts (Elliott and Kádár, 2017; Helcl et al., 2018). Experiments with multimodal language models (LMs) also confirm that multimodality influences the semantic properties of the learned representations (Poerner et al., 2018).
On the other hand, recent experiments with large-scale language modeling suggest that these models provide representations informative enough to be reused in most natural language processing (NLP) tasks (Peters et al., 2018; Devlin et al., 2018). Current research has also seen an increasing trend towards investigating the universality of learned representations, where the learned representations are supposed to contain sufficient inductive biases for a variety of NLP tasks (Conneau et al., 2017; Howard and Ruder, 2018).
Research in evaluating representations has focused on measuring the correlation between the similarity of learned representations and the semantic similarity of words (Hill et al., 2015; Gerz et al., 2016) and sentences (Agirre et al., 2012, 2016). Work on probing representations includes relating learned representations to existing well-trained models by finding a mutual projection between the representations and evaluating the performance of the projected representations within the trained model (Saphra and Lopez, 2018), and observing the effect of changes in the representation by backpropagating the changes to the input (Poerner et al., 2018).
Universal sentence representations are typically evaluated by their effects on downstream tasks. Conneau and Kiela (2018) and Wang et al. (2018) recently introduced comprehensive sets of such downstream tasks, providing a benchmark for sentence representation evaluation. The tasks include various sentence classification problems, entailment, and coreference resolution. However, the drawback of these methods is that they require generating representations of millions of sentences, which are later used for rather time-consuming training of models for the downstream tasks.
In this paper, we investigate representations obtained specifically from grounded models using the two predominant sequence-modeling architectures: a model based on recurrent neural networks (RNN; Mikolov et al., 2010; Bahdanau et al., 2014) and a model based on the self-attentive Transformer architecture (Vaswani et al., 2017). We study the learned representations with respect to grounding, semantics, and the degree to which some of these representations are correlated irrespective of modeling choices. Our main observations are: a) models with access to explicit grounded information learn to ignore the image information; b) grounding accounts for better semantic representations because it provides a stronger training signal, an effect that is especially pronounced when a model has access to fewer training samples; c) while Transformer-based models may achieve better task performance, RNN-based models capture better semantic information.
2 Assessing Contextual Representations
In this section, we briefly describe the methods used for extracting representations and for quantifying their qualities: Canonical Correlation Analysis (CCA) for image retrieval evaluation, cosine distance for Semantic Textual Similarity (STS) evaluation, and Distance Correlation (DC) for representation similarity evaluation. While the first two are used for evaluation on downstream tasks, the latter only quantifies mutual similarities of the representations.
Canonical Correlation Analysis.
We take as input two sets of aligned representations from two different subspaces, say $X = \{x_i\}_{i=1}^{n}$ and $Y = \{y_i\}_{i=1}^{n}$, where $x_i \in \mathbb{R}^{d_1}$ and $y_i \in \mathbb{R}^{d_2}$ are vector representations. CCA (Hotelling, 1936) finds pairs of directions $(u, v)$ such that the linear projections of $X$ and $Y$ onto these directions, i.e., the canonical representations $u^{\top}X$ and $v^{\top}Y$, are maximally correlated. For further details on CCA, we refer the reader to Hardoon et al. (2004).
The most significant property of CCA for our analysis is that it is a subspace-only method: we obtain naturally occurring correlations between two spaces. Importantly, we do not learn an alignment, but recover alignments that are potentially present between the two subspaces. Further, CCA is affine-invariant because it relies on correlation rather than on the orthogonality of direction vectors.
We use CCA over mean-pooled sentence representations and image representations and obtain two highly correlated projections. CCA and its variants have been used in previous research to obtain cross-modal representations (Gong et al., 2014; Yan and Mikolajczyk, 2015). We evaluate the projected representations on the image retrieval task and report the recall at 10. Note that we do not backpropagate the correlation to the network and keep the representations fixed, because our goal is not to train an optimal cross-modal representation but only to assess the (already trained) sentence representation.
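A minimal sketch of this evaluation, assuming the mean-pooled sentence vectors and image vectors are available as NumPy arrays; the scikit-learn CCA implementation, the function name, and the number of canonical components are our illustrative choices, not necessarily the original setup:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_recall_at_10(train_sent, train_img, test_sent, test_img,
                     n_components=128):
    cca = CCA(n_components=n_components, max_iter=500)
    cca.fit(train_sent, train_img)               # fit on training pairs only

    # Project both modalities into the shared canonical space.
    s_proj, i_proj = cca.transform(test_sent, test_img)
    s_proj /= np.linalg.norm(s_proj, axis=1, keepdims=True)
    i_proj /= np.linalg.norm(i_proj, axis=1, keepdims=True)

    # Rank all test images for each sentence by cosine similarity;
    # the matching image shares the sentence's index.
    sims = s_proj @ i_proj.T                     # (n_test, n_test)
    top10 = (-sims).argsort(axis=1)[:, :10]
    hits = (top10 == np.arange(len(sims))[:, None]).any(axis=1)
    return hits.mean()                           # recall at 10
```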
Semantic Textual Similarity.

For evaluation on the STS task, we use the cosine distance between vectors $u$ and $v$:

$$d_{\cos}(u, v) = 1 - \frac{u^{\top} v}{\lVert u \rVert \, \lVert v \rVert}.$$
Following the SentEval benchmark (Conneau and Kiela, 2018), we report the Spearman correlation between the distance and human assessments.
The goal of the STS task is to assess how well the representations capture the semantic similarity of sentences as perceived by humans. As with the image retrieval task, we do not fine-tune the representations for the similarity task, and we report the Spearman correlation of the cosine distance between the representations with the ground-truth similarity.
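A short sketch of this evaluation under the same assumptions (the array names and the helper function are ours):

```python
import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb_a, emb_b, gold):
    # emb_a, emb_b: sentence vectors for each pair; gold: human scores
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cos_dist = 1.0 - (a * b).sum(axis=1)         # cosine distance per pair
    # The correlation is negative because a larger distance means lower
    # similarity; its magnitude is what gets reported.
    return spearmanr(cos_dist, gold).correlation
```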
Distance Correlation.

Distance correlation (DC) is a measure of dependence between any two paired vectors of arbitrary dimensions (Székely et al., 2007). Given two paired vectors $X \in \mathbb{R}^{p}$ and $Y \in \mathbb{R}^{q}$, suppose that $f_X$ and $f_Y$ are the individual characteristic functions and $f_{X,Y}$ the joint characteristic function of the two vectors. The distance covariance $\nu(X, Y)$ between $X$ and $Y$ with finite first moments is the non-negative number given by:

$$\nu^2(X, Y) = \int_{\mathbb{R}^{p+q}} \frac{\lvert f_{X,Y}(t, s) - f_X(t) f_Y(s) \rvert^2}{c_p c_q \lVert t \rVert^{1+p} \lVert s \rVert^{1+q}} \, dt \, ds,$$

where $c_d = \pi^{(1+d)/2} / \Gamma\bigl((1+d)/2\bigr)$; $p$ and $q$ are the dimensionalities of $X$ and $Y$ respectively. The distance correlation (DC) is then defined as:

$$\mathcal{R}(X, Y) = \frac{\nu(X, Y)}{\sqrt{\nu(X, X)\, \nu(Y, Y)}}$$

whenever the denominator is positive, and zero otherwise.
A detailed description of the DC is beyond the scope of this paper, but we refer the reader to Székely et al. (2007) for a thorough analysis.
Our use of DC is motivated by the fact that it is a proper dependence measure: it equals zero exactly when the two vectors are mutually independent. Moreover, DC captures both linear and non-linear associations between two vectors. We use DC to measure the degree of correlation between different representations; we are especially interested in studying the degree to which two independently learned representations are correlated.
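Since DC is central to our comparisons, here is a sketch of its sample estimate computed from doubly-centered pairwise-distance matrices, following Székely et al. (2007); the function names are ours:

```python
import numpy as np
from scipy.spatial.distance import cdist

def _centered_dists(Z):
    # pairwise Euclidean distances, doubly centered
    D = cdist(Z, Z)
    return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

def distance_correlation(X, Y):
    # X: (n, p), Y: (n, q); rows are paired observations
    A, B = _centered_dists(X), _centered_dists(Y)
    dcov2_xy = max((A * B).mean(), 0.0)          # sample distance covariance^2
    dcov2_xx = (A * A).mean()
    dcov2_yy = (B * B).mean()
    if dcov2_xx * dcov2_yy == 0.0:
        return 0.0
    return np.sqrt(dcov2_xy) / (dcov2_xx * dcov2_yy) ** 0.25
```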
3 Experiments

We examine representations for four types of models: a) LMs; b) image representation prediction models (Imaginet); c) textual MT models; and d) multimodal MT models. For each task, we train models based on RNNs and on the Transformer architecture. In addition, we use training datasets of different sizes. All models are trained with Neural Monkey (Helcl and Libovický, 2017b), https://github.com/ufal/neuralmonkey.
Imaginet models.

The Imaginet models (Chrupała et al., 2015) predict the image representation given a textual description of the image. The representation is trained only via its grounding in the image representation.
We use a bidirectional RNN encoder with the same hyperparameters as the aforementioned LM. The Transformer-based Imaginet uses the same hyperparameters as the Transformer-based LM. The states of the encoder are mean-pooled and projected with a hidden layer of 4,096 units and a ReLU non-linearity to a 2,048-dimensional vector corresponding to the image representation from the ResNet (He et al., 2016). For a fair comparison, we use the representation before the final non-linear projection.
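A minimal PyTorch sketch of this projection head, assuming the encoder states and a padding mask are given; the class and argument names are ours, and the original implementation is in Neural Monkey rather than PyTorch:

```python
import torch
import torch.nn as nn

class ImaginetHead(nn.Module):
    """Mean-pool encoder states and project to the image vector."""
    def __init__(self, enc_dim):
        super().__init__()
        self.hidden = nn.Linear(enc_dim, 4096)   # hidden layer with ReLU
        self.output = nn.Linear(4096, 2048)      # ResNet feature dimension

    def forward(self, states, mask):
        # states: (batch, time, enc_dim); mask: (batch, time), 1.0 for tokens
        pooled = (states * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)
        return self.output(torch.relu(self.hidden(pooled)))
```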
For completeness, we also compare the LMs with ELMo (Peters et al., 2018), a representation based on a deep RNN LM with character-based embeddings pre-trained on a large corpus of 30 million sentences, and with BERT (Devlin et al., 2018), a Transformer-based sentence representation similar to our Transformer-based LM. We note, however, that BERT is trained with a significantly different procedure than regular LMs.
Textual MT models.
We trained the attentive RNN-based seq2seq model (Bahdanau et al., 2014) with the same hyperparameters as the RNN Imaginet model, using the conditional GRU (Firat and Cho, 2016) as the decoder. With the Transformer architecture, we used the same hyperparameters as for the Imaginet models.
Besides the text-only models, we trained Imagination models (Elliott and Kádár, 2017) that combine the translation objective with the Imaginet objective in a multi-task setup. The model is trained to generate a sentence in the target language and to predict the image representation at the same time.
With multi-task learning, the model takes advantage of large parallel data without images and of monolingual image captioning data at the same time. Presumably, the model achieves superior translation quality by learning a better source sentence representation. At inference time, the model requires only the textual input.
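A hedged sketch of the multi-task objective: the interpolation weight `lam` and the `margin` are illustrative, and the max-margin image loss below is only the general shape of the objective described by Elliott and Kádár (2017):

```python
import torch
import torch.nn.functional as F

def imagination_loss(mt_loss, img_pred, img_true, lam=0.5, margin=0.1):
    # cosine similarities between each predicted and every true image vector
    pred = F.normalize(img_pred, dim=-1)
    true = F.normalize(img_true, dim=-1)
    sims = pred @ true.t()                       # (batch, batch)
    pos = sims.diag().unsqueeze(1)               # similarity to matching image
    # contrastive margin loss against the other images in the batch
    img_loss = torch.clamp(margin - pos + sims, min=0.0)
    img_loss = img_loss.fill_diagonal_(0.0).mean()
    return lam * mt_loss + (1.0 - lam) * img_loss
```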
Multimodal MT models.
For both the RNN and Transformer architectures, we used the same hyperparameters as for the textual models. As in the previous models, we use the last convolutional layer of ResNet as the image representation.
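For illustration, a sketch of extracting these convolutional maps with torchvision; the ResNet-50 variant, the weights, and the preprocessing are our assumptions (the text only specifies ResNet features):

```python
import torch
import torchvision

# Drop the average-pooling and classification layers, keeping the
# last convolutional block as the feature extractor.
resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-2]).eval()

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)               # placeholder batch
    conv_maps = feature_extractor(images)               # (4, 2048, 7, 7)
    # flatten the spatial grid into 49 attendable image "states"
    img_states = conv_maps.flatten(2).transpose(1, 2)   # (4, 49, 2048)
```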
In the RNN setup, we experiment with initializing the decoder with the image representation (Caglayan et al., 2017; Calixto and Liu, 2017) and with a doubly attentive decoder using three different attention combination strategies (Libovický and Helcl, 2017). First (concatenation), we concatenate context vectors computed independently over the image representation and the source sentence; second (flat attention combination), we compute a joint distribution over the image convolutional maps and the source encoder states; third (hierarchical attention combination), we compute the context vectors independently and combine them hierarchically using another attention mechanism.
In the Transformer setup, the multimodal models use doubly attentive decoders (Libovický et al., 2018). We experiment with four setups: serial, parallel, flat, and hierarchical input combination. The first two are a direct extension of the Transformer architecture obtained by adding more sublayers to the decoder. The latter two are a modification of the attention strategies used in the RNN setup.
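As an illustration of the hierarchical strategy, a hedged PyTorch sketch; the module and parameter names are ours, and we refer to Libovický and Helcl (2017) for the exact formulation:

```python
import torch
import torch.nn as nn

class HierarchicalCombination(nn.Module):
    """Fuse per-source context vectors with a second attention."""
    def __init__(self, dec_dim, ctx_dims, att_dim):
        super().__init__()
        self.query_proj = nn.Linear(dec_dim, att_dim)
        self.ctx_projs = nn.ModuleList([nn.Linear(d, att_dim) for d in ctx_dims])
        self.energy = nn.Linear(att_dim, 1)

    def forward(self, dec_state, contexts):
        # contexts: one attention context per source, each (batch, ctx_dim)
        projected = [p(c) for p, c in zip(self.ctx_projs, contexts)]
        q = self.query_proj(dec_state)
        energies = torch.stack(
            [self.energy(torch.tanh(q + c)).squeeze(-1) for c in projected],
            dim=-1)                                  # (batch, n_sources)
        weights = torch.softmax(energies, dim=-1)
        stacked = torch.stack(projected, dim=1)      # (batch, n_sources, att_dim)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)
```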
Datasets.

To evaluate how the representation quality depends on the amount of training data, we train our models on datasets of different sizes. The smallest dataset, used for all types of experiments, is Multi30k (Elliott et al., 2016), which consists of only 29k training images with English captions and their translations into German, French, and Czech.
For the monolingual experiments (LM and image representation prediction), we further use the English captions from the Flickr30k dataset (Plummer et al., 2015), which contains 5 captions for each image, 145k in total. The largest monolingual dataset we work with is the concatenation of Flickr30k and the COCO dataset (Lin et al., 2014), with 414k descriptions of 82k images.
For textual MT, where parallel data are needed, we also consider an unconstrained setup with additional data harvested from parallel and monolingual corpora (Helcl and Libovický, 2017a; Helcl et al., 2018), combined with the EU Bookshop corpus (Tiedemann, 2012), 200M words in total.
Multimodal MT models are trained on the Multi30k data only.
We fit the CCA on the 29k image-sentence pairs of the training portion of the Multi30k and evaluate on the 1k pairs from the test set.
For STS, we evaluate the representations on the SemEval 2016 dataset (Agirre et al., 2016). The test set consists of 1,186 sentence pairs collected from datasets of newspaper headlines, machine translation post-editing, plagiarism detection, and question-to-question and answer-to-answer matching on Stack Exchange data. Each sentence pair is annotated with a similarity value.
| Model / training data | Task metric | Image retrieval R@10 | STS |
|---|---|---|---|
| Flickr30k + COCO | 11.80 | 23.0 | .378 |
| Flickr30k + COCO | 11.69 | 21.0 | .303 |
| Flickr30k + COCO | 39.4 | 25.4 | .501 |
| Flickr30k + COCO | 38.4 | 28.0 | .451 |
| Flat att. comb. | 34.6 | 14.6 | .487 |
| Hierar. att. comb. | 37.6 | 16.7 | .553 |
| Serial att. comb. | 38.7 | 15.8 | .383 |
| Parallel att. comb. | 38.6 | 16.8 | .398 |
| Flat att. comb. | 37.1 | 16.6 | .385 |
| Hierar. att. comb. | 38.5 | 14.3 | .346 |
4 Results & Discussion
We present the image retrieval and STS results along with the task-specific metrics in Table 1. We observe that on moderately sized datasets, models conditioned on the target language and the visual modality provide a stronger training signal for learning sentence representations than models trained with a simple language-modeling objective.
The unconstrained variant of the RNN MMT models obtains a performance on STS similar to the ELMo and BERT models, even though it was trained on orders of magnitude fewer samples.
We also observe that while the Transformer-based models achieve superior translation quality on the MT tasks, the STS results suggest that the RNN models obtain semantically richer representations. The textual RNN translation models perform better on image retrieval than the Transformer models; the opposite holds for the Imagination models, where the Transformer variants, explicitly trained to predict the image representation, outperform their RNN counterparts. Given these consistent observations, we posit that the Transformer-based models, while achieving good performance on the tasks they are trained for, seem to ignore image information otherwise.
The slight difference between the image retrieval performance of the Imaginet and Imagination models suggests that training the representation with the vision signal and with the target-language signal is complementary.
We also evaluated the STS performance of the representations after the CCA projections. The Spearman correlation is consistently lower.
The encoders of the multimodal MT models that explicitly use the visual input in the decoder achieve significantly lower image retrieval scores. This observation suggests that the textual encoder seems to ignore information about the visual aspects of meaning because the decoder has full access to this information from the explicit conditioning on the image representation. This observation is in line with the conclusions of the adversarial evaluation (Elliott, 2018; Libovický et al., 2018).
Our experiments also indicate that the performance on STS is highly correlated with the translation quality for both the RNN-based and the Transformer-based models (see Figure 2), which contrasts with the findings of Cífka and Bojar (2018), who measured the correlation of BLEU and STS under similar conditions. In addition, we observe that Transformers perform significantly worse than their RNN counterparts on STS. The translation quality also appears to be highly correlated with the amount of available training data and with the image retrieval abilities of the representation (see Table 2).
| Correlation of BLEU and … | Trans. | RNN |
|---|---|---|
| Image retrieval R@10 | .825 | .700 |
| Training data size | .867 | .724 |
The results of DC for selected models are shown in Figure 1. The DC between the image and sentence representations is proportional to the image retrieval score; the image representations also show the lowest distance correlation with the other representations overall, which results in poorer CCA-based projections. Sentence representations appear to be more similar across tasks than across architectures. Most notable is the mutual similarity of the representations from all MT systems, regardless of the architecture and the modality setup.
5 Conclusions

We conducted a set of controlled and thorough experiments to assess the representational qualities of monomodal and multimodal sequential models with the predominant architectures. Our experiments show that grounding, in either the visual modality or another language, and especially their combination in the Imagination models, results in better representations than LMs trained on datasets of similar sizes. We also showed that the translation quality of the MT models is highly correlated both with the ability of the models to retain image information and with the semantic properties of the representations.
The RNN models tend to perform better on both the semantic similarity and image retrieval tasks, although they do not reach the same translation quality. We hypothesize that this is due to differences in the architectures: the Transformer network can directly access information that the RNN needs to pass along through its hidden states.
Acknowledgments

Jindřich received funding from the Czech Science Foundation, grant no. 18-02196S.
References

- Agirre et al. (2016) Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 Task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497–511, San Diego, CA, USA. Association for Computational Linguistics.
- Agirre et al. (2012) Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 Task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385–393, Montréal, Canada. Association for Computational Linguistics.
- Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
- Caglayan et al. (2017) Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, Marc Masana, Luis Herranz, and Joost van de Weijer. 2017. LIUM-CVC submissions for WMT17 multimodal translation task. In Proceedings of the Second Conference on Machine Translation, pages 432–439, Copenhagen, Denmark. Association for Computational Linguistics.
- Calixto and Liu (2017) Iacer Calixto and Qun Liu. 2017. Incorporating global visual features into attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 992–1003, Copenhagen, Denmark. Association for Computational Linguistics.
- Cho et al. (2014) Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar. Association for Computational Linguistics.
- Chrupała et al. (2015) Grzegorz Chrupała, Ákos Kádár, and Afra Alishahi. 2015. Learning language through pictures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 112–118, Beijing, China. Association for Computational Linguistics.
- Cífka and Bojar (2018) Ondřej Cífka and Ondřej Bojar. 2018. Are BLEU and meaning representation in opposition? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1362–1371, Melbourne, Australia. Association for Computational Linguistics.
- Conneau and Kiela (2018) Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC), pages 1699–1704, Miyazaki, Japan. European Language Resources Association (ELRA).
- Conneau et al. (2017) Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. CoRR, abs/1705.02364.
- Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
- Elliott (2018) Desmond Elliott. 2018. Adversarial evaluation of multimodal machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2974–2978, Brussels, Belgium. Association for Computational Linguistics.
- Elliott et al. (2016) Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30k: Multilingual English-German image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70–74, Berlin, Germany. Association for Computational Linguistics.
- Elliott and Kádár (2017) Desmond Elliott and Ákos Kádár. 2017. Imagination improves multimodal translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 130–141, Taipei, Taiwan. Asian Federation of Natural Language Processing.
- Firat and Cho (2016) Orhan Firat and Kyunghyun Cho. 2016. Conditional gated recurrent unit with attention mechanism. https://github.com/nyu-dl/dl4mt-tutorial/blob/master/docs/cgru.pdf. Published online, version adbaeea.
- Gerz et al. (2016) Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A large-scale evaluation set of verb similarity. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2173–2182, Austin, TX, USA. Association for Computational Linguistics.
- Gong et al. (2014) Yunchao Gong, Liwei Wang, Micah Hodosh, Julia Hockenmaier, and Svetlana Lazebnik. 2014. Improving image-sentence embeddings using large weakly annotated photo collections. In Computer Vision – ECCV 2014, pages 529–545, Cham, Switzerland. Springer International Publishing.
- Hardoon et al. (2004) David R Hardoon, Sandor Szedmak, and John Shawe-Taylor. 2004. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639–2664.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, Las Vegas, NV, USA. IEEE Computer Society.
- Helcl and Libovický (2017a) Jindřich Helcl and Jindřich Libovický. 2017a. CUNI system for the WMT17 multimodal translation task. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pages 450–457, Copenhagen, Denmark. Association for Computational Linguistics.
- Helcl and Libovický (2017b) Jindřich Helcl and Jindřich Libovický. 2017b. Neural Monkey: An open-source tool for sequence learning. The Prague Bulletin of Mathematical Linguistics, 107(1):5–17.
- Helcl et al. (2018) Jindřich Helcl, Jindřich Libovický, and Dušan Variš. 2018. CUNI system for the WMT18 multimodal translation task. In Proceedings of the Third Conference on Machine Translation, pages 622–629, Brussels, Belgium. Association for Computational Linguistics.
- Hill et al. (2015) Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665–695.
- Hotelling (1936) Harold Hotelling. 1936. Relations between two sets of variates. Biometrika, 28(3/4):321–377.
- Howard and Ruder (2018) Jeremy Howard and Sebastian Ruder. 2018. Fine-tuned language models for text classification. CoRR, abs/1801.06146.
- Lazaridou et al. (2015) Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2015. Combining language and vision with a multimodal skip-gram model. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 153–163, Denver, CO, USA. Association for Computational Linguistics.
- Libovický and Helcl (2017) Jindřich Libovický and Jindřich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 196–202, Vancouver, Canada. Association for Computational Linguistics.
- Libovický et al. (2018) Jindřich Libovický, Jindřich Helcl, and David Mareček. 2018. Input combination strategies for multi-source transformer decoder. In Proceedings of the Third Conference on Machine Translation, pages 253–260, Brussels, Belgium. Association for Computational Linguistics.
- Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge J Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Computer Vision – ECCV 2014, pages 740–755, Cham, Switzerland. Springer International Publishing.
- Mikolov et al. (2010) Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association, pages 1045–1048, Makuhari, Japan. International Speech Communication Association.
- Peters et al. (2018) Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, LA, USA. Association for Computational Linguistics.
- Plummer et al. (2015) Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pages 2641–2649, Santiago, Chile. IEEE Computer Society.
- Poerner et al. (2018) Nina Poerner, Benjamin Roth, and Hinrich Schütze. 2018. Interpretable textual neuron representations for NLP. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 325–327, Brussels, Belgium. Association for Computational Linguistics.
- Saphra and Lopez (2018) Naomi Saphra and Adam Lopez. 2018. Language models learn POS first. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 328–330, Brussels, Belgium. Association for Computational Linguistics.
- Székely et al. (2007) Gábor J Székely, Maria L Rizzo, and Nail K Bakirov. 2007. Measuring and testing dependence by correlation of distances. The Annals of Statistics, 35(6):2769–2794.
- Tiedemann (2012) Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC), pages 2214–2218, Istanbul, Turkey. European Language Resources Association (ELRA).
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 6000–6010, Long Beach, CA, USA. Curran Associates, Inc.
- Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
- Yan and Mikolajczyk (2015) Fei Yan and Krystian Mikolajczyk. 2015. Deep correlation for matching images and text. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 3441–3450, Boston, MA, USA. IEEE Computer Society.