Image Captioning with Deep Bidirectional LSTMs

04/04/2016 · by Cheng Wang, et al.

This work presents an end-to-end trainable deep bidirectional LSTM (Long-Short Term Memory) model for image captioning. Our model builds on a deep convolutional neural network (CNN) and two separate LSTM networks. It is capable of learning long term visual-language interactions by making use of history and future context information at high level semantic space. Two novel deep bidirectional variant models, in which we increase the depth of nonlinearity transition in different way, are proposed to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale and vertical mirror are proposed to prevent overfitting in training deep models. We visualize the evolution of bidirectional LSTM internal states over time and qualitatively analyze how our models "translate" image to sentence. Our proposed models are evaluated on caption generation and image-sentence retrieval tasks with three benchmark datasets: Flickr8K, Flickr30K and MSCOCO datasets. We demonstrate that bidirectional LSTM models achieve highly competitive performance to the state-of-the-art results on caption generation even without integrating additional mechanism (e.g. object detection, attention model etc.) and significantly outperform recent methods on retrieval task.



1 Introduction

Automatically describing an image with sentence-level captions has received much attention in recent years [11, 10, 13, 17, 16, 23, 34, 39]. It is a challenging task that integrates visual and language understanding. It requires not only recognizing the visual objects in an image and the semantic interactions between them, but also the ability to capture visual-language interactions and learn how to "translate" visual understanding into sensible sentence descriptions. The most important part of this visual-language modeling is to capture the semantic correlations across image and sentence by learning a multimodal joint model. While some previous models [20, 15, 26, 17, 16] have been proposed to address the problem of image captioning, they either rely on sentence templates or treat it as a retrieval task, ranking the best matching sentence in a database as the caption. Such approaches usually have difficulty generating variable-length and novel sentences. Recent work [11, 10, 13, 23, 34, 39] has indicated that embedding visual and language features into a common semantic space with a relatively shallow recurrent neural network (RNN) can yield promising results.

In this work, we propose novel architectures for the problem of image captioning. Different from previous models, we learn a visual-language space where sentence embeddings are encoded with a bidirectional Long-Short Term Memory (Bi-LSTM) network and visual embeddings are encoded with a CNN. Bi-LSTM is able to summarize long-range visual-language interactions in both forward and backward directions. Inspired by the architectural depth of the human brain, we also explore deep bidirectional LSTM architectures to learn higher-level visual-language embeddings. All proposed models can be trained end-to-end by optimizing a joint loss.

Why bidirectional LSTMs? In unidirectional sentence generation, the general way of predicting the next word $w_t$ with visual context $V$ and history textual context $w_{1:t-1}$ is to maximize $P(w_t \mid V, w_{1:t-1})$. While this unidirectional model includes past context, it cannot exploit future context $w_{t+1:T}$, which could be used for reasoning about an earlier word by maximizing $P(w_t \mid V, w_{t+1:T})$. A bidirectional model overcomes the shortcomings that each unidirectional (forward or backward) model suffers on its own and exploits both past and future dependencies to make a prediction. As shown in Figure 1, two example images with bidirectionally generated sentences intuitively support our assumption that bidirectional captions are complementary; combining them can generate more sensible captions.

Why deeper LSTMs? The recent success of deep CNNs in image classification and object detection [14, 33] demonstrates that deep, hierarchical models can be more efficient at learning representations than shallower ones. This motivated us to explore deeper LSTM architectures in the context of learning bidirectional visual-language embeddings. As claimed in [29], if we consider an LSTM as a composition of multiple hidden layers unfolded in time, the LSTM is already a deep network. But this is a way of increasing "horizontal depth", in which network weights are reused at each time step; it is limited in learning more representative features compared to increasing the "vertical depth" of the network. One straightforward way to design a deep LSTM is to stack multiple LSTM layers as hidden-to-hidden transitions. Alternatively, instead of stacking multiple LSTM layers, we propose to add a multilayer perceptron (MLP) as an intermediate transition between LSTM layers. This not only increases network depth, but also prevents the parameter count from growing dramatically.

Figure 1: Illustration of generated captions. Two example images from the Flickr8K dataset and their best matching captions generated in forward order (blue) and backward order (red). Bidirectional models capture different levels of visual-language interactions (for more evidence see Sec. 4.4). The final caption is the sentence with the higher probability (histogram under sentence). In both examples, the backward caption is selected as the final caption for the corresponding image.

The core contributions of this work are threefold:

  • We propose an end-to-end trainable multimodal bidirectional LSTM (see Sec. 3.2) and its deeper variant models (see Sec. 3.3) that embed image and sentence into a high-level semantic space by exploiting both long-term history and future context.

  • We visualize the evolution of hidden states of bidirectional LSTM units to qualitatively analyze and understand how sentences are generated, conditioned on visual context information over time (see Sec. 4.4).

  • We demonstrate the effectiveness of proposed models on three benchmark datasets: Flickr8K, Flickr30K and MSCOCO. Our experimental results show that bidirectional LSTM models achieve highly competitive performance to the state-of-the-art on caption generation (see Sec.4.5) and perform significantly better than recent methods on retrieval task (see Sec.4.6).

2 Related Work

Multimodal representation learning [27, 35] has significant value in multimedia understanding and retrieval. The shared concept across modalities plays an important role in bridging the “semantic gap” of multimodal data. Image captioning falls into this general category of learning multimodal representations.

Recently, several approaches have been proposed for image captioning. We can roughly classify these methods into three categories. The first category is template-based approaches that generate caption templates based on detecting objects and discovering attributes within an image. For example, the work in [20] parses a whole sentence into several phrases and learns the relationships between phrases and objects within an image. In [15], a conditional random field (CRF) was used to relate objects, attributes and prepositions of image content and predict the best labeling. Other similar methods were presented in [26, 17, 16]. These methods are typically hand-designed and rely on fixed templates, which mostly leads to poor performance in generating variable-length sentences. The second category is retrieval-based approaches, which treat image captioning as a retrieval task: a distance metric is used to retrieve similarly captioned images, and the retrieved captions are then modified and combined to generate a caption [17]. But these approaches generally need additional procedures such as modification and generalization to fit the image query.

The third category, inspired by the successful use of CNNs [14, 45] and recurrent neural networks [24, 25, 1], comprises neural-network-based methods [39, 42, 13, 10, 11]. Our work also belongs to this category. Among this work, Kiros et al. [12] can be seen as pioneers in using neural networks for image captioning, with a multimodal neural language model. In their follow-up work [13], Kiros et al. introduced an encoder-decoder pipeline where the sentence is encoded by an LSTM and decoded with a structure-content neural language model (SC-NLM). Socher et al. [34] presented a DT-RNN (Dependency Tree-Recursive Neural Network) to embed sentences into a vector space in order to retrieve images. Later, Mao et al. [23] proposed the m-RNN, which replaces the feed-forward neural language model of [13]. Similar architectures were introduced in NIC [39] and LRCN [4]; both approaches use an LSTM to learn text context. But NIC feeds visual information only at the first time step, while the models of Mao et al. [23] and LRCN [4] consider image context at each time step. Another group of neural-network-based approaches was introduced in [10, 11], where image captions are generated by integrating object detection with an R-CNN (region-CNN) and inferring the alignment between image regions and descriptions.

Most recently, Fang et al. [5] used multi-instance learning and a traditional maximum-entropy language model for description generation. Chen et al. [2] proposed to learn visual representations with an RNN for generating image captions. In [42], Xu et al. introduced the attention mechanism of the human visual system into the encoder-decoder framework. It is shown that the attention model can visualize what the model "sees" and yields significant improvements on image caption generation. Unlike those models, our deep LSTM model directly assumes that the mapping between vision and language is antisymmetric, and it dynamically learns long-term bidirectional and hierarchical visual-language interactions. This proves to be very effective for generation and retrieval tasks, as we demonstrate in Sec. 4.5 and Sec. 4.6.

3 Model

In this section, we describe our multimodal bidirectional LSTM model (Bi-LSTM for short) and explore its deeper variants. We first briefly introduce the LSTM, which is at the core of our model. The LSTM we use is described in [44].

3.1 Long Short Term Memory

Our model builds on the LSTM cell, which is a particular form of traditional recurrent neural network (RNN). It has been successfully applied to machine translation [3], speech recognition [8] and sequence learning [36]. As shown in Figure 2, reading and writing the memory cell is controlled by a group of sigmoid gates. At a given time step $t$, the LSTM receives inputs from different sources: the current input $x_t$, the previous hidden state of all LSTM units $h_{t-1}$, as well as the previous memory cell state $c_{t-1}$. The gates are updated at time step $t$ for the given inputs $x_t$, $h_{t-1}$ and $c_{t-1}$ as follows:

$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)$  (1)
$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$  (2)
$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$  (3)
$g_t = \phi(W_{xg} x_t + W_{hg} h_{t-1} + b_g)$  (4)
$c_t = f_t \odot c_{t-1} + i_t \odot g_t$  (5)
$h_t = o_t \odot \phi(c_t)$  (6)

where the $W$ terms are weight matrices learned by the network and the $b$ terms are bias vectors. $\sigma$ is the sigmoid activation function $\sigma(x) = 1/(1+e^{-x})$ and $\phi$ denotes the hyperbolic tangent $\phi(x) = \tanh(x)$. $\odot$ denotes the element-wise product with a gate value. The LSTM hidden output $h_t = o_t \odot \phi(c_t)$ is used to predict the next word by a Softmax function with parameters $W_s$ and $b_s$:

$p_{t+1} = \mathrm{Softmax}(W_s h_t + b_s)$  (7)

where $p_{t+1}$ is the probability distribution of the predicted word.

Figure 2: Long Short Term Memory (LSTM) cell. It consists of an input gate $i_t$, a forget gate $f_t$, a memory cell $c_t$ and an output gate $o_t$. The input gate decides whether to let the incoming signal through to the memory cell or block it. The output gate can allow new output or suppress it. The forget gate decides whether to remember or forget the cell's previous state. The cell state is updated by feeding the previous cell output back to itself via recurrent connections across two consecutive time steps.

Our key motivation for choosing the LSTM is that it can learn long-term temporal dependencies while avoiding the exploding and vanishing gradient problems that traditional RNNs suffer from during back-propagation.
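As a concrete reference, the gate updates in Eqs. (1)-(6) can be sketched in a few lines of numpy. This is an illustrative sketch only: the packed weight layout, names, and the uniform [-0.08, 0.08] initialization (borrowed from Sec. 4.2) are our assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step. W packs all four gate weights: shape (4H, D+H)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x_t, h_prev]) + b
    i = sigmoid(z[0:H])        # input gate,    Eq. (1)
    f = sigmoid(z[H:2*H])      # forget gate,   Eq. (2)
    o = sigmoid(z[2*H:3*H])    # output gate,   Eq. (3)
    g = np.tanh(z[3*H:4*H])    # cell candidate, Eq. (4)
    c = f * c_prev + i * g     # cell update,   Eq. (5)
    h = o * np.tanh(c)         # hidden output, Eq. (6)
    return h, c

rng = np.random.default_rng(0)
D, H = 8, 16                               # input and hidden sizes (toy values)
W = rng.uniform(-0.08, 0.08, (4 * H, D + H))
b = np.zeros(4 * H)
h, c = lstm_step(rng.standard_normal(D), np.zeros(H), np.zeros(H), W, b)
print(h.shape, c.shape)  # (16,) (16,)
```

Because $h_t = o_t \odot \tanh(c_t)$ with $o_t \in (0,1)$, every component of the hidden output is bounded in (-1, 1), which is one reason the gated update is numerically stable over long sequences.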

Figure 3:

Multimodal Bidirectional LSTM. L1: sentence embedding layer. L2: T-LSTM layer. L3: M-LSTM layer. L4: Softmax layer. We feed the sentence in both forward (blue arrows) and backward (red arrows) order, which allows our model to summarize context information from both the left and the right side when generating the sentence word by word over time. Our model is end-to-end trainable by minimizing a joint loss.

Figure 4: Illustrations of proposed deep architectures for image captioning. The network in (a) is commonly used in previous work, e.g. [4, 23]. (b) Our proposed Bidirectional LSTM (Bi-LSTM). (c) Our proposed Bidirectional Stacked LSTM (Bi-S-LSTM). (d) Our proposed Bidirectional LSTM with fully connected (FC) transition layer (Bi-F-LSTM).

3.2 Bidirectional LSTM

In order to make use of both past and future context information of a sentence when predicting a word, we propose a bidirectional model that feeds the sentence to the LSTM in forward and backward order. Figure 3 presents an overview of our model. It comprises three modules: a CNN for encoding image inputs, a Text-LSTM (T-LSTM) for encoding sentence inputs, and a Multimodal LSTM (M-LSTM) for embedding visual and textual vectors into a common semantic space and decoding them to a sentence. The bidirectional LSTM is implemented with two separate LSTM layers that compute the forward hidden sequence $\overrightarrow{h}$ and the backward hidden sequence $\overleftarrow{h}$. The forward LSTM starts at time $t = 1$ and the backward LSTM starts at time $t = T$. Formally, for a raw image input $I$, a forward-order sentence $\overrightarrow{S}$ and a backward-order sentence $\overleftarrow{S}$, the encoding is performed as

$V = C(I;\, \Theta_C), \quad \overrightarrow{h} = T(\overrightarrow{E}\,\overrightarrow{S};\, \Theta_T), \quad \overleftarrow{h} = T(\overleftarrow{E}\,\overleftarrow{S};\, \Theta_T)$  (8)

where $C$ and $T$ represent the CNN and the T-LSTM respectively, and $\Theta_C$, $\Theta_T$ are their corresponding weights. $\overrightarrow{E}$ and $\overleftarrow{E}$ are the bidirectional embedding matrices learned by the network. The encoded visual and textual representations are then embedded by the multimodal LSTM:

$\overrightarrow{m}_t = M(\overrightarrow{h}_t, V;\, \Theta_M), \quad \overleftarrow{m}_t = M(\overleftarrow{h}_t, V;\, \Theta_M)$  (9)

where $M$ denotes the M-LSTM and $\Theta_M$ its weights. $M$ aims to capture the correlation between visual context and words at different time steps. We feed the visual vector $V$ to the model at each time step to capture strong visual-word correlations. On top of the M-LSTM are Softmax layers which compute the probability distribution of the next predicted word by

$p_{t+1} = \mathrm{Softmax}(W_s m_t + b_s)$  (10)

where $p_{t+1} \in \mathbb{R}^K$ and $K$ is the vocabulary size.
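The bidirectional encoding of Eq. (8) can be sketched as two separate LSTM passes over the same sentence, one in each order. The helper names (`run_lstm`, `Wf`, `Wb`) and the toy dimensions are our assumptions for illustration; a real implementation would use embedded word vectors and the paper's Caffe-based LSTM.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """Standard LSTM step with packed gate weights W: (4H, D+H)."""
    H = h.shape[0]
    z = W @ np.concatenate([x, h]) + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g
    return o * np.tanh(c), c

def run_lstm(seq, W, b, H):
    """Unroll one LSTM over a word sequence; return all hidden states."""
    h, c, hs = np.zeros(H), np.zeros(H), []
    for x in seq:
        h, c = lstm_step(x, h, c, W, b)
        hs.append(h)
    return np.stack(hs)

# Two separate LSTMs (separate weights Wf, Wb), fed the sentence in
# forward order (t = 1..T) and backward order (t = T..1) respectively.
rng = np.random.default_rng(1)
T, D, H = 5, 8, 16
words = [rng.standard_normal(D) for _ in range(T)]   # stand-ins for embeddings
Wf = rng.uniform(-0.08, 0.08, (4 * H, D + H))
Wb = rng.uniform(-0.08, 0.08, (4 * H, D + H))
b = np.zeros(4 * H)
h_fwd = run_lstm(words, Wf, b, H)
h_bwd = run_lstm(words[::-1], Wb, b, H)
print(h_fwd.shape, h_bwd.shape)  # (5, 16) (5, 16)
```

Each direction produces its own hidden sequence; in the full model these are fed (together with the visual vector $V$) to the forward and backward M-LSTMs of Eq. (9).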

3.3 Deeper LSTM architecture

To design deeper LSTM architectures, in addition to directly stacking multiple LSTMs on top of each other, which we name Bi-S-LSTM (Figure 4(c)), we propose to use a fully connected layer as an intermediate transition layer. Our motivation comes from [29], in which DT(S)-RNN (deep transition RNN with shortcut) is designed by adding a hidden-to-hidden multilayer perceptron (MLP) transition; such a network is arguably easier to train. Inspired by this, we extend Bi-LSTM (Figure 4(b)) with a fully connected layer, yielding what we call Bi-F-LSTM (Figure 4(d)); a shortcut connection between input and hidden states is introduced to make the model easier to train. The aim of both extension models is to learn an extra hidden transition function. In Bi-S-LSTM,

$h_t^{l+1} = \mathrm{LSTM}\big(W h_t^{l},\; V h_{t-1}^{l+1}\big)$  (11)

where $h_t^l$ denotes the hidden state of the $l$-th layer at time $t$, and $W$ and $V$ are the matrices connecting to the transition layer (see also Figure 5(L)). For readability, we consider training in one direction and suppress bias terms. Similarly, Bi-F-LSTM learns a hidden transition function by

$h_t = r\big(W_f \,{\frown}(x_t,\, r(W_r x_t))\big)$  (12)

where ${\frown}$ is the operator that concatenates $x_t$ and its abstraction $r(W_r x_t)$ into a long hidden state (see also Figure 5(R)), and $r$ denotes the rectified linear unit (ReLU) activation used in the transition layer, which performs $r(x) = \max(0, x)$.
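The shortcut-style fully connected transition of the Bi-F-LSTM can be sketched as follows. This is a minimal sketch under our assumptions: the weight names (`W_r`, `W_t`) and sizes are hypothetical, and the LSTM layers surrounding the transition are omitted.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def fc_transition(h, W_r, W_t):
    """Intermediate FC transition with shortcut: concatenate the
    incoming state with its ReLU abstraction, then project."""
    abstraction = relu(W_r @ h)                    # extra nonlinearity
    long_state = np.concatenate([h, abstraction])  # the concat operator
    return relu(W_t @ long_state)

rng = np.random.default_rng(2)
H = 16
W_r = rng.uniform(-0.08, 0.08, (H, H))
W_t = rng.uniform(-0.08, 0.08, (H, 2 * H))  # projects the doubled state back to H
out = fc_transition(rng.standard_normal(H), W_r, W_t)
print(out.shape)  # (16,)
```

Note how the parameter cost of this transition is only $3H^2$ per layer boundary, much smaller than inserting a full extra LSTM layer, which matches the paper's motivation of adding depth without the parameter count growing dramatically.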

Figure 5: Transition for Bi-S-LSTM(L) and Bi-F-LSTM(R)

3.4 Data Augmentation

One of the most challenging aspects of training deep bidirectional LSTM models is preventing overfitting. Since our largest dataset has only 80K images [21], overfitting can occur easily, so we adopted several techniques commonly used in the literature, such as fine-tuning a pre-trained visual model, weight decay, dropout and early stopping. Additionally, it has been shown that data augmentation such as random cropping and horizontal mirroring [32, 22], or adding noise, blur and rotation [40], can effectively alleviate overfitting. Inspired by this, we designed new data augmentation techniques to increase the number of image-sentence pairs. Our implementation operates on the visual model as follows:

  • Multi-Crop: Instead of cropping the input image randomly, we crop at the four corners and the center region. We found that random cropping tends to select the center region and can easily cause overfitting. By cropping the four corners and the center, the variation of network inputs is increased, alleviating overfitting.

  • Multi-Scale: To further increase the number of image-sentence pairs, we rescale the input image to multiple scales. Each input image is first resized to 256×256; then we randomly select a region of size s·256 × s·256, where s is a scale ratio. s = 1 means no multi-scale operation is performed on the given image. Finally, we resize the region to the AlexNet input size of 227×227 or the VggNet input size of 224×224.

  • Vertical Mirror: Motivated by the effectiveness of the widely used horizontal mirror, it is natural to also consider the vertical mirror of an image for the same purpose.

These augmentation techniques are implemented in a real-time fashion: each input image is randomly transformed by one of the augmentations before being fed to the network for training. In principle, our data augmentation can increase the number of image-sentence training pairs roughly 40-fold (5×4×2).
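The three augmentations above can be sketched with plain numpy slicing. This is a simplified sketch: the resize back to the network input size is omitted, and the function names are ours.

```python
import numpy as np

def multi_crop(img, size):
    """Crop the four corners and the center region (not random positions)."""
    h, w = img.shape[:2]
    tops = [(0, 0), (0, w - size), (h - size, 0), (h - size, w - size),
            ((h - size) // 2, (w - size) // 2)]
    return [img[y:y + size, x:x + size] for y, x in tops]

def multi_scale_region(img, s):
    """Select a central region of side s*256 from a 256x256 image;
    the final resize to 227x227 / 224x224 is omitted for brevity."""
    side = int(round(256 * s))
    off = (256 - side) // 2
    return img[off:off + side, off:off + side]

def vertical_mirror(img):
    """Flip the image top-to-bottom."""
    return img[::-1]

img = np.zeros((256, 256, 3), dtype=np.uint8)
crops = multi_crop(img, 227)
region = multi_scale_region(img, 0.9)
print(len(crops), crops[0].shape, region.shape)  # 5 (227, 227, 3) (230, 230, 3)
```

Composing the 5 crop positions, 4 scale ratios and 2 mirror states gives the roughly 40-fold (5×4×2) increase in training pairs mentioned above.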

3.5 Training and Inference

Our model is end-to-end trainable using Stochastic Gradient Descent (SGD). The joint loss function $L$ is computed by accumulating the Softmax losses of the forward and backward directions. Our objective is to minimize $L$, which is equivalent to maximizing the probabilities of correctly generated sentences. We compute the gradients with the Back-Propagation Through Time (BPTT) algorithm.

The trained model is used to predict a word given the image context $V$ and the previous textual context, by maximizing $P(w_t \mid V, w_{1:t-1})$ in forward order, or $P(w_t \mid V, w_{t+1:T})$ in backward order. We use special start tokens to initialize the forward and backward directions respectively. Finally, given the sentences generated in the two directions, we decide the final sentence for a given image according to the summation of word probabilities within each sentence:

$p(\overrightarrow{S}) = \sum_{t} p(\overrightarrow{w}_t)$  (13)
$p(\overleftarrow{S}) = \sum_{t} p(\overleftarrow{w}_t)$  (14)
$S = \arg\max_{S' \in \{\overrightarrow{S},\, \overleftarrow{S}\}} p(S')$  (15)

Following previous work, we adopt beam search, which considers the best $k$ candidate sentences at time $t$ to infer the sentence at the next time step. We fix the beam size in all experiments, although, as reported in [39], results better by 2 BLEU [28] points on average can be achieved with a larger beam size.
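The final-caption selection of Sec. 3.5 (keep whichever direction's sentence scores higher) can be sketched as below. The dictionary layout and the toy probability values are our invention; the scoring follows the summation of word probabilities described above.

```python
def sentence_score(word_probs):
    """Summed word probability within the sentence."""
    return sum(word_probs)

def final_caption(forward, backward):
    """Select the direction whose generated sentence has the higher score."""
    return max([forward, backward], key=lambda c: sentence_score(c["probs"]))

# Toy per-word probabilities for the Figure 8(d) example captions.
fwd = {"caption": "a train is pulling into a train station",
       "probs": [0.9, 0.8, 0.7, 0.6, 0.9, 0.8, 0.9, 0.7]}
bwd = {"caption": "a train on the tracks at a train station",
       "probs": [0.9, 0.9, 0.8, 0.8, 0.9, 0.8, 0.9, 0.9]}
print(final_caption(fwd, bwd)["caption"])  # a train on the tracks at a train station
```

Since the forward and backward sentences can have different lengths, a practical variant might normalize the score by sentence length; the paper's rule as stated simply sums the per-word probabilities.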

4 Experiments

In this section, we design several groups of experiments to accomplish the following objectives:

  • Qualitatively analyze and understand how bidirectional multimodal LSTM learns to generate sentence conditioned by visual context information over time.

  • Measure the benefits and performance of the proposed bidirectional model and its deeper variants, whose nonlinearity depth is increased in different ways.

  • Compare our approach with state-of-the-art methods in terms of sentence generation and image-sentence retrieval tasks on popular benchmark datasets.

4.1 Datasets

To validate the effectiveness, generality and robustness of our models, we conduct experiments on three benchmark datasets: Flickr8K [31], Flickr30K [43] and MSCOCO [21].

Flickr8K. It consists of 8,000 images, each with 5 sentence-level captions. We follow the standard dataset splits provided by the authors: 6,000/1,000/1,000 images for training/validation/testing respectively.

Flickr30K. An extended version of Flickr8K. It has 31,783 images, each with 5 captions. We follow the publicly accessible dataset splits by Karpathy et al. [11] (http://cs.stanford.edu/people/karpathy/deepimagesent/): 29,000/1,000/1,000 images for training/validation/testing respectively.

MSCOCO. This is a recently released dataset covering 82,783 images for training and 40,504 images for validation, each with 5 sentence annotations. Since standard splits are lacking, we also follow the splits provided by Karpathy et al. [11]: 80,000 training images and 5,000 images each for validation and testing.

4.2 Implementation Details

Visual features. We use two visual models for encoding images: the Caffe [9] reference model, which is a pre-trained AlexNet [14], and the 16-layer VggNet model [33]. We extract features from the last fully connected layer and feed them to the LSTM visual-language model. Previous work [39, 23] has demonstrated that more powerful image models such as GoogleNet [37] and VggNet [33] can achieve promising improvements. To make a fair comparison with recent work, we select these two widely used models for our experiments.

Textual features. We first represent each word in a sentence as a one-hot vector $w \in \{0,1\}^K$, where $K$ is the vocabulary size built on the training sentences and differs across datasets. By performing basic tokenization and removing words that occur fewer than 5 times in the training set, we obtain vocabularies of 2,028, 7,400 and 8,801 words for Flickr8K, Flickr30K and MSCOCO respectively.
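The vocabulary construction and one-hot encoding just described can be sketched as follows; the tokenizer here is a naive whitespace split, which is an assumption (the paper only says "basic tokenization").

```python
from collections import Counter

def build_vocab(train_sentences, min_count=5):
    """Basic tokenization + drop words occurring fewer than min_count times."""
    counts = Counter(w for s in train_sentences for w in s.lower().split())
    words = sorted(w for w, c in counts.items() if c >= min_count)
    return {w: i for i, w in enumerate(words)}

def one_hot(word, vocab):
    """One-hot vector of length K (the vocabulary size)."""
    vec = [0] * len(vocab)
    vec[vocab[word]] = 1
    return vec

sents = ["a dog runs"] * 5 + ["a cat sits"] * 4 + ["a dog sits"] * 2
vocab = build_vocab(sents)
print(sorted(vocab))          # ['a', 'dog', 'runs', 'sits'] -- "cat" occurs only 4 times
print(one_hot("dog", vocab))  # [0, 1, 0, 0]
```

In the full model, these sparse one-hot vectors are multiplied by the embedding matrices of Eq. (8) to obtain dense word vectors before entering the T-LSTM.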

Figure 6: Visualization of the LSTM cell: (a) input, (b) input gate, (c) forget gate, (d) cell state, (e) output gate, (f) output. The horizontal axis corresponds to time steps; the vertical axis is the cell index. We visualize the gates and cell states of the first 32 Bi-LSTM units of the T-LSTM in the forward direction over 11 time steps.
Figure 7: Pattern of the first 96 hidden units chosen at each layer of the Bi-LSTM in both forward and backward directions: (a) T-LSTM (forward) units, (b) T-LSTM (backward) units, (c) M-LSTM (forward) units, (d) M-LSTM (backward) units, (e) word probabilities (forward), (f) word probabilities (backward), (g) generated words and their corresponding vocabulary indices (forward: "A man in a black jacket is walking down the street"; backward: "Street the on walking is suit a in man a"). The vertical axis represents time steps; the horizontal axis corresponds to different LSTM units. In this example, we visualize the T-LSTM layer for text only, the M-LSTM layer for both text and image, and the Softmax layer computing the word probability distribution. The model was trained on the Flickr30K dataset, generating the sentence word by word at each time step. In (g), the predicted words at different time steps and their indices in the vocabulary can also be read from (e) and (f) (the highlighted point in each row); the word with the highest probability is selected as the predicted word.
Figure 8 examples (forward caption / backward caption):
(a) A woman in a tennis court holding a tennis racket. / A woman getting ready to hit a tennis ball.
(b) A living room with a couch and a table. / Two chairs and a table in a living room.
(c) A giraffe standing in a zoo enclosure with a baby in the background. / A couple of giraffes are standing at a zoo.
(d) A train is pulling into a train station. / A train on the tracks at a train station.
Figure 8: Examples of generated captions for given query image on MSCOCO validation set. Blue-colored captions are generated in forward direction and red-colored captions are generated in backward direction. The final caption is selected according to equation (13) which selects the sentence with the higher probability. The final captions are marked in bold.

Our work uses the LSTM implementation of [4] on the Caffe framework. All of our experiments were conducted on Ubuntu 14.04 with 16G RAM and a single Titan X GPU with 12G memory. Our LSTMs use 1000 hidden units, with weights initialized uniformly from [-0.08, 0.08]. The batch sizes are 150, 100, 100 and 32 for the Bi-LSTM, Bi-S-LSTM, Bi-F-LSTM and Bi-LSTM (VGG) models respectively. Models were trained with a fixed learning rate (except for Bi-LSTM (VGG)), weight decay of 0.0005 and momentum of 0.9. Each model was trained for 18–35 epochs with early stopping. The code for this work can be found at https://github.com/deepsemantic/image_captioning.

4.3 Evaluation Metrics

We evaluate our models on two tasks: caption generation and image-sentence retrieval. For caption generation we follow previous work and use BLEU-N (N=1,2,3,4) scores [28]:

$\mathrm{BLEU}\text{-}N = \min\!\big(1,\, e^{1 - r/c}\big) \cdot \exp\!\Big(\sum_{n=1}^{N} w_n \log p_n\Big)$  (16)

where $r$ and $c$ are the lengths of the reference sentence and the generated sentence, $p_n$ is the modified $n$-gram precision, and $w_n$ is the (typically uniform) $n$-gram weight. We also report METEOR [18] and CIDEr [38] scores for further comparison. For image-sentence retrieval (image query sentence and vice versa), we adopt R@K (K=1,5,10) and Med r as evaluation metrics, where R@K is the recall rate at the top K candidates and Med r is the median rank of the first retrieved ground-truth image or sentence. All reported metric scores are computed by the MSCOCO caption evaluation server (https://github.com/tylin/coco-caption), which is commonly used for the image captioning challenge (http://mscoco.org/home/).
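A single-reference BLEU-N, as in Eq. (16) with uniform weights $w_n = 1/N$, can be sketched as below. This is a simplified illustration (the official coco-caption toolkit handles multiple references and corpus-level statistics).

```python
import math
from collections import Counter

def modified_precision(ref, hyp, n):
    """Clipped n-gram precision p_n against a single reference."""
    def ngrams(toks):
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    r, h = ngrams(ref), ngrams(hyp)
    overlap = sum(min(c, r[g]) for g, c in h.items())
    return overlap / max(sum(h.values()), 1)

def bleu_n(ref, hyp, N=4):
    """BLEU-N = min(1, e^(1 - r/c)) * exp(sum_n (1/N) log p_n)."""
    ref, hyp = ref.split(), hyp.split()
    ps = [modified_precision(ref, hyp, n) for n in range(1, N + 1)]
    if min(ps) == 0:
        return 0.0  # a zero precision would make the log-sum undefined
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # brevity penalty
    return bp * math.exp(sum(math.log(p) / N for p in ps))

s = "a train is pulling into a train station"
print(bleu_n(s, s))  # 1.0 for a perfect match
```

The brevity penalty keeps a short hypothesis from scoring well on precision alone, which matters here because forward and backward captions can differ in length.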

4.4 Visualization and Qualitative Analysis

The aim of this set of experiments is to visualize the properties of the proposed bidirectional LSTM models and explain how they work when generating sentences word by word over time.

First, we examine the temporal evolution of internal gate states to understand how bidirectional LSTM units retain valuable context information and attenuate unimportant information. Figure 6 shows the input and output data, the patterns of the three sigmoid gates (input, forget and output) as well as the cell states. We can clearly see that dynamic states are periodically distilled into the units from one time step to the next. At the first time step, the input data are sigmoid-modulated by the input gate to values lying within [0,1], and the forget gate values of the different LSTM units are zero. As the time steps increase, the forget gate starts to decide which unimportant information should be forgotten and which useful information should be retained, while the memory cell states and the output gate gradually absorb valuable context information over time, producing a rich representation of the output data.

Next, we examine how visual and textual features are embedded into the common semantic space and used to predict words over time. Figure 7 shows the evolution of hidden units at different layers. At the T-LSTM layer, units are conditioned on textual context from the past and the future; this layer acts as the encoder of the forward and backward sentences. At the M-LSTM layer, LSTM units are conditioned on both visual and textual context; it learns the correlations between the input word sequence and the visual information encoded by the CNN. At a given time step, by removing unimportant information that contributes little to correlating input words and visual context, the units tend to exhibit a sparse pattern and learn more discriminative representations of the inputs. At the top layer, the embedded multimodal representations are used to compute the probability distribution of the next predicted word with a Softmax. It should be noted that, for a given image, the number of words in the sentences generated in the forward and backward directions can differ.

Figure 8 presents some example images with generated captions. From these we found some interesting patterns in bidirectional captions: (1) They cover different semantics; for example, in (b) the forward sentence captures "couch" and "table" while the backward one describes "chairs" and "table". (2) They describe static scenery and infer dynamics; in (a) and (d), one caption describes the static scene while the other presents a potential action or motion that could happen at the next moment. (3) They generate novel sentences; among the generated captions, a significant proportion (88%, measured by randomly selecting 1000 images from the MSCOCO validation set) are novel, i.e. they do not appear in the training set. Yet the generated sentences remain highly similar to ground-truth captions; for example in (d), the forward caption is similar to the ground-truth caption "A passenger train that is pulling into a station" and the backward caption is similar to "a train is in a tunnel by a station". This illustrates that our model has a strong capability for learning visual-language correlations and generating novel sentences.
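The novelty measurement above (the 88% figure) amounts to an exact-match check of generated captions against the training set; a sketch under our assumptions (verbatim matching after lowercasing, which the paper does not spell out):

```python
def novel_fraction(generated, training_captions):
    """Fraction of generated sentences not appearing verbatim in training."""
    train = {c.strip().lower() for c in training_captions}
    novel = [g for g in generated if g.strip().lower() not in train]
    return len(novel) / len(generated)

train = ["a dog runs in the park", "a man rides a horse"]
gen = ["a dog runs in the park",     # seen in training -> not novel
       "a woman rides a horse"]      # unseen -> novel
print(novel_fraction(gen, train))  # 0.5
```

Set membership makes the check O(1) per caption, so scanning 1000 generated captions against hundreds of thousands of training captions is cheap.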

Flickr8K Flickr30K MSCOCO
Models B-1 B-2 B-3 B-4 B-1 B-2 B-3 B-4 B-1 B-2 B-3 B-4
NIC[39] 63 41 27.2 - 42.3 27.7 18.3 66.6 46.1 32.9 24.6
LRCN[4] - - - - 58.8 39.1 25.1 16.5 62.8 44.2 30.4 -
DeepVS[11] 57.9 38.3 24.5 16 57.3 36.9 24.0 15.7 62.5 45 32.1 23
m-RNN[23] 56.5 38.6 25.6 17.0 54 36 23 15 - - - -
m-RNN[23] - - - - 60 41 28 19 67 49 35
Hard-Attention[42]
Bi-LSTM 61.9 43.3 29.7 20.0 58.9 39.3 25.9 17.1 63.4 44.7 30.6 20.6
Bi-S-LSTM 64.2 44.3 29.2 18.6 59.5 40.3 26.9 17.9 63.7 45.7 31.8 21.9
Bi-F-LSTM 63.0 43.7 29.2 19.1 58.6 39.2 26.0 17.4 63.5 44.8 30.7 20.6
Bi-LSTM 62.1 24.4
Table 1: Performance comparison on BLEU-N (higher is better). The superscript "A" means the visual model is AlexNet (or a similar network), "V" is VggNet, "G" is GoogleNet, "-" indicates an unknown value, and "†" indicates different data splits (on MSCOCO, NIC uses 4K images for validation and test; LRCN randomly selects 5K images from the MSCOCO validation set for validation and test; m-RNN uses 4K images for validation and 1K for test). The best results are marked in bold and the second best are underlined. The superscripts also apply to Table 2.

4.5 Results on Caption Generation

Now we compare with state-of-the-art methods. Table 1 presents the comparison results in terms of BLEU-N. Our approach achieves very competitive performance on the evaluated datasets, even with the less powerful AlexNet visual model. We can see that increasing the depth of the LSTM is beneficial for the generation task. The deeper variant models mostly obtain better performance compared to Bi-LSTM, but they are inferior to the latter in B-3 and B-4 on Flickr8K. We conjecture that this is because Flickr8K is a relatively small dataset, making it difficult to train deep models with limited data. One interesting finding is that stacking multiple LSTM layers is generally superior to an LSTM with a fully connected transition layer, although Bi-S-LSTM needs more training time. Replacing AlexNet with VggNet brings significant improvements on all BLEU metrics. We note that a recent work [42] achieves the best results by integrating an attention mechanism [19, 42] into this task. Although we believe incorporating such a powerful mechanism into our framework could bring further improvements, our current models already achieve the best or second-best results on most metrics, and the performance gap between our models and Hard-Attention [42] is small.

A further comparison on METEOR and CIDEr scores is plotted in Figure 9. Without integrating object detection or a more powerful vision model, our model (Bi-LSTM) outperforms DeepVS [11] by a clear margin. It achieves 19.4/49.6 (METEOR/CIDEr) on Flickr8K (compared to 16.7/31.8 for DeepVS) and 16.2/28.2 on Flickr30K (15.3/24.7 for DeepVS). On MSCOCO, our Bi-S-LSTM obtains 20.8/66.6, which exceeds DeepVS's 19.5/66.0.

(a) METEOR score
(b) CIDEr score
Figure 9: METEOR/CIDEr scores on different datasets.
| Dataset | Method | I2S R@1 | I2S R@5 | I2S R@10 | I2S Med r | S2I R@1 | S2I R@5 | S2I R@10 | S2I Med r |
|---|---|---|---|---|---|---|---|---|---|
| Flickr8K | DeViSE [7] | 4.8 | 16.5 | 27.3 | 28 | 5.9 | 20.1 | 29.6 | 29 |
| | SDT-RNN [34] | 4.5 | 18.0 | 28.6 | 32 | 6.1 | 18.5 | 29.0 | 29 |
| | DeFrag [10] | 12.6 | 32.9 | 44.0 | 14 | 9.7 | 29.6 | 42.5 | 15 |
| | Kiros et al. [13] | 13.5 | 36.2 | 45.7 | 13 | 10.4 | 31.0 | 43.7 | 14 |
| | Kiros et al. [13] | 18 | 40.9 | 55 | 8 | 12.5 | 37 | 51.5 | 10 |
| | m-RNN [23] | 14.5 | 37.2 | 48.5 | 11 | 11.5 | 31.0 | 42.4 | 15 |
| | Mind's Eye [2] | 17.3 | 42.5 | 57.4 | 7 | 15.4 | 40.6 | 50.1 | 8 |
| | DeepVS [11] | 16.5 | 40.6 | 54.2 | 7.6 | 11.8 | 32.1 | 44.7 | 12.4 |
| | NIC [39] | 20 | – | 60 | 6 | 19 | – | – | 5 |
| | Bi-LSTM | 21.3 | 44.7 | 56.5 | 6.5 | 15.1 | 37.8 | 50.1 | 9 |
| | Bi-S-LSTM | 19.6 | 43.7 | 55.7 | 7 | 14.5 | 36.4 | 48.3 | 10.5 |
| | Bi-F-LSTM | 19.9 | 44.0 | 56.0 | 7 | 14.9 | 37.4 | 49.8 | 10 |
| | Bi-LSTM (VggNet) | – | – | – | – | – | – | 60.6 | – |
| Flickr30K | DeViSE [7] | 4.5 | 18.1 | 29.2 | 26 | 6.7 | 21.9 | 32.7 | 25 |
| | SDT-RNN [34] | 9.6 | 29.8 | 41.1 | 16 | 8.9 | 29.8 | 41.1 | 16 |
| | Kiros et al. [13] | 14.8 | 39.2 | 50.9 | 10 | 11.8 | 34.0 | 46.3 | 13 |
| | Kiros et al. [13] | 23.0 | 50.7 | 62.9 | 5 | 16.8 | 42.0 | 56.5 | 8 |
| | LRCN [4] | 14 | 34.9 | 47 | 11 | – | – | – | – |
| | NIC [39] | 17 | – | 56 | 7 | 17 | – | 57 | 8 |
| | m-RNN [23] | 18.4 | 40.2 | 50.9 | 10 | 12.6 | 31.2 | 41.5 | 16 |
| | Mind's Eye [2] | 18.5 | 45.7 | 58.1 | 7 | 16.6 | 42.5 | – | 8 |
| | DeFrag [10] | 16.4 | 40.2 | 54.7 | 8 | 10.3 | 31.4 | 44.5 | 13 |
| | DeepVS [11] | 22.2 | 48.2 | 61.4 | – | 15.2 | 37.7 | 50.5 | 9.2 |
| | Bi-LSTM | 18.7 | 41.2 | 52.6 | 8 | 14.0 | 34.0 | 44.0 | 14 |
| | Bi-S-LSTM | 21 | 43.0 | 54.1 | 7 | 15.1 | 35.3 | 46.0 | 12 |
| | Bi-F-LSTM | 20 | 44.4 | 55.2 | 7 | 15.1 | 35.8 | 46.8 | 12 |
| | Bi-LSTM (VggNet) | – | – | – | – | – | – | 55.8 | – |
| MSCOCO | DeepVS [11] | 16.5 | 39.2 | 52.0 | 9 | 10.7 | 29.6 | 42.2 | 14.0 |
| | Bi-LSTM | 10.8 | 28.1 | 38.9 | 18 | 7.8 | 22.4 | 32.8 | 24 |
| | Bi-S-LSTM | 13.4 | 33.1 | 44.7 | 13 | 9.4 | 26.5 | 37.7 | 19 |
| | Bi-F-LSTM | 11.2 | 30 | 41.2 | 16 | 8.3 | 24.9 | 35.1 | 22 |
| | Bi-LSTM (VggNet) | – | – | – | – | – | – | – | – |

Table 2: Comparison with state-of-the-art methods on R@K (higher is better) and Med r (lower is better); I2S = image-to-sentence retrieval, S2I = sentence-to-image retrieval. All scores are computed by averaging the forward and backward results; "–" denotes values that are not available. Approaches using additional object detection are marked in the original table.

4.6 Results on Image-Sentence Retrieval

For retrieval evaluation, we consider both image-to-sentence retrieval and the reverse direction. This is an instance of cross-modal retrieval [6, 30, 41], which has been an active research subject in the multimedia field. Table 2 reports our results on the different datasets. Our models exceed the compared methods on most metrics and match existing results on others. On a few metrics, our models do not outperform Mind's Eye [2], which combines image and text features in ranking (making the task closer to multimodal retrieval), or NIC [39], which employs a more powerful vision model, a large beam size, and model ensembling. While adopting the more powerful VggNet visual model yields significant improvements across all metrics, our results with the less powerful AlexNet remain competitive on some metrics, e.g., R@1 and R@5 on Flickr8K and Flickr30K. We also note that on the relatively small Flickr8K dataset, the shallow model performs slightly better than the deeper ones on the retrieval task, in contrast to the results on the other two datasets. As explained before, we believe deeper LSTM architectures are better suited to ranking on large datasets, which provide enough training data for more complex models; otherwise, overfitting occurs. Increasing data variation with our data augmentation techniques alleviates this to a certain degree, but we foresee further significant gains as the amount of training data grows, reducing the reliance on augmentation. Figure 10 presents some examples from the retrieval experiments. For each caption (image) query, sensible images and descriptive captions are retrieved, showing that our models capture the visual-textual correlations needed for image and sentence ranking.
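R@K and Med r can be computed directly from an image-sentence score matrix. A small sketch, assuming the ground-truth match for query i sits at candidate index i:

```python
def recall_at_k_and_medr(scores, ks=(1, 5, 10)):
    """scores[i][j] is the similarity between query i and candidate j;
    the correct candidate for query i is assumed to sit at index i.
    Returns ({k: recall@k in percent}, median rank)."""
    ranks = []
    for i, row in enumerate(scores):
        # candidates sorted by decreasing similarity
        order = sorted(range(len(row)), key=lambda j: -row[j])
        ranks.append(order.index(i) + 1)  # rank 1 = retrieved first
    ranks.sort()
    n = len(ranks)
    med_r = ranks[n // 2] if n % 2 else (ranks[n // 2 - 1] + ranks[n // 2]) / 2
    recalls = {k: 100.0 * sum(r <= k for r in ranks) / n for k in ks}
    return recalls, med_r
```

Per the protocol noted in the Table 2 caption, `scores` here would be the element-wise average of the forward and backward model scores, e.g. `[[(f + b) / 2 for f, b in zip(fr, br)] for fr, br in zip(fwd, bwd)]`.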

Figure 10: Examples of image retrieval (top) and caption retrieval (bottom) with Bi-S-LSTM on the Flickr30K validation set. Queries are marked in red and the top-4 retrieved results in green. Image-retrieval queries include "snowboarder jumping" and "A man practices boxing"; retrieved captions include, e.g., "Two guys, one in a red uniform and one in a blue uniform, playing soccer" and "A black dog jumping out of the water with a stick in his mouth".

4.7 Discussion

Efficiency. In addition to strong performance, our models are computationally efficient. Table 3 presents the computational costs of the proposed models. We randomly select 10 images from the Flickr8K validation set and run caption generation and image-to-sentence retrieval 5 times each; the table reports the time costs averaged over the 5 runs, excluding network initialization. The cost of caption generation includes computing the image feature, sampling bidirectional captions, and computing the final caption. The cost of retrieval includes computing the image-sentence pair scores (10 × 50 = 500 pairs in total) and ranking the sentences for each image query. As can be seen from Tables 1, 2 and 3, the deeper models incur only slightly higher time costs while yielding significant improvements, and the proposed Bi-F-LSTM strikes a balance between performance and efficiency.

| | Bi-LSTM | Bi-S-LSTM | Bi-F-LSTM |
|---|---|---|---|
| Generation | 0.93s | 1.1s | 0.97s |
| Retrieval | 5.62s | 7.46s | 5.69s |

Table 3: Average time costs for testing 10 images on Flickr8K.
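The timing protocol above (average over 5 runs, initialization excluded) can be sketched as follows; `generate_caption`, `model` and `image` below are hypothetical stand-ins for the actual pipeline:

```python
import time

def avg_runtime(fn, repeats=5):
    """Average wall-clock time of fn() over `repeats` runs.
    Call this after model initialization, so one-off setup cost
    is excluded, mirroring the protocol used for Table 3."""
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        total += time.perf_counter() - start
    return total / repeats
```

Usage would look like `avg_runtime(lambda: generate_caption(model, image))`.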

Challenges in exact comparison. It is difficult to make a direct, exact comparison with related methods due to differences in the dataset split used on MSCOCO. In principle, testing on a smaller validation set can lead to better results, particularly on the retrieval task. Since we strictly follow the dataset splits of [11], we compare against it in most cases. Another challenge is the visual model used for encoding the image inputs: different works employ different models, so to make a fair and comprehensive comparison we adopt the commonly used AlexNet and VggNet in our work.

5 Conclusions

We proposed a bidirectional LSTM model that generates descriptive sentences for images by taking both history and future context into account. We further designed deep bidirectional LSTM architectures that embed images and sentences in a high-level semantic space for learning visual-language models. We also qualitatively visualized the internal states of the proposed model to understand how the multimodal LSTM generates words at consecutive time steps. The effectiveness, generality and robustness of the proposed models were evaluated on several benchmark datasets. Our models achieve highly competitive or state-of-the-art results on both the generation and retrieval tasks. Our future work will focus on exploring more sophisticated language representations (e.g., word2vec) and incorporating multitask learning and attention mechanisms into our model. We also plan to apply our model to other sequence learning tasks such as text recognition and video captioning.

References

  • [1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
  • [2] X. Chen and C. Lawrence Zitnick. Mind’s eye: A recurrent visual representation for image caption generation. In CVPR, pages 2422–2431, 2015.
  • [3] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. EMNLP, 2014.
  • [4] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, pages 2625–2634, 2015.
  • [5] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, and J. Platt. From captions to visual concepts and back. In CVPR, pages 1473–1482, 2015.
  • [6] F. Feng, X. Wang, and R. Li. Cross-modal retrieval with correspondence autoencoder. In ACMMM, pages 7–16. ACM, 2014.
  • [7] A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, and T. Mikolov. Devise: A deep visual-semantic embedding model. In NIPS, pages 2121–2129, 2013.
  • [8] A. Graves, A. Mohamed, and G. E. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pages 6645–6649. IEEE, 2013.
  • [9] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACMMM, pages 675–678. ACM, 2014.
  • [10] A. Karpathy, A. Joulin, and F-F. Li. Deep fragment embeddings for bidirectional image sentence mapping. In NIPS, pages 1889–1897, 2014.
  • [11] A. Karpathy and F-F. Li. Deep visual-semantic alignments for generating image descriptions. In CVPR, pages 3128–3137, 2015.
  • [12] R. Kiros, R. Salakhutdinov, and R. Zemel. Multimodal neural language models. In ICML, pages 595–603, 2014.
  • [13] R. Kiros, R. Salakhutdinov, and R. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
  • [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
  • [15] G. Kulkarni, V. Premraj, V. Ordonez, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. Berg. Babytalk: Understanding and generating simple image descriptions. IEEE Trans. on Pattern Analysis and Machine Intelligence(PAMI), 35(12):2891–2903, 2013.
  • [16] P. Kuznetsova, V. Ordonez, A. C. Berg, T. Berg, and Y. Choi. Collective generation of natural image descriptions. In ACL, volume 1, pages 359–368. ACL, 2012.
  • [17] P. Kuznetsova, V. Ordonez, T. Berg, and Y. Choi. Treetalk: Composition and compression of trees for image descriptions. Trans. of the Association for Computational Linguistics(TACL), 2(10):351–362, 2014.
  • [18] M. Denkowski and A. Lavie. Meteor universal: Language specific translation evaluation for any target language. ACL, page 376, 2014.
  • [19] Y. LeCun, Y. Bengio, and G. E. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • [20] S. Li, G. Kulkarni, T. L. Berg, A. C. Berg, and Y. Choi. Composing simple image descriptions using web-scale n-grams. In CoNLL, pages 220–228. ACL, 2011.
  • [21] T-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740–755. Springer, 2014.
  • [22] X. Lu, Z. Lin, H. Jin, J. Yang, and J. Z. Wang. Rapid: Rating pictorial aesthetics using deep learning. In ACMMM, pages 457–466. ACM, 2014.
  • [23] J. H. Mao, W. Xu, Y. Yang, J. Wang, Z. H. Huang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). ICLR, 2015.
  • [24] T. Mikolov, M. Karafiát, L. Burget, J. Cernockỳ, and S. Khudanpur. Recurrent neural network based language model. In INTERSPEECH, volume 2, page 3, 2010.
  • [25] T. Mikolov, S. Kombrink, L. Burget, J. H. Černockỳ, and S. Khudanpur. Extensions of recurrent neural network language model. In ICASSP, pages 5528–5531. IEEE, 2011.
  • [26] M. Mitchell, X. Han, J. Dodge, A. Mensch, A. Goyal, A. Berg, K. Yamaguchi, T. Berg, K. Stratos, and H. Daumé III. Midge: Generating image descriptions from computer vision detections. In ACL, pages 747–756. ACL, 2012.
  • [27] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Ng. Multimodal deep learning. In ICML, pages 689–696, 2011.
  • [28] K. Papineni, S. Roukos, T. Ward, and W. Zhu. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311–318. ACL, 2002.
  • [29] R. Pascanu, C. Gulcehre, K. Cho, and Y. Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013.
  • [30] J. C. Pereira, E. Coviello, G. Doyle, N. Rasiwasia, G. Lanckriet, R. Levy, and N. Vasconcelos. On the role of correlation and abstraction in cross-modal multimedia retrieval. IEEE Trans. on Pattern Analysis and Machine Intelligence(PAMI), 36(3):521–535, 2014.
  • [31] C. Rashtchian, P. Young, M. Hodosh, and J. Hockenmaier. Collecting image annotations using amazon’s mechanical turk. In NAACL HLT Workshop, pages 139–147. Association for Computational Linguistics, 2010.
  • [32] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, pages 568–576, 2014.
  • [33] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [34] R. Socher, A. Karpathy, Q. V. Le, C. D. Manning, and A. Y. Ng. Grounded compositional semantics for finding and describing images with sentences. Trans. of the Association for Computational Linguistics(TACL), 2:207–218, 2014.
  • [35] N. Srivastava and R. Salakhutdinov. Multimodal learning with deep boltzmann machines. In NIPS, pages 2222–2230, 2012.
  • [36] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112, 2014.
  • [37] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pages 1–9, 2015.
  • [38] R. Vedantam, Z. Lawrence, and D. Parikh. Cider: Consensus-based image description evaluation. In CVPR, pages 4566–4575, 2015.
  • [39] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, pages 3156–3164, 2015.
  • [40] Z. Wang, J. Yang, H. Jin, E. Shechtman, A. Agarwala, J. Brandt, and T. S. Huang. Deepfont: Identify your font from an image. In ACMMM, pages 451–459. ACM, 2015.
  • [41] X. Jiang, F. Wu, X. Li, Z. Zhao, W. Lu, S. Tang, and Y. Zhuang. Deep compositional cross-modal learning to rank via local-global alignment. In ACMMM, pages 69–78. ACM, 2015.
  • [42] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. ICML, 2015.
  • [43] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Trans. of the Association for Computational Linguistics(TACL), 2:67–78, 2014.
  • [44] W. Zaremba and I. Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
  • [45] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, pages 818–833. Springer, 2014.