Multimodal Memory Modelling for Video Captioning

11/17/2016 · by Junbo Wang, et al.

Video captioning, which automatically translates video clips into natural language sentences, is a very important task in computer vision. By virtue of recent deep learning technologies, e.g., convolutional neural networks (CNNs) and recurrent neural networks (RNNs), video captioning has made great progress. However, learning an effective mapping from the visual sequence space to the language space is still a challenging problem. In this paper, we propose a Multimodal Memory Model (M³) to describe videos, which builds a visual and textual shared memory to model the long-term visual-textual dependency and further guide global visual attention on described targets. Specifically, the proposed M³ attaches an external memory to store and retrieve both visual and textual contents by interacting with video and sentence with multiple read and write operations. First, the text representation in the Long Short-Term Memory (LSTM) based text decoder is written into the memory, and the memory contents are read out to guide an attention to select related visual targets. Then, the selected visual information is written into the memory, which will be further read out to the text decoder. To evaluate the proposed model, we perform experiments on two public benchmark datasets: MSVD and MSR-VTT. The experimental results demonstrate that our method outperforms the state-of-the-art methods in terms of BLEU and METEOR.



1 Introduction

Figure 1: The overall framework of multimodal memory modelling (M³) for video captioning. It contains a CNN-based video encoder, a multimodal memory and an LSTM-based text decoder, which are denoted by dashed boxes in different colors. The multimodal memory stores and retrieves both visual and textual information by interacting with video and sentence with multiple read and write operations. The proposed M³ with explicit memory modelling can not only model the long-term visual-textual dependency, but also guide global visual attention for effective video representation. (Best viewed in color)

Automatically describing videos with natural sentences, also called video captioning, is very important for bridging vision and language, and is also a very challenging problem in computer vision. It has plenty of practical applications, e.g., human-robot interaction, video indexing and describing videos for the visually impaired.

Video captioning involves understanding both vision and language, and then building the mapping from visual elements to words. As we know, a video as an image sequence contains rich information about actors, objects, actions, scenes and their interactions. It is very difficult for existing methods using a single visual representation [33] to capture all this information over a long period. Yao et al. [41] attempt to dynamically select multiple visual representations via a temporal attention mechanism which is driven by the hidden representations from a Long Short-Term Memory (LSTM) text decoder. The LSTM text decoder, which integrates the information from both words and selected visual contents, models the sentence generation and guides visual selection. However, recent work [10] has pointed out that LSTM does not work well when the sequence is long enough, and current video captioning benchmark datasets generally have long sentences describing videos. The consequence is that an LSTM-based decoder can neither model the long-term visual-textual dependency well nor guide visual attention to select the described targets.

Recently, neural memory models have been proposed and successfully applied to question answering [39] and dialog systems [8], showing potential advantages for long-term dependency modelling in sequential problems. In addition, as Wang et al. [36] noted, visual working memory is one of the key factors guiding eye movements. That is to say, explicitly introducing memory into video captioning can not only model the long-term visual-textual dependency, but also guide visual attention for better video representation.

In this paper, we propose a Multimodal Memory Model (M³) to describe videos, which builds a visual and textual shared memory to model the long-term visual-textual dependency and further guide global visual attention on described targets. Similar to Neural Turing Machines [10], the proposed M³ attaches an external memory to store and retrieve both visual and textual information by interacting with video and sentence with multiple read and write operations. Fig. 1 shows the overall framework of multimodal memory modelling for video captioning, which consists of three key components: a convolutional neural network (CNN) based video encoder, a multimodal memory and a Long Short-Term Memory (LSTM) based text decoder. (1) The CNN-based video encoder first extracts video frame/clip features using pretrained 2D/3D CNNs which are often used for image/video classification; the extracted features form the original video representation. Similar to [41], temporal soft-attention is used to select the visual information most related to each word. But, very different from [41] which uses the hidden states of an LSTM decoder, we guide the soft-attention with content read from the multimodal memory; the selected visual information is then written back into the memory. (2) The LSTM-based text decoder models sentence generation with an LSTM-RNN architecture, predicting each word conditioned not only on the previous hidden representation but also on content read from the multimodal memory. Besides word prediction, the text decoder also writes its updated representation to the memory. (3) The multimodal memory contains a memory matrix that interacts with video and sentence, e.g., writing hidden representations from the LSTM decoder to memory and reading memory contents back for the decoder. Each write operation updates the multimodal memory.
In Fig. 1, we illustrate the procedure of memory-video/sentence interactions: ① write hidden states to update the memory, ② read the updated memory content to perform soft-attention, ③ write the selected visual information to update the memory again, ④ read the updated memory content for next-word prediction.

We evaluate our model on two public benchmark datasets, i.e., the Microsoft Research Video Description Corpus (MSVD) and Microsoft Research-Video to Text (MSR-VTT). The proposed M³ achieves state-of-the-art performance, which demonstrates the effectiveness of our model.

2 Related Work

In this section, we briefly introduce some existing work that is closely related to our proposed model.

Video Captioning     Video captioning has been researched for a long time due to its importance in bridging vision and language. Various methods have been proposed to solve this problem, which can be categorized into three classes. The first class [12, 20, 30] detects the attributes of given videos and derives the sentence structure from predefined sentence templates; probabilistic graphical models are then used to align the phrases with the attributes. As in template-based image captioning, this kind of method always generates grammatically correct sentences, but loses novelty and flexibility. The second class of methods [17, 37] treats video captioning as a retrieval task: similar videos are retrieved from external databases and the descriptions of the retrieved videos are recomposed to obtain the target sentence. These methods generally generate more natural sentences than the first class, but depend strongly on external databases. The third class of methods, inspired by Neural Machine Translation (NMT) [19, 6], maps a video sequence to a sentence by virtue of recent deep neural networks, e.g., CNNs and RNNs. Venugopalan et al. [33] apply average pooling to extract the features of multiple video frames and use a two-layer LSTM network on these features to generate descriptions. In order to enhance video representation, Ballas et al. [2] exploit the intermediate visual representations extracted from pre-trained image classification models, and Pan et al. [22] propose a hierarchical recurrent neural encoder to explore temporal transitions at different granularities. In order to generate more sentences for each video, Yu et al. [42] exploit a hierarchical recurrent neural network decoder which contains a sentence generator and a paragraph generator. To emphasize the mapping from video to sentence, Yao et al. [41] propose a temporal attention model to align the most relevant visual segments to the generated captions, and Pan et al. [23] propose a long short-term memory with a visual-semantic embedding model. Recently, this third class of deep learning based methods has made great progress in video captioning. In this paper, we augment existing deep learning based models with an external memory to model the long-term visual-textual dependency and guide global visual attention.

Memory Modelling     To extend the memory ability of traditional neural networks, Graves et al. [10] propose the Neural Turing Machine (NTM), which holds an external memory that interacts with the internal state of the network via an attention mechanism. NTM has shown the potential of storing and accessing information over long time periods, which has always been problematic for RNNs, on tasks such as copying, sorting and associative recall. Besides the memory matrix in NTM, memory has also been modelled as continuous and differentiable doubly-linked lists and stacks [18], and as queues and deques [11]. Different from exploring various forms of dynamic storage, Weston et al. [38] model a large long-term static memory whose internal information is not modified by external controllers, which is specially used for reading comprehension. These memory networks have been successfully applied to tasks that need long-term dependency modelling, e.g., textual question answering [3, 14], visual question answering [39] and dialog systems [8]. As far as we know, few memory models have been proposed for video captioning. In this paper, we propose an external multimodal memory that interacts with video and sentence simultaneously.

3 Multimodal Memory Modelling for Video Captioning

In this section, we first introduce the three key components of our model: 1) the convolutional neural network (CNN) based video encoder, 2) the Long Short-Term Memory (LSTM) based text decoder, and 3) the multimodal memory. Then we explain the procedure of model training and inference in detail.

3.1 CNN-Based Video Encoder

Convolutional neural networks (CNNs) have achieved great success in many computer vision tasks recently, e.g., image classification [21] and object detection [9]. Due to the power of representation learning, CNNs pre-trained by these tasks can be directly transferred to other computer vision tasks as generic feature extractors.

In our model, we use pre-trained 2D CNNs to extract appearance features of videos, and pre-trained 3D CNNs to obtain motion features, since temporal dynamics are very important for video understanding. In particular, for an input video, we first sample a fixed number of frames/clips, and then exploit the pre-trained 2D/3D CNNs to extract features of each frame/clip. We denote the obtained video representation as V = {v_1, v_2, ..., v_n}, where n is the number of sampled frames/clips.
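As an illustration of this encoding step, here is a minimal NumPy sketch in which a tiny stand-in callable plays the role of the pretrained 2D/3D CNN; the real model would use, e.g., GoogleNet or C3D activations instead:

```python
import numpy as np

def extract_video_features(frames, cnn_2d, clips=None, cnn_3d=None):
    """Build the video representation V = {v_1, ..., v_n} by running a
    pretrained 2D CNN on sampled frames (appearance) and, optionally,
    a pretrained 3D CNN on short clips (motion).

    cnn_2d / cnn_3d are hypothetical feature extractors: callables that
    map one frame/clip to a 1-D feature vector."""
    feats = [cnn_2d(f) for f in frames]
    if clips is not None and cnn_3d is not None:
        feats += [cnn_3d(c) for c in clips]
    return np.stack(feats)  # shape: (n, d)

# toy stand-in for a pretrained network: per-channel mean of each frame
frames = [np.random.rand(224, 224, 3) for _ in range(4)]
V = extract_video_features(frames, cnn_2d=lambda f: f.mean(axis=(0, 1)))
print(V.shape)  # (4, 3)
```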

3.2 LSTM-Based Text Decoder

Similar to previous work [1, 34, 33], we use an LSTM network [16] as our language model to model the syntactic structure of the sentence, which can deal with input word sequences of arbitrary length by simply setting start and end tags. The LSTM network has a similar architecture to a standard RNN, except that the hidden unit is replaced by an LSTM memory cell unit. Better than standard RNNs, the LSTM network can considerably alleviate the vanishing gradient problem, and has thus accomplished better performance in natural language processing applications [1].

Although the LSTM has many variants, here we use a widely used one described in [43]. It includes a single memory cell, an input activation function, an output activation function and three gates (i.e., input gate, forget gate and output gate). The memory cell c_t records the history of all observed inputs up to the current time by recurrently summarizing the previous memory cell c_{t-1} and the candidate cell state g_t, modulated by a forget gate f_t and an input gate i_t, respectively. The input gate i_t utilizes the input to change the state of the memory cell, the forget gate f_t allows the memory cell to adaptively remember or forget its previous state, and the output gate o_t modulates the state of the memory cell to output the hidden state h_t.

Different from the commonly used unimodal LSTM, we incorporate the fused multimodal information r_t as another input, which is read from our multimodal memory during caption generation as demonstrated in the next section. For given sentences, we use one-hot vector encoding to represent each word. Denoting the input word sequence as {y_1, y_2, ..., y_T} and the embedding vector of word y_t as E[y_t], the hidden activations at time t can be computed as follows.

i_t = σ(W_i E[y_{t-1}] + U_i h_{t-1} + A_i r_t + b_i)     (1)
f_t = σ(W_f E[y_{t-1}] + U_f h_{t-1} + A_f r_t + b_f)     (2)
o_t = σ(W_o E[y_{t-1}] + U_o h_{t-1} + A_o r_t + b_o)     (3)
g_t = φ(W_g E[y_{t-1}] + U_g h_{t-1} + A_g r_t + b_g)     (4)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t     (5)
h_t = o_t ⊙ φ(c_t)     (6)

where the default operation between matrices is matrix multiplication, ⊙ denotes element-wise multiplication, W_*, U_* and A_* denote the shared weight matrices to be learned, and b_* denotes the bias terms. g_t is the input to the memory cell c_t, which is gated by the input gate i_t. σ denotes the element-wise logistic sigmoid function, and φ denotes the hyperbolic tangent function tanh.

For clear illustration, the process of language modelling described above can be abbreviated as follows.

h_t = LSTM(E[y_{t-1}], r_t, h_{t-1})     (7)
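A step of this decoder can be sketched in NumPy as follows; the weight names (W, U, A, b) are illustrative, and the extra input r stands for the vector read from the multimodal memory:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(E_y, h_prev, c_prev, r, W, U, A, b):
    """One decoder step in the spirit of Eqs. (1)-(7): every gate is
    conditioned on the previous word embedding E_y, the previous hidden
    state h_prev and the memory read vector r. W, U, A, b are dicts of
    per-gate weight matrices / biases (illustrative, not the paper's)."""
    gates = {}
    for g in ("i", "f", "o", "g"):
        pre = W[g] @ E_y + U[g] @ h_prev + A[g] @ r + b[g]
        gates[g] = np.tanh(pre) if g == "g" else sigmoid(pre)
    c = gates["f"] * c_prev + gates["i"] * gates["g"]  # cell update, Eq. (5)
    h = gates["o"] * np.tanh(c)                        # hidden state, Eq. (6)
    return h, c

d, e, m = 8, 5, 6  # hidden, embedding and memory-read dimensions
rng = np.random.default_rng(0)
W = {g: rng.standard_normal((d, e)) for g in "ifog"}
U = {g: rng.standard_normal((d, d)) for g in "ifog"}
A = {g: rng.standard_normal((d, m)) for g in "ifog"}
b = {g: np.zeros(d) for g in "ifog"}
h, c = lstm_step(rng.standard_normal(e), np.zeros(d), np.zeros(d),
                 rng.standard_normal(m), W, U, A, b)
print(h.shape)  # (8,)
```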

3.3 Multimodal Memory

Although the LSTM network can well address the vanishing and exploding gradient problem [15], it cannot deal with very long sequences due to the limited capacity of its memory cells. Considering that the Neural Turing Machine (NTM) [10] can capture very long-range temporal dependencies with an external memory, we attach a shared multimodal memory between the LSTM-based language model and the CNN-based visual model for long-range visual-textual information interaction.

Our multimodal memory at time t is a matrix M_t ∈ R^{N×M}, where N denotes the number of memory locations and M denotes the vector length of each location. The memory interacts with the LSTM-based language model and the CNN-based visual model via selective read and write operations. Since there is bimodal information, i.e., video and language, we employ two independent pairs of read/write operations to guide the information interaction. As shown in Fig. 1, during each step of sentence prediction, the hidden representation of the LSTM-based language model is first written into the multimodal memory, and the memory contents are read out to guide a visual attention scheme to select relevant visual information. Then, the selected visual information is written into the memory, which is further read out for language modelling. In the following, we introduce the details of this procedure.

3.3.1 Memory Interaction

The interaction of visual information and textual elements is performed in the following order.

Writing hidden representations to update memory     Before predicting the next word, our LSTM-based language model writes its previous hidden representation into the multimodal memory to summarize the previous textual information. We denote the current textual write weighting vector, textual erase vector and textual add vector as w_t^{tw}, e_t^{tw} and a_t^{tw}, respectively, all of which are emitted by the LSTM-based language model. The elements of the textual erase vector lie in the range (0, 1), and the lengths of the textual erase and add vectors are both M. Since both vectors have M independent elements, the content of every memory location can be erased or added in a fine-grained way. The textual information is then written into the memory, producing the intermediate memory M_t':

M_t'(i) = M_{t-1}(i) ⊙ [1 − w_t^{tw}(i) e_t^{tw}] + w_t^{tw}(i) a_t^{tw}     (8)
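This erase-then-add write operation (the same form is reused for the visual write below) can be sketched in NumPy as:

```python
import numpy as np

def memory_write(M, w, e, a):
    """NTM-style write in the spirit of Eqs. (8)/(15): each location i is
    first erased by w[i] * e and then has w[i] * a added. M is (N, M_len);
    w is a length-N weighting; e, a are length-M_len erase/add vectors
    with e in (0, 1)."""
    M = M * (1.0 - np.outer(w, e))  # fine-grained, per-element erase
    M = M + np.outer(w, a)          # fine-grained, per-element add
    return M

M = np.zeros((4, 3))
w = np.array([0.0, 1.0, 0.0, 0.0])   # focus entirely on location 1
M2 = memory_write(M, w, e=np.full(3, 0.5), a=np.ones(3))
print(M2[1])  # [1. 1. 1.]
```

Because the erase and add vectors have one element per memory column, a head can rewrite part of a location while leaving the rest untouched.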

Reading the updated memory for temporal attention     After writing the textual information into the memory, the updated memory content is read out to guide a visual attention model to select prediction-related visual information. The current visual read weighting vector over the N locations at time t is w_t^{vr}, which is normalized so that

Σ_{i=1}^{N} w_t^{vr}(i) = 1     (9)

Then the visual read vector r_t^v returned for the visual attention model is computed as a linear weighting of the row-vectors M_t'(i):

r_t^v = Σ_{i=1}^{N} w_t^{vr}(i) M_t'(i)     (10)
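The read operation is simply a convex combination of the memory rows; a sketch:

```python
import numpy as np

def memory_read(M, w):
    """Read in the spirit of Eqs. (10)/(17): a weighted sum of the memory
    rows under the normalized weighting w (sum(w) == 1)."""
    return w @ M  # shape: (M_len,)

M = np.array([[1.0, 0.0],
              [0.0, 1.0]])
r = memory_read(M, np.array([0.25, 0.75]))
print(r)  # [0.25 0.75]
```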

Temporal attention selection for video representation     After reading the updated memory content, we use it for visual attention selection. In particular, for the features of video frames/clips, instead of directly feeding them to the LSTM decoder after simple average pooling [33], we apply a soft attention mechanism [41] to select the most relevant appearance and motion features while largely preserving the temporal dynamics of the video representation. At each time, the attention model aims to focus on a specific object and action. Specifically, the attention model first computes an unnormalized relevance score e_i^t between the i-th temporal feature v_i and the current content r_t^v read from the multimodal memory, which summarizes the embedding information of all previously generated words:

e_i^t = w^T φ(W_a r_t^v + U_a v_i + b_a)     (11)

where w, W_a, U_a and b_a are parameters learned together with all the other modules of the network. Different from [41], here we incorporate the content read from the multimodal memory instead of the previous hidden state of the LSTM network. We argue that the hidden state of the LSTM network cannot fully represent all the information of previous words, while our multimodal memory can keep it well. After computing the unnormalized relevance scores e_i^t for all the frames of the input video, we use a softmax layer to obtain the normalized attention weights α_i^t:

α_i^t = exp(e_i^t) / Σ_j exp(e_j^t)     (12)

Finally, we sum the products of the attention weights and the appearance and motion features to get the final representation of the input video:

φ_t(V) = Σ_i α_i^t v_i     (13)

The above attention model allows the LSTM-based language model to selectively focus on specific frames by increasing the corresponding weights, which is very effective when explicit visual-semantic mappings exist. However, when the predicted word, e.g., "the", "number" or "from", is not intrinsically relevant to any high-level visual representation, the attended video representation could act as noise for the LSTM-based language model. To avoid this issue, we append a blank feature whose values are all zeros to the video features along the temporal dimension. Therefore, the sum of the attention weights over the n real features is kept less than or equal to one:

Σ_{i=1}^{n} α_i^t ≤ 1     (14)
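The attention step of Eqs. (11)-(14), including the appended blank feature, can be sketched as follows (parameter names are illustrative):

```python
import numpy as np

def attend(V, r, Wa, Ua, wa, ba):
    """Temporal attention in the spirit of Eqs. (11)-(14). V is (n, d)
    frame/clip features; r is the vector read from the multimodal memory.
    A zero "blank" feature is appended so the weights over the real frames
    may sum to less than 1 when no frame is relevant."""
    V_ext = np.vstack([V, np.zeros(V.shape[1])])          # append blank row
    scores = np.array([wa @ np.tanh(Wa @ r + Ua @ v + ba) for v in V_ext])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                  # softmax, Eq. (12)
    return alpha[:-1] @ V, alpha                          # phi(V), Eq. (13)

rng = np.random.default_rng(1)
n, d, m, k = 5, 4, 6, 3
V = rng.standard_normal((n, d))
phi, alpha = attend(V, rng.standard_normal(m),
                    rng.standard_normal((k, m)), rng.standard_normal((k, d)),
                    rng.standard_normal(k), np.zeros(k))
print(abs(alpha.sum() - 1.0) < 1e-9, alpha[:-1].sum() <= 1.0)  # True True
```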

Writing selected visual information to update memory     After selecting the visual information via the attention model above, it is written into the memory. Similar to the writing of hidden representations, the current visual write weighting vector w_t^{vw}, visual erase vector e_t^{vw} and visual add vector a_t^{vw} are all emitted by the visual attention model. The elements of the visual erase vector lie in the range (0, 1), and the lengths of the visual erase and add vectors are both M. The visual information is then written into the memory as follows:

M_t(i) = M_t'(i) ⊙ [1 − w_t^{vw}(i) e_t^{vw}] + w_t^{vw}(i) a_t^{vw}     (15)

Reading the updated memory for the LSTM-based language model     After the above writing operation, the updated memory is read out for language modelling. Similarly, the textual read weighting vector over the N locations at the current time is w_t^{tr}, which is also normalized:

Σ_{i=1}^{N} w_t^{tr}(i) = 1     (16)

Then the textual read vector r_t returned to the LSTM-based language model is computed as a linear weighting of the row-vectors M_t(i):

r_t = Σ_{i=1}^{N} w_t^{tr}(i) M_t(i)     (17)

Computing the hidden state of the LSTM-based language model     After reading the information from the updated memory, we compute the current hidden state of the LSTM-based language model by calling the following function:

h_t = LSTM(E[y_{t-1}], r_t, h_{t-1})     (18)

3.3.2 Memory Addressing Mechanisms

As stated in [27, 10], the objective function is hard to converge when using a location-based addressing strategy. Therefore, we use a content-based addressing strategy to update the above read/write weighting vectors. During content-based addressing, each read/write head first produces a key vector k_t and a sharpening factor β_t from its controller state x_t. The key vector is compared with each memory vector M(i) by a similarity measure K(·,·), and the sharpening factor regulates the precision of the focus. They are computed as follows:

k_t = φ(W_k x_t + b_k)     (19)
β_t = log(1 + exp(W_β x_t + b_β))     (20)
K(u, v) = u · v / (‖u‖ ‖v‖)     (21)
w̃_t(i) = β_t K(k_t, M(i))     (22)
w_t(i) = exp(w̃_t(i)) / Σ_{j=1}^{N} exp(w̃_t(j))     (23)
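A sketch of content-based addressing with cosine similarity and a sharpening factor, in the spirit of Eqs. (19)-(23):

```python
import numpy as np

def content_address(M, k, beta):
    """Content-based addressing: cosine similarity between the key k and
    every memory row, sharpened by beta, then normalized with a softmax."""
    sim = M @ k / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    z = np.exp(beta * (sim - sim.max()))  # sharpen, then stabilize
    return z / z.sum()

M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
w = content_address(M, k=np.array([1.0, 0.0]), beta=10.0)
print(w.argmax())  # 0
```

A larger beta concentrates the weighting on the best-matching location; a small beta spreads it over all locations.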

3.4 Training and Inference

Assume there are in total N training video-description pairs in the entire training dataset, where the i-th description has length T_i. The overall objective function used in our model is the averaged log-likelihood over the whole training dataset plus a regularization term:

L(θ) = − (1/N) Σ_{i=1}^{N} Σ_{t=1}^{T_i} log P(y_t^{(i)} | y_{<t}^{(i)}, V^{(i)}; θ) + λ‖θ‖_2^2     (24)

where y_t^{(i)} is a one-hot vector denoting the t-th word of the i-th description, θ is the set of all parameters to be optimized in the model, and λ denotes the regularization coefficient. As all components of our model, including the multimodal memory, are differentiable, we can use Stochastic Gradient Descent (SGD) to learn the parameters.
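This objective amounts to a masked, averaged negative log-likelihood; a minimal sketch for one batch (the L2 term is optional here, and the shapes are illustrative):

```python
import numpy as np

def caption_nll(log_probs, targets, mask, params=None, lam=0.0):
    """Averaged negative log-likelihood in the spirit of Eq. (24).
    log_probs: (T, B, vocab) log P(y_t | ...); targets: (T, B) word ids;
    mask: (T, B), 1 for real words and 0 for padding. An optional L2 term
    with coefficient lam stands in for the regularizer."""
    _, B, _ = log_probs.shape
    picked = np.take_along_axis(log_probs, targets[..., None], axis=2)[..., 0]
    nll = -(picked * mask).sum() / B
    if params is not None:
        nll += lam * sum((p ** 2).sum() for p in params)
    return nll

# two-step caption, uniform model over a 3-word vocabulary
lp = np.log(np.full((2, 1, 3), 1.0 / 3.0))
loss = caption_nll(lp, np.zeros((2, 1), dtype=int), np.ones((2, 1)))
print(round(loss, 4))  # 2.1972, i.e. 2 * log(3)
```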

Similar to most LSTM language models, we use a softmax layer to model the next word's probability distribution over the whole vocabulary:

q_t = W_q h_t + W_r r_t + W_y E[y_{t-1}] + b_q     (25)
P(y_t | y_{<t}, V) = softmax(q_t)     (26)

where W_q, W_r, W_y and b_q are the parameters to be estimated. Based on the probability distribution P(y_t | y_{<t}, V), we can recursively sample until the end-of-sentence symbol in the vocabulary is obtained.

During caption generation, we could directly choose the word with the maximum probability at each timestep. However, the resulting sentences usually have low quality due to this locally optimal strategy. Ideally, we would traverse all possible words at each timestep, but such exhaustive search has a very high computational cost. We therefore choose a beam search strategy to generate the caption, which is fast and effective [42].
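A generic beam-search sketch over a hypothetical per-step scoring function (a stand-in for the full decoder step, not the paper's implementation):

```python
import numpy as np

def beam_search(step_fn, bos, eos, beam_size=5, max_len=20):
    """Keep the beam_size highest-scoring partial sentences at every
    timestep instead of the single greedy argmax. step_fn(prefix) must
    return log-probabilities over the vocabulary."""
    beams = [([bos], 0.0)]
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos:                 # finished beams are kept as-is
                candidates.append((seq, score))
                continue
            logp = step_fn(seq)
            for w in np.argsort(logp)[-beam_size:]:
                candidates.append((seq + [int(w)], score + float(logp[w])))
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
        if all(s[-1] == eos for s, _ in beams):
            break
    return beams[0][0]

# toy model: always prefers word 2, then the end token 0 after 2 steps
vocab_logp = np.log(np.array([0.1, 0.1, 0.8]))
out = beam_search(lambda seq: vocab_logp if len(seq) < 3 else
                  np.log(np.array([0.8, 0.1, 0.1])), bos=1, eos=0)
print(out)  # [1, 2, 2, 0]
```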

4 Experiments

To validate the effectiveness of the proposed model, we conduct extensive experiments on two public video captioning datasets. One is the Microsoft Video Description Dataset (MSVD) [4], on which most of the state-of-the-art methods have been tested. The other is the recently released Microsoft Research-Video to Text dataset (MSR-VTT) [40], which is the largest dataset in terms of sentences and vocabulary.

4.1 Datasets

Microsoft Video Description Dataset     The Microsoft Video Description Dataset (MSVD) [4] consists of 1970 videos ranging from 10 to 25 seconds. Each video has multi-lingual descriptions labelled by Amazon Mechanical Turk workers. For each video, the descriptions depict a single activity scene with about 40 sentences, so there are about 80,000 video-description pairs. Following the standard split [41, 23], we divide the original dataset into a training set of 1200 videos, a validation set of 100 videos, and a test set of 670 videos.

Microsoft Research-Video to Text Dataset     The Microsoft Research-Video to Text dataset (MSR-VTT) is the recently released largest dataset in terms of sentences and vocabulary, consisting of 10,000 video clips and 200,000 sentences. Each video clip is labelled with about 20 sentences. As with MSVD, the sentences are annotated by Amazon Mechanical Turk workers. With the split in [40], we divide the original dataset into a training set of 6513 videos, a validation set of 497 videos and a test set of 2990 videos.

4.2 Data Preprocessing

Video Preprocessing     Instead of extracting features for every video frame, we uniformly sample K frames from the original video for feature extraction. When the video length is less than K, we pad zero frames at the end of the original frames. Empirically, we set K to 28 for MSVD (about 98 frames per video) and to 40 for MSR-VTT (about 149 frames per video). For extensive comparisons, we extract features from both pretrained 2D CNN networks, e.g., GoogleNet [28], VGG-19 [25], Inception-V3 [29] and ResNet-50 [13], and 3D CNN networks, e.g., C3D [31]. Specifically, we extract the features of the pool5/7x7s1 layer in GoogleNet, the fc7 layer in VGG-19, the pool3 layer in Inception-V3, the pool5 layer in ResNet-50 and the fc6 layer in C3D.
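The sampling-and-padding step can be sketched as follows (tiny arrays stand in for real frames):

```python
import numpy as np

def sample_frames(video, K):
    """Uniformly sample K frames; zero-pad when the video is shorter
    than K (as done with K = 28 for MSVD and K = 40 for MSR-VTT)."""
    T = len(video)
    if T < K:
        pad = np.zeros((K - T,) + video.shape[1:])
        return np.concatenate([video, pad])
    idx = np.linspace(0, T - 1, K).round().astype(int)
    return video[idx]

video = np.random.rand(98, 8, 8, 3)  # 98 tiny stand-in frames
long_case = sample_frames(video, 28)
short_case = sample_frames(video[:10], 28)  # padded with zero frames
print(long_case.shape[0], short_case.shape[0])  # 28 28
```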

Description Preprocessing     The descriptions in MSVD and MSR-VTT are all converted to lower case. To reduce unrelated symbols, we tokenize all sentences with the NLTK toolbox (http://www.nltk.org/index.html) and remove punctuation. The vocabulary of MSVD is about 13,000 words, while that of MSR-VTT is about 29,000. For convenience, we set the vocabulary size to 20,000 for both datasets, so rare words in MSR-VTT are eliminated to further reduce the vocabulary.
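A minimal sketch of this preprocessing; the paper uses the NLTK tokenizer, for which a simple regex stands in here:

```python
import re

def preprocess(sentence, vocab, unk="<unk>"):
    """Lowercase, strip punctuation and tokenize a description, mapping
    out-of-vocabulary words to an unknown token."""
    tokens = re.findall(r"[a-z0-9']+", sentence.lower())
    return [t if t in vocab else unk for t in tokens]

toks = preprocess("A man is playing a Guitar!", {"a", "man", "is", "playing"})
print(toks)  # ['a', 'man', 'is', 'playing', 'a', '<unk>']
```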

4.3 Evaluation Metrics

In this paper, we adopt two standard evaluation metrics: BLEU [24] and METEOR [7], which are widely used in machine translation and image/video captioning. The BLEU metric measures n-gram precision between the generated sentence and the original description, and correlates highly with human evaluation. The METEOR metric measures word correspondences between generated and reference sentences by producing an alignment [5], and is often used as a supplement to BLEU. To guarantee a fair comparison with previous methods, we utilize the Microsoft COCO Caption Evaluation tool [5] to obtain all experimental results.

4.4 Experimental Settings

During model training, we add a start tag and an end tag to each sentence in order to deal with variable-length sentences. We also add masks to both sentences and visual features for the convenience of batch training. Similar to [41], sentences longer than 30 words in MSVD and longer than 50 words in MSR-VTT are removed. Words unseen in the vocabulary are set to an unknown flag. Several other hyperparameters, e.g., the word embedding dimension (468), the beam size (5) and the size of the multimodal memory matrix (128×512), are set using the validation set. To reduce overfitting during training, we apply dropout [26] with a rate of 0.5 on the outputs of the fully connected layers and the LSTMs, but not on the recurrent transitions. To further prevent gradient explosion, we clip the gradients to [-10, 10]. The optimization algorithm is ADADELTA [44], which we find fast in convergence.

4.5 Experimental Results

Method B@1 B@2 B@3 B@4 METEOR
FGM [30] - - - 13.68% 23.90%
LSTM-YT [33] - - - 33.29% 29.07%
SA [41] - - - 40.28% 29.00%
S2VT [32] - - - - 29.2%
LSTM-E [23] 74.9% 60.9% 50.6% 40.2% 29.5%
p-RNN [42] 77.3% 64.5% 54.6% 44.3% 31.1%
HRNE [22] 79.2% 66.3% 55.1% 43.8% 33.1%
BGRCN [2] - - - 49.63% 31.7%
M³-c3d 77.30% 68.20% 56.30% 45.50% 29.91%
M³-vgg19 77.70% 67.50% 58.90% 49.60% 30.09%
M³-google 79.05% 68.74% 60.00% 51.17% 31.47%
M³-res 80.80% 69.90% 60.40% 49.32% 31.10%
M³-inv3 81.56% 71.39% 62.34% 52.02% 32.18%
Table 1: Performance comparison with eight other state-of-the-art methods using a single visual feature on MSVD. The results of the proposed M³ with five single features are shown at the bottom of the table; the best single-feature results of the other eight methods are shown at the top.
Method B@1 B@2 B@3 B@4 METEOR
SA-G-3C [41] - - - 41.92% 29.60%
S2VT-rgb-flow [32] - - - - 29.8%
LSTM-E-VC [23] 78.8% 66.0% 55.4% 45.3% 31.0%
p-RNN-VC [42] 81.5% 70.4% 60.4% 49.9% 32.6%
M³-VC 81.90% 71.26% 62.08% 51.78% 32.49%
M³-IC 82.45% 72.43% 62.78% 52.82% 33.31%
Table 2: Performance comparison with four other state-of-the-art methods using multiple visual feature fusion on MSVD. Here V, C, I and G denote VGG-19 [25], C3D [31], Inception-V3 [29] and GoogleNet [28], respectively.
Method B@1 B@2 B@3 B@4 METEOR
SA-V [41] 67.82% 55.41% 42.90% 34.73% 23.11%
SA-C [41] 68.90% 57.50% 47.00% 37.40% 24.80%
SA-VC [41] 72.20% 58.90% 46.80% 35.90% 24.90%
M³-V 70.20% 56.60% 44.80% 35.00% 24.60%
M³-C 77.20% 61.30% 47.20% 35.10% 25.70%
M³-VC 73.60% 59.30% 48.26% 38.13% 26.58%
Table 3: Performance comparison with SA [41] using different visual features on MSR-VTT. Here V and C denote VGG-19 [25] and C3D [31], respectively.

4.5.1 Experimental Results on MSVD

Figure 2: Descriptions generated by SA-google, our M³-google and the human-annotated ground truth on the test set of MSVD. For the second and third videos, M³-google generates more relevant object terms than SA-google ("basketball" vs. "soccer ball"), and M³-google has a global visual attention on the targets ("dog"), while SA-google's local visual attention falls on non-targets ("guitar").

For comprehensive experiments, we evaluate and compare with the state-of-the-art methods using a single visual feature and using multiple visual feature fusion, respectively.

When using a single visual feature, we evaluate and compare our model with eight other state-of-the-art approaches ([30], [33], [41], [32], [23], [42], [22], [2]). The experimental results in terms of BLEU (n-gram) and METEOR are shown in Table 1. Here we give the best single-feature results of the eight compared methods, and show the results of the proposed M³ with five single features, i.e., VGG-19 [25], C3D [31], Inception-V3 [29], ResNet-50 [13] and GoogleNet [28]. Among the compared methods, SA [41] is the most similar to ours: it also has an attention-driven video encoder and an LSTM-based text decoder, but no external memory. When both models use the same GoogleNet feature, our M³-google achieves a large improvement over SA in both the BLEU@4 and METEOR scores. It can be concluded that the better performance of our model benefits from multimodal memory modelling. In addition, our five M³ models outperform all the other methods except HRNE [22] in terms of METEOR; this is because HRNE [22] specially focuses on building a fine-grained video representation for captioning. Comparing the five M³ models using different visual features, M³-inv3 achieves the best performance, followed by M³-res, M³-google and M³-vgg19. This ranking is very similar to that of the corresponding image classification accuracies on ImageNet [21], which shows that the visual feature is very important for video captioning. Actually, the same conclusion has been drawn in image captioning, where GoogleNet features obtain better results than VGG-19 features [35].

When using multiple visual feature fusion, we compare our model with four other state-of-the-art approaches ([41], [32], [23], [42]). The comparison results are shown in Table 2. SA-G-3C [41] uses the combination of GoogleNet and 3D-CNN features. S2VT-rgb-flow [32] uses two-stream features consisting of RGB features extracted from VGG-16 and optical flow features extracted from AlexNet [21]. Both LSTM-E-VC [23] and p-RNN-VC [42] combine VGG-19 and C3D features. We propose M³-VC and M³-IC for comparison: M³-VC also uses VGG-19 and C3D features, while M³-IC uses Inception-V3 and C3D features. Both perform better than the other methods in terms of the two metrics, which proves the effectiveness of our model in long-term dependency modelling.

Fig. 2 illustrates some descriptions generated by SA-google, our M³-google and the human-annotated ground truth on the test set of MSVD. From these sentences, we can see that both SA-google and M³-google generate semantically relevant descriptions. However, it should be noted that our M³-google can generate more relevant object terms than SA-google. For example, compared with the words "men" and "soccer ball" generated by SA-google for the third video, the words "people" and "basketball" generated by M³-google express the video content more precisely. Moreover, our M³-google has a global visual attention on the targets, which differs from SA-google's local visual attention on non-targets. For example, M³-google predicts the target word "dog" in the second video while SA-google predicts the non-target word "guitar". All these results further demonstrate the effectiveness of our method.

4.5.2 Experimental Results on MSR-VTT

MSR-VTT is a recently released benchmark dataset [40] with the largest number of video-sentence pairs. Considering that few methods have been tested on this dataset, we compare our model with SA [41], which is the most similar work to ours. We perform experiments with both methods using a single visual feature and using multiple visual feature fusion. The comparison results are reported in Table 3. SA-V and SA-C use the VGG-19 feature and the C3D feature, respectively; SA-VC fuses the two. Our M³-V, M³-C and M³-VC use the same features as the corresponding SA variants. It can be seen that our methods consistently outperform the corresponding SA variants, which again proves the importance of the multimodal memory in M³. In addition, for either M³ or SA, the results with the C3D feature are generally better than those with the VGG-19 feature. The reason may be that motion information is critical for video representation on this dataset, because the C3D feature encodes both visual appearance and motion information.

5 Conclusions and Future Work

This paper proposes a Multimodal Memory Model (M³) to describe videos, which builds a visual and textual shared memory to model the long-term visual-textual dependency and further guide global visual attention. Extensive experimental results on two public benchmark datasets demonstrate that our method outperforms the state-of-the-art methods in terms of the BLEU and METEOR metrics.

As the experimental results show, video representation is very important for video captioning performance. In the future, we will consider improving the video representation learning algorithm, and integrating video feature extraction networks with multimodal memory networks to form an end-to-end deep learning system.

References