Automatically describing visual content with natural language has received increasing attention in both the computer vision and natural language processing communities. It can be applied in various practical applications, such as image and video retrieval [1, 2, 3, 4, 5], answering questions about images, and assisting people with vision disorders.
Previous work predominantly focused on describing still images with natural language [8, 9, 10, 11, 12, 13]. Recently, researchers have striven to generate sentences to describe video contents [14, 15, 16, 17, 18, 19, 20]. Compared to image captioning [21, 22], describing videos is more challenging, as the amount of information (e.g., objects, scenes, actions, etc.) contained in videos is much more sophisticated than that in still images. More importantly, besides modeling the spatial content, the temporal dynamics within video sequences need to be adequately captured for captioning.
Most of these approaches follow the encoder-decoder architecture. However, this architecture relies only on the forward flow (video to sentence) and does not consider the information from sentence to video, referred to as the backward flow. Usually the encoder is a convolutional neural network (CNN) capturing the image structure to yield its semantic representation. For a given video sequence, the semantic representations yielded by the CNN are further fused together to exploit the video temporal dynamics and generate the video representation. The decoder is usually a long short-term memory (LSTM) network or a gated recurrent unit (GRU), which is popular for processing sequential data. The LSTM or GRU generates the sentence fragments one by one and assembles them into one sentence. The semantic information from target sentences to source videos is never included. Actually, the backward flow can be yielded by a dual learning mechanism, which has been introduced into neural machine translation (NMT) [33, 34] and image segmentation. This mechanism reconstructs the source from the target once the target is obtained, and demonstrates that the backward flow from target to source improves performance.
To exploit the backward flow, we draw on the idea of dual learning and propose an encoder-decoder-reconstructor architecture, shown in Fig. 1 and denoted as RecNet, to address video captioning. Specifically, the encoder fuses the video frame features together to exploit the video temporal dynamics and generate the video representation, based on which the decoder generates the corresponding sentence. The reconstructor, realized by one LSTM, leverages the backward flow (sentence to video): it reproduces the video information from the hidden state sequence generated by the decoder. Such an encoder-decoder-reconstructor can be viewed as a dual learning architecture, where video captioning is the primal task and reconstruction behaves as its dual task. In the dual task, a reconstruction loss measuring the difference between the reconstructed and original visual features is additionally used to train the primal task and optimize the parameters of the encoder and decoder. With such a reconstructor, the decoder is encouraged to embed more information from the input video sequence, so the relationship between the video sequence and caption is further enhanced. The decoder can thus generate sentences more semantically correlated with the visual contents of the video sequences, yielding significant performance improvements. As such, the proposed reconstructor, which further helps fine-tune the parameters of the encoder and decoder, is expected to bridge the semantic gap between the video and the sentence.
Moreover, the reconstructor can help mitigate the discrepancy between the training and inference processes, also referred to as the exposure bias, which widely exists in RNNs for the captioning task. Because the dual information captured by the RecNet provides a complementary view to the encoder-decoder architecture, the reconstructor helps regularize the transition dynamics of the RNNs and thereby alleviates the exposure bias, as will be demonstrated in Sec. 4.4.4.
Besides, we intend to train the captioning models directly guided by evaluation metrics, such as BLEU and CIDEr, instead of the conventionally used cross-entropy loss. However, these evaluation metrics are discrete and non-differentiable, which makes them difficult to optimize with traditional methods. Self-critical sequence training is a REINFORCE-based algorithm well suited to such discrete and non-differentiable objectives. In this paper, we resort to the self-critical sequence training algorithm to further boost the performance of the RecNet on the video captioning task.
To summarize, the main contributions of this work are as follows. We propose a novel reconstruction network and build an end-to-end encoder-decoder-reconstructor architecture to exploit both the forward (video to sentence) and backward (sentence to video) flows for video captioning. Two types of reconstructors are customized to recover the global and local structures of the video, respectively. Moreover, a joint model is presented to reconstruct both the global and local structures simultaneously, further improving the reconstruction of the video representation. Extensive results obtained by cross-entropy training and self-critical sequence training on benchmark datasets indicate that the backward flow is well exploited by the proposed reconstructors, and considerable gains on video captioning are achieved. Besides, ablation studies show that the proposed reconstructor helps regularize the transition dynamics of the RNNs, thereby mitigating the discrepancy between the training and inference processes.
2 Related Work
In this section, we first introduce two types of video captioning: template-based approaches [38, 39, 40, 41, 14] and sequence learning approaches [25, 18, 19, 17, 23, 24, 42, 28, 20, 43, 29, 44], and then introduce the application of dual learning.
2.1 Template-based Approaches
Template-based methods first define specific rules for language grammar and then parse the sentence into several components, such as subject, verb, and object. The obtained sentence fragments are associated with words detected from the visual content to produce the final description of an input video with predefined templates. For example, a concept hierarchy of actions was introduced to describe human activities in , while a semantic hierarchy was defined in  to learn the semantic relationship between different sentence fragments. In , the conditional random field (CRF) was adopted to model the connections between objects and activities of the visual input and generate the semantic features for description. Besides, Xu et al. proposed a unified framework consisting of a semantic language model, a deep video model, and a joint embedding model to learn the association between videos and natural sentences . However, as stated in , the aforementioned approaches depend heavily on predefined templates and are thus limited by the fixed syntactical structure, which is inflexible for sentence generation.
2.2 Sequence Learning Approaches
Compared with the template-based methods, the sequence learning approaches aim to directly produce a sentence description of the visual input with more flexible syntactical structures. For example, in , a video representation was obtained by averaging the frame features extracted by a CNN, and then fed to LSTMs for sentence generation. In , the relevance between video context and sentence semantics was used as a regularizer in the LSTM. However, since simple mean pooling is used, the temporal dynamics of the video sequence are not adequately addressed. Yao et al. introduced an attention mechanism to assign weights to the features of each frame and then fused them based on the attentive weights . Venugopalan et al. proposed S2VT , which includes the temporal information with optical flow and employs LSTMs in both the encoder and decoder. To exploit both temporal and spatial information, Zhang and Tian proposed a two-stream encoder comprised of two 3D CNNs [45, 46] and one parallel fully connected layer to learn the features from the frames . Besides, Pan et al. proposed a transfer unit to model the high-level semantic attributes from both images and videos, which are rendered as complementary knowledge to video representations for boosting sentence generation .
More recently, reinforcement learning has shown benefits on video captioning tasks. Pasunuru and Bansal employed reinforcement learning to directly optimize the CIDEnt score (an entailment-corrected variant of CIDEr) and achieved state-of-the-art results on the MSR-VTT dataset . Wang et al. proposed a hierarchical reinforcement learning framework, where a manager guides a worker to generate semantic segments about activities, producing more detailed descriptions.
2.3 Dual Learning Approaches
As far as we know, the dual learning mechanism has not been employed in video captioning, but it has been widely used in NMT [33, 34, 48]. In , the source sentences are reproduced from the target-side hidden states, and the accuracy of the reconstructed source provides a constraint for the decoder to embed more information of the source language into the target language. In , dual learning is employed to jointly train English-to-French and French-to-English translation models, obtaining significant improvements in both translation directions.
In this paper, our proposed RecNet can be regarded as a sequence learning method. However, unlike the above conventional encoder-decoder models, which only depend on the forward flow from video to sentence, RecNet also benefits from the backward flow from sentence to video. By fully considering the bidirectional flows between video and sentence, RecNet is capable of further boosting video captioning. Besides, it is worth noting that this work is an extended version of . The main improvements of this version are as follows. First, this work takes one step forward and presents a new reconstruction model, named RecNet, which considers both global and local structures to further improve the reconstruction of the video representation. Second, the exposure bias between the training and inference processes is studied in this work. We demonstrate that the proposed reconstructor helps regularize the transition dynamics of the decoder and mitigates the discrepancy between training and inference. Besides, more ablation studies on the reconstructors are conducted, including training with the self-critical algorithm, visualization of the hidden states of the decoder, and curves of the training losses and metrics used to examine how well the reconstructor works. Also, an additional dataset, ActivityNet , is included to further verify the effectiveness of the proposed reconstructor.
We propose a novel RecNet with an encoder-decoder-reconstructor architecture for video captioning, which works in an end-to-end manner. The reconstructor imposes one constraint that the semantic information of one source video can be reconstructed from the hidden state sequence of the decoder. The encoder and decoder are thus encouraged to embed more semantic information about the source video. As illustrated in Fig. 2, the proposed RecNet consists of three components, specifically the encoder, the decoder, and the reconstructor:
Encoder. Given one video sequence, the encoder yields the semantic representation for each video frame.
Decoder. The decoder decodes the corresponding representations generated by the encoder into one caption describing the video content.
Reconstructor. The reconstructor takes the intermediate hidden state sequence of the decoder as input, and reconstructs the video global or local structure.
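To make the data flow through the three components concrete, the following sketch wires them together with toy linear maps standing in for the CNN and the LSTMs; all dimensions and layer choices here are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, feat_dim, hid_dim, n_words = 28, 1536, 512, 10

def encoder(frames):
    # CNN stand-in: one semantic feature vector per frame.
    return frames  # features assumed precomputed, shape (n_frames, feat_dim)

def decoder(features):
    # LSTM stand-in: produce one hidden state per generated word.
    W = rng.standard_normal((feat_dim, hid_dim)) * 0.01
    context = features.mean(axis=0)          # simple fusion by mean pooling
    return np.stack([np.tanh(context @ W) for _ in range(n_words)])

def reconstructor(hidden_states):
    # LSTM stand-in: map decoder hidden states back to the feature space.
    W = rng.standard_normal((hid_dim, feat_dim)) * 0.01
    return hidden_states @ W                 # (n_words, feat_dim)

features = rng.standard_normal((n_frames, feat_dim))
hidden = decoder(encoder(features))
recon = reconstructor(hidden)
# Reconstruction loss: distance between mean-pooled original features
# and mean-pooled reconstructions (Euclidean, as described later).
loss = np.linalg.norm(features.mean(axis=0) - recon.mean(axis=0))
```

The key structural point is that the reconstructor consumes only the decoder's hidden states, so any reconstruction error is a signal about how much video information those states retain.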
Moreover, the designed reconstructor can be built on top of any classical encoder-decoder architecture for video captioning. In this paper, we employ the attention-based video captioning model  and S2VT  as the classical encoder-decoder models. We first briefly introduce the encoder-decoder model for video captioning; afterward, the proposed reconstructors with different architectures are described.
The aim of video captioning is to generate one sentence $S = \{s_1, \dots, s_T\}$ to describe the content of one given video $V$. Classical encoder-decoder architectures directly model the caption generation probability word by word:

$$P(S \mid V; \theta) = \prod_{t=1}^{T} P(s_t \mid s_{<t}, V; \theta),$$

where $\theta$ denotes the parameters of the encoder-decoder model, $T$ denotes the length of the sentence, and $s_{<t}$ (i.e., $\{s_1, \dots, s_{t-1}\}$) denotes the generated partial caption.
Encoder. To generate reliable captions, visual features need to be extracted to capture the high-level semantic information of the video. Previous methods usually rely on CNNs, such as AlexNet , GoogleNet , and VGG19 , to encode each video frame into a fixed-length representation carrying the high-level semantic information. By contrast, considering that a deeper network is more capable of feature extraction, we advocate using Inception-V4  as the encoder in this work. In this way, the given video sequence is encoded as a sequential representation $V = \{v_1, \dots, v_n\}$, where $n$ denotes the total number of the video frames.
Decoder. The decoder aims to generate the caption word by word based on the video representation. LSTMs, with their capability of modeling long-term temporal dependencies, are used to decode the video representation into the caption word by word. To further exploit the global temporal information of videos, a temporal attention mechanism  is employed to encourage the decoder to select the key frames/elements for captioning.
During the captioning process, the word prediction is generally made by the LSTM:

$$h_t = \mathrm{LSTM}(s_{t-1}, h_{t-1}, c_t),$$

where $\mathrm{LSTM}(\cdot)$ represents the LSTM activation function, $h_t$ is the hidden state computed in the LSTM, and $c_t$ denotes the context vector computed with the temporal attention mechanism, which is used to decode the $t$-th word. The temporal attention mechanism assigns a weight $\alpha_i^t$ to the representation $v_i$ of each frame at time step $t$ as follows:

$$c_t = \sum_{i=1}^{n} \alpha_i^t v_i,$$

where $n$ denotes the number of the video frames and $\sum_{i=1}^{n} \alpha_i^t = 1$. In order to obtain the attentive weight $\alpha_i^t$ at time step $t$ for the $i$-th video frame representation, we follow the traditional way in  to calculate a relevance score $e_i^t$ for the frame representation $v_i$ with respect to the hidden state $h_{t-1}$:

$$e_i^t = w^{\top} \tanh(W_a h_{t-1} + U_a v_i + b),$$

where $w$, $W_a$, $U_a$, and $b$ are the learnable parameters. The attentive weight is thereby obtained by a softmax normalization:

$$\alpha_i^t = \frac{\exp(e_i^t)}{\sum_{j=1}^{n} \exp(e_j^t)}.$$

The attentive weight $\alpha_i^t$ reflects the relevance between the $i$-th frame representation and the hidden state $h_{t-1}$ at time step $t$. The larger $\alpha_i^t$ is, the more relevant the video frame representation $v_i$ is to $h_{t-1}$, which allows the decoder to focus more on the corresponding video frames when generating the word at the next time step.
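The score-then-softmax computation above can be sketched as follows; the parameter names (`w`, `Wa`, `Ua`, `b`) and the attention dimension are placeholder assumptions, since the paper's exact symbols were lost in extraction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, feat_dim, hid_dim, att_dim = 28, 1536, 512, 256

# Hypothetical learnable parameters of the relevance-score function.
w  = rng.standard_normal(att_dim) * 0.01
Wa = rng.standard_normal((att_dim, hid_dim)) * 0.01
Ua = rng.standard_normal((att_dim, feat_dim)) * 0.01
b  = np.zeros(att_dim)

def temporal_attention(h_prev, V):
    # Relevance score of every frame representation w.r.t. h_{t-1}.
    scores = np.array([w @ np.tanh(Wa @ h_prev + Ua @ v + b) for v in V])
    alpha = np.exp(scores - scores.max())    # numerically stable softmax
    alpha /= alpha.sum()                     # weights sum to 1
    context = alpha @ V                      # attended context vector c_t
    return alpha, context

V = rng.standard_normal((n, feat_dim))
h_prev = rng.standard_normal(hid_dim)
alpha, c_t = temporal_attention(h_prev, V)
```

Because the weights are normalized with a softmax, frames whose representations score high against the current decoder state dominate the context vector.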
As shown in Fig. 2, the proposed reconstructor is built on top of the encoder-decoder, which is expected to reproduce the video from the hidden state sequence of the decoder. However, due to the diversity and high dimension of the video frames, directly reconstructing the video frames seems to be intractable. Therefore, in this paper, the reconstructor aims at reproducing the sequential video frame representations generated by the encoder, with the hidden states of the decoder as input. The benefits of such a structure are two-fold. First, the proposed encoder-decoder-reconstructor architecture can be trained in an end-to-end fashion. Second, with such a reconstruction process, the decoder is encouraged to embed more information from the input video sequence. Therefore, the relationship between the video sequence and caption can be further enhanced, which is expected to improve the video captioning performance.
In practice, the reconstructor is realized by LSTMs. Two different architectures are customized to summarize the hidden states of the decoder for video feature reproduction. More specifically, one focuses on reproducing the global structure of the provided video, while the other pays more attention to the local structure by selectively attending to the hidden state sequence. Moreover, to comprehensively reconstruct the video sequence, we propose a new architecture that reconstructs both the global and local structures of the video features.
3.2.1 Reconstructing Global Structure
The architecture for reconstructing the global structure of the video sequence is illustrated in Fig. 3. The whole sentence is fully considered to reconstruct the video global structure. Therefore, besides the hidden state $h_t$ at each time step, the global representation characterizing the semantics of the whole sentence is also taken as the input at each step. Several methods, such as an LSTM or a multilayer perceptron, could be employed to fuse the sequential hidden states of the decoder into the global representation. Inspired by , the mean pooling strategy is performed on the hidden states of the decoder to yield the global representation of the caption:

$$\phi(H) = \frac{1}{T} \sum_{t=1}^{T} h_t,$$

where $\phi(\cdot)$ denotes the mean pooling process, which yields a vector representation with the same size as $h_t$. Thus, the LSTM unit of the reconstructor is further modified as:

$$\begin{aligned}
i_t &= \sigma\big(W_i [h_t; \phi(H); z_{t-1}] + b_i\big),\\
f_t &= \sigma\big(W_f [h_t; \phi(H); z_{t-1}] + b_f\big),\\
o_t &= \sigma\big(W_o [h_t; \phi(H); z_{t-1}] + b_o\big),\\
m_t &= f_t \odot m_{t-1} + i_t \odot \tanh\big(W_m [h_t; \phi(H); z_{t-1}] + b_m\big),\\
z_t &= o_t \odot \tanh(m_t),
\end{aligned}$$

where $i_t$, $f_t$, $m_t$, $o_t$, and $z_t$ denote the input gate, forget gate, memory cell, output gate, and hidden state of each LSTM unit, respectively. $\sigma$ and $\odot$ denote the logistic sigmoid activation and the element-wise multiplication, respectively.
To reconstruct the video global structure from the hidden state sequence produced by the encoder-decoder, the global reconstruction loss is defined as:

$$\mathcal{L}_{rec}^{g} = E\big(\phi(V), \psi(Z)\big),$$

where $\phi(V)$ denotes the mean pooling process on the video frame features, yielding the ground-truth global structure of the input video sequence, and $\psi(Z)$ performs mean pooling on the hidden states of the reconstructor, indicating the global structure recovered from the captions. The reconstruction loss is measured by $E(\cdot, \cdot)$, which is simply chosen as the Euclidean distance.
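The global reconstruction loss is straightforward to compute once both sequences live in the same feature space; a minimal sketch, assuming the reconstructor's hidden states already have the frame-feature dimension:

```python
import numpy as np

def global_reconstruction_loss(frame_features, recon_hidden):
    # Mean pooling over time on each side, then Euclidean distance.
    phi_v = frame_features.mean(axis=0)   # ground-truth global structure
    psi_z = recon_hidden.mean(axis=0)     # global structure recovered from the caption
    return np.linalg.norm(phi_v - psi_z)

rng = np.random.default_rng(2)
feats  = rng.standard_normal((28, 1536))   # 28 frame features
hidden = rng.standard_normal((30, 1536))   # reconstructor hidden states
loss = global_reconstruction_loss(feats, hidden)
```

Note that the two sequences may have different lengths (number of frames vs. number of words); mean pooling removes the length mismatch before the distance is taken.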
3.2.2 Reconstructing Local Structure
The aforementioned reconstructor aims to reproduce the global representation of the whole video sequence, while neglecting the local structure of each frame. In this subsection, we propose to learn and preserve the temporal dynamics by reconstructing each video frame, as shown in Fig. 4. Differing from the global structure estimation, we intend to reproduce the feature representation of each frame from the key hidden states of the decoder selected by the attention strategy [53, 25]:

$$c_t = \sum_{i=1}^{T} \beta_i^t h_i,$$

where $\beta_i^t$ denotes the weight computed for the hidden state $h_i$ of the decoder at time step $t$ by the attention mechanism. Here $\beta_i^t$ measures the relevance of the $i$-th hidden state of the decoder given the previously reconstructed frame representations. Similar to Eqs. (4) and (5), to calculate the attentive weight $\beta_i^t$, a relevance score for the hidden state $h_i$ of the decoder with respect to the previous hidden state $z_{t-1}$ in the reconstructor is first computed by:

$$e_i^t = w^{\top} \tanh(W_\beta z_{t-1} + U_\beta h_i + b),$$

where $w$, $W_\beta$, $U_\beta$, and $b$ are the learnable parameters, and $T$ denotes the total number of hidden states from the decoder.
Such a strategy encourages the reconstructor to work on the hidden states selectively by adjusting the attentive weights and to yield the context information $c_t$ at each time step, as in Eq. (9). As such, the proposed reconstructor can further exploit the temporal dynamics and the word compositions across the whole caption. The LSTM unit is thereby reformulated by replacing its inputs accordingly. Differing from the global structure recovery in Eq. (7), the dynamically generated context $c_t$ is taken as the input, rather than the hidden state $h_t$ and its mean pooling representation $\phi(H)$. Moreover, instead of directly generating the global mean representation of the whole video sequence, we propose to produce the feature representation frame by frame. The reconstruction loss is thereby defined as:

$$\mathcal{L}_{rec}^{l} = \frac{1}{n} \sum_{i=1}^{n} E(v_i, z_i),$$

where $z_i$ denotes the hidden state of the reconstructor corresponding to the $i$-th frame, and $E(\cdot, \cdot)$ is again the Euclidean distance.
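In contrast to the global loss, the local loss compares frame by frame; a minimal sketch, assuming the per-frame distances are averaged over the sequence:

```python
import numpy as np

def local_reconstruction_loss(frame_features, recon_features):
    # One reconstructed feature vector per frame; average the per-frame
    # Euclidean distances over the whole sequence.
    diffs = frame_features - recon_features          # (n_frames, feat_dim)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

rng = np.random.default_rng(3)
feats = rng.standard_normal((28, 1536))
recon = feats + 0.1 * rng.standard_normal((28, 1536))  # slightly perturbed
loss = local_reconstruction_loss(feats, recon)
```

Because every frame contributes its own term, this loss penalizes a reconstructor that recovers only the average content of the clip while missing per-frame variation.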
3.2.3 Reconstructing both Global and Local Structures
In this subsection, we go one step further and reconstruct both global and local structures of the video sequence. The architecture is illustrated in Fig. 5.
Different from the aforementioned two methods, we first reconstruct the feature representation of each frame using the local information of the input video. Afterward, mean pooling is conducted on the reproduced frame representations to yield the global representation of the video. The global reconstruction loss and the local reconstruction loss are then jointly considered as follows:
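A plausible form of the joint objective, assuming an equal-weight sum of the two reconstruction losses defined above (the paper's exact weighting may differ), is:

```latex
\mathcal{L}_{rec}^{g+l}
  = \underbrace{E\big(\phi(V), \psi(Z)\big)}_{\text{global term, Eq. (8)}}
  + \underbrace{\frac{1}{n}\sum_{i=1}^{n} E(v_i, z_i)}_{\text{local term, Eq. (12)}}
```

The global term constrains the mean-pooled reconstruction, while the local term constrains each frame individually; minimizing their sum asks the decoder's hidden states to preserve both views of the video.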
Such an architecture is designed to reproduce both the global and local information. It is expected to comprehensively exploit the backward information flow and further boost the performance of video captioning.
3.3.1 Training Encoder-Decoder
Traditionally, the encoder-decoder model can be jointly trained by minimizing the negative log-likelihood of producing the correct description sentence given the video, as follows:
Moreover, in order to directly optimize the metrics instead of the cross-entropy loss, we regard the video contents and words as the "environment", and the encoder-decoder model as the "agent" that interacts with the environment. At each step, the agent's policy takes an "action" to predict a word, followed by an update of the "state", which denotes the hidden states and cell states of the LSTMs. When a sentence is generated, the metric score is computed and regarded as the "reward" (here the CIDEr score is taken as the reward), which we intend to optimize by minimizing the negative expected reward:
where $r(w^s)$ denotes the reward of the word sequence $w^s$ sampled from the model. The gradient of this non-differentiable loss can be obtained by Eq. (15) using the REINFORCE algorithm.
In practice, the expected reward is typically estimated with a single sample $w^s$ drawn from the model distribution. Therefore, for each sample, Eq. (16) can be represented as:
To deal with the high variance of the gradient estimate from a single sample, we also add a baseline reward into the reward-based loss to generalize the policy gradient given by REINFORCE, which reduces the variance without changing the expected gradient:
where $b$ denotes the baseline reward.
Usually the baseline is estimated by a trainable network, which significantly increases the computational complexity. To handle this drawback, Rennie et al. proposed the self-critical sequence training method , in which the baseline is the CIDEr score of the sentence generated by the current encoder-decoder model under the inference mode. This proved to be more effective for training, as such a scheme not only brings a lower gradient variance, but also avoids learning an estimate of expected future rewards as the baseline. Hence, in this paper, we employ the self-critical sequence training method and take the metric score of the sentence $\hat{w}$ greedily decoded by the current model with the inference algorithm as the baseline, i.e., $b = r(\hat{w})$. As such, by replacing the baseline $b$ with $r(\hat{w})$, Eq. (20) can be rewritten as:
Hence, if a sample has a reward $r(w^s)$ higher than the baseline $r(\hat{w})$, the gradient is negative and such a distribution is encouraged by increasing the probability of the corresponding words. Similarly, a sample distribution with a reward lower than the baseline is discouraged.
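The sign behavior of the self-critical loss can be verified with a small numeric sketch; the reward values here stand in for CIDEr scores, and the per-word log-probabilities are illustrative:

```python
import numpy as np

def scst_loss(logprob_sampled_words, r_sample, r_greedy):
    # Advantage r(w^s) - r(w_hat): the sampled caption's reward is
    # baselined by the greedily decoded caption's reward.
    advantage = r_sample - r_greedy
    # Negative expected reward; the gradient w.r.t. the log-probs is
    # -advantage, so a positive advantage pushes the log-probs up.
    return -advantage * np.sum(logprob_sampled_words)

logprobs = np.log(np.array([0.4, 0.3, 0.5]))   # per-word log-probs (negative)
loss_good = scst_loss(logprobs, r_sample=0.9, r_greedy=0.6)  # beats baseline
loss_bad  = scst_loss(logprobs, r_sample=0.3, r_greedy=0.6)  # below baseline
```

Since the log-probabilities are negative, a sample that beats the greedy baseline yields a loss that decreases as those log-probabilities increase, which is exactly the "encourage the sample" behavior described above.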
3.3.2 Training Encoder-Decoder-Reconstructor
Formally, we train the proposed encoder-decoder-reconstructor architecture by minimizing the whole loss defined in Eq. (22), which involves both the forward (video-to-sentence) likelihood and the backward (sentence-to-video) reconstruction loss:
The loss for the encoder-decoder can be either the traditional cross-entropy loss or the reinforcement loss. The reconstruction loss can be set as the global loss in Eq. (8), the local loss in Eq. (12), or the joint global and local loss in Eq. (13). The hyper-parameter $\lambda$ is introduced to seek a trade-off between the encoder-decoder and the reconstructor.
The training of our proposed RecNet model proceeds in two stages. First, we rely on the forward likelihood to train the encoder-decoder component of the RecNet, which is terminated by the early stopping strategy. Afterward, the reconstructor and the backward reconstruction loss are introduced. We use the whole loss defined in Eq. (22) to jointly train the reconstructor and fine-tune the encoder-decoder. For the reconstructor, the reconstruction loss is calculated using the hidden state sequence generated by the LSTM units in the reconstructor as well as the video frame feature sequence.
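The two-stage schedule can be sketched as follows; the loss callables, epoch counts, and the value of the trade-off weight `lam` are illustrative assumptions rather than the paper's settings:

```python
def joint_loss(forward_loss, rec_loss, lam):
    # Whole loss of Eq. (22): forward likelihood + weighted reconstruction.
    return forward_loss + lam * rec_loss

def train_two_stage(forward_step, rec_step, stage1_epochs, stage2_epochs, lam=0.2):
    history = []
    # Stage 1: train the encoder-decoder alone on the forward loss.
    for _ in range(stage1_epochs):
        history.append(forward_step())
    # Stage 2: add the reconstructor; jointly train it while fine-tuning
    # the encoder-decoder on the combined loss.
    for _ in range(stage2_epochs):
        history.append(joint_loss(forward_step(), rec_step(), lam))
    return history

# Toy usage with constant stand-in "losses":
hist = train_two_stage(lambda: 1.0, lambda: 0.5, stage1_epochs=2, stage2_epochs=3)
```

In the real model each `*_step` would run an optimizer update; the point of the sketch is only the staging: the reconstruction term enters the objective after the encoder-decoder has converged under the forward loss.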
3.4 RecNet vs. Autoencoder
The whole architecture of the RecNet can be regarded as one autoencoder. Specifically, the encoder-decoder framework acts as the “encoder” component in the autoencoder, and its aim is to generate the fluent sentences describing the video contents. The reconstructor performs as the “decoder”, and its aim is to ensure the semantic correlations between the generated caption and the input video sequence. Also, the reconstruction losses defined in Eqs. (8), (12), and (13) act similarly to the reconstruction error defined in the autoencoder, which can further guarantee that the model can learn effective feature representation from the input video sequence . Compared to the encoder-decoder, the proposed reconstructor is able to exploit the backward flow from sentence to video, and acts as one regularizer to make the encoder-decoder component produce captions semantically correlated to input video sequences. Consequently, the video captioning performance can be improved further.
4 Experimental Results
In this section, we evaluate the proposed RecNet on the video captioning task over the benchmark datasets, namely Microsoft Research video to text (MSR-VTT) , Microsoft Research Video Description Corpus (MSVD) , and ActivityNet Captions (ActivityNet) . We compute the traditional evaluation metrics, namely BLEU-4 , METEOR , ROUGE-L , and CIDEr , with the code released on the Microsoft COCO evaluation server (https://github.com/tylin/coco-caption) and the ActivityNet evaluation server (https://github.com/ranjaykrishna/densevid_eval). In the following, we first introduce the benchmark datasets and the implementation details of the proposed method. Then, the experimental results are provided and discussed.
4.1 Datasets and Implementation Details
4.1.1 Datasets
MSR-VTT. It is the largest dataset for video captioning so far in terms of the number of video-sentence pairs and the vocabulary size. In the experiments, we use the initial version of MSR-VTT, referred to as MSR-VTT-10K, which contains 10K video clips from 20 categories. Each video clip is annotated with 20 sentences by 1,327 workers from Amazon Mechanical Turk. Therefore, the dataset results in a total of 200K clip-sentence pairs and 29,316 unique words. We use the public splits for training and testing, i.e., 6,513 for training, 497 for validation, and 2,990 for testing.
MSVD. It contains 1,970 YouTube short video clips, each depicting a single activity lasting 10 to 25 seconds. Each video clip has roughly 40 English descriptions. Similar to the prior work [28, 25], we take 1,200 video clips for training, 100 clips for validation, and 670 clips for testing.
ActivityNet. It is a large-scale video benchmark dataset with complex human activities for high-level video understanding tasks, including temporal action proposal, temporal action localization, and dense video captioning. Specifically, there are 10,024 videos for training, 4,926 for validation, and 5,044 for testing. Each video has multiple annotated events with starting and ending time-stamps as well as the corresponding captions.
4.1.2 Implementation Details
For sentence preprocessing, we remove the punctuation in each sentence, split each sentence by blank spaces, and convert all words to lowercase. Sentences longer than 30 words are truncated, and the word embedding size is set to 468.
For the encoder, we resize all frames of each video clip to the standard input size and feed them into Inception-V4 , which is pretrained on the ILSVRC-2012-CLS  classification dataset, extracting the 1,536-dimensional semantic feature of each frame from the last pooling layer. Inspired by , we choose 28 equally-spaced features from each video, and pad them with zero vectors if the number of features is less than 28. The input dimension of the decoder is 468, the same as that of the word embedding, while the hidden layer contains 512 units. For the reconstructor, the inputs are the hidden states of the decoder and thus have a dimension of 512. To ease the reconstruction loss computation, the dimension of the hidden layer is set to 1,536, the same as that of the frame features produced by the encoder.
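The frame-feature selection step (28 equally spaced features, zero-padded when fewer are available) can be sketched as:

```python
import numpy as np

def select_features(features, k=28):
    """Pick k equally spaced rows; zero-pad short sequences."""
    n, d = features.shape
    if n < k:
        pad = np.zeros((k - n, d))
        return np.vstack([features, pad])     # keep all, pad with zeros
    idx = np.linspace(0, n - 1, k).astype(int)  # equally spaced indices
    return features[idx]

short = select_features(np.ones((10, 1536)))    # clip with few frames
long_ = select_features(np.ones((100, 1536)))   # clip with many frames
```

Either way the encoder output has a fixed shape of (28, 1536), which keeps the downstream attention and reconstruction losses over a constant number of positions.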
The optimizer is applied with a learning rate of 1e-5 under reinforcement learning. To help the models trained with the REINFORCE algorithm converge quickly, we initialize them with the pre-trained models having the best CIDEr scores under cross-entropy training. The training stops when the CIDEr value on the validation set fails to increase for 20 successive epochs. During the testing phase, beam search with a beam size of 5 is used for the final caption generation.
4.2 Performance Comparisons
4.2.1 Performance on MSR-VTT
In this section, we first test the impact of different encoder-decoder architectures on video captioning, such as SA-LSTM and MP-LSTM. Both are popular encoder-decoder models and share a similar LSTM structure, except that SA-LSTM introduces an attention mechanism to aggregate frame features, while MP-LSTM relies on mean pooling. As shown in Table I, with the same VGG19 encoder, SA-LSTM yielded 35.6 and 25.4 on the BLEU-4 and METEOR metrics, respectively, while MP-LSTM only produced 34.8 and 24.7. Similar results were obtained when using AlexNet or GoogleNet as the encoder. Hence, we conclude that exploiting the temporal dynamics among frames with the attention mechanism performs better for sentence generation than mean pooling over the whole video.
Besides, we also introduced Inception-V4 as an alternative CNN for feature extraction in the encoder. It is observed that with the same encoder-decoder structure SA-LSTM, Inception-V4 yielded the best captioning performance compared with the other CNNs, such as AlexNet, GoogleNet, and VGG19. This is probably because Inception-V4 is a deeper network and better at semantic feature extraction. Hence, SA-LSTM equipped with Inception-V4 is employed as the encoder-decoder model in the proposed RecNet.
By stacking the global or local reconstructor on the encoder-decoder model SA-LSTM, we obtain the proposed encoder-decoder-reconstructor architecture: RecNet. Apparently, such a structure yielded significant gains in all captioning metrics. This demonstrates that the backward flow information introduced by the proposed reconstructor encourages the decoder to embed more semantic information and regularizes the generated captions to be more consistent with the video contents. As discussed in Sec. 1, RecNet can be viewed as a dual learning model in which the reconstruction loss additionally optimizes the parameters of the encoder and decoder, thereby strengthening the relationship between the video sequence and the caption and allowing the decoder to generate more semantically correlated sentences. More discussions and experimental results about the proposed reconstructor will be given in Section 4.4.
Moreover, we compared RecNet with two models [24, 66] that achieved great success in the 2016 MSR-VTT Challenge. As they utilized multimodal features such as aural and speech features to boost captioning, which is beyond the focus of this work, we restrict the comparison to the C3D features only. Also, as they reported results on the validation set of MSR-VTT, for a fair comparison we report our results on the same validation set. As shown in Table II, our proposed RecNet is superior to these two methods. Incorporating multimodal features into the proposed RecNet would be an interesting direction for us to pursue in the future.
|Model||BLEU-4||METEOR||ROUGE-L||CIDEr|
|Qin et al.||36.9||27.3||58.4||41.9|
|Shetty et al.||36.9||26.8||57.7||41.3|
Furthermore, we introduce reinforcement learning into the SA-LSTM model and combine it with the proposed reconstructor. The results on MSR-VTT are shown in the bottom rows of Table I. It can be observed that the REINFORCE algorithm contributes substantially to the performance improvement of the baseline model, especially on the CIDEr value, which is directly optimized by the model. Once again, we see the importance of the backward flow mined by the reconstructor: even when a network such as SA-LSTM(RL) already achieves strong performance, collaborating with the backward information flow leads to a further improvement.
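The reward-based objective can be sketched as self-critical REINFORCE: the evaluation reward (e.g., CIDEr) of a sampled caption is compared against that of the greedy decode, and the difference scales the sampled sentence's log-probability. A minimal numpy sketch with illustrative names, not the exact training code:

```python
import numpy as np

def self_critical_loss(sample_logprobs, sample_reward, greedy_reward):
    # advantage: how much the sampled caption beats the greedy baseline
    advantage = sample_reward - greedy_reward
    # minimizing this loss raises the log-probability of above-baseline captions
    # and lowers that of below-baseline ones
    return float(-advantage * np.sum(sample_logprobs))
```

Using the model's own greedy decode as the baseline removes the need for a learned reward estimator and reduces the variance of the REINFORCE gradient.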
4.2.2 Performance on MSVD
Also, we tested the proposed RecNet on the MSVD dataset, and compared it with more benchmark encoder-decoder models, such as GRU-RCN, HRNE, h-RNN, LSTM-E, aLSTMs, and LSTM-LS. The quantitative results are given in Table III. It is observed that RecNet_local and RecNet_global built on SA-LSTM performed the best and second best in all metrics, respectively. Besides, we introduced the reconstructor to S2VT to build another encoder-decoder-reconstructor model. The results show that both the global and local reconstructors bring improvements to the original S2VT in all metrics, which again demonstrates the benefits of modeling bidirectional cues for video captioning.
One interesting property of the proposed reconstructor, brought by the dual learning mechanism, is its capability to learn from limited amounts of data. The benefit of RecNet on a limited training set can be demonstrated by comparing the results on MSVD and MSR-VTT: the gains on MSVD are more evident than those on MSR-VTT, although the size of MSVD is only one third of that of MSR-VTT. Taking ResNet as an example, we halve the size of MSVD to further demonstrate the proposed model's ability to learn from limited data. As shown in Table IV, the performance on CIDEr declines when the data size is reduced. However, a larger relative improvement is achieved, with a gain of 9.3% on half of MSVD vs. 5.7% on the full MSVD.
|Data Size||SA-LSTM (Inception-V4)||RecNet||Performance Gain|
4.2.3 Performance on ActivityNet
To further verify the effectiveness of the proposed RecNet, experiments on the ActivityNet dataset are conducted and reported in Table V. We construct the video-sentence pairs by extracting the video segments indicated by the starting and ending timestamps, together with their associated sentences. As the ground-truth captions and timestamps of the test split are unavailable, we validate our model on the validation split.
Similar to the results on MSR-VTT and MSVD, it can be observed that the proposed RecNet outperforms the base model SA-LSTM. The main reason is that the backward flow is well captured by the proposed reconstructor. The same holds when the models are trained with the self-critical sequence training strategy. Methods realized in the traditional encoder-decoder architecture mainly exploit the forward flow (video to sentence), while neglecting the backward flow (sentence to video). In contrast, the proposed RecNet, realized in a novel encoder-decoder-reconstructor architecture, leverages both the forward and backward flows. Therefore, the relationship between the video sequence and the caption is further enhanced, which improves the video captioning performance.
4.3 Study on the Trade-off Parameter
In this section, we discuss the influence of the trade-off parameter λ in Eq. (22). The BLEU-4 scores obtained with different λ values are given in Fig. 6. First, it can be concluded again that adding the reconstruction loss (λ > 0) did improve the video captioning performance in terms of BLEU-4. Second, there is a trade-off between the forward likelihood loss and the backward reconstruction loss, as a too large λ may incur a noticeable deterioration in captioning performance. Thus, λ needs to be carefully selected to balance the contributions of the encoder-decoder and the reconstructor. As shown in Fig. 6, we empirically set λ to 0.2 and 0.1 for RecNet_global and RecNet_local, respectively.
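The empirical selection above amounts to a simple sweep: evaluate the model trained with each candidate trade-off weight and keep the one with the best validation BLEU-4. A toy sketch, where the evaluation callback is a hypothetical stand-in for training and scoring a model:

```python
def select_lambda(candidates, eval_bleu4):
    # evaluate each candidate trade-off weight on the validation set
    # and keep the one with the highest BLEU-4
    scores = {lam: eval_bleu4(lam) for lam in candidates}
    return max(scores, key=scores.get)
```

In practice each call to the callback is expensive (a full training run), which is why only a handful of candidate values are swept in Fig. 6.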
4.4 Study on the Reconstructors
A deeper study on the proposed reconstructor is discussed in this section.
4.4.1 Performance Comparisons between Different Reconstructors
The quantitative results of RecNet_global and RecNet_local on MSR-VTT are given in the bottom two rows of Table I. It can be observed that RecNet_local performs slightly better than RecNet_global. The reason mainly lies in the temporal dynamics modeling: RecNet_global employs mean pooling to reproduce the video representation and thus misses the local temporal dynamics, while RecNet_local includes the attention mechanism to exploit the local temporal dynamics for each frame reconstruction.
The performance gap between RecNet_global and RecNet_local on some metrics, such as METEOR and ROUGE-L, may not be significant. One possible reason is that the visual information of consecutive frames is very similar: as the video clips in the available datasets are short, the visual representations of the frames differ only slightly from one another, so the global and local structure information tends to be similar. Another possible reason is the complicated video-sentence relationship, which may lead to similar input information for RecNet_global and RecNet_local.
4.4.2 Effects of Reconstruction Order
Another interesting point about the proposed reconstructor is that we do not always need to constrain the reconstruction order. In fact, the reconstruction order is irrelevant when reproducing the global structure of the video: in RecNet_global, we finally apply mean pooling to the reconstructed feature sequence to acquire the video's global structure (the mean frame feature), which discards the original feature order anyway. For RecNet_local, the correct reconstruction order is needed and is addressed in an implicit manner: the LSTM employed in RecNet_local reconstructs the video features frame by frame, and the local losses are then pooled together as in Eq. (12).
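This order sensitivity can be made concrete: a global reconstruction loss compares mean-pooled features and is therefore invariant to frame order, while a local loss is computed frame by frame and then pooled, so it depends on the ordering. A simplified numpy sketch, where squared distances stand in for the paper's exact loss terms:

```python
import numpy as np

def global_reconstruction_loss(video_feats, reconstructed_feats):
    # order-insensitive: mean pooling over frames discards the original ordering
    return float(np.mean((video_feats.mean(axis=0)
                          - reconstructed_feats.mean(axis=0)) ** 2))

def local_reconstruction_loss(video_feats, reconstructed_feats):
    # order-sensitive: per-frame distances are computed frame by frame,
    # then pooled together
    per_frame = np.mean((video_feats - reconstructed_feats) ** 2, axis=1)
    return float(per_frame.mean())
```

Permuting the reconstructed frame sequence leaves the global loss unchanged but changes the local loss, which is exactly why only the local reconstructor must respect the frame order.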
Besides, we tested another method to enforce the correct reconstruction order for RecNet_local. Specifically, we tried replacing the attention mechanism in the reconstructor with the transposed attention matrix obtained in the decoder, which carries the ordering information of the input frame features. The results are given in Table VI. It is observed that, to a certain extent, transposing the attention matrix for feature reconstruction does help boost the baseline encoder-decoder model. However, it is inferior to the proposed RecNet_local. The reason is that a simple matrix transpose cannot effectively exploit the complicated relationship between the video features and the hidden states of the decoder.
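The transposed-attention variant can be sketched as follows: the decoder's word-to-frame attention matrix is transposed and renormalized so that each frame feature is reconstructed as a weighted combination of the decoder hidden states. A numpy sketch under the assumption of nonnegative (softmax-produced) attention weights; the shapes and the per-frame renormalization are our illustration of the idea, not the paper's exact formulation.

```python
import numpy as np

def transposed_attention_reconstruction(decoder_hidden, attention):
    # decoder_hidden: (num_words, hidden_dim)
    # attention: (num_words, num_frames), nonnegative weights from the
    # decoder's softmax attention over the input frames
    weights = attention.T                                    # (num_frames, num_words)
    weights = weights / weights.sum(axis=1, keepdims=True)   # renormalize per frame
    # each frame feature becomes a convex combination of decoder hidden states
    return weights @ decoder_hidden                          # (num_frames, hidden_dim)
```

Because the transposed weights are fixed by the decoder, the reconstructor has no learnable attention of its own here, which matches the observation that this variant cannot model the video-sentence relationship as flexibly as a dedicated attention mechanism.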
4.4.3 Jointly Reconstructing the Global and Local Structures
We also propose a new model that jointly employs the global and local reconstructors, considering both structures simultaneously to produce a more reliable reconstruction of the video representation. Its performance on MSR-VTT, MSVD, and ActivityNet is reported in Table VII. The joint RecNet yields consistent performance improvements over both RecNet_global and RecNet_local, especially on the METEOR and CIDEr metrics. The consistent improvement can also be observed when the RecNets are trained with the self-critical strategy. For example, on the MSR-VTT dataset, the joint RecNet(RL) has a better CIDEr score (49.5) than RecNet_global(RL) (48.4) and RecNet_local(RL) (48.7), and its METEOR score (27.7) also outperforms those of RecNet_global(RL) (27.5) and RecNet_local(RL) (27.5). Such improvements on different benchmark datasets show that jointly learning the global and local reconstructors helps exploit the backward flow (sentence to video) more comprehensively, thereby further boosting the video captioning performance.
4.4.4 Discussions on the Reconstructor
Moreover, we study in more detail how well the reconstruction performs. First, we record the training losses as well as the CIDEr scores during the training process, with and without the proposed reconstructor, as illustrated in Fig. 7, to examine the effects of the reconstructor.
It can be observed that the cross-entropy loss defined by Eq. (1) and the CIDEr score consistently converge during the training process of the baseline model. Relying on the early stopping strategy, the parameters of the baseline model are determined. Afterward, the reconstructor, specifically that of RecNet_local, is employed to reconstruct the local structure of the original video from the hidden state sequence generated by the decoder. During this process, the reconstruction loss defined by Eq. (12) is used to train the reconstructor, and the total training loss defined by Eq. (22) is used to train the encoder and decoder. It can be observed that the reconstruction loss decreases as training proceeds, while the cross-entropy loss increases slightly at the beginning and then consistently decreases. This is reasonable, as the untrained reconstructor can hardly provide valid information to train the encoder and decoder in the first few iterations. The same observation holds for the CIDEr score. Hence, it can be concluded that the proposed reconstructor can help train the encoder and decoder and thereby improve the video captioning performance. Also, the decrease of the reconstruction loss clearly demonstrates that the reconstruction can reproduce meaningful visual information from the hidden state sequence generated by the decoder.
Second, we examine how the hidden state in the decoder changes after the reconstructor is employed. Generally, in video captioning models trained with the cross-entropy loss, the decoder is encouraged to maximize the likelihood of each word given the previous hidden state and the ground-truth word at the previous step. During inference, however, errors accumulate along the generated sentence, as no ground-truth word is available to the decoder before predicting each word. As such, a discrepancy between training and inference is inevitably introduced, which is also referred to as the exposure bias.
To examine how the hidden state in the decoder changes after the reconstruction is introduced, we consider the distributions of the last hidden states of the sequence in the decoder, since they encode the necessary information about the whole sequential input. Specifically, we visualize the distributions of the last hidden states, taken when the LSTM in the decoder reaches the maximum step or the end-of-sentence token is predicted, in the training and inference stages, respectively. The hidden state visualizations are illustrated in Fig. 8. The hidden states are from the same batch of size 200, and we reduce their dimension to 2 with t-SNE. It is obvious that the baseline model (Fig. 8 (a)) presents a large discrepancy between the training and inference stages, while the proposed RecNet (Fig. 8 (b)) significantly reduces this discrepancy. This is also one of the reasons that RecNet performs better than the competitor models.
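Beyond the t-SNE visualization, the training/inference gap can be quantified crudely, for instance by the distance between the centroids of the two hidden-state populations. The following numpy sketch is our own illustrative proxy for the discrepancy shown in Fig. 8, not a measure used in the paper:

```python
import numpy as np

def hidden_state_discrepancy(train_states, infer_states):
    # train_states, infer_states: (num_samples, hidden_dim) arrays of
    # last hidden states collected in training and inference modes;
    # the gap is proxied by the distance between their centroids
    return float(np.linalg.norm(train_states.mean(axis=0)
                                - infer_states.mean(axis=0)))
```

A model that reduces exposure bias should drive this centroid distance toward zero, consistent with the tighter overlap observed for RecNet in Fig. 8 (b).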
4.5 Qualitative Analysis
Some qualitative examples are shown in Fig. 9. It can be observed that the proposed RecNets with global and local reconstructors generally produce more accurate captions than the typical encoder-decoder model SA-LSTM. In the second example, SA-LSTM generated “a woman is talking”, which misses the core subject of the video, i.e., “makeup”. By contrast, the captions produced by RecNet_global and RecNet_local are “a woman is talking about makeup” and “a woman is putting makeup on her face”, which apparently are more accurate. RecNet_local even generates the word “face”, resulting in a more descriptive caption. More results can be found in the supplementary file.
In this paper, we proposed RecNet, a novel encoder-decoder-reconstructor architecture for video captioning that exploits the bidirectional cues between the natural language description and the video content. Specifically, to exploit the backward flow from description to video, two types of reconstructors were devised to reproduce the global and local structures of the input video, respectively. An additional architecture fusing the two types of reconstructors was also presented and compared with the models that reproduce either the global or the local structure alone. The forward likelihood and backward reconstruction losses were jointly modeled to train the proposed network. Besides, we employed the REINFORCE algorithm to directly optimize the CIDEr score and fused the reward-based loss with the traditional reconstruction loss to further improve the captioning performance. Extensive experimental results on benchmark datasets demonstrate the superiority of the proposed RecNet over existing encoder-decoder models in terms of the typical video captioning metrics.
This work was supported by the National Key Research and Development Plan of China under Grant 2017YFB1300205, NSFC Grant no. 61573222, and Major Research Program of Shandong Province 2018CXGC1503.
-  L. Ma, Z. Lu, L. Shang, and H. Li, “Multimodal convolutional neural networks for matching image and sentence,” in ICCV, 2015, pp. 2623–2631.
-  J. Wang, W. Liu, S. Kumar, and S.-F. Chang, “Learning to hash for indexing big data—a survey,” Proceedings of the IEEE, vol. 104, no. 1, pp. 34–57, 2016.
-  W. Liu and T. Zhang, “Multimedia hashing and networking,” IEEE MultiMedia, vol. 23, no. 3, pp. 75–79, July 2016.
-  J. Song, L. Gao, L. Liu, X. Zhu, and N. Sebe, “Quantization-based hashing: a general framework for scalable image and video retrieval,” Pattern Recognition, 2017.
-  J. Wang, T. Zhang, N. Sebe, H. T. Shen et al., “A survey on learning to hash,” TPAMI, 2017.
-  L. Ma, Z. Lu, and H. Li, “Learning to answer questions from image using convolutional neural network.” in AAAI, vol. 3, no. 7, 2016, p. 16.
-  V. Voykinska, S. Azenkot, S. Wu, and G. Leshed, “How blind people interact with visual content on social networking services,” pp. 1584–1595, 2016.
-  A. Karpathy, A. Joulin, and F. F. F. Li, “Deep fragment embeddings for bidirectional image sentence mapping,” in NIPS, 2014.
-  O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: A neural image caption generator,” in CVPR, June 2015.
-  ——, “Show and tell: Lessons learned from the 2015 mscoco image captioning challenge,” TPAMI, vol. 39, no. 4, pp. 652–663, 2017.
-  Z. Ren, X. Wang, N. Zhang, X. Lv, and L.-J. Li, “Deep reinforcement learning-based image captioning with embedding reward,” arXiv preprint arXiv:1704.03899, 2017.
-  W. Jiang, L. Ma, X. Chen, H. Zhang, and W. Liu, “Learning to guide decoding for image captioning,” in AAAI, 2018.
-  L. Chen, H. Zhang, J. Xiao, L. Nie, J. Shao, W. Liu, and T.-S. Chua, “Sca-cnn: Spatial and channel-wise attention in convolutional networks for image captioning,” in CVPR, 2017.
-  R. Xu, C. Xiong, W. Chen, and J. J. Corso, “Jointly modeling deep video and compositional text to bridge vision and language in a unified framework.” in AAAI, 2015.
-  L. Zhou, Y. Zhou, J. J. Corso, R. Socher, and C. Xiong, “End-to-end dense video captioning with masked transformer,” in CVPR, 2018.
-  Y. Li, T. Yao, Y. Pan, H. Chao, and T. Mei, “Jointly localizing and describing events for dense video captioning,” in CVPR, 2018.
-  J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in CVPR, 2015.
-  S. Venugopalan, M. Rohrbach, J. Donahue, R. J. Mooney, T. Darrell, and K. Saenko, “Sequence to sequence - video to text,” in ICCV, 2015. [Online]. Available: https://doi.org/10.1109/ICCV.2015.515
-  S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and K. Saenko, “Translating videos to natural language using deep recurrent neural networks,” in NAACL, 2015.
-  Y. Pan, T. Yao, H. Li, and T. Mei, “Video captioning with transferred semantic attributes,” arXiv preprint arXiv:1611.07675, 2016.
-  W. Jiang, L. Ma, Y.-G. Jiang, W. Liu, and T. Zhang, “Recurrent fusion network for image captioning,” in ECCV, 2018, pp. 499–515.
-  X. Chen, L. Ma, W. Jiang, J. Yao, and W. Liu, “Regularizing rnns for caption generation by reconstructing the past with the present,” in CVPR, 2018, pp. 7995–8003.
-  V. Ramanishka, A. Das, D. H. Park, S. Venugopalan, L. A. Hendricks, M. Rohrbach, and K. Saenko, “Multimodal video description,” in ACM MM, 2016.
-  Q. Jin, J. Chen, S. Chen, Y. Xiong, and A. Hauptmann, “Describing videos using multi-modal fusion,” in ACM MM, 2016.
-  L. Yao, A. Torabi, K. Cho, N. Ballas, C. J. Pal, H. Larochelle, and A. C. Courville, “Describing videos by exploiting temporal structure,” in ICCV, 2015. [Online]. Available: https://doi.org/10.1109/ICCV.2015.512
-  L. Gao, Z. Guo, H. Zhang, X. Xu, and H. T. Shen, “Video captioning with attention-based lstm and semantic consistency,” IEEE Transactions on Multimedia, 2017.
-  J. Song, Z. Guo, L. Gao, W. Liu, D. Zhang, and H. T. Shen, “Hierarchical lstm with adjusted temporal attention for video captioning,” arXiv preprint arXiv:1706.01231, 2017.
-  Y. Pan, T. Mei, T. Yao, H. Li, and Y. Rui, “Jointly modeling embedding and translation to bridge video and language,” in CVPR, 2016.
-  Y. Liu, X. Li, and Z. Shi, “Video captioning with listwise supervision.” in AAAI, 2017.
-  S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  K. Cho, B. Van Merriënboer, D. Bahdanau, and Y. Bengio, “On the properties of neural machine translation: Encoder-decoder approaches,” arXiv preprint arXiv:1409.1259, 2014.
-  W. Zhang, X. Yu, and X. He, “Learning bidirectional temporal cues for video-based person re-identification,” TCSVT, 2017.
-  Z. Tu, Y. Liu, L. Shang, X. Liu, and H. Li, “Neural machine translation with reconstruction.” in AAAI, 2017, pp. 3097–3103.
-  D. He, Y. Xia, T. Qin, L. Wang, N. Yu, T. Liu, and W.-Y. Ma, “Dual learning for machine translation,” in NIPS, 2016, pp. 820–828.
-  P. Luo, G. Wang, L. Lin, and X. Wang, “Deep dual learning for semantic image segmentation,” in CVPR, 2017, pp. 2718–2726.
-  M. Ranzato, S. Chopra, M. Auli, and W. Zaremba, “Sequence level training with recurrent neural networks,” arXiv preprint arXiv:1511.06732, 2015.
-  S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel, “Self-critical sequence training for image captioning,” arXiv preprint arXiv:1612.00563, 2016.
-  A. Kojima, T. Tamura, and K. Fukunaga, “Natural language description of human activities from video images based on concept hierarchy of actions,” IJCV, vol. 50, no. 2, pp. 171–184, 2002.
-  S. Guadarrama, N. Krishnamoorthy, G. Malkarnenkar, S. Venugopalan, R. Mooney, T. Darrell, and K. Saenko, “Youtube2text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition,” in ICCV, 2013.
-  M. Rohrbach, W. Qiu, I. Titov, S. Thater, M. Pinkal, and B. Schiele, “Translating video content to natural language descriptions,” in ICCV, 2013.
-  A. Rohrbach, M. Rohrbach, W. Qiu, A. Friedrich, M. Pinkal, and B. Schiele, “Coherent multi-sentence video description with variable level of detail,” in GCPR, 2014.
-  C. Zhang and Y. Tian, “Automatic video description generation via lstm with joint two-stream encoding,” in ICPR, 2016.
-  Z. Shen, J. Li, Z. Su, M. Li, Y. Chen, Y.-G. Jiang, and X. Xue, “Weakly supervised dense video captioning,” in CVPR, 2017.
-  J. Wang, W. Jiang, L. Ma, W. Liu, and Y. Xu, “Bidirectional attentive fusion with context gating for dense video captioning,” in CVPR, 2018, pp. 7190–7198.
-  D. Tran, L. D. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “C3D: generic features for video analysis,” CoRR, vol. abs/1412.0767, 2014. [Online]. Available: http://arxiv.org/abs/1412.0767
-  A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and F. Li, “Large-scale video classification with convolutional neural networks,” in CVPR, 2014. [Online]. Available: https://doi.org/10.1109/CVPR.2014.223
-  R. Pasunuru and M. Bansal, “Reinforced video captioning with entailment rewards,” arXiv preprint arXiv:1708.02300, 2017.
-  Y. Xia, T. Qin, W. Chen, J. Bian, N. Yu, and T.-Y. Liu, “Dual supervised learning,” arXiv preprint arXiv:1707.00415, 2017.
-  B. Wang, L. Ma, W. Zhang, and W. Liu, “Reconstruction network for video captioning,” in CVPR, 2018, pp. 7622–7631.
-  F. Caba Heilbron, V. Escorcia, B. Ghanem, and J. Carlos Niebles, “Activitynet: A large-scale video benchmark for human activity understanding,” in CVPR, 2015, pp. 961–970.
-  J. Xu, T. Mei, T. Yao, and Y. Rui, “Msr-vtt: A large video description dataset for bridging video and language,” in CVPR, 2016.
-  C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in AAAI, 2017. [Online]. Available: http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14806
-  D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” CoRR, vol. abs/1409.0473, 2014. [Online]. Available: http://arxiv.org/abs/1409.0473
-  W. Zaremba and I. Sutskever, “Reinforcement learning neural turing machines-revised,” arXiv preprint arXiv:1505.00521, 2015.
-  G. E. Hinton and R. Salakhutdinov, “Reducing the Dimensionality of Data with Neural Networks,” Science, vol. 313, pp. 504–507, July 2006.
-  D. L. Chen and W. B. Dolan, “Collecting highly parallel data for paraphrase evaluation,” in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. ACL, 2011, pp. 190–200.
-  K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: a method for automatic evaluation of machine translation,” in ACL. Association for Computational Linguistics, 2002, pp. 311–318.
-  S. Banerjee and A. Lavie, “Meteor: An automatic metric for mt evaluation with improved correlation with human judgments,” in ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, vol. 29, 2005, pp. 65–72.
-  C.-Y. Lin, “Rouge: A package for automatic evaluation of summaries,” in Text summarization branches out: Proceedings of the ACL-04 workshop, vol. 8. Barcelona, Spain, 2004.
-  R. Vedantam, C. Lawrence Zitnick, and D. Parikh, “Cider: Consensus-based image description evaluation,” in CVPR, 2015.
-  X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick, “Microsoft coco captions: Data collection and evaluation server,” arXiv preprint arXiv:1504.00325, 2015.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” IJCV, vol. 115, no. 3, pp. 211–252, 2015.
-  M. D. Zeiler, “Adadelta: an adaptive learning rate method,” arXiv preprint arXiv:1212.5701, 2012.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  J. Xu, T. Mei, T. Yao, and Y. Rui, “Msr-vtt: A large video description dataset for bridging video and language [supplementary material].” CVPR, October 2016. [Online]. Available: https://www.microsoft.com/en-us/research/publication/msr-vtt-large-video-description-dataset-bridging-video-language-supplementary-material/
-  R. Shetty and J. Laaksonen, “Frame-and segment-level features and candidate pool evaluation for video caption generation,” in ACMMC. ACM, 2016, pp. 1073–1076.
-  N. Ballas, L. Yao, C. Pal, and A. Courville, “Delving deeper into convolutional networks for learning video representations,” arXiv preprint arXiv:1511.06432, 2015.
-  P. Pan, Z. Xu, Y. Yang, F. Wu, and Y. Zhuang, “Hierarchical recurrent neural encoder for video representation with application to captioning,” in CVPR, 2016, pp. 1029–1038.
-  H. Yu, J. Wang, Z. Huang, Y. Yang, and W. Xu, “Video paragraph captioning using hierarchical recurrent neural networks,” in CVPR, 2016, pp. 4584–4593.
-  L. v. d. Maaten and G. Hinton, “Visualizing data using t-sne,” Journal of Machine Learning Research, vol. 9, no. Nov, pp. 2579–2605, 2008.