1 Introduction
Video captioning refers to the automatic generation of a natural language description that summarizes an input video. It has widespread applications, including human-computer interaction and video retrieval. With the rapid development of deep learning techniques, this topic has attracted intensive research interest.
Inspired by the success of the encoder-decoder framework in machine translation , most work on video captioning employs a convolutional neural network (CNN) as an encoder to obtain a fixed-length vector representation, and then a recurrent neural network (RNN) as a decoder to generate a caption. These CNN-plus-RNN approaches translate directly from the video representation to language without taking any high-level semantic concepts into account. Recent work [2, 3] improves the performance of visual captioning by adding explicit high-level semantic attributes of the image/video. However, most methods only use attributes learnt from a single modality, which may fail to adequately capture the key actors, objects, and their interactions in the scene. The methods [2, 4, 5] that integrate semantic attributes into the RNN-based caption generation process mainly constrain the initialization of the first RNN step or rely on soft attention. They represent the semantic vector as a whole, and thus cannot mine the meanings of individual words when generating the caption.
To address these limitations, we propose a novel deep architecture, named Multimodal Semantic Attention Network (MSAN), which incorporates multimodal semantic attributes into sequence learning for video captioning. We capture three modalities of features and their corresponding semantic attributes to represent videos. Take the video in Figure 1 as an example: the semantic attributes learnt from image frames often depict static objects and scenes (e.g., “boy”, “child”, and “guitar”), while the semantics extracted from optical flow frames and video clips often convey temporal dynamic motions (e.g., “playing”, “doing”, and “holding”). This makes the attributes mined from different modalities complementary to each other for sentence generation (e.g., “a small child is playing the guitar”). Meanwhile, we investigate how the attributes from the three sources can be leveraged to enhance video captioning, and propose a new fusion method that extends each weight matrix of the conventional RNN to an ensemble of attribute-dependent weight matrices. Considering that different semantic attributes have different impacts on sentence generation, we adopt an attention-based fusion strategy that lets the model selectively focus on different semantic parts of the video each time it produces a word. Our main contributions can be summarized as follows:
1) We propose a new encoder-decoder network exploiting the multimodal semantic attributes for video captioning.
2) We add a multimodal semantic classification loss to our deep neural network, which is optimized jointly with the video captioning loss.
3) We incorporate the attention mechanism into the LSTM decoder to automatically focus on different semantic attributes for caption generation.
4) We perform comprehensive evaluations on two popular video captioning benchmarks, demonstrating that the proposed method outperforms previous state-of-the-art approaches.
2 Related Work
In this section, we briefly review related work in three aspects: video captioning, video captioning with attention, and video captioning with semantic attributes.
Video Captioning. Research on video captioning mainly follows two dimensions: template-based language methods [6, 7] and sequence learning approaches [8, 9, 10]. The former predefines a set of templates for sentence generation following specific grammar rules; Li et al.  extract phrases related to detected objects, attributes, and their relationships for video captioning. Obviously, this approach depends heavily on the sentence templates and always generates sentences with the same syntactical structure. Different from template-based methods, sequence learning methods directly translate the video content into a sentence by learning a probability distribution in the common space of visual content and textual sentences.
Video captioning with attention. Attention mechanisms have been used to boost a network’s ability to select relevant features from the corresponding parts of the input. In video captioning, people do not describe everything in a video; instead, they tend to talk more about semantically important regions and objects.  utilizes a spatial attention-based mechanism to learn where to focus in the image. This work is followed by , which introduces a temporal attention-based mechanism to exploit temporal structure for video captioning. Recently,  proposed a multimodal fusion attention to fuse information across different modalities.
Video captioning with semantic attributes. Attributes are properties observed in visual content with rich semantic cues, and recent work [2, 3, 4, 5, 8, 14] shows that adding high-level semantic concepts can further improve visual captioning. Multiple Instance Learning is used as an attribute detector in , and sentences are then generated based on the detector’s outputs.  applies retrieved sentences as additional semantic information to guide the LSTM when generating captions. A transfer unit is designed in  to transfer the information of semantic attributes from images and videos to boost video captioning. In , a semantic attention model is proposed that selectively attends to semantic concepts through a soft attention mechanism, assigning a score to each detected attribute based on its relevance to the previously predicted word; in contrast, our work regards the whole video as a label to learn the semantic probability distribution. In general, our work differs from most of the aforementioned sequence learning models, which use only a single modality, in that we extract semantic attributes from multimodal features. Meanwhile, we add a multimodal semantic classification loss to the deep neural network, which is optimized jointly with the video captioning loss.
3 Multimodal Semantic Attention Network
The overall architecture of MSAN is shown in Figure 2. It mainly consists of two modules: an LSTM-based encoder with multimodal semantic attributes and an attention-based LSTM decoder. The network is trained end-to-end with a joint loss on all aforementioned targets.
3.1 Encoder with Multimodal Semantic Attributes
In the encoding phase, we extract three CNN feature sequences from frames, video clips, and optical flow frames for each video , then employ three two-layer LSTMs to model the obtained feature sequences and obtain the corresponding video representations , , and . The video is finally represented as  by concatenating the three video representations. Moreover, we add a multimodal semantic attribute detector to exploit high-level semantic attributes for further improving video captioning.
Figure 3 shows the semantic attribute detector architecture. Once the visual features , , and  are obtained, we adopt a multi-label classification approach to learn the semantic attributes , and , which represent the probability distributions over the high-level attributes for a video. Specifically, we first sort all words extracted from the training and validation sets by their frequency, then remove function words (such as “a” and “the”), and finally select the top K words, including verbs, nouns, and adjectives, as semantic attributes. Suppose there are  training examples, and  is the label vector of the -th video, where  if its caption includes the word , and  otherwise. Let  represent the -th video feature vector obtained from the multiple LSTM layers. We then employ an MLP to learn a function  from the training examples , where  is the number of input dimensions and the number of output dimensions equals the number of semantic attributes. Let  be the predicted label vector for the -th video, which is the semantic attribute distribution we want to learn. The multi-label classification loss is defined as follows,
where  denotes the parameters of the encoder model,  is a -dimensional vector,  is the logistic sigmoid function, and  is implemented as a multilayer perceptron.
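As a concrete illustration of the attribute pipeline described above, the following plain-Python sketch builds the top-K attribute vocabulary from caption frequencies, constructs the multi-hot label vectors, and computes the per-attribute logistic (sigmoid cross-entropy) loss. All function names and the stop-word list are our own illustrative choices, not the paper's.

```python
import math
from collections import Counter

def build_attribute_vocab(captions, k, stopwords=frozenset({"a", "an", "the", "is"})):
    """Sort all words by frequency, drop function words, keep the top-k as attributes."""
    counts = Counter(w for c in captions for w in c.lower().split() if w not in stopwords)
    return [w for w, _ in counts.most_common(k)]

def attribute_label(caption, vocab):
    """Multi-hot label vector: 1.0 if the caption contains the attribute word, else 0.0."""
    words = set(caption.lower().split())
    return [1.0 if w in words else 0.0 for w in vocab]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_loss(logits, label):
    """Sigmoid cross-entropy summed over attributes (the multi-label classification loss)."""
    loss = 0.0
    for z, y in zip(logits, label):
        p = sigmoid(z)
        loss -= y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return loss
```

For the two example captions “a small child is playing the guitar” and “a boy is playing a guitar”, the two most frequent attribute words are “playing” and “guitar”, matching the motion/object split discussed earlier.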
3.2 Attention-based LSTM Decoder
In the decoding phase, an attention LSTM model is proposed to generate the textual sentence by combining both visual features and semantic features. Given a video, the goal of video captioning is to output a textual sentence , where  consists of  words. The video sentence generation problem can be formulated as minimizing the following captioning loss function,
which is the negative log-probability of the correct textual sentence given the video and the detected multimodal semantic attributes. In the training phase, the total loss being optimized is the sum of . By minimizing this total loss, the contextual relationship within the sentence can be guaranteed given the video feature and its learnt multimodal semantic attributes.
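In standard sequence-learning notation (the symbols below, e.g. $v$ for the video feature and $s$ for the attribute vector, are our own choices rather than the paper's), the joint objective described above can be sketched as:

```latex
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{cls}} + \mathcal{L}_{\text{cap}},
\qquad
\mathcal{L}_{\text{cap}} = -\sum_{t=1}^{T} \log p\left(w_t \mid w_{<t},\, v,\, s\right)
```

Here $\mathcal{L}_{\text{cls}}$ is the multi-label classification loss of the encoder and $\mathcal{L}_{\text{cap}}$ is the captioning loss of the decoder; both are minimized simultaneously.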
As discussed in Equation (2), we employ the LSTM-based decoder to generate a sentence for each video. Given the input word , the last hidden state , and the last memory cell , the LSTM is updated for time step  as follows:
Let  denote one subscript among , and  in the above equations. Here  and  represent the weight matrices, and  represent the states of the input gate, forget gate, output gate, memory cell, and squashed input, respectively, at time .  is the hyperbolic tangent function, and  is an indicator function indicating that the video feature vector  is fed into the LSTM only at the beginning. For simplicity, bias terms are omitted throughout the paper.
To better exploit the complementary information from multiple semantic attributes, we propose to combine them to compute the weight matrices using an attention unit. We extend each weight matrix of the conventional LSTM to an ensemble of attribute-dependent weight matrices so as to mine the meanings of individual words when generating the caption. Namely, we replace  with  for each , where
is a multimodal semantic attribute vector. Specifically, we define two weight tensors  and , where  is the number of hidden units and  is the dimension of the word embedding.  and  can be written as:
where  and  represent the -th 2D slices of  and , respectively, which are associated with probability , and  is the -th element in . This implicitly specifies  LSTMs in total. In order to combine the  LSTMs, we propose to learn an attention-based multimodal semantic attribute vector  at each time step . It is defined as
where  indicates that we have learned three semantic attributes . The attention weight  reflects the relevance of the -th semantic attribute in the input video given all previously generated words. Hence, we design an attention unit to calculate , which takes both the previous hidden state  and the -th semantic attribute vector  as input and returns the unnormalized relevance score :
are the parameters estimated together with all the other parameters of the network. Once the relevance scores  are computed, we normalize them to obtain the attention weights.
It can be seen that the semantic attribute vector differs across time steps , which makes the model selectively focus on different semantic parts of the video each time it produces a word.
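An attention unit of this kind can be sketched in plain Python as follows. The additive scoring form score_i = v · tanh(W_h h + W_s s_i) and all names are assumptions for illustration, since the paper's exact formula is not reproduced here; any trainable scoring function of (h, s_i) would fit the description above.

```python
import math

def _matvec(M, x):
    """Plain matrix-vector product over nested lists."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def attention_weights(h_prev, attrs, W_h, W_s, v):
    """Unnormalized relevance scores from (previous hidden state, attribute vector),
    normalized with a softmax over the modalities."""
    scores = []
    for s in attrs:
        hidden = [math.tanh(a + b) for a, b in zip(_matvec(W_h, h_prev), _matvec(W_s, s))]
        scores.append(sum(vi * hi for vi, hi in zip(v, hidden)))
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(sc - m) for sc in scores]
    z = sum(exps)
    return [e / z for e in exps]

def fused_attribute_vector(h_prev, attrs, W_h, W_s, v):
    """Attention-weighted combination of the per-modality attribute vectors."""
    alphas = attention_weights(h_prev, attrs, W_h, W_s, v)
    dim = len(attrs[0])
    return [sum(a * s[j] for a, s in zip(alphas, attrs)) for j in range(dim)]
```

Because the weights depend on the previous hidden state, the fused attribute vector changes at every decoding step, which is exactly what lets the decoder emphasize different modalities for different words.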
Training such a model as defined in (10) and (11) is the same as jointly training an ensemble of  LSTMs. Though appealing, the number of parameters is proportional to , which is unrealistic for large . We therefore factorize and define  in (10) and (11) as:
where  represents the element-wise multiplication operator. In (15) and (16),  and  are shared among all the captions, which effectively captures common linguistic patterns. Meanwhile, the diagonal terms ( and ) are captured by , which accounts for the specific semantic attributes of the video under test.  are obtained in the same way as in equation (17). Hence, via the decomposition in (15) and (16), our network effectively learns an ensemble of  sets of LSTM parameters, with one key word in  corresponding to one set of LSTM parameters. By sharing  and  when composing each member of the ensemble, we remedy the problem of a large .
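The parameter saving from this factorization can be illustrated with a small sketch. The three-factor form W(s) ≈ Wa · diag(Wb s) · Wc with shared outer factors follows the SCN-style decomposition that the shared/diagonal description above suggests; the exact shapes and all names here are assumptions.

```python
def matvec(M, x):
    """Plain matrix-vector product over nested lists."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def factored_matvec(Wa, Wb, Wc, s, x):
    """Apply the attribute-dependent matrix Wa @ diag(Wb @ s) @ Wc to x
    without ever materializing the full matrix."""
    gate = matvec(Wb, s)      # attribute-dependent diagonal term
    inner = matvec(Wc, x)     # shared input projection
    return matvec(Wa, [g * i for g, i in zip(gate, inner)])

def ensemble_params(k, nh, nx):
    """Naive ensemble: one full nh x nx weight matrix per attribute."""
    return k * nh * nx

def factored_params(k, nh, nx, nf):
    """Shared factors: Wa is nh x nf, Wb is nf x k, Wc is nf x nx."""
    return nh * nf + nf * k + nf * nx
```

With 512 hidden units, 512 factors, and an attribute vocabulary of 1,000 words, the naive ensemble needs k·nh·nx parameters per gate while the factored form needs only nh·nf + nf·k + nf·nx, several hundred times fewer.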
4 Experiments
We evaluate the proposed MSAN model on two standard video captioning benchmarks: MSVD  and MSR-VTT . The MSVD dataset consists of 1,967 short videos. Following the setting used in prior works [6, 10], we take 1,200 videos for training, 100 for validation, and 670 for testing. The MSR-VTT dataset contains 10,000 video clips in 20 well-defined categories. We use the data split defined in : 6,513 videos for training, 497 for validation, and 2,990 for testing.
4.1 Experimental Settings
For training, all parameters in the MSAN are initialized from a uniform distribution in [-0.05, 0.05], and all biases are initialized to zero. We set both the number of hidden units and the number of factors to 512, and word embedding vectors are initialized with the publicly available word2vec vectors. The maximum number of epochs for both datasets is 20, and gradients are clipped if the norm of the parameter vector exceeds 5. We use dropout and early stopping on the validation sets, and the Adam algorithm with learning rate  is utilized for optimization.
In the test stage, we adopt the beam search strategy for caption generation and set the beam size to 5. For quantitative evaluation of our proposed models, we adopt three common metrics in video captioning: BLEU@N , METEOR , and CIDEr-D . All metrics are computed using the codes released by the Microsoft COCO Evaluation Server .
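The beam search used at test time can be sketched generically as follows. This is a toy illustration: `step_fn` abstracts over the decoder, and the tiny bigram table in `toy_step` is hypothetical, not the paper's actual language model.

```python
import math

def beam_search(step_fn, bos, eos, beam_size=5, max_len=10):
    """Beam search over a conditional next-word distribution.
    step_fn(prefix) returns {word: probability} for the next word."""
    beams = [([bos], 0.0)]          # (token sequence, cumulative log-probability)
    done = []
    for _ in range(max_len):
        cand = []
        for seq, lp in beams:
            for w, p in step_fn(tuple(seq)).items():
                cand.append((seq + [w], lp + math.log(p)))
        cand.sort(key=lambda t: t[1], reverse=True)
        beams = []
        for seq, lp in cand[:beam_size]:
            # finished hypotheses leave the beam and wait for the final ranking
            (done if seq[-1] == eos else beams).append((seq, lp))
        if not beams:
            break
    return max(done or beams, key=lambda t: t[1])[0]

def toy_step(prefix):
    """Hypothetical bigram model for demonstration only."""
    table = {
        "<s>": {"a": 0.6, "the": 0.4},
        "a": {"man": 0.9, "</s>": 0.1},
        "the": {"dog": 0.9, "</s>": 0.1},
        "man": {"</s>": 1.0},
        "dog": {"</s>": 1.0},
    }
    return table[prefix[-1]]
```

Calling `beam_search(toy_step, "<s>", "</s>", beam_size=5)` returns the highest-probability sequence `["<s>", "a", "man", "</s>"]` under this toy model; keeping several hypotheses per step is what lets the decoder recover sentences that a greedy first choice would miss.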
4.2 Quantitative Analysis
First, we evaluate the effect of using different features and their combinations. In our MSAN model, we test six variants using six different semantic attributes, and concatenate  as the final video feature , which is fed into the LSTM decoder at the beginning. “MSAN” uses only the semantic attributes , and “MSAN” uses two semantic attributes  and . The same denotations are used for the other four models.  denote the semantic attributes  and , respectively. Table 1 shows the performance of the different models on the MSVD dataset.
From Table 1, we can draw the following conclusions. 1) The results across the six evaluation metrics consistently indicate that our proposed MSAN outperforms all the state-of-the-art methods. In particular, the METEOR and CIDEr-D of our MSAN reach 35.3% and 79.6%, which are to date the highest reported on the MSVD dataset. 2) When incorporating different attributes into MSAN, such as MSAN, MSAN, and MSAN, the results across the six evaluation metrics gradually increase, which indicates that visual representations augmented with multimodal semantic attributes do benefit the learning of video sentence generation. Notably, MSAN improves over MSAN and MSAN, which demonstrates the advantage of leveraging the learnt multimodal semantic attributes for boosting video captioning. 3) Even when using only one kind of semantic attributes , MSAN achieves results competitive with LSTM-TSA across the evaluation metrics, which proves the effectiveness of the proposed MSAN framework.
Second, we further compare our MSAN model with three other models that adopt different attribute fusion strategies for caption generation. For a fair comparison, all these methods are based on the single semantic attributes  and use the same video feature . In the denotations “LSTM-/LSTM-”,  denotes the video feature vector,  denotes the semantic attribute vector derived from video RGB frames, and  denotes the concatenation of  and , which is fed into a standard LSTM decoder only at the initial time step. The LSTM-  model can be treated as a baseline architecture that does not use semantic attributes. LSTM- is the model proposed in , which feeds the concatenation of  and  into a standard LSTM decoder. In LSTM-, the video feature vector is sent to a standard LSTM decoder at the first time step, while the semantic vector is sent to the LSTM decoder at every time step in addition to the input word. This model is similar to  without using semantic attention. MSAN is one of our models.
The results of the above four methods are listed in Table 2. It can be seen that our MSAN outperforms all the other models, which proves that our semantic fusion method can effectively integrate semantic attributes for captioning. In particular, the improvement of our MSAN over LSTM- is substantial, which demonstrates the necessity of combining semantic attributes. The performance of our MSAN is also higher than that of LSTM-, which demonstrates that using the attention mechanism to feed different semantic attribute vectors into the LSTM at different time steps can further boost sentence generation compared with using the same semantic attribute vector at every time step. Therefore, it is useful to combine the attention mechanism with semantic attributes in the LSTM decoder. Finally, we test the six variants of our MSAN model and the other methods on the MSR-VTT dataset; Table 3 shows results consistent with those on the MSVD dataset. Our models outperform almost all competing methods across all evaluation metrics; in particular, our MSAN achieves an improvement by a substantial margin.
4.3 Qualitative Analysis
Figure 4 shows four examples, including ground-truth sentences (GT) and the sentences generated by three approaches. From these examples, it is obvious that our models can generate relevant and logically correct sentences. For instance, compared with the subject term “a person” and the verb term “playing” in the sentence generated by LSTM- for the first video, “a man” and “throwing” in our MSAN are more relevant to the video content, since the words “man” and “throwing” are predicted as attributes from different modalities that complement each other.
Moreover, compared with MSAN, MSAN can generate more descriptive sentences by enriching the semantics with attributes. For instance, with the verb term “lying” learnt from  and , the generated sentence “A woman is lying down in bed” for the third video depicts the video content more comprehensively. This confirms that video captioning is improved by leveraging complementary multimodal semantic attributes learnt from videos.
5 Conclusion
We have proposed the MSAN framework, which explores both video representations and multimodal semantic attributes for video captioning. An attention-based LSTM decoder has been proposed to attend to different semantic attributes from different modalities to enhance sentence generation. In particular, our deep network is optimized with the semantic classification loss and the video captioning loss simultaneously. Extensive experimental results have validated the effectiveness of our models on two standard video captioning datasets.
-  Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, “Neural machine translation by jointly learning to align and translate,” in CoRR, 2014.
-  Qi Wu, Chunhua Shen, Lingqiao Liu, Anthony Dick, and Anton van den Hengel, “What value do explicit high level concepts have in vision to language problems,” in CVPR, 2016.
-  Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al., “From captions to visual concepts and back,” in CVPR, 2015.
-  Xu Jia, Efstratios Gavves, Basura Fernando, and Tinne Tuytelaars, “Guiding long-short term memory for image caption generation,” in ICCV, 2015.
-  Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo, “Image captioning with semantic attention,” in CVPR, 2016.
-  Sergio Guadarrama, Niveda Krishnamoorthy, Girish Malkarnenkar, Subhashini Venugopalan, Raymond Mooney, Trevor Darrell, and Kate Saenko, “Youtube2text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition,” in ICCV, 2013.
-  Siming Li, Girish Kulkarni, Tamara L Berg, Alexander C Berg, and Yejin Choi, “Composing simple image descriptions using web-scale n-grams,” in CoNLL, 2011.
-  Yingwei Pan, Ting Yao, Houqiang Li, and Tao Mei, “Video captioning with transferred semantic attributes,” in CVPR, 2017.
-  Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko, “Sequence to sequence-video to text,” in CVPR, 2015.
-  Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko, “Translating videos to natural language using deep recurrent neural networks,” in CoRR, 2014.
-  Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in ICML, 2015.
-  Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville, “Describing videos by exploiting temporal structure,” in ICCV, 2015.
-  Chiori Hori, Takaaki Hori, Teng-Yok Lee, Ziming Zhang, Bret Harsham, John R Hershey, Tim K Marks, and Kazuhiko Sumi, “Attention-based multimodal fusion for video description,” in ICCV, 2017.
-  Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng, “Semantic compositional networks for visual captioning,” in CVPR, 2017.
-  David L Chen and William B Dolan, “Collecting highly parallel data for paraphrase evaluation,” in ACL, 2011.
-  Jun Xu, Tao Mei, Ting Yao, and Yong Rui, “Msr-vtt: A large video description dataset for bridging video and language,” in CVPR, 2016.
-  Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu, “Bleu: a method for automatic evaluation of machine translation,” in ACL, 2002.
-  Satanjeev Banerjee and Alon Lavie, “Meteor: An automatic metric for mt evaluation with improved correlation with human judgments,” in ACL, 2005.
-  Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh, “Cider: Consensus-based image description evaluation,” in CVPR, 2015.
-  Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick, “Microsoft coco captions: Data collection and evaluation server,” in CoRR, 2015.
-  Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, and Yong Rui, “Jointly modeling embedding and translation to bridge video and language,” in CVPR, 2016.
-  Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville, “Delving deeper into convolutional networks for learning video representations,” in CoRR, 2015.
-  Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu, “Video paragraph captioning using hierarchical recurrent neural networks,” in CVPR, 2016.
-  Pingbo Pan, Zhongwen Xu, Yi Yang, Fei Wu, and Yueting Zhuang, “Hierarchical recurrent neural encoder for video representation with application to captioning,” in CVPR, 2016.
-  Junbo Wang, Wei Wang, Yan Huang, Liang Wang, and Tieniu Tan, “Multimodal memory modelling for video captioning,” in CVPR, 2018.