Incorporating temporal information into video representation has long been a fundamental problem in computer vision. Earlier works such as Dense Trajectories and improved Dense Trajectories (iDT) typically utilize optical flow to extract temporal information and hand-crafted features to model appearance and motion. With the recent success of deep Convolutional Neural Networks (ConvNets) in both efficiency and efficacy, we have witnessed a new trend of leveraging ConvNets to infer video representation. Xu et al. propose to utilize VLAD to aggregate frame-level ConvNet features for video representation, which is unable to capture temporal structure. Simonyan and Zisserman combine stacked optical flow frames and RGB streams to train ConvNets for video classification, which achieves performance comparable to iDT in action recognition. A limitation of two-stream ConvNets and iDT is that both algorithms require optical flow as input, which is expensive to extract (it usually takes about 0.06 seconds to extract optical flow between a pair of frames) and is only able to capture temporal information in video clips of short duration.
To avoid extracting optical flow, 3D ConvNets are proposed to generate a video representation, with emphasis on efficiency improvement. This approach, however, can only cope with short video clips of 16 frames or so. Very recently, Long Short-Term Memory (LSTM) has been applied to video analysis, inspired by the general recurrent encoder-decoder framework. An appealing property of LSTM is its capability of modeling data sequences. However, a few challenges remain unaddressed, which this paper tries to cope with.
First, a large number of long-range dependencies are usually difficult to capture. Even though LSTM can deal with long video clips in principle, it has been reported that the favorable length of video clips to be fed into LSTM falls in the range of 30 to 80 frames [20, 37]. In order to model longer video clips while attaining similarly good performance as in [20, 37]
, we propose to divide a long video clip into a few short frame chunks, feed the chunks into an LSTM, and composite the LSTM outputs of the frame chunks into one vector, which can then be fed into another LSTM at a higher level to uncover the temporal information among the composited vectors over a longer duration. Such a hierarchical structure significantly reduces the length of the input information flow while remaining capable of exploiting temporal information over a longer time span at a higher level.
Second, additional non-linearity has been demonstrated to be helpful for improving model training for visual tasks such as image and video classification [27, 31, 20]. A straightforward way of adding non-linearity to LSTM is stacking [31, 20]. Despite the improved performance, a major disadvantage of stacking is that it introduces a long path from the input to the output video vector representation, thereby resulting in heavier computational cost. As we will discuss in detail later, the Hierarchical Recurrent Neural Encoder (HRNE) proposed in this paper dramatically shortens the path while retaining the capability of adding non-linearity, providing a better trade-off between efficiency and effectiveness.
Third, video temporal structures are intrinsically layered. Suppose a video of a birthday party consists of three actions, i.e., blowing candles, cutting cake, and eating cake. As the three actions usually take place sequentially, i.e., there are strong temporal dependencies among them, we need to appropriately model the temporal structure among the three actions. In the meantime, the temporal structure within each action should also be exploited. To this end, we need to model video temporal structure with multiple granularities. Unfortunately, a straightforward implementation of LSTM cannot achieve this goal.
The proposed HRNE framework models video temporal information using a hierarchical recurrent encoder and can effectively deal with the three aforementioned challenges. While HRNE is a generic video representation, we apply it to video captioning to test the performance, because temporal information plays a key role in video captioning. Two widely-used video captioning datasets, the Microsoft Research Video Description Corpus (MSVD)  and the Montreal Video Annotation Dataset (M-VAD) , are used in our experiments, which demonstrate the effectiveness of the proposed method.
2 Related Works
Dense Trajectories and its improved version, improved Dense Trajectories, have dominated the field of action recognition and general video classification tasks such as complex event detection. Dense Trajectories applies dense sampling to obtain interest points over the video and then tracks the points over a short time period. Local descriptors such as HOG, HOF and MBH are extracted along the tracklets. Bag-of-Words (BoW) and Fisher vector encoding are then applied to accumulate the local descriptors and generate the video representation.
Besides hand-crafted visual features like Dense Trajectories, researchers have recently started exploring Convolutional Neural Networks (ConvNets) for video representation. Karpathy et al. first introduce ConvNets similar to those of Krizhevsky et al. into video classification, and explore different fusion strategies to combine information over the temporal domain. In order to better capture temporal information in action recognition, Simonyan and Zisserman propose to utilize stacked optical flow frames as inputs to train the ConvNets, which, together with the RGB stream, achieves performance comparable to the state-of-the-art hand-crafted features on action recognition. Tran et al. utilize 3D ConvNets to learn temporal information without optical flow, inspired by Ji et al. and Simonyan and Zisserman. Xu et al. propose to utilize VLAD aggregation on frame-level ConvNet features, which directly adapts an ImageNet-pretrained image classification model to video representation.
All the works mentioned above utilize either average pooling or encoding methods such as Fisher vectors and VLAD over time to generate a global video feature from a set of local features. However, time dependency information is lost, since average pooling and encoding methods ignore the order of the input sequences, i.e., they take the local features as a set rather than a sequence. To tackle this problem, Ng et al. introduce Long Short-Term Memory (LSTM) to model the temporal order, inspired by the general sequence-to-sequence learning neural model proposed by Sutskever et al. A stacked LSTM is applied in , where each layer of the LSTM retains the same time scale. Different from the standard approach, which stacks LSTM layers into a multilayer model and simply aims to introduce more non-linearity into the neural model, the Hierarchical Recurrent Neural Encoder proposed in this work aims to abstract the visual information at different time scales and learns the visual features with multiple granularities.
In the application of video captioning, Donahue et al. introduce LSTM into this task by feeding the Conditional Random Field (CRF) outputs of objects, subjects, and verbs into the LSTM to generate video descriptions. Like , other works such as [38, 42, 21] utilize the LSTM essentially as a recurrent neural network language model to generate video descriptions, conditioned on either the average-pooled frame-level features or the context vector linearly blended by the attention mechanism. In contrast to these works, we study better video content understanding from the visual feature aspect instead of the language modeling one. Based on stacked LSTM, the work of Venugopalan et al. is the only attempt to utilize LSTMs as both the visual encoder and the language decoder in the video captioning task, which is also inspired by the general neural encoder-decoder framework [31, 7].
In the area of query suggestion, Sordoni et al. propose a hierarchical recurrent neural network for context-aware query suggestion in a search engine. In this model, the text queries in a session are first abstracted by one RNN layer into query-level states; another RNN layer then learns session-level dependencies, and the session-level hidden states are utilized to make suggestions for users.
Contemporary to this work, Yu et al. introduce a hierarchical RNN decoder, specifically the Gated Recurrent Unit (GRU), into the video captioning system. A sentence generator consisting of a GRU layer conditions on visual features, and a paragraph generator then accepts the sentence vector and the context to generate a paragraph-level description, which essentially learns the time dependencies between sentences and works on the language processing aspect. In contrast, this work focuses on learning good visual features, i.e., the encoder part, not the language processing, i.e., the decoder part.
3 The Proposed Approach
We propose a Hierarchical Recurrent Neural Encoder (HRNE) model for video processing tasks. Given the frames of a video, we develop, based on the HRNE model, a general video encoder which takes the frame-level visual features from a video sequence as input and outputs a single vector as the representation for the whole video. For the video captioning task specifically, we keep the single-layer LSTM decoder as a recurrent neural network language model, which conditions on the video feature vector, similar to previous works [37, 42]. By keeping a similar LSTM decoder, we can make a fair comparison with other neural network video captioning systems and demonstrate the power of our hierarchical recurrent neural encoder for visual information extraction.
3.1 The Recurrent Neural Network
The recurrent neural network is a natural extension of feedforward neural networks for modeling sequences. Given an input sequence $(x_1, \dots, x_T)$, a standard RNN computes the output sequence $(y_1, \dots, y_T)$ by iterating the following equations:

$$h_t = \phi(W_{xh} x_t + W_{hh} h_{t-1} + b_h), \qquad y_t = W_{hy} h_t + b_y,$$

where $h_t$ is the hidden state at time step $t$, the $W$'s and $b$'s are learnable parameters, and $\phi$ is a non-linear activation function.
The RNN can map the inputs to the outputs whenever the alignment between inputs and outputs is provided. The standard RNN works in principle, but it is difficult to train due to the vanishing gradient problem. The Long Short-Term Memory (LSTM) is known to learn patterns with longer-range temporal dependencies. We now introduce the LSTM model.
The core of the LSTM model is a memory cell $c_t$ which records the history of the inputs observed up to that time step. $c_t$ is a summation of the previous memory cell $c_{t-1}$ modulated by a sigmoid gate, and $g_t$, a function of the previous hidden state and the current input, modulated by another sigmoid gate. The sigmoid gates can be thought of as knobs that the LSTM learns to use to selectively forget its memory or accept the current input. The cell has three gates. The input gate $i_t$ controls whether the LSTM considers the current input $x_t$. The forget gate $f_t$ controls whether the LSTM forgets the previous memory $c_{t-1}$. The output gate $o_t$ controls how much information is transferred from the memory to the hidden state $h_t$. There are several widely used LSTM variants and we use the LSTM unit described in  (see also Figure 1) in our model, which iterates as follows:

$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i),$$
$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f),$$
$$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o),$$
$$g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g),$$
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$
$$h_t = o_t \odot \tanh(c_t),$$

where $\sigma$ is the sigmoid function and $\odot$ denotes element-wise multiplication.
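The LSTM update above can be sketched in NumPy as follows; the parameter names (W_*, U_*, b_*) and the initialization are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step: gates modulate the input, the memory, and the output."""
    i = sigmoid(p["W_i"] @ x_t + p["U_i"] @ h_prev + p["b_i"])  # input gate
    f = sigmoid(p["W_f"] @ x_t + p["U_f"] @ h_prev + p["b_f"])  # forget gate
    o = sigmoid(p["W_o"] @ x_t + p["U_o"] @ h_prev + p["b_o"])  # output gate
    g = np.tanh(p["W_g"] @ x_t + p["U_g"] @ h_prev + p["b_g"])  # candidate input
    c = f * c_prev + i * g        # new memory cell
    h = o * np.tanh(c)            # new hidden state
    return h, c

def init_params(d_in, d_hid, rng):
    p = {}
    for gate in "ifog":
        p[f"W_{gate}"] = rng.standard_normal((d_hid, d_in)) * 0.1
        p[f"U_{gate}"] = rng.standard_normal((d_hid, d_hid)) * 0.1
        p[f"b_{gate}"] = np.zeros(d_hid)
    return p

rng = np.random.default_rng(0)
params = init_params(3, 5, rng)
h, c = lstm_step(rng.standard_normal(3), np.zeros(5), np.zeros(5), params)
```

Note how the forget and input gates jointly decide how much of the old memory survives and how much of the candidate input is written, which is what allows gradients to flow over long time spans.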
3.2 Hierarchical Recurrent Neural Encoder
It has been reported that adding more non-linearity is helpful for vision tasks. The performance of LSTM can be improved if additional non-linearity is added. A straightforward way is stacking multiple layers, which, however, increases the computational cost. Inspired by the ConvNet operations in the spatial domain, we propose a Hierarchical Recurrent Neural Encoder (HRNE) model. As shown in Figure 2, in a ConvNet model, a filter is used to explore the spatial visual information of an image by performing a convolution between an image patch matrix $M$ and a learnable filter matrix $F$:

$$o = \sum_{i=1}^{r} \sum_{j=1}^{c} M_{ij} F_{ij},$$

where $c$ denotes the number of columns of the filter matrix, $r$ denotes the number of rows, $M_{ij}$ denotes the matrix item located in the $i$-th row and $j$-th column, and $o$ is the convolution result. The filter is applied over the whole image to generate the filtered image, which is further forwarded into the next layer. Similarly, in the temporal domain, we introduce an additional layer, instead of stacking, by which only short LSTM chains need to be dealt with. Just as the filters in a ConvNet's convolutional layer are well suited for exploring local spatial structure, using a temporal filter to explore the local temporal structure is presumed to be beneficial, since videos often consist of several incoherent clips.
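Numerically, the filter response is just the element-wise product of the patch and the filter summed over all positions; a toy check with made-up values:

```python
import numpy as np

M = np.array([[1.0, 0.0],
              [2.0, 3.0]])   # image patch matrix (toy values)
F = np.array([[0.5, 1.0],
              [1.0, 0.0]])   # learnable filter matrix (toy values)

o = float((M * F).sum())     # 1*0.5 + 0*1 + 2*1 + 3*0 = 2.5
print(o)  # 2.5
```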
The main difficulty of introducing additional layers into temporal modeling is finding a proper temporal filter. In the spatial domain, the output of a filter is independent of the spatial location, and a matrix can be used as a filter. In contrast, in the temporal domain, there are temporal dependencies between consecutive items. As a result, a matrix is not sufficient as a temporal filter. Since RNNs are well suited for temporal dependency modeling, we adopt short RNN chains as the temporal filters in our HRNE model. Specifically, we use LSTM chains in this paper and take the mean of all of an LSTM chain's hidden states as the filtering result.
We first divide an input sequence $(x_1, \dots, x_T)$ into several chunks, each covering a short subsequence of consecutive frames, where the stride $s$ denotes the number of temporal units two adjacent chunks are apart. After inputting these subsequences into the LSTM filter, we obtain a sequence of $\lceil T/s \rceil$ feature vectors, where $\lceil \cdot \rceil$ denotes the ceiling function. Each feature vector gives a proper abstraction of its corresponding clip. Since what we actually need is the feature vector of the whole video, we must summarize all these feature vectors, and we propose to use another LSTM layer for this task. We combine these two LSTM layers to build our HRNE model. The first LSTM layer serves as a filter and explores the local temporal structure within subsequences. The second LSTM learns the temporal dependencies among subsequences. We note that a more complex HRNE model could add more layers to build multiple time-scale abstractions of the visual information.
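The two-layer encoding can be sketched as follows. To keep the sketch short, a plain tanh RNN cell stands in for the LSTM filter, and the chunk summary is the mean of the filter's hidden states as described above; all names and sizes are illustrative.

```python
import numpy as np

def rnn_encode(xs, W, U, b):
    """Run a simple recurrent cell over xs and return all hidden states."""
    h = np.zeros(U.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(W @ x + U @ h + b)
        hs.append(h)
    return np.stack(hs)

def hrne_encode(frames, chunk_len, stride, p1, p2):
    # Layer 1: slide a length-`chunk_len` filter over the frame features.
    chunk_vecs = []
    for start in range(0, len(frames) - chunk_len + 1, stride):
        hs = rnn_encode(frames[start:start + chunk_len], *p1)
        chunk_vecs.append(hs.mean(axis=0))   # filter output: mean hidden state
    # Layer 2: model dependencies among chunk summaries; last state = video vector.
    return rnn_encode(chunk_vecs, *p2)[-1]

def make_params(d_in, d_hid, rng):
    return (rng.standard_normal((d_hid, d_in)) * 0.1,
            rng.standard_normal((d_hid, d_hid)) * 0.1,
            np.zeros(d_hid))

rng = np.random.default_rng(0)
frames = rng.standard_normal((16, 8))        # 16 frame features of dimension 8
video_vec = hrne_encode(frames, chunk_len=8, stride=8,
                        p1=make_params(8, 4, rng), p2=make_params(4, 4, rng))
# video_vec is a single 4-dimensional vector for the whole video
```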
As discussed above, long-range dependencies are usually difficult to capture even though LSTM can deal with long video clips in principle. We compare HRNE with the stacked multilayer LSTM in Figure 3. The red line in Figure 3 shows how the input at the first time step flows through the model to the final output. We usually set the stride to be the same as the LSTM filter length. For an input sequence of length $T$ and an LSTM filter of length $n$, the red line in the HRNE model goes through $n + \lceil T/n \rceil$ LSTM units, which means the input will only flow through $n + \lceil T/n \rceil$ steps to the output rather than about $T$ steps if a stacked RNN is used. If $T$ is about 1,000 and we set $n$ to be 30, then HRNE will only go through 64 steps rather than 1,001 steps. The fewer steps an input goes through before it reaches the output, the easier it is to backtrack, so our HRNE is easier to train with stochastic gradient methods via Back-Propagation Through Time (BPTT).
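The step count can be checked with the example numbers above (a sequence length of around 1,000 and a filter length of 30):

```python
import math

# Path length from the first input to the video vector: n units along the
# bottom-layer filter plus ceil(T/n) units along the top layer.
def hrne_path_len(T, n):
    return n + math.ceil(T / n)

print(hrne_path_len(1000, 30))  # 64 steps, versus roughly T for a single chain
```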
Since the recently proposed soft attention mechanism  has achieved great success in several sequence modeling tasks, we integrate it into our HRNE model. We next introduce the attention mechanism.
The core of the soft attention mechanism is that instead of directly inputting the original sequence $(v_1, \dots, v_T)$ into an LSTM layer, dynamic weights are used to generate a new sequence of context vectors:

$$\varphi_t(V) = \sum_{i=1}^{T} \alpha_i^{(t)} v_i,$$

where the weights $\alpha_i^{(t)}$ are calculated by an attention neural network at each time step $t$.
The attention weight $\alpha_i^{(t)}$ measures the relevance between the $i$-th element $v_i$ of the input sequence and the history information recorded by the LSTM hidden state $h_{t-1}$. Hence a function is needed to calculate the relevance score:

$$e_i^{(t)} = w^{\top} \tanh(W_a v_i + U_a h_{t-1} + b_a),$$

where $w$, $W_a$, $U_a$ and $b_a$ are all parameters and $h_{t-1}$ is the hidden state of the LSTM at the $(t-1)$-th time step. We calculate $e_i^{(t)}$ for $i = 1, \dots, T$, and then $\alpha_i^{(t)}$ is obtained by normalization:

$$\alpha_i^{(t)} = \frac{\exp(e_i^{(t)})}{\sum_{j=1}^{T} \exp(e_j^{(t)})}.$$
The attention mechanism makes the LSTM pay attention to different temporal locations of the input sequence according to the back-propagated information, and it is especially helpful when the input sequence and the output sequence are not strictly aligned. We add attention units in three different positions in our video captioning model: between the visual input and the LSTM filter, between the output of the filter and the second LSTM layer, and between the output of our HRNE and the description decoder.
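One time step of the soft attention described above can be sketched as follows; the parameter names (W_a, U_a, w, b_a) are illustrative.

```python
import numpy as np

def attend(V, h_prev, W_a, U_a, w, b_a):
    """V: (T, d) input sequence; h_prev: LSTM hidden state from the last step."""
    scores = np.array([w @ np.tanh(W_a @ v + U_a @ h_prev + b_a) for v in V])
    alpha = np.exp(scores - scores.max())    # softmax normalization
    alpha /= alpha.sum()                     # attention weights sum to 1
    return alpha @ V, alpha                  # weighted context vector, weights

rng = np.random.default_rng(0)
T, d, h_dim, a_dim = 5, 4, 6, 6
V = rng.standard_normal((T, d))
context, alpha = attend(V, np.zeros(h_dim),
                        rng.standard_normal((a_dim, d)),
                        rng.standard_normal((a_dim, h_dim)),
                        rng.standard_normal(a_dim),
                        np.zeros(a_dim))
```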
3.3 Video Captioning
Our HRNE can be applied to several video processing tasks where feature vectors are required to represent videos. In this paper, we use video captioning, where temporal information plays an important role, to showcase the advantage of the proposed method.
We develop our video captioning model based on the general sequence-to-sequence model, i.e., the encoder-decoder framework, the same as in previous works [42, 37]. We use the general video encoder to map video sequences to feature vectors, and then use a one-layer LSTM decoder conditioned on the video feature vector to generate a description for the video.
The overall objective function we are optimizing is the log-likelihood over the whole training set,

$$\max_{\theta} \sum_{t=1}^{T} \log p(y_t \mid y_{1:t-1}, v; \theta),$$

where $y_t$ is a one-hot vector (1-of-N coding, where N is the size of the word vocabulary) used to represent the word at the $t$-th time step, $v$ is the feature vector output by the video encoder, and $\theta$ represents the video captioning model's parameters. The probability of each word is given by a softmax over the decoder hidden state, whose weights and biases are also part of the parameters.
4 Experimental Setup
We utilize two standard video captioning benchmarks to validate the performance of our proposed method: the widely used Microsoft Video Description Corpus (MSVD) and a recently proposed dataset, the Montreal Video Annotation Dataset (M-VAD).
4.1 The Datasets
The Microsoft Video Description Corpus (MSVD): The Microsoft Video Description Corpus (MSVD) contains 1,970 videos with multiple descriptions labeled by Amazon Mechanical Turk workers. Annotators are requested to provide a single-sentence description for each selected short clip. The total number of clip-description pairs is about 80,000. The original dataset contains multi-lingual descriptions, but we only focus on the English descriptions, as in previous works [38, 37, 42]. We utilize the standard splits provided in  for fair comparison with state-of-the-art video captioning systems [38, 37, 42], which separate the original dataset into training, validation and testing sets with 1,200 clips, 100 clips, and the remaining clips, respectively.
The Montreal Video Annotation Dataset (M-VAD): The Montreal Video Annotation Dataset (M-VAD) is a newly collected large-scale video description dataset built from DVD descriptive video service (DVS) narrations. There are 92 DVD movies in the M-VAD dataset, which are further divided into about 49,000 video clips. Each clip has one corresponding narration as the ground truth of the clip description. Since the narrations are generated in a semi-automatically transcribed way, the grammar used in the descriptions is much more complicated than that in MSVD. As in previous works [42, 37], we utilize the standard splits provided in , which consist of 39,000 clips in the training set, 5,000 clips in the validation set, and 5,000 clips in the testing set. We note that the M-VAD dataset is much more challenging than the MSVD dataset, and current state-of-the-art video description methods still produce very poor performance on it.
Visual Features: We use GoogLeNet to extract frame-level features in our experiments. The length of every video is kept to 160 frames: for a video with more than 160 frames, we drop the extra frames; for a video without enough frames, we pad zero frames, following common practice. Instead of directly inputting the features into HRNE, we learn a linear embedding of the features as the input of our model.
Description preprocessing: We convert all descriptions to lower case, and use the PTBTokenizer in the Stanford CoreNLP tools (version 3.4.1) to tokenize sentences and remove punctuation. This yields a vocabulary of size 12,976 for the MSVD dataset and a vocabulary of size 15,567 for the M-VAD dataset.
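A simplified stand-in for this preprocessing (the paper uses Stanford CoreNLP's PTBTokenizer; the regex below only approximates its behavior):

```python
import re

def preprocess(caption):
    """Lower-case a caption, tokenize it, and drop punctuation."""
    return re.findall(r"[a-z0-9']+", caption.lower())

print(preprocess("A man is playing a guitar."))
# ['a', 'man', 'is', 'playing', 'a', 'guitar']
```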
4.3 Evaluation Metrics
Several standard metrics such as BLEU, METEOR, ROUGE-L and CIDEr are commonly used for evaluating visual captioning tasks, mainly following the machine translation field. The authors of  evaluated the above four metrics in terms of consistency with human judgment, and found that METEOR is consistently better than BLEU and ROUGE. Thus, METEOR is used as the main metric in our evaluation. We utilize the Microsoft COCO evaluation server  to obtain all the results reported in this paper, which makes our results directly comparable with previous works.
4.4 Compared Algorithms
FGM : It first obtains confidences on subject, verb, object and scene elements. A factor graph model is then used to infer the most likely (subject, verb, object) tuple in the video. Finally, it generates a sentence based on a template.
Average pooling + LSTM decoder  (denoted as Mean pool): The average pooling of frame-level features is treated as the representation of the whole video. An LSTM is utilized as a recurrent language model to produce the description given the visual feature.
S2VT : This is an encoder-decoder model. It consists of two phases. In the first phase, it serves as a video encoder and in the second phase, it stops accepting video sequence and begins generating video descriptions.
Temporal Attention  (SA): It applies the attention mechanism to temporal locations and then utilizes the recurrent language model LSTM to generate the video description.
LSTM embedding  (LSTM-E): It uses embedding layers to project the visual features and text features into one space, with a modified loss between the description and the visual features.
Paragraph RNN decoder  (p-RNN): It introduces a hierarchical structure in the decoder for language processing and introduces paragraph-level description in addition to the standard sentence description.
4.5 Training Details
In the training phase, we add a begin-of-sentence tag BOS to start each sentence and an end-of-sentence tag EOS to end each sentence, so that our captioning model can deal with sentences of varying lengths. In the testing phase, we input BOS into the decoder to start generating video descriptions, and at each step we choose the word with the maximum probability after the softmax until we reach EOS.
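The greedy test-time decoding loop can be sketched as follows; `toy_step`, which stands in for one decoder LSTM step followed by a softmax, is a deterministic stub for illustration.

```python
import numpy as np

def greedy_decode(step, state, bos_id, eos_id, max_len=30):
    """Start from BOS and repeatedly pick the argmax word until EOS."""
    words, w = [], bos_id
    for _ in range(max_len):
        probs, state = step(w, state)   # softmax over the vocabulary
        w = int(np.argmax(probs))       # word with the maximum probability
        if w == eos_id:
            break
        words.append(w)
    return words

def toy_step(word_id, t):
    # Deterministic toy decoder: emits word ids 2, 3, then EOS (id 1).
    seq = [2, 3, 1]
    probs = np.zeros(4)
    probs[seq[t]] = 1.0
    return probs, t + 1

print(greedy_decode(toy_step, 0, bos_id=0, eos_id=1))  # [2, 3]
```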
We adopt different parameter settings for the two datasets. When training on MSVD, we use the following settings: all the LSTM units are set to 1,024, and the visual feature embedding size and the word embedding size are empirically set to 512. When training on M-VAD, we find that our HRNE is more prone to overfitting than on MSVD, so we set all the LSTM units to 512 and still keep the visual feature embedding size and the word embedding size at half the number of LSTM units. As the videos in the two datasets are very short, a two-layer HRNE is sufficient to capture the temporal structure of the videos. Nevertheless, one may use an HRNE with more layers to deal with longer videos.
The length of the LSTM chain at the bottom layer is 8, and we set the stride to 8 in all the experiments. We set the mini-batch size to 128. We apply the first-order optimizer ADAM to minimize the negative log-likelihood loss, with the default learning rate and decay parameters of Kingma and Ba , which generally show good performance and do not need heavy tuning. Since we observe serious overfitting when training our model on the M-VAD dataset, we apply the simple yet effective neural model regularization method Dropout  with a rate of 0.5 on the inputs and outputs of the LSTMs, but not on the recurrent transitions, as suggested by Zaremba et al. . We find empirically that the proposed model has better generalization ability in this way.
5 Experimental Results
We evaluate our HRNE model on video captioning on both MSVD and M-VAD. We report results on MSVD in Table 1 and Table 2. Note that only RGB GoogLeNet features are adopted by our HRNE in all the experiments. To make a fair comparison, we first report results using only static frame-level features in Table 1. We then compare our HRNE, using only one ConvNet feature, to other video captioning systems which combine multiple ConvNet features in Table 2. Lastly, we conduct experiments on the more challenging M-VAD dataset and report the results in Table 4.
5.1 Experiment results on the MSVD dataset
We report experimental results on the MSVD dataset where only static frame-level features are used in Table 1. Both Mean pool and SA ignore temporal dependencies along video sequences; they adopt (weighted) averages of frame-level features to represent videos. Our HRNE outperforms both Mean pool and SA, owing to its exploration of the temporal information of videos. A hierarchical description decoder is adopted in p-RNN to generate complex descriptions, yet our HRNE performs better than p-RNN, which indicates that exploring the temporal information of videos is more important for video captioning. S2VT uses a one-layer LSTM as the video encoder to explore videos' temporal information. Our HRNE achieves a better result than S2VT, because the hierarchical structure in HRNE reduces the length of the input flow and composites multiple consecutive inputs at a higher level, which increases the learning capability and enables our model to encode richer temporal information of multiple granularities. To further improve our HRNE, we add the attention mechanism, which again improves its performance.
We additionally compare our HRNE to other video captioning systems with feature fusion in Table 2. In this experiment, our HRNE uses only one ConvNet feature as input, while the compared systems combine multiple ConvNet features. The Mean pool model achieves its best result with AlexNet pre-trained on COCO , reaching 29.1% METEOR. S2VT achieves its best result with RGB frames on VGGNet and optical flow on AlexNet, reaching 29.8% METEOR. The SA model gets its best result with GoogLeNet and 3D-CNN, achieving 29.6% in METEOR and 41.9% in BLEU-4. Our HRNE achieves the best result in METEOR: although adding more features helps improve video captioning systems' performance, our method still achieves the best performance. This result confirms the effectiveness of our HRNE. We notice that p-RNN outperforms our HRNE in terms of BLEU. However, our method outperforms p-RNN in almost all other cases (see Table 1 and Table 2) and, more importantly, as demonstrated in , METEOR is more reliable than BLEU.
In Table 3, we show a few examples of the descriptions generated by our method. We notice that our HRNE can generate an accurate description of the video even in some difficult cases. In addition, the results with the attention mechanism are generally better than those without it, which is consistent with the results reported in Table 1 and Table 2.
Table 1 (excerpt, results on MSVD, %):

|Model||METEOR||B@1||B@2||B@3||B@4|
|Mean pool||28.7||-||-||-||38.7|
|HRNE with attention||33.1||79.2||66.3||55.1||43.8|
|HRNE with attention (G)||33.1||79.2||66.3||55.1||43.8|
5.2 Experiment results on the M-VAD dataset
Table 4 reports the results on M-VAD. Compared with MSVD, M-VAD is a more challenging dataset, because it contains more visual concepts and complex sentence structures. Since the results on the BLEU metric are close to 0 (SA  achieves only 0.7% BLEU-4 on this dataset), we do not consider the BLEU metric in this experiment. Our HRNE achieves 5.8% in METEOR, which outperforms both S2VT and SA (only S2VT and SA have reported results on this challenging dataset). After adding the attention mechanism, our performance (in METEOR) is further improved from 5.8% to 6.8%. This even outperforms S2VT trained on the combination of M-VAD and MPII-MD . Because combining the two datasets provides much more training data than the single-dataset standard setting we use for training, this result again validates the effectiveness of our HRNE.
Table 3 (examples of generated descriptions):

|HRNE: A man is swimming in the water.||HRNE with attention: A dog is swimming.||Ground truth: A dog is swimming in a pool.|
|HRNE: A man is playing a guitar.||HRNE with attention: A man is playing a guitar.||Ground truth: A boy is playing a guitar.|
|HRNE: A woman is adding noodles into a pot.||HRNE with attention: A woman is cooking.||Ground truth: A woman dips a shrimp in batter.|
|HRNE: A man is playing a guitar.||HRNE with attention: A man is playing a guitar.||Ground truth: A man plays a guitar.|
|HRNE: A group of people are dancing.||HRNE with attention: A group of people are dancing.||Ground truth: A group of young girls are dancing on stage.|
|HRNE: A person is preparing an egg.||HRNE with attention: A woman is peeling a mango.||Ground truth: A mango is being sliced.|
|HRNE: A girl is talking.||HRNE with attention: A young girl is talking on a telephone.||Ground truth: A woman dials a cell phone.|
|HRNE: A man is riding a bike.||HRNE with attention: A man is riding a bike.||Ground truth: A biker rides along the beach.|
|HRNE: A man is doing a machine.||HRNE with attention: A man is doing a dance.||Ground truth: A basketball player is doing a hook shot.|
Table 4 (excerpt, METEOR % on M-VAD):

|SA-GoogleNet+3D-CNN||4.1|
|HRNE (with attention)||6.8|

(Note:  reports that SA achieves 4.1% METEOR with the same evaluation script as , while the 5.7% METEOR reported in  is caused by different tokenization.)
6 Conclusions and Future Work
In this paper, we proposed a new method, namely Hierarchical Recurrent Neural Encoder (HRNE), to generate video representation with emphasis on temporal modeling. Compared to existing approaches, the proposed HRNE is more capable of video modeling because 1) HRNE reduces the length of input information flow and exploits temporal structure in longer range at a higher level; 2) more non-linearity and flexibility are added in HRNE; and 3) HRNE exploits temporal transitions with multiple granularities. Extensive experiments in video captioning demonstrate the efficacy of HRNE.
Last but not least, the proposed video representation is generic and can be applied to a wide range of video analysis applications. In future work, we will explore applying the encoder to video classification, by plugging a softmax classifier on top of the encoder with video labels, instead of the LSTM language decoder used in this work, to validate the generalization capability of this framework.
-  D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
-  F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio. Theano: new features and speed improvements. NIPS Workshop, 2012.
-  Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
-  J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In SciPy, 2010.
-  D. L. Chen and W. B. Dolan. Collecting highly parallel data for paraphrase evaluation. In ACL, 2011.
-  X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollar, and C. L. Zitnick. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
-  K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
-  M. Denkowski and A. Lavie. Meteor universal: Language specific translation evaluation for any target language. In EACL, 2014.
-  J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
-  I. Goodfellow, D. Warde-farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, 2013.
-  S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
-  H. Jégou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. In CVPR, 2010.
-  S. Ji, W. Xu, M. Yang, and K. Yu. 3d convolutional neural networks for human action recognition. TPAMI, 35(1):221–231, 2013.
-  A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
-  D. Kingma and J. Ba. ADAM: A method for stochastic optimization. In ICLR, 2015.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
-  C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In ACL workshop, 2004.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV. 2014.
-  C. D. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. J. Bethard, and D. McClosky. The Stanford CoreNLP natural language processing toolkit. In ACL, 2014.
-  J. Y.-H. Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, 2015.
-  Y. Pan, T. Mei, T. Yao, H. Li, and Y. Rui. Jointly modeling embedding and translation to bridge video and language. arXiv preprint arXiv:1505.01861, 2015.
-  K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: a method for automatic evaluation of machine translation. In ACL, 2002.
-  R. Pascanu, C. Gulcehre, K. Cho, and Y. Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013.
-  A. Rohrbach, M. Rohrbach, N. Tandon, and B. Schiele. A dataset for movie description. In CVPR, 2015.
-  J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek. Image classification with the fisher vector: Theory and practice. IJCV, 105(3):222–245, 2013.
-  K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
-  J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In CVPR, 2003.
-  A. Sordoni, Y. Bengio, H. Vahabi, C. Lioma, J. G. Simonsen, and J.-Y. Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In CIKM, 2015.
-  N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014.
-  I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
-  J. Thomason, S. Venugopalan, S. Guadarrama, K. Saenko, and R. Mooney. Integrating language and vision to generate natural language descriptions of videos in the wild. In COLING, 2014.
-  A. Torabi, C. Pal, H. Larochelle, and A. Courville. Using descriptive video services to create a large data source for video annotation research. arXiv preprint arXiv:1503.01070, 2015.
-  D. Tran, L. D. Bourdev, R. Fergus, L. Torresani, and M. Paluri. C3D: generic features for video analysis. In ICCV, 2015.
-  R. Vedantam, C. L. Zitnick, and D. Parikh. CIDEr: Consensus-based image description evaluation. In CVPR, 2015.
-  S. Venugopalan, M. Rohrbach, J. Donahue, R. J. Mooney, T. Darrell, and K. Saenko. Sequence to sequence - video to text. arXiv preprint arXiv:1505.00487v2, 2015.
-  S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and K. Saenko. Translating videos to natural language using deep recurrent neural networks. In NAACL-HLT, 2015.
-  H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Action recognition by dense trajectories. In CVPR, 2011.
-  H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
-  Z. Xu, Y. Yang, and A. G. Hauptmann. A discriminative CNN video representation for event detection. In CVPR, 2015.
-  L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In ICCV, 2015.
-  H. Yu, J. Wang, Z. Huang, Y. Yang, and W. Xu. Video paragraph captioning using hierarchical recurrent neural networks. arXiv preprint arXiv:1510.07712, 2015.
-  W. Zaremba and I. Sutskever. Learning to execute. In ICLR, 2015.
-  W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.