Enriching Video Captions With Contextual Text

07/29/2020 ∙ by Philipp Rimle, et al. ∙ ETH Zurich

Understanding video content and generating captions with context is an important and challenging task. Unlike prior methods that typically generate generic video captions without context, our architecture contextualizes captioning by infusing information extracted from relevant text data. We propose an end-to-end sequence-to-sequence model that generates video captions based on visual input, and mines relevant knowledge, such as names and locations, from contextual text. In contrast to previous approaches, we do not preprocess the text further, and let the model directly learn to attend over it. Guided by the visual input, the model is able to copy words from the contextual text via a pointer-generator network, allowing it to produce more specific video captions. We show competitive performance on the News Video Dataset and, through ablation studies, validate the efficacy of contextual video captioning as well as individual design choices in our model architecture.







I Introduction

Understanding video content is a substantial task for many vision applications, such as video indexing and navigation [21], human-robot interaction [56], describing movies for the visually impaired [33], or procedure generation for instructional videos [57]. The task is difficult due to the open domain and the diverse set of objects, actions, and scenes that may appear in a video with complex interactions and fine motion details. Furthermore, the required contextual information may not be present in the concerned video section at all, and must then be extracted from other sources.

While significant progress has been made in video captioning, driven by the release of several benchmark datasets [4, 40, 13, 33, 34] and various neural architectures, the problem is far from solved. Most, if not all, existing video captioning approaches can be divided into two sequential stages that perform visual encoding and text decoding, respectively [45]. These stages can be coupled further by additional transformations [12, 49], but the models remain limited by the input visual content or the vocabulary of a specific dataset. Some approaches [32] consider the preceding or succeeding video clips to extract contextual relations in the visual content and generate coherent sentences in a storytelling manner. In general, these approaches focus on a domain-specific dataset that does not reflect the whole real world, but only a subset that lacks much of the information needed to produce human-comparable results. Consequently, most captions still tend to be generic, like “someone is talking to someone”, and the knowledge about who, where, and when is missing. We overcome this issue by providing contextual knowledge in addition to the video representation. This allows us to produce more specific captions like “Forrest places the Medal of Honor in Jenny’s hand.” instead of just “Someone holds someone’s hand.”, as illustrated in Figure 1.

To address these limitations, we propose an end-to-end differentiable neural architecture for contextual video captioning, which exploits the required contextual information from a relevant contextual text. Our model extends a sequence-to-sequence architecture [45] by employing temporal attention in the visual encoder, and a pointer generator network [35] at the text decoder which allows for extraction of background information from a contextual text input to generate rich contextual captions. The contextual text input can be any text that is relevant to the video up to some degree without strict limitations. This could be a part of the script for a movie section, an article for a news video, or a user manual for a section of an instructional video.

Fig. 1: Captions for a video clip from the movie Forrest Gump. a) with traditional methods, b) with contextual video captioning that exploits the movie script as an additional input.

Contributions.  The contributions of this paper are three-fold. First, we propose a method for contextual video captioning which learns to attend over the context in raw text and generates out-of-vocabulary words by copying via pointing. The source code for the full framework will be publicly available (https://github.com/primle/S2VT-Pointer). Second, we augment the LSMDC dataset [34] by pairing video sections with the corresponding parts of the movie scripts, and share this new split with the community (https://github.com/primle/LSMDC-Context). Third, we show competitive performance both with respect to the prior state of the art and ablation variants of our model. Through ablations we validate the efficacy of contextual captioning as well as individual design choices in our model.

II Related Work

Our goal of contextual caption generation is related to multiple topics. We briefly review the most relevant literature below.

Unimodal Representations.  It has been observed that deep neural networks such as VGG [37], ResNet [15], GoogLeNet [39], and even automatically learned architectures [58] can learn suitable image features to be transferred to various vision tasks [9, 54]. Generic representations for video and text have been receiving considerable attention. Pooling and attention over frame features [47, 52, 53], neural recurrence between frames, and spatiotemporal 3D convolution are among the common video encoding techniques [10, 30, 41]. On the language side, distributed word representations [22, 27] and recent attention-based architectures [8, 29] provide effective and generalisable representations modeling sentential semantics.

Joint Reasoning of Video and Text.  Popular research topics in joint reasoning of image/video and text include video captioning [48, 52, 25], retrieval of visual content [20, 1], and text grounding in images/videos [20, 11, 31, 28]. Most approaches along these lines can be classified as belonging to either (i) joint language-visual embeddings or (ii) encoder-decoder architectures. The joint vision-language embeddings facilitate image/video or caption/sentence retrieval by learning to embed images/videos and sentences into the same space [25, 51]. The encoder-decoder architectures [43] are similar, but instead attempt to encode images into an embedding space from which a sentence can be decoded [46, 55, 12]. Most of these approaches yield generic video captions without any context due to the lack of background knowledge.

Contextual video captioning has not received much attention yet besides a few attempts [44, 5, 42], which might be due to the lack of suitable datasets. [50] presents a dataset of news videos and captions that are rich in knowledge elements, and employs the Knowledge-aware Video Description Network (KaVD), which incorporates entities from topically related text documents. Similar to [50], we incorporate relevant text data for a given video, with the use of pointer networks [35], to produce richer, contextual captions. In contrast to KaVD, we propose a model that directly operates on raw contextual text data. Our model learns to attend over the relevant words based on visual input, which allows the model to learn not only contextual entities and events, but also the interactions between them. Further, it allows both video captioning with background knowledge and text summarization based on visual information. We also eliminate the additional preprocessing overhead of name/event discovery and linking systems.

III Approach

Fig. 2: Model Overview. A stack of two LSTM blocks is used for encoding (red) and decoding (green) the visual input and textual output, respectively. The bottom LSTM layer (green) additionally uses temporal attention to identify the relevant frames. The contextual text input of variable size is encoded using another bidirectional LSTM to build a visual- and context-aware vocabulary distribution with the use of a pointer-generator network.

We now present our neural architecture for contextual video captioning. An overview of our model is shown in Figure 2. The input video clip consists of a number of consecutive frames $(v_1, \dots, v_n)$. The contextual text sequence consists of a number of consecutive words $(x_1, \dots, x_m)$. Our task is to find a function that encodes the input sequences and decodes a contextual caption as a sequence of consecutive words $(y_1, \dots, y_T)$. We rely on a sequence-to-sequence architecture to handle variable input and output lengths. A stack of two LSTM [17] blocks, as proposed in [45], is used for both encoding and decoding, which allows parameter sharing between the two stages. The stack consists of a bidirectional and a unidirectional LSTM, which are mainly effective in encoding and decoding, respectively. During decoding, the bottom LSTM layer additionally uses temporal attention over the hidden states of the top LSTM layer to identify the relevant frames. Next to the visual input, a contextual text input of variable size is encoded using another bidirectional LSTM. We use a pointer-generator network [35] to attend over the contextual text and build a visual- and context-aware vocabulary distribution. In addition, the pointer-generator network allows us to copy context words directly into the output caption, which enables extracting specific background knowledge not available from the visual input alone.

III-A Encoder-Decoder Network

The baseline architecture consists of two main blocks: a bidirectional LSTM block stacked on top of a unidirectional LSTM block, modeling the input frame and output word sequences, respectively. The top LSTM takes an embedded video feature vector $\phi(v_t)$ at time step $t$ as input, and passes its hidden state $h_t$, concatenated with the embedding $E(y_{t-1})$ of the previously predicted word and the frame context vector $z_t$, to the bottom LSTM block:

$$h_t, m_t = \mathrm{LSTM}^{top}\big(\phi(v_t);\, h_{t-1}, m_{t-1}\big), \qquad s_t, m'_t = \mathrm{LSTM}^{bot}\big([h_t; E(y_{t-1}); z_t];\, s_{t-1}, m'_{t-1}\big)$$

where $m_t$ and $m'_t$ are the memory cells of the top and bottom LSTM, respectively.

The time axis of the stacked LSTMs can be split into an encoding and a decoding stage. During encoding, each video frame is passed through a pretrained Convolutional Neural Network (CNN) to obtain frame-level features, from which the linear embedding $\phi$ to a lower-dimensional space is learned. Since there is no previously predicted word and no frame context vector during this stage, a padding vector of zeros is used for $E(y_{t-1})$ and $z_t$.

The decoding stage begins after the fixed number of encoding time steps $n$, by feeding the beginning-of-sentence (BOS) tag to the model. The BOS tag signals the model to start decoding its latent representation of the video as a sentence. Since there is no video frame input in this stage, a padding vector is passed to the top LSTM. To obtain the frame context vector at decoding timestep $t$, temporal attention with an additive alignment score function [2] over the hidden states of the top LSTM is applied:

$$\beta_{t,i} = w_a^\top \tanh(W_a h_i + U_a s_{t-1} + b_a), \qquad \alpha_t = \mathrm{softmax}(\beta_t), \qquad z_t = \sum_{i=1}^{n} \alpha_{t,i}\, h_i$$
The output of the bottom LSTM is then passed to the pointer-generator network, which generates the output word $y_t$. During the encoding stage, no loss is computed and the output of the LSTM is not passed to the pointer-generator network.
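As a concrete illustration, the additive temporal attention step can be sketched in a few lines of numpy. The function and parameter names (`temporal_attention`, `W_enc`, `W_dec`) and the toy dimensions are ours, not the paper's; this is a minimal sketch of the mechanism, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(enc_states, dec_state, W_enc, W_dec, b, w):
    """Additive (Bahdanau-style) attention over the top-LSTM hidden states.

    enc_states: (n_frames, d_enc) encoder hidden states, one per frame
    dec_state:  (d_dec,)          current decoder hidden state
    Returns the attention weights over frames and the frame context vector.
    """
    # one alignment score per frame: w^T tanh(W_enc h_i + W_dec s + b)
    scores = np.tanh(enc_states @ W_enc.T + dec_state @ W_dec.T + b) @ w
    weights = softmax(scores)          # attention distribution over frames
    context = weights @ enc_states     # weighted sum of encoder states
    return weights, context

# toy dimensions with random parameters
rng = np.random.default_rng(0)
n_frames, d_enc, d_dec, d_att = 5, 8, 8, 16
enc = rng.normal(size=(n_frames, d_enc))
dec = rng.normal(size=d_dec)
weights, context = temporal_attention(
    enc, dec,
    rng.normal(size=(d_att, d_enc)),
    rng.normal(size=(d_att, d_dec)),
    rng.normal(size=d_att),
    rng.normal(size=d_att),
)
```

The returned `weights` form a valid probability distribution over the frames, and `context` is the corresponding convex combination of encoder states.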

III-B Pointer Generator Network

We use a bidirectional LSTM to learn a representation of the contextual text. At each context encoder timestep $i$, the embedded word is passed to the LSTM layer, producing a sequence of context encoder hidden states $\tilde{h}_i$. These hidden states are used to build a soft attention distribution $a^t$ over the context word representations per decoder timestep $t$, similar to [24]:

$$e_i^t = v^\top \tanh(W_{\tilde{h}} \tilde{h}_i + W_s s_t + b_{attn}), \qquad a^t = \mathrm{softmax}(e^t)$$

where $v$, $W_{\tilde{h}}$, $W_s$, and $b_{attn}$ are learned parameters. To overcome the general tendency of sequence-to-sequence models to produce repetitions, [43, 35] proposed a coverage model, which keeps track of the attention history. At each decoder timestep $t$, we follow the same procedure by introducing a coverage vector $c^t$, which is the sum of the previous attention distributions:

$$c^t = \sum_{t'=0}^{t-1} a^{t'}$$

This vector informs the model about the degree of attention that the context words have received so far, and helps the model avoid attending over the same words repeatedly. The coverage vector is fed to the pointer-generator network as an additional input, and the attention score calculation from Equation 6 is modified as:

$$e_i^t = v^\top \tanh(W_{\tilde{h}} \tilde{h}_i + W_s s_t + w_c c_i^t + b_{attn})$$
where $w_c$ is a learned parameter vector of the same shape as $v$. The resulting context vector $h^*_t$, computed as

$$h^*_t = \sum_i a_i^t\, \tilde{h}_i,$$

is then concatenated with the decoder hidden state $s_t$ and passed to two fully connected linear layers to produce the vocabulary output distribution $P_{vocab}$:

$$P_{vocab} = \mathrm{softmax}\big(V' (V [s_t; h^*_t] + b) + b'\big)$$

where $V$, $V'$, $b$, and $b'$ are learned parameters. At each decoder timestep $t$, we additionally calculate a generation probability $p_{gen} \in [0, 1]$, as proposed in [35], based on the context vector $h^*_t$, the decoder hidden state $s_t$, and the embedded decoder word input $\tilde{x}_t$:

$$p_{gen} = \sigma\big(w_{h^*}^\top h^*_t + w_s^\top s_t + w_x^\top \tilde{x}_t + b_{ptr}\big)$$

where $\sigma$ is the sigmoid function, and the vectors $w_{h^*}$, $w_s$, $w_x$ and the scalar $b_{ptr}$ are learned parameters. The generation probability is used to weight the vocabulary distribution $P_{vocab}$ and the attention distribution $a^t$ at timestep $t$. For a word $w$, the final distribution is given as:

$$P(w) = p_{gen}\, P_{vocab}(w) + (1 - p_{gen}) \sum_{i : x_i = w} a_i^t$$

Note that if a word $w$ is not in the contextual text, $\sum_{i : x_i = w} a_i^t$ is zero, and similarly if $w$ is not in the global vocabulary, $P_{vocab}(w)$ is zero.
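The mixing of the two distributions can be made concrete with a small numpy sketch. It follows the final-distribution equation of the pointer-generator network [35]; the helper name, vocabulary sizes, and ids below are illustrative, not taken from the paper.

```python
import numpy as np

def final_distribution(p_gen, p_vocab, attention, context_ids, ext_vocab_size):
    """Mix the generator's vocabulary distribution with the copy (attention)
    distribution over an extended vocabulary.

    p_vocab:     distribution over the global vocabulary (size V)
    attention:   distribution over the context words
    context_ids: extended-vocabulary id of each context word
                 (out-of-vocabulary context words get ids >= V)
    """
    p_final = np.zeros(ext_vocab_size)
    p_final[:len(p_vocab)] = p_gen * p_vocab            # generation part
    # copy part: scatter-add attention mass onto the pointed-at words;
    # repeated context words accumulate their attention (unbuffered add)
    np.add.at(p_final, context_ids, (1.0 - p_gen) * attention)
    return p_final

V = 5                                    # toy global vocabulary size
p_vocab = np.array([0.4, 0.3, 0.2, 0.05, 0.05])
attention = np.array([0.7, 0.2, 0.1])    # over 3 context words
context_ids = np.array([1, 5, 1])        # word 5 is OOV, word 1 repeats
p = final_distribution(0.8, p_vocab, attention, context_ids, V + 1)
```

Note how the out-of-vocabulary context word (id 5) receives probability mass only through the copy term, which is exactly what lets the model emit names it has never seen in training.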

The loss function per decoder timestep $t$ is given as:

$$\mathcal{L}_t = -\log P(y^*_t) + \lambda \sum_i \min(a_i^t, c_i^t)$$

where $y^*_t$ is the target word and $\lambda$ is a parameter of the model weighting the additional coverage loss [35], used to penalize attending over the same contextual word representation multiple times.
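The per-timestep loss, including the coverage penalty of [35], can be sketched directly. The function name and the toy values are ours; the structure (negative log-likelihood plus a weighted sum of elementwise minima between attention and coverage) follows the equation above.

```python
import numpy as np

def step_loss(p_final, target_id, attention, coverage, lam):
    """Per-timestep loss: negative log-likelihood of the target word plus
    the coverage penalty lam * sum_i min(a_i, c_i)."""
    nll = -np.log(p_final[target_id])
    # the penalty is large only where the current attention overlaps
    # with mass that was already attended in previous steps
    cov_penalty = np.sum(np.minimum(attention, coverage))
    return nll + lam * cov_penalty

p_final = np.array([0.1, 0.6, 0.3])   # toy output distribution
attention = np.array([0.5, 0.5])      # current attention over 2 context words
coverage = np.array([0.9, 0.1])       # word 0 was already heavily attended
loss = step_loss(p_final, target_id=1, attention=attention,
                 coverage=coverage, lam=1.0)
```

Here the penalty charges min(0.5, 0.9) for the already-covered word but only min(0.5, 0.1) for the fresh one, so re-attending costs more.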

As the coverage mechanism penalizes repeated attention on the contextual text, but not on the global vocabulary, we introduce an additional penalization at inference time. At timestep $t$, the output probability of a word is multiplied by a constant factor if the word already occurs in the predicted sentence so far.
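This inference-time heuristic amounts to a single multiplicative down-weighting pass over the output probabilities. The sketch below uses an illustrative factor of 0.5, since the paper's exact value is not reproduced in this copy; all names are ours.

```python
def penalize_repetitions(probs, id_to_word, generated, factor):
    """Down-weight any word already present in the partially decoded
    sentence by a multiplicative factor (factor value illustrative)."""
    seen = set(generated)
    return [p * factor if w in seen else p
            for p, w in zip(probs, id_to_word)]

vocab = ["someone", "walks", "the", "dog"]
probs = [0.4, 0.3, 0.2, 0.1]
# "someone" and "the" were already generated, so they are penalized
adjusted = penalize_repetitions(probs, vocab, ["someone", "the"], factor=0.5)
```

The adjusted scores are no longer normalized, which is fine when they are only used to rank candidates at the next decoding step.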

IV Datasets

Dataset Domain # Videos # Clips Avg. Duration # Sentences Vocab Size
MPII-MD [33] Movie 94 68'337 3.9s 68'375 21'700
LSMDC [34] Movie 202 118'114 4.8s 118'081 23'442
News Video [50] News – 2'883 52.5s 3'302 9'179
LSMDC* Movie 177 114'039 4.1s 114'039 25'204
LSMDC-Context-AD Movie 23 14'464 4.2s 14'464 8'162
LSMDC-Context-Script Movie 26 17'954 3.9s 17'954 11'997
TABLE I: Summary Statistics of the Datasets

We test our approach on two datasets that provide both video and contextual text input.

IV-A News Video

To the best of our knowledge, the News Video Dataset [50] is the only publicly available dataset containing both visual and contextual background information for video captioning. The dataset is composed of news videos from the AFP YouTube channel (https://www.youtube.com/user/AFP), with the given descriptions as ground-truth captions. The videos cover a variety of topics such as protests, attacks, natural disasters, and political movements from October 2015 to November 2017. Furthermore, the authors retrieved topically related news documents using the video metadata tags. The official release comes with the video URLs only. However, upon our request, the authors kindly shared their collected news articles with us, which we use as contextual text input in our experiments.

IV-B LSMDC-Context

The Large Scale Movie Description Challenge (LSMDC) dataset [34] is a combination of the MPII-MD [33] and the M-VAD [40] datasets, consisting of a large set of video clips taken from Hollywood movies with paired audio description (AD) sentences as ground-truth captions. Sentences in the original AD are filtered and manually aligned to the corresponding video portions by the authors for better precision. The released dataset comes with the original captions as well as a pre-processed version in which all character names are replaced with someone or people. The latter version is the most commonly used in related research and benchmarks, as the character names come from the movie context rather than the visual input.

To adapt it to our problem, we augmented the LSMDC dataset with additional contextual text by using publicly available movie scripts. The scripts were downloaded from the Internet Movie Script Database (https://www.imsdb.com) and parsed similarly to the public code of Adrien Luxey (https://github.com/Adrien-Luxey/Da-Fonky-Movie-Script-Parser). The extracted text was stored in a location-scene structure and later used to narrow down the contextual text input when generating a caption for a short video clip within the movie. Next, we downloaded the movie subtitles (https://subscene.com/) and built a coarse mapping between script scenes and video times using the dialogues in the scripts. Note that public movie scripts are rare and can be a draft, a final, or a shooting version; therefore the stage directions, and especially the dialogues, may differ considerably from the subtitles. To overcome this issue, we built the mapping in multiple rounds and eliminated the movies and scripts that do not have sufficient correspondence between the video and the script. In the end, we assign to each scene in the script a coarse time interval from the movie, which can be used as contextual text input.
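To make the dialogue-based alignment concrete, a scene can be assigned a coarse time interval by fuzzy-matching its dialogue lines against timed subtitle lines. The stdlib sketch below is our own simplification of that idea, not the authors' actual tool; the similarity threshold and all names are illustrative.

```python
from difflib import SequenceMatcher

def align_scene_to_time(scene_dialogues, subtitles, threshold=0.6):
    """Assign a coarse (start, end) interval to a script scene by matching
    its dialogue lines against timed subtitle lines.

    subtitles: list of (start_sec, end_sec, text)
    Returns None if no dialogue line matches well enough.
    """
    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    hits = []
    for line in scene_dialogues:
        # best-matching subtitle for this dialogue line
        best = max(subtitles, key=lambda s: sim(line, s[2]))
        if sim(line, best[2]) >= threshold:
            hits.append(best)
    if not hits:
        return None
    # the scene spans from the earliest to the latest matched subtitle
    return min(h[0] for h in hits), max(h[1] for h in hits)

subs = [
    (10.0, 12.0, "Run, Forrest, run!"),
    (50.0, 53.0, "Life is like a box of chocolates."),
]
interval = align_scene_to_time(
    ["Run Forrest run", "Life is like a box of chocolates"], subs
)
```

Running this in multiple rounds with different thresholds, and discarding movies whose scenes rarely match, mirrors the filtering procedure described above.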

IV-B1 AD-Captions with Context

Roughly 40 movies from the LSMDC dataset with AD sentences have an available movie script in the form of a draft, a shooting, or a final version. In the first step, we analyzed how many words of the AD-captions can be recovered from the provided movie script context. In the second step, we removed the movies with an average caption/context overlap of less than 33.3% to create a smaller split with better context richness. This improves the average overlap by 7% in exchange for a smaller but higher-quality dataset. The resulting dataset contains 23 movies with a total of 14'464 video clips (Table I). As one would expect, experiments have shown that keeping the movies with almost no useful additional context is more obstructive than helpful in the training process.
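The overlap filter can be sketched as a simple token-overlap ratio averaged per movie. The exact tokenization and averaging the authors used are not specified here, so this is a minimal approximation with names of our own; the 1/3 threshold matches the 33.3% cutoff above.

```python
def caption_context_overlap(caption, context):
    """Fraction of caption tokens that also occur in the contextual text
    (a simple proxy for the caption/context overlap)."""
    cap = caption.lower().split()
    ctx = set(context.lower().split())
    if not cap:
        return 0.0
    return sum(w in ctx for w in cap) / len(cap)

def keep_movie(overlaps, min_avg=1 / 3):
    """Keep a movie only if its average caption/context overlap is high enough."""
    return sum(overlaps) / len(overlaps) >= min_avg

ov = caption_context_overlap(
    "forrest places the medal in her hand",
    "forrest gump places the medal of honor in jenny 's hand",
)
```

A movie is dropped when most of its clips' captions cannot be recovered from the script, which is exactly the "almost no useful additional context" case noted above.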

IV-B2 Script-Captions with Context

A part of the LSMDC dataset is composed of movies that are paired with script sentences as ground-truth captions instead of AD sentences. We use this split as a toy set to see how well our model can recover a caption when the ground-truth caption is contained in the contextual text. We select the movies with an available movie script and filter out those with a caption/context overlap of less than 90%. The resulting dataset contains 26 movies with a total of 17'954 video clips (Table I).

IV-B3 LSMDC*

The splits above cover a small percentage of the original LSMDC dataset. We denote the larger split of remaining movies as LSMDC*, which is used to pretrain the encoder-decoder network (without contextual text input) and apply transfer learning to the relatively small splits. The new split contains all video-sentence pairs from the original dataset except the test set, since the ground-truth sentences are not available for the original test set.

For these three splits, we created our own training, test, and validation sets considering the number of clips per movie as well as the movie genres. Table I shows the statistics of the datasets used.

V Experiments

V-a Video and Text Representation

In all experiments, text data is lower-cased and tokenized into words. For the News Video Dataset, numbers, dates, and times are replaced with special tokens, following [50]. A vocabulary is built for each respective dataset and clipped by taking into account the occurrence frequency of words. Each word is mapped to an index, and the text input to our model is represented as one-hot vectors. Further, we use a pretrained Word2Vec model [22, 23], trained on a subset of the Google News dataset (https://code.google.com/archive/p/word2vec), to provide a good initialization for our word embedding layer.
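A minimal sketch of this preprocessing (lower-casing, tokenization, a frequency-clipped vocabulary, and one-hot mapping) is shown below. The special tokens are our assumption and are not specified in the text; real pipelines would also handle the dataset-specific number/date tokens mentioned above.

```python
from collections import Counter

def build_vocab(sentences, max_size,
                specials=("<pad>", "<bos>", "<eos>", "<unk>")):
    """Build a word-to-index vocabulary clipped to max_size entries,
    keeping the most frequent words (special tokens are our assumption)."""
    counts = Counter(w for s in sentences for w in s.lower().split())
    most_common = [w for w, _ in counts.most_common(max_size - len(specials))]
    return {w: i for i, w in enumerate(list(specials) + most_common)}

def one_hot(index, size):
    """One-hot vector representation of a word index."""
    v = [0] * size
    v[index] = 1
    return v

vocab = build_vocab(["Someone walks the dog", "the dog barks"], max_size=10)
```

Words falling outside the clipped vocabulary would be mapped to `<unk>` at lookup time (or, in the pointer-generator setting, recovered via copying from the context).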

We compute video representations differently depending on the dataset, due to the differing content and style of the videos.

V-A1 News Video Dataset

For each video clip, we sample one frame per second, as the video clips from the News Video Dataset are longer (up to two minutes) and short-term temporal information is less significant due to the rapid scene changes typical of news videos. All frames (RGB images) are smoothed with a Gaussian filter before down-scaling, to avoid aliasing artifacts. The preprocessed frames are fed into VGG-16 [36] pretrained on the ImageNet dataset [6], and the output of the second dense layer (fc2 layer, after applying the ReLU non-linearity and before the softmax layer) is fed into the top LSTM of our model.

V-A2 LSMDC Dataset

The Large Scale Movie Description Challenge published precomputed video features, which we directly use in all our experiments. They provide two types of features: the output of ResNet-152 [14] pretrained on ImageNet [6] before applying softmax, and the output of the I3D model [3] pretrained on ImageNet and Kinetics [18]. I3D makes use of multiple frames and optical flow using 3D CNN, therefore a single feature vector input to our LSTM captures a segment of multiple frames. The concatenation of the two feature vectors is fed into the top LSTM of our model.

V-B Training Setup

In all our experiments, the video features and the text (word) inputs are embedded into lower-dimensional spaces. The LSTMs in the encoder-decoder network and the LSTM block used to encode the contextual text in the pointer-generator network use fixed hidden state sizes. During training, dropout [38] is applied on the video feature input, the embedded word input, the embedded context input, and all LSTM outputs. Training is performed with the Adam [19] optimizer.

V-B1 News Video Dataset

Fig. 3: Sample prediction on the News Video Dataset. Article: the green shading represents the final value of the coverage vector (the sum of the attention distributions over all timesteps); a more intense green corresponds to a higher coverage value. Prediction: the yellow shading and the number below represent the generation probability $p_{gen}$.

We unroll the stacked LSTMs to a fixed number of timesteps for video encoding and caption decoding. Note that the News Video Dataset contains longer reference captions than the LSMDC* dataset, mostly consisting of several sub-sentences. Further, we unroll the LSTM for the contextual text to a fixed number of timesteps, following [35]. Articles are cropped sentence-wise at the end to fit the maximum token length. For video clips with multiple articles, we create one sample per article and train on all of them. During inference, we take the prediction of the sample/article pair with the highest probability (i.e., the most confident one). To use transfer learning in some experiments, the complete News Video vocabulary and the most frequent words of the CNN/Daily Mail dataset [16, 24] are combined and clipped. We first train the pointer-generator network on the larger CNN/Daily Mail dataset, and the sequence-to-sequence model on the News Video Dataset. Then, we combine the pretrained models and train on the News Video Dataset. For the final model, we use a fixed coverage loss weight. At inference time, we use beam search together with the repetition penalization.

V-B2 LSMDC-Context Dataset

We unroll the stacked LSTMs to a fixed number of timesteps for video encoding and caption decoding, and we unroll the LSTM for the contextual text to a fixed number of timesteps for AD-captions and for script-captions. Movie script scenes are cropped sentence-wise from the beginning and the end to fit the maximum token length. The complete LSMDC* vocabulary is used for the final models that are trained on the LSMDC-Context splits. We first train the sequence-to-sequence model on the larger LSMDC* dataset with someone-captions. Next, we fix the weights of the top LSTM (modeling the video) while training on LSMDC-Context-AD (respectively LSMDC-Context-Script) with name-captions. This procedure provides a good initialization for the pointer-generator network. In a last step, we release all the weights and train the full framework end-to-end. A fixed coverage loss weight is used in the final models. At inference time, we do not use beam search (i.e., a beam width of one), but we do apply the repetition penalization.

V-C Evaluation

We use METEOR [7] as our quantitative evaluation metric. It is based on the harmonic mean of unigram precision and recall, and considers how well the predicted and reference sentences are aligned. METEOR improves on the shortcomings of BLEU [26] and makes use of semantic matching, such as stemmed word matches, synonym matches, and paraphrase matches, in addition to exact word matches. In all experiments, we use METEOR 1.5 (http://www.cs.cmu.edu/~alavie/METEOR), as done in [45].

V-D Results and Analysis

Model METEOR [%] ROUGE-L [%] CIDEr [%]
KaVD [50] 10.2 18.9 –
Video-only 7.1 16.4 10.2
Article-only 9.3 17.2 20.5
S2VT-Pointer 10.8 18.6 25.7
TABLE II: Performance evaluation on the News Video Dataset.
Fig. 4: Sample prediction on LSMDC-Context-AD. Article: the green shading represents the final value of the coverage vector (the sum of the attention distributions over all timesteps); a more intense green corresponds to a higher coverage value. Prediction: the yellow shading and the number below represent the generation probability $p_{gen}$.

We report the performance of our model on the News Video Dataset in Table II. To understand the benefits of the individual components of our model, we also present an ablation study in which model blocks are removed. Our full model performs significantly better than the video-only and the article-only models, which are missing the pointer-generator network and the video encoder, respectively. Comparing the results between KaVD [50] and our full model is difficult, as the authors of KaVD and the News Video Dataset only published the ratio of the train, validation, and test splits, but not the exact sets. The authors did not report the CIDEr score in [50].

We show a qualitative result in Figure 3 to highlight the capabilities of our model, which produces a semantically correct summary of the article based on the visual input. While the article focuses on the hush money investigation, the model correctly uses this information to augment the visual caption of protesters demonstrating in a street. This can be seen in the weighting (via $p_{gen}$) of the attention distribution and the global vocabulary distribution: words related to the event of protesting are taken from the global vocabulary, while entities like rio de janeiro or michel temer, as well as additional information, are successfully extracted from the article.

The performance of our model on LSMDC-Context-AD is shown in Table III. The model is able to recover 37.4% of the character names on average. Figure 4 shows an example where the model correctly extracts the name and scene location from the movie script. The difference between the predicted caption (visually correct) and the ground-truth caption shows the difficulties of the LSMDC dataset in general. Analyzing some example predictions shows that the model occasionally substitutes someone with a wrong character name. There are several reasons for this behaviour. Firstly, the movie script context does not necessarily include the video scene or the character name. Secondly, the dataset is small and does not let the model learn a good context model at the pointer-generator network. In contrast to the experiments on the News Video Dataset, which are pretrained on the CNN/Daily Mail dataset, the pointer-generator network here lacks a good initialization, due to the absence of larger text corpora with similar content and style for the experiments on LSMDC-Context-AD.

Table IV shows the performance on LSMDC-Context-Script. The model is able to learn the mapping between the video and the ground-truth caption that is mostly available in the contextual text. Analyzing some example predictions reveals an issue with the script-based captions and why the scores remain relatively low. In LSMDC, consecutive samples tend to have almost identical visual input, yet the reference sentences describe different levels of scene detail (e.g. lester, carolyn and jane are eating dinner by candlelight vs. red roses are bunched in a vase at the center of the table). Without awareness of the sequence of samples, a correct mapping between the script sentences and the reference sentences is ambiguous, because a reasonable system would always go for the most likely sentence.

As the ground-truth captions from the LSMDC-Context splits depend highly on the respective video clip, we omit the results of a movie-script-only model. In contrast to the News Video Dataset, the captions do not reflect a possible summary of the text input, and therefore such results are uninformative.

Model Name-Recovery [%] METEOR [%] ROUGE-L [%] CIDEr [%]
Video-only – 3.4 10.1 5.2
S2VT-Pointer 37.4 5.8 14.0 15.3
TABLE III: Performance evaluation on the LSMDC-Context-AD dataset.
Model Name-Recovery [%] METEOR [%] ROUGE-L [%] CIDEr [%]
Video-only – 3.5 10.1 4.7
S2VT-Pointer 60.0 13.8 25.3 13.4
TABLE IV: Performance evaluation on the LSMDC-Context-Script dataset.

VI Conclusion

In this paper, we proposed an end-to-end trainable contextual video captioning method that can extract relevant contextual information from a supplementary contextual text input. Extending a sequence-to-sequence model with a pointer-generator network, our model attends over the relevant background knowledge and copies corresponding words from the given text input. Results on the News Video Dataset and LSMDC-Context validate the competitive performance of our model, which directly operates on raw contextual text data without the need for additional tools, unlike prior methods. Furthermore, we make the source code of our framework and LSMDC-Context publicly available for other researchers. The performance of the presented method is naturally limited by the level of correspondence between the video and the chosen contextual text. In the future, we plan to involve multiple contextual resources to extract the relevant contextual information with more confidence and precision.


  • [1] L. Anne Hendricks, O. Wang, E. Shechtman, J. Sivic, T. Darrell, and B. Russell (2017) Localizing moments in video with natural language. In ICCV, Cited by: §II.
  • [2] D. Bahdanau, K. Cho, and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Cited by: §III-A.
  • [3] J. Carreira and A. Zisserman (2017) Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6299–6308. Cited by: §V-A2.
  • [4] D. L. Chen and W. B. Dolan (2011) Collecting highly parallel data for paraphrase evaluation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, Cited by: §I.
  • [5] C. Chunseong Park, B. Kim, and G. Kim (2017) Attend to you: personalized image captioning with context sequence memory networks. In CVPR, Cited by: §II.
  • [6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, Cited by: §V-A1, §V-A2.
  • [7] M. Denkowski and A. Lavie (2014) Meteor universal: language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, Cited by: §V-C.
  • [8] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §II.
  • [9] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell (2014) Decaf: a deep convolutional activation feature for generic visual recognition. In ICML, Cited by: §II.
  • [10] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell (2015) Long-term recurrent convolutional networks for visual recognition and description. In CVPR, Cited by: §II.
  • [11] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach (2016) Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847. Cited by: §II.
  • [12] L. Gao, X. Li, J. Song, and H. T. Shen (2019) Hierarchical lstms with adaptive attention for visual captioning. PAMI. Cited by: §I, §II.
  • [13] S. Guadarrama, N. Krishnamoorthy, G. Malkarnenkar, S. Venugopalan, R. Mooney, T. Darrell, and K. Saenko (2013) Youtube2text: recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition. In ICCV, Cited by: §I.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §V-A2.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §II.
  • [16] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom (2015) Teaching machines to read and comprehend. In NeurIPS, Cited by: §V-B1.
  • [17] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8). Cited by: §III.
  • [18] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. (2017) The kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Cited by: §V-A2.
  • [19] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §V-B.
  • [20] D. Lin, S. Fidler, C. Kong, and R. Urtasun (2014) Visual semantic search: retrieving videos via complex textual queries. In CVPR, Cited by: §II.
  • [21] O. Marques and B. Furht (2002) Content-based image and video retrieval. Vol. 21, Springer Science & Business Media. Cited by: §I.
  • [22] T. Mikolov, K. Chen, G. Corrado, and J. Dean (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Cited by: §II, §V-A.
  • [23] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NeurIPS, Cited by: §V-A.
  • [24] R. Nallapati, B. Zhou, C. Gulcehre, B. Xiang, et al. (2016) Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Cited by: §III-B, §V-B1.
  • [25] Y. Pan, T. Mei, T. Yao, H. Li, and Y. Rui (2016) Jointly modeling embedding and translation to bridge video and language. In CVPR, Cited by: §II.
  • [26] K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002) BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, Cited by: §V-C.
  • [27] J. Pennington, R. Socher, and C. D. Manning (2014) Glove: global vectors for word representation. In EMNLP, Cited by: §II.
  • [28] B. A. Plummer, A. Mallya, C. M. Cervantes, J. Hockenmaier, and S. Lazebnik (2017) Phrase localization and visual relationship detection with comprehensive image-language cues. In ICCV, Cited by: §II.
  • [29] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever (2018) Improving language understanding by generative pre-training. https://s3-us-west-2.amazonaws.com/openai-assets/researchcovers/languageunsupervised/language_understanding_paper.pdf. Cited by: §II.
  • [30] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra (2014) Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604. Cited by: §II.
  • [31] A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele (2016) Grounding of textual phrases in images by reconstruction. In ECCV, Cited by: §II.
  • [32] A. Rohrbach, M. Rohrbach, W. Qiu, A. Friedrich, M. Pinkal, and B. Schiele (2014) Coherent multi-sentence video description with variable level of detail. In German conference on pattern recognition, Cited by: §I.
  • [33] A. Rohrbach, M. Rohrbach, N. Tandon, and B. Schiele (2015) A dataset for movie description. In CVPR, Cited by: §I, §I, §IV-B, TABLE I.
  • [34] A. Rohrbach, A. Torabi, M. Rohrbach, N. Tandon, C. Pal, H. Larochelle, A. Courville, and B. Schiele (2017) Movie description. IJCV. Cited by: §I, §I, §IV-B, TABLE I.
  • [35] A. See, P. J. Liu, and C. D. Manning (2017) Get to the point: summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Cited by: §I, §II, §III-B, §III-B, §III, §V-B1.
  • [36] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §V-A1.
  • [37] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §II.
  • [38] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15 (1). Cited by: §V-B.
  • [39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In CVPR, Cited by: §II.
  • [40] A. Torabi, C. Pal, H. Larochelle, and A. Courville (2015) Using descriptive video services to create a large data source for video annotation research. arXiv preprint arXiv:1503.01070. Cited by: §I, §IV-B.
  • [41] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri (2015) Learning spatiotemporal features with 3d convolutional networks. In ICCV, Cited by: §II.
  • [42] V. Tran and L. Nguyen (2017) Natural language generation for spoken dialogue system using rnn encoder-decoder networks. arXiv preprint arXiv:1706.00139. Cited by: §II.
  • [43] Z. Tu, Z. Lu, Y. Liu, X. Liu, and H. Li (2016) Modeling coverage for neural machine translation. arXiv preprint arXiv:1601.04811. Cited by: §III-B.
  • [44] S. Venugopalan, L. A. Hendricks, R. Mooney, and K. Saenko (2016) Improving lstm-based video description with linguistic knowledge mined from text. arXiv preprint arXiv:1604.01729. Cited by: §II.
  • [45] S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney, T. Darrell, and K. Saenko (2015) Sequence to sequence – video to text. In ICCV, Cited by: §I, §I, §III, §V-C.
  • [46] S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney, T. Darrell, and K. Saenko (2015) Sequence to sequence – video to text. In ICCV, Cited by: §II.
  • [47] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and K. Saenko (2014) Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729. Cited by: §II.
  • [48] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan (2015) Show and tell: a neural image caption generator. In CVPR, Cited by: §II.
  • [49] J. Wang, W. Jiang, L. Ma, W. Liu, and Y. Xu (2018) Bidirectional attentive fusion with context gating for dense video captioning. In CVPR, pp. 7190–7198. Cited by: §I.
  • [50] S. Whitehead, H. Ji, M. Bansal, S. Chang, and C. Voss (2018) Incorporating background knowledge into video description generation. In EMNLP, Cited by: §II, §IV-A, TABLE I, §V-A, §V-D, TABLE II.
  • [51] B. Xu, Y. Fu, Y. Jiang, B. Li, and L. Sigal (2016) Heterogeneous knowledge transfer in video emotion recognition, attribution and summarization. IEEE Transactions on Affective Computing 9 (2). Cited by: §II.
  • [52] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio (2015) Show, attend and tell: neural image caption generation with visual attention. In ICML, Cited by: §II, §II.
  • [53] L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville (2015) Describing videos by exploiting temporal structure. In ICCV, Cited by: §II.
  • [54] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson (2014) How transferable are features in deep neural networks?. In NeurIPS, Cited by: §II.
  • [55] H. Yu, J. Wang, Z. Huang, Y. Yang, and W. Xu (2016) Video paragraph captioning using hierarchical recurrent neural networks. In CVPR, Cited by: §II.
  • [56] H. Zhong, J. Shi, and M. Visontai (2004) Detecting unusual activity in video. In CVPR, Vol. 2. Cited by: §I.
  • [57] L. Zhou, C. Xu, and J. J. Corso (2018) Towards automatic learning of procedures from web instructional videos. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §I.
  • [58] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le (2018) Learning transferable architectures for scalable image recognition. In CVPR, Cited by: §II.