Video Moment Localization using Object Evidence and Reverse Captioning

by Madhawa Vidanapathirana, et al.
Simon Fraser University

We address the problem of language-based temporal localization of moments in untrimmed videos. Compared to temporal localization with fixed categories, this problem is more challenging as language-based queries have no predefined activity classes and may also contain complex descriptions. The current state-of-the-art model, MAC, addresses it by mining activity concepts from both video and language modalities: it encodes semantic activity concepts from the verb/object pair in a language query and leverages visual activity concepts from video activity classification prediction scores. We propose the "Multi-faceted Video Moment Localizer" (MML), an extension of the MAC model that introduces visual object evidence via object segmentation masks and video understanding features via video captioning. Furthermore, we improve the language modelling of the sentence embedding. We experimented on the Charades-STA dataset and found that MML outperforms the MAC baseline by 4.93% and 1.70% on the R@1 and R@5 metrics respectively. Our code and pre-trained model are publicly available at




1 Introduction

Imagine being able to search for the moment in a video where an adorable kitten sneezes, even though the uploader has not tagged its timestamp. Finding such important events in a video is called temporal moment localization. Videos contain many untagged moments, and hence progress on the temporal localization problem is widely applicable: it can benefit video search, video summarization, action moment detection and many other areas.

We address the problem of language-based action localization in untrimmed videos, where the task is to identify the temporal location within a video that is described by a given natural language query. For example, in Figure 1, given the query “Person drinking a cup of coffee”, the network should locate the frames in the video where a person is drinking coffee. This problem is challenging due to the complexity of action identification in an untrimmed video, which may contain a diverse combination of actors, actions and objects over time.

A prior work, Temporal Activity Localization via Language (TALL) [2], compares visual features with sentence embeddings via a Multi-Modal Processing Unit (MPU). It maps the visual and textual features to a single feature space by performing element-wise addition, element-wise multiplication and concatenation. The concatenated output of these operations is then used to generate a visual-semantic alignment score along with a location regression result for each video clip. However, the visual features used by [2] are trained for activity classification, not for comparison with text.
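The MPU fusion described above can be sketched as follows. This is a minimal illustration of the element-wise fusion scheme, assuming both modality vectors have already been projected to a common dimension; it is not the authors' implementation.

```python
import numpy as np

def multi_modal_processing_unit(visual, textual):
    """Sketch of the MPU fusion from TALL [2]: fuse two modality
    vectors of equal dimension by element-wise addition, element-wise
    multiplication, and concatenation, then concatenate the outputs
    of all three operations into a single feature vector."""
    added = visual + textual        # element-wise addition
    multiplied = visual * textual   # element-wise multiplication
    # concatenation of the two inputs, joined with the other outputs
    return np.concatenate([added, multiplied, visual, textual])
```

For two d-dimensional inputs the fused output is 4d-dimensional, which downstream layers consume to produce the alignment score and location regression.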

Recent work by Ge et al. [3] argues that the TALL [2] model ignores rich semantic information about activities in videos and queries. They improved the TALL architecture and proposed Mining Activity Concepts (MAC), which extracts activity concepts from the verb/object pairs of the query sentence and from videos. The verb/object pair represents semantic activity concepts that identify the action of a query. The video features from the last layer (FC8) of a C3D model [6], trained on the Kinetics activity classification dataset, represent visual activity concepts. These activity concepts are combined using an MPU to predict video moments. Although some Kinetics classes identify an object (e.g. playing guitar), this representation ignores the large number of object classes contained in a video.

To overcome the above-mentioned limitations, we propose the Multi-faceted Video Moment Localizer (MML), which uses object features from a semantic segmentation model and video understanding features from a video captioning model. Our contributions can be summarized as follows:

  • Introducing frame-level object evidence via semantic object segmentation features to explicitly identify the relationship between the query object and visible objects in video frames.

  • Introducing video captioning features as an additional feature for joint-embedding between video clip and query text.

  • Improved language modelling of the query text via the introduction of BERT sentence embeddings. We also experimented with encoding the Verb/Object pair (VO pair) via BERT, which did not improve results; we discuss the likely reasons for this counter-intuitive outcome in Section 2.1.

We performed an extensive ablation study to validate each architectural component we introduced. Our code and pre-trained model are available at

Figure 1: An example for the query “Person drinking a cup of coffee”. Green, red and blue lines indicate the ground truth frames, the baseline [3] prediction and the prediction by our method, respectively.

2 Approach

The proposed Multi-faceted Video Moment Localizer (MML) is based on the prior work MAC [3], which is the current state-of-the-art method for text-based video moment localization. We developed our model on top of a publicly available PyTorch implementation of MAC. We focused on improving the MAC architecture by introducing additional features that support video moment localization. Figure 2 shows the architecture of our model, in which the blocks with green and blue coloured text indicate our contributions.

Similar to the approach taken in MAC, we first divide a video into overlapping video clips. An alignment score and location offset pair is then calculated for the query sentence and each video clip. This calculation involves: 1) comparison of low-level video clip features (the C3D FC6 features from MAC and the video captioning features we introduced) with the sentence embedding, and 2) comparison of high-level video clip features (the visual activity concepts from MAC and the object segmentation features we introduced) with the GloVe embedding of the VO pair.

Two Multi-Modal Processing Units [2] are used for the above-mentioned comparisons. The outputs of the two MPUs are concatenated and passed through an additional fully connected layer (MLP) to obtain the alignment score and location offsets. Following the same approach as MAC, the alignment score is then multiplied by the actionness score, which indicates the likelihood that the candidate video clip contains meaningful activities. The location offsets are added to the video clip's time bounds to obtain the final prediction.
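The final scoring and refinement step can be summarized in a few lines. This is a sketch under stated assumptions (variable names are illustrative, not from the original code base): the alignment score is weighted by the actionness score and the regressed offsets adjust the candidate clip's boundaries.

```python
def refine_prediction(clip_start, clip_end, offsets, align_score, actionness):
    """Combine the MPU outputs as described in the text: multiply the
    alignment score by the actionness score, and add the predicted
    start/end offsets to the candidate clip's time bounds."""
    start_offset, end_offset = offsets
    score = align_score * actionness
    return clip_start + start_offset, clip_end + end_offset, score
```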

Figure 2: Architecture diagram of MML. Blocks with blue text indicate components we introduced. Blocks with green text indicate components we improved. Blocks with red text indicate components considered for improvements. Blocks with black text indicate other components from baseline [3].

2.1 Contribution 1: Improvements on language modelling

Both the TALL [2] and MAC [3] models use skip-thought vectors for sentence embedding. In our approach, we replaced the skip-thought embeddings with features from Google's pre-trained BERT [1] model. The intuition behind this change is two-fold: 1) a skip-thought embedding is a 4800-dimensional vector whereas a BERT embedding is a 768-dimensional vector, thereby reducing the parameter count and improving the generalization of the model; and 2) BERT is trained on significantly larger datasets: BooksCorpus (800M words) and Wikipedia (2,500M words). As explained in Section 3, the introduction of BERT sentence embeddings improved the results of our model. We also tried Facebook AI's RoBERTa [5], a derivative of BERT, but RoBERTa reduced the performance of our model. This reduction is probably because RoBERTa removed the next-sentence-prediction training objective, which had made BERT versatile for sentence embedding.

The MAC [3] model uses GloVe embeddings for VO word pairs. We tried using BERT as a substitute for GloVe, but that only degraded results. This is probably because a BERT embedding (768-d) is much larger than a GloVe embedding (300-d), which increases the number of parameters in the model and may have hurt its generalization.

2.2 Contribution 2: Introducing object segmentation features

The baseline [3] uses the FC8 features (i.e. action class predictions) of a C3D model trained on the Kinetics dataset as visual activity concepts to identify actions in a clip. However, only 15.82% of the objects mentioned in the query text of the Charades-STA dataset are covered by Kinetics classes. To overcome this issue, we use object segmentation features to explicitly include the object information contained in a clip.

For identifying objects in video frames, we considered semantic segmentation models trained on the ADE20K dataset [7], which consists of 150 object classes. From the publicly available pre-trained object segmentation models, we used “MobileNetV2dilated + C1_deepsup” in the interest of time; it extracted the required features in about 5 days. We sampled frames of the video clips at a fixed interval and obtained each frame's class distribution using the semantic segmentation model. We computed the class means of each frame and then max-pooled across the time dimension. This process results in a 150-dimensional vector for each video clip. Figure 3 demonstrates the aforementioned object feature extraction process. At training time, the object feature vector is normalized, scaled by a scale ratio and concatenated with the Visual Activity Concept features before being passed to the MLP that generates the input for the corresponding MPU. To address over-fitting, dropout layers (with separate dropout ratios for the object features and the visual activity concept features) were applied to both inputs.

To identify suitable values for the scale ratio and the two dropout ratios, we used a 3-axis parameter sweep as explained in Section 3.4.2.
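The pooling steps of the object feature extraction can be sketched as below. The input shape is an assumption for illustration: per-pixel class distributions from the segmentation model for the sampled frames of one clip, with 150 ADE20K classes.

```python
import numpy as np

def clip_object_features(frame_class_scores):
    """Sketch of the object-evidence extraction described above.
    frame_class_scores: array of shape (T, H, W, 150) holding the
    per-pixel class distributions of T sampled frames (shape assumed
    for illustration).
    Per frame: average the class distribution over all pixels;
    then max-pool across the time dimension -> one 150-d clip vector."""
    per_frame_means = frame_class_scores.mean(axis=(1, 2))  # (T, 150)
    return per_frame_means.max(axis=0)                      # (150,)
```

Max-pooling over time keeps the strongest per-class evidence from any sampled frame, so an object visible in only part of the clip still registers in the clip-level vector.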

Figure 3: Process of extracting object segmentation features.

2.3 Contribution 3: Introducing video captioning features

Existing methods for video moment localization use C3D features from the activity classification domain. However, the task of moment localization differs from activity classification in that we must compare data from two different domains: a highly complex video domain and a relatively simple text domain. It may therefore help to use features from a model that relates videos to text, so we incorporate video captioning features into our model.

We use the Temporal Shift Module (TSM) [4] for this task, as it is a state-of-the-art method in video understanding. We used a TSM ResNet50 model, pre-trained on the Kinetics-400 dataset, to extract video captioning features. The model outputs a feature vector per frame; we perform average pooling across all frames of a clip to obtain one feature vector for each video clip. These features denote a low-level representation of the video clip, so we concatenate them with the FC6 features of the C3D model to calculate the alignment score and frame offsets. The result of using video captioning features is given in Table 1.
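The pooling and concatenation described above can be sketched as follows. Shapes are assumptions for illustration (a ResNet50 backbone produces 2048-d per-frame features; the baseline's C3D FC6 features are 4096-d), and the scale parameter reflects the hyper-parameter search in Section 3.4.2.

```python
import numpy as np

def clip_video_features(tsm_frame_feats, c3d_fc6_feat, scale=1.0):
    """Sketch of the video-captioning feature path: average-pool the
    per-frame TSM features of one clip, scale the result, and
    concatenate it with the clip's C3D FC6 feature to form the
    low-level input compared against the sentence embedding.
    tsm_frame_feats: shape (T, D); c3d_fc6_feat: shape (D_fc6,)."""
    pooled = tsm_frame_feats.mean(axis=0) * scale  # (D,)
    return np.concatenate([c3d_fc6_feat, pooled])  # (D_fc6 + D,)
```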

3 Experiments

3.1 Dataset

We used the Charades-STA [2] dataset for training and evaluating the MML model. This dataset contains around 10,000 videos, where each video has clip-level sentence descriptions coupled with start/end time-stamps. In total, there are 13,898 clip-sentence pairs in the Charades-STA training set and 4,233 clip-sentence pairs in the test set. To keep our results comparable, we used the same train/test splits of Charades-STA as the baseline [3] for all experiments.

3.2 Evaluation Metrics

We adopted the evaluation metrics R@1 at IOU=0.5 and R@5 at IOU=0.5 used by the baseline [3]. R@n at IOU=u can be calculated as R(n, u) = (1/N_q) * sum_i r(n, u, q_i), where r(n, u, q_i) is the alignment result for query q_i: it is 1 (correct alignment) if at least one of the top-n scored video clips has a temporal IOU of at least u with the ground truth of q_i, and 0 (wrong alignment) otherwise. N_q is the total number of queries, so R@n at IOU=u is an averaged figure; a higher value is better. All measurements in this paper are at IOU=0.5.

3.3 Baseline

The baseline model by the MAC [3] authors exhibits an R@1 of 0.304 and an R@5 of 0.648 on the Charades-STA dataset. However, the PyTorch implementation of the same model gives slightly different values: 0.297 for R@1 and 0.641 for R@5. MML was developed from the code base of this PyTorch implementation, so we consider both of these as baselines for our experiments.

3.4 Experimental results

The best models we identified provide 4.93% (R@1) and 1.70% (R@5) improvements over the MAC authors' baseline. In this subsection, we first discuss the effect of each component of the architecture using an ablation study. We then provide details of the hyper-parameter tuning of the object segmentation and video captioning features, followed by a qualitative analysis of the proposed model.

Model               | Sentence emb. | VO emb. | Obj. seg. | Video cap. | R@1   | R@5
(authors' baseline) | SkipThought   | GloVe   | -         | -          | 0.304 | 0.648
(PyTorch baseline)  | SkipThought   | GloVe   | -         | -          | 0.297 | 0.641
Model 1             | BERT          | GloVe   | -         | -          | 0.299 | 0.647
Model 2             | SkipThought   | GloVe   | yes       | -          | 0.308 | 0.646
Model 3             | BERT          | GloVe   | yes       | -          | 0.313 | 0.659
Model 4             | SkipThought   | BERT    | yes       | -          | 0.302 | 0.642
Model 5             | BERT          | BERT    | yes       | -          | 0.301 | 0.647
Model 6             | RoBERTa       | GloVe   | yes       | -          | 0.238 | 0.574
Model 7             | BERT          | GloVe   | yes       | yes        | 0.319 | 0.651
Table 1: Effect of different features on the performance of MML. R@1 and R@5 are the evaluation metrics. The best performing models are Model 7 (best R@1) and Model 3 (best R@5).

3.4.1 Ablation study on feature selection

We performed an extensive ablation study to validate the effectiveness of the individual components of the proposed model. Table 1 reports the R@1 and R@5 scores of the ablation studies. The introduction of BERT sentence embeddings and object segmentation features helps outperform the baseline. The best models we identified (Model 3 and Model 7) both include BERT sentence embeddings, object segmentation features and GloVe VO embeddings. The introduction of video captioning features further improves R@1. These best models provide 4.93% (R@1 of Model 7) and 1.70% (R@5 of Model 3) improvements over the MAC authors' baseline. Considering Model 3 alone, we obtain 2.96% (R@1) and 1.70% (R@5) improvements.

Models 3 and 7, the best performing MML models, achieve 7.41% and 4.93% improvements in R@1 over the MAC PyTorch and authors' baselines respectively. The introduction of object segmentation features (Model 2 vs. the PyTorch baseline) and video captioning features (Model 7 vs. Model 3) boosts R@1 by 3.70% and 2.02% respectively. Although BERT sentence embedding alone contributes only a 0.67% increase in R@1, its combination with object segmentation features provides a further improvement of 1.01%.

3.4.2 Hyper-parameter tuning on object segmentation and video captioning features

As mentioned in Section 2.2, we considered multiple values for the scale ratio and the two dropout ratios to address over-fitting, using a 3-axis parameter sweep over candidate values for each. We validated models that use BERT sentence embeddings, GloVe VO embeddings and no video captioning features, and identified the best configuration, which corresponds to Model 3 in Table 1. The complete parameter sweep took about 2 days on a machine with an RTX 2080 Ti GPU, with all features cached in system memory.
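The 3-axis sweep amounts to an exhaustive grid search over the three hyper-parameters. The sketch below assumes a hypothetical `train_and_eval` callback that trains a model with a given configuration and returns its validation R@1; the actual candidate values are not reproduced here.

```python
from itertools import product

def grid_sweep(train_and_eval, scales, obj_dropouts, vac_dropouts):
    """Sketch of the 3-axis hyper-parameter sweep: try every
    combination of scale ratio, object-feature dropout ratio and
    visual-activity-concept dropout ratio, and keep the configuration
    with the best validation R@1."""
    best_cfg, best_r1 = None, -1.0
    for cfg in product(scales, obj_dropouts, vac_dropouts):
        r1 = train_and_eval(*cfg)
        if r1 > best_r1:
            best_cfg, best_r1 = cfg, r1
    return best_cfg, best_r1
```

Caching all features in system memory, as noted above, is what makes re-training across the whole grid feasible in days rather than weeks.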

(a) R@1 (IOU=0.5) on test set
(b) R@5 (IOU=0.5) on test set
Figure 4: Metrics at various scale ratios and object-feature dropout ratios; the visual activity concept dropout ratio is kept at 0. BERT sentence embeddings, GloVe VO embeddings and no video captioning features are used. The baseline by the MAC authors is provided for reference.

Figure 4 shows the plots of R@1 and R@5 of the best models at various scale ratios and dropout ratios of the object features. The best models are identified by validating after each epoch. The visual activity concept dropout ratio is set to 0 (no dropout), as this was identified to be the best configuration. A significant reduction in R@1 and R@5 is observed when the dropout ratio of the object features is increased, indicating that those models over-fit.

We performed a similar hyper-parameter search to find a suitable scale for incorporating the video captioning features, based on the best performing hyper-parameters for the object segmentation features. The best scale produced the results shown as Model 7 in Table 1.

3.4.3 Qualitative results

Figures 5 and 6 show qualitative examples from MML compared to the MAC baseline. These examples are from Model 7 in Table 1. Below each example, the ground truth (green stripe), the MAC prediction (red stripe) and the MML prediction (blue stripe) are indicated.

Figure 5 shows examples where MML improved results over the MAC baseline. The improvements in examples 5(a), 5(b), 5(c), 5(d) and 5(e) can be largely attributed to the introduction of object segmentation features, because in these examples the objects referred to by the queries enter or leave visibility in the vicinity of the predictions. In example 5(f), improvements are made despite the object (“shoe”) being in the scene throughout the video; thus, the BERT sentence embedding and video captioning features may have played a role in this improvement.

Figure 6 shows two examples where MML did not improve the results. In Figure 6(a), the object does not change visibility, and the result is identical to the baseline. Figure 6(b) shows an example where MML performed worse than the MAC baseline; in this case, there is no object in the query sentence.

(a) person hold the shoes
(b) person puts a pillow on their head
(c) the person is playing with a phone
(d) person finally eating a sandwich in a living room
(e) the person takes a paper towel from the table
(f) the person washes dishes over the sink
Figure 5: Selected top-1 results with improvements. Green lines, red lines and blue lines indicate ground truths, MAC results and MML results respectively. Queries are provided below each figure.
(a) person puts shoes on
(b) the person stands up
Figure 6: Selected top-1 results with no improvements. Green lines, red lines and blue lines indicate ground truths, MAC results and MML results respectively. Queries are provided below each figure.

4 Conclusion

In this project, we addressed the problem of video moment localization and proposed a novel method, the Multi-faceted Video Moment Localizer (MML). Our model is built on top of the current state-of-the-art model MAC [3]. We introduced BERT sentence features for the text query, and object segmentation and video captioning features for the video, thereby improving language-based localization of moments in a given video. We performed an extensive ablation study to validate the effectiveness of each component introduced in MML. Our experiments show the improvements of our proposed method over the current baselines.


  • [1] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • [2] J. Gao, C. Sun, Z. Yang, and R. Nevatia (2017) TALL: temporal activity localization via language query. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 5277–5285.
  • [3] R. Ge, J. Gao, K. Chen, and R. Nevatia (2018) MAC: mining activity concepts for language-based temporal localization. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 245–253.
  • [4] J. Lin, C. Gan, and S. Han (2019) TSM: temporal shift module for efficient video understanding. Proceedings of the IEEE International Conference on Computer Vision, pp. 7083–7093.
  • [5] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
  • [6] D. Tran, L. D. Bourdev, R. Fergus, L. Torresani, and M. Paluri (2014) C3D: generic features for video analysis. arXiv abs/1412.0767.
  • [7] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba (2017) Scene parsing through ADE20K dataset. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5122–5130.