wMAN: Weakly-supervised Moment Alignment Network for Text-based Video Segment Retrieval

09/27/2019 ∙ by Reuben Tan, et al.

Given a video and a sentence, the goal of weakly-supervised video moment retrieval is to locate the video segment which is described by the sentence without having access to temporal annotations during training. Instead, a model must learn how to identify the correct segment (i.e. moment) when only being provided with video-sentence pairs. Thus, an inherent challenge is automatically inferring the latent correspondence between visual and language representations. To facilitate this alignment, we propose our Weakly-supervised Moment Alignment Network (wMAN) which exploits a multi-level co-attention mechanism to learn richer multimodal representations. The aforementioned mechanism is comprised of a Frame-By-Word interaction module as well as a novel Word-Conditioned Visual Graph (WCVG). Our approach also incorporates a novel application of positional encodings, commonly used in Transformers, to learn visual-semantic representations that contain contextual information of their relative positions in the temporal sequence through iterative message-passing. Comprehensive experiments on the DiDeMo and Charades-STA datasets demonstrate the effectiveness of our learned representations: our combined wMAN model not only outperforms the state-of-the-art weakly-supervised method by a significant margin but also does better than strongly-supervised state-of-the-art methods on some metrics.


1 Introduction

Video understanding has been a mainstay of artificial intelligence research. Recent work has sought to better reason about videos by learning more effective spatio-temporal representations (tran2015learning; carreira2017quo). The video moment retrieval task, also known as text-to-clip retrieval, combines language and video understanding to find activities described by a natural language sentence. The main objective of the task is to identify the video segment within a longer video that is most relevant to a sentence. This requires a model to learn the mapping of correspondences (alignment) between the visual and natural language modalities.

In the strongly-supervised setting, existing methods (hendricks17iccv; chen2018temporally; ghosh2019excl) generally learn joint visual-semantic representations by projecting video and language representations into a common embedding space and leverage the provided temporal annotations to learn regression functions (gao2017tall) for localization. However, such temporal annotations are often ambiguous and expensive to collect. mithun2019weakly seek to circumvent these problems by addressing this task in the weakly-supervised setting, where only full video-sentence pairs are provided as weak labels; the lack of temporal annotations renders the aforementioned strongly-supervised approaches infeasible. In their approach (Figure 1a), mithun2019weakly propose a Text-Guided Attention (TGA) mechanism to attend on segment-level features w.r.t. the sentence-level representation. However, such an approach treats the segment-level visual representations as independent inputs and ignores the contextual information derived from other segments in the video. More importantly, it does not exploit the fine-grained semantics of each word in the sentence. Consequently, existing methods cannot comprehensively reason about the latent alignment between the visual and language representations.

In this paper, we take another step towards addressing the limitations of current weakly-supervised video moment retrieval methods by exploiting the fine-grained temporal and visual relevance of each video frame to each word (Figure 1b). Our approach is built on two core insights: 1) the temporal occurrence of frames or segments in a video provides vital visual information required to reason about the presence of an event; 2) the semantics of the query are integral to reasoning about the relationships between entities in the video. With this in mind, we propose our Weakly-Supervised Moment Alignment Network (wMAN). An illustrative overview of our model is shown in Figure 2. The key component of wMAN is a multi-level co-attention mechanism that is encapsulated by a Frame-By-Word (FBW) interaction module as well as a Word-Conditioned Visual Graph (WCVG). To begin, we exploit the similarity scores of all possible pairs of visual frame and word features to create frame-specific sentence representations and word-specific video representations. The intuition is that frames relevant to a word should have a higher measure of similarity than the rest. The word representations are updated with their word-specific video representations to create visual-semantic representations. A graph (the WCVG) is then built with the frame and visual-semantic representations as nodes, introducing another level of attention between them. During the message-passing process, the frame nodes are iteratively updated with relational information from the visual-semantic nodes to create the final temporally-aware multimodal representations, where the contribution of each visual-semantic node to a frame node is dynamically weighted based on their similarity. To learn such representations, wMAN also incorporates positional encodings (vaswani2017attention) into the visual representations to integrate contextual information about their relative positions. Such contextual information encourages the learning of temporally-aware multimodal representations.

Figure 1: Given a video and a sentence, our aim is to retrieve the most relevant segment (the red bounding box in this example). Existing methods consider video frames as independent inputs and ignore the contextual information derived from other frames in the video. They compute a similarity score between the segment and the entire sentence to determine their relevance to each other. In contrast, our proposed approach aggregates contextual information from all the frames using graph propagation and leverages fine-grained frame-by-word interactions for more accurate retrieval. (Only some interactions are shown to prevent overcrowding the figure.)

To learn these representations, we use a Multiple Instance Learning (MIL) framework similar in nature to the Stacked Cross Attention Network (SCAN) model (lee2018stacked). The SCAN model leverages image region-by-word interactions to learn better representations for image-text matching. In addition, the WCVG module draws inspiration from the Language-Conditioned Graph Network (LCGN) of hu2019language, which seeks to create context-aware object features in an image. However, the LCGN model works with sentence-level representations, which do not comprehensively account for the relevance of each word to each visual node. wMAN also distinguishes itself from the above-mentioned models by extracting temporally-aware multimodal representations from videos and their corresponding descriptions, whereas SCAN and LCGN operate only on images.

Figure 2: An overview of our combined wMAN model which is trained end-to-end. We use the outputs of the GRU as word representations where its inputs are word embeddings. The visual representations are the outputs of the LSTM unit where its inputs are the extracted features from a pretrained CNN. The visual representations are concatenated with positional encodings to integrate contextual information about their relative positions in the sequence. Our model consists of a two-stage multimodal interaction mechanism - Frame-By-Word Interactions and the WCVG.

The contributions of our paper are summarized below:

  • We propose a simple yet intuitive MIL approach for weakly-supervised video moment retrieval from language queries by exploiting fine-grained frame-by-word alignment.

  • Our novel Word-Conditioned Visual Graph learns richer visual-semantic context through a multi-level co-attention mechanism.

  • We introduce a novel application of positional encodings in video representations to learn temporally-aware multimodal representations.

To demonstrate the effectiveness of our learned temporally-aware multimodal representations, we perform extensive experiments on two datasets, DiDeMo (hendricks17iccv) and Charades-STA (gao2017tall), where we outperform the state-of-the-art weakly-supervised model by a significant margin and strongly-supervised state-of-the-art models on some metrics.

2 Related Work

Most recent works on video moment retrieval from natural language queries (hendricks17iccv; ghosh2019excl; xu2019multilevel; zhang2019man; chen2018temporally; yuan2019find; chen2019semantic; chen2019localizing; ge2019mac) operate in the strongly-supervised setting, where the provided temporal annotations can be used to improve the alignment between the visual and language modalities. Among them, the Moment Alignment Network (MAN) introduced by zhang2019man utilizes a structured graph network to model temporal relationships between candidate moments; a key distinction of our wMAN is that our iterative message-passing process is conditioned on the multimodal interactions between frame and word representations. The TGN model (chen2018temporally) bears some resemblance to ours in leveraging frame-by-word interactions to improve performance. However, it only uses a single level of attention, which cannot comprehensively infer the correspondence between the visual and language modalities. In addition, we reiterate that all these methods train their models with strong supervision, whereas we address the weakly-supervised setting of this task.

There are also a number of tasks closely related to video moment retrieval, such as temporal activity detection in videos. A general pipeline of proposal and classification is adopted by various temporal activity detection models (xu2017r; zhao2017temporal; shou2016temporal), with the temporal proposals learned by temporal coordinate regression. However, these approaches assume a predefined list of activities rather than an open-ended set of natural language queries provided at test time. Methods for visual phrase grounding are also given natural language queries as input (chen2017query; liu2017referring; faghri2018vse++; nam2017dual; karpathy2015deep; plummer2018conditional), but the task is performed over image regions to locate a relevant bounding box rather than over video segments to locate the correct moment.

3 Weakly-Supervised Moment Alignment Network

In the video moment retrieval task, given a ground truth video-sentence pair, the goal is to retrieve the most relevant video moment related to the description. The weakly-supervised version of this task we address can be formulated under the multiple instance learning (MIL) paradigm. When training using MIL, one receives a bag of items, where the bag is labeled as a positive if at least one item in the bag is a positive, and is labeled as a negative otherwise. In weakly-supervised moment retrieval, we are provided with a video-sentence pair (i.e., a bag) and the video segments are the items that we must learn to correctly label as relevant to the sentence (i.e., positive) or not. Following mithun2019weakly, we assume sentences are only associated with their ground truth video, and any other videos are negative examples. To build a good video-sentence representation, we introduce our Weakly-Supervised Moment Alignment Network (wMAN), which learns context-aware visual-semantic representations from fine-grained frame-by-word interactions. As seen in Figure 2, our network has two major components - (1) representation learning constructed from the Frame-By-Word attention and Positional Embeddings (vaswani2017attention), described in Section 3.1, and (2) a Word-Conditioned Visual Graph where we update video segment representations based on context from the rest of the video, described in Section 3.2. These learned video segment representations are used to determine their relevance to their corresponding attended sentence representations using a LogSumExp (LSE) pooling similarity metric, described in Section  3.3.
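As an illustration of this MIL setup, the following minimal Python sketch (an illustrative assumption, not our released implementation) forms (V+, V-, q) training triplets from weakly labeled video-sentence pairs: each sentence is paired with its ground-truth video as the positive bag and with a randomly sampled other video as the negative bag.

```python
# Minimal sketch: turning weak video-sentence supervision into MIL triplets.
# The data structure (a list of (video, sentence) pairs) is an assumption.
import random

def sample_triplets(video_sentence_pairs):
    triplets = []
    for idx, (video, sentence) in enumerate(video_sentence_pairs):
        # Any other video serves as a negative example for this sentence.
        neg_idx = random.choice([j for j in range(len(video_sentence_pairs)) if j != idx])
        neg_video = video_sentence_pairs[neg_idx][0]
        triplets.append((video, neg_video, sentence))  # (V+, V-, q)
    return triplets
```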

3.1 Learning Tightly Coupled Multimodal Representations

In this section we discuss our initial video and sentence representations, which are updated with contextual information in Section 3.2. Each word in an input sentence is encoded using GloVe embeddings (pennington2014glove) and then fed into a Gated Recurrent Unit (GRU) (cho2014learning). The output of this GRU is denoted as W = {w_1, ..., w_M}, where M is the number of words in the sentence. Each frame in the input video is encoded using a pretrained Convolutional Neural Network (CNN). In the case of a 3D CNN this actually corresponds to a small chunk of sequential frames, but we refer to it as a frame representation throughout this paper for simplicity. To capture long-range dependencies, we feed the frame features into a Long Short-Term Memory (LSTM) network (hochreiter1997long). The latent hidden-state outputs of the LSTM are concatenated with positional encodings (described below) to form the initial video representations, denoted as F = {f_1, ..., f_N}, where N is the number of frame features for video V.
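As a concrete illustration, the following minimal PyTorch sketch encodes words with a GRU and frames with an LSTM as described above; the feature dimensions and module configuration are illustrative assumptions rather than the exact settings used in our experiments.

```python
# Minimal sketch of the word (GRU) and frame (LSTM) encoders.
import torch
import torch.nn as nn

class TextFrameEncoders(nn.Module):
    def __init__(self, word_dim=300, frame_dim=4096, hidden_dim=512):
        super().__init__()
        # GRU over GloVe word embeddings -> word representations w_1..w_M
        self.gru = nn.GRU(word_dim, hidden_dim, batch_first=True)
        # LSTM over pretrained CNN frame features -> frame representations f_1..f_N
        self.lstm = nn.LSTM(frame_dim, hidden_dim, batch_first=True)

    def forward(self, word_embs, frame_feats):
        # word_embs: (B, M, word_dim); frame_feats: (B, N, frame_dim)
        words, _ = self.gru(word_embs)      # (B, M, hidden_dim)
        frames, _ = self.lstm(frame_feats)  # (B, N, hidden_dim)
        return words, frames

enc = TextFrameEncoders()
w, f = enc(torch.randn(2, 8, 300), torch.randn(2, 20, 4096))
```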

Positional Encodings (PE). To provide some notion of the relative position of each frame, we include PE features, which have been used in language tasks such as learning language representations with BERT (devlin2018bert; vaswani2017attention). These PE features can be thought of as similar to the temporal endpoint features (TEF) used in prior work on the strongly-supervised moment retrieval task (e.g., hendricks17iccv), but PE features encode the temporal position of each individual frame rather than a rough position at the segment level. For PE features of dimension d, let pos denote the temporal position of each frame, i the index of the feature dimension being encoded, and k a scalar constant; the PE features are then defined as:

PE(pos, 2i) = \sin\big(pos / k^{2i/d}\big), \qquad PE(pos, 2i+1) = \cos\big(pos / k^{2i/d}\big) \qquad (1)

Through experiments, we find that the hyper-parameter k = 10,000 works well for all videos. These PE features are concatenated with the LSTM-encoded frame features at the corresponding frame positions before being passed to the cross-modal interaction layers.
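A minimal sketch of the sinusoidal positional encodings of Eq. (1) is given below, following the standard Transformer formulation with k = 10,000; the PE dimension and the concatenation shown are illustrative assumptions.

```python
# Minimal sketch of sinusoidal positional encodings (Eq. 1).
import torch

def positional_encodings(num_frames: int, dim: int, k: float = 10000.0) -> torch.Tensor:
    pos = torch.arange(num_frames, dtype=torch.float32).unsqueeze(1)  # (N, 1)
    i = torch.arange(0, dim, 2, dtype=torch.float32)                  # even feature indices
    freq = torch.pow(k, -i / dim)                                     # k^{-2i/d}
    pe = torch.zeros(num_frames, dim)
    pe[:, 0::2] = torch.sin(pos * freq)   # even dimensions use sine
    pe[:, 1::2] = torch.cos(pos * freq)   # odd dimensions use cosine
    return pe

# Concatenate with the LSTM frame features, as described above.
frames = torch.randn(20, 512)
frames_with_pe = torch.cat([frames, positional_encodings(20, 64)], dim=-1)  # (20, 576)
```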

3.1.1 Frame-By-Word Interaction

Rather than relating a sentence-level representation with each frame as done in prior work (mithun2019weakly), we aggregate similarity scores between all frame and word combinations from the input video and sentence. These Frame-By-Word (FBW) similarity scores are used to compute attention weights that identify which frame and word combinations are important for retrieving the correct video segment. More formally, for N video frames and M words in the input, we compute:

s_{ij} = \frac{f_i^{\top} w_j}{\lVert f_i \rVert \, \lVert w_j \rVert}, \quad i \in [1, N], \; j \in [1, M] \qquad (2)

Note that f_i now represents the concatenation of the i-th video frame feature and its PE features.

Frame-Specific Sentence Representations. We obtain the normalized relevance of each word w.r.t. each frame from the FBW similarity matrix and use it to compute an attention weight for each word:

\alpha_{ij} = \frac{\exp(s_{ij})}{\sum_{j'=1}^{M} \exp(s_{ij'})} \qquad (3)

Using these attention weights, a weighted combination of all the words is created, with words correlated to the frame receiving high attention. Intuitively, a word-frame pair should have a high similarity score if the frame contains a visual reference to the word. The frame-specific sentence representation, which emphasizes the words relevant to frame i, is then defined as:

a_i = \sum_{j=1}^{M} \alpha_{ij} \, w_j \qquad (4)

Note that these frame-specific sentence representations do not participate in the iterative message-passing process (Section 3.2). Instead, they are used to infer the final similarity score between a video segment and the query (Section 3.3).

Word-Specific Video Representations. To determine the normalized relevance of each frame w.r.t. each word, we compute attention weights over the frames:

\beta_{ij} = \frac{\exp(s_{ij})}{\sum_{i'=1}^{N} \exp(s_{i'j})} \qquad (5)

Similarly, we attend to the visual frame features with respect to each word by creating a weighted combination of the frame features, determined by the relevance of each frame to the word. Each word-specific video representation is defined as:

v_j = \sum_{i=1}^{N} \beta_{ij} \, f_i \qquad (6)

These word-specific video representations are used in our Word-Conditioned Visual Graph, which we will discuss in the next section.
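The following minimal PyTorch sketch illustrates the Frame-By-Word interactions of Eqs. (2)-(6) for a single video-sentence pair; the tensor shapes are illustrative assumptions, and batching is omitted for clarity.

```python
# Minimal sketch of the Frame-By-Word (FBW) interaction module.
import torch
import torch.nn.functional as F

def frame_by_word(frames, words):
    # frames: (N, D) frame features (with PEs); words: (M, D) word features
    f = F.normalize(frames, dim=-1)
    w = F.normalize(words, dim=-1)
    s = f @ w.t()                                # (N, M) similarity matrix, Eq. (2)

    # Normalized relevance of each word to each frame (Eq. 3) and
    # frame-specific sentence representations (Eq. 4).
    attn_words = torch.softmax(s, dim=1)         # softmax over words
    frame_sent = attn_words @ words              # a_i, shape (N, D)

    # Normalized relevance of each frame to each word (Eq. 5) and
    # word-specific video representations (Eq. 6).
    attn_frames = torch.softmax(s, dim=0)        # softmax over frames
    word_video = attn_frames.t() @ frames        # v_j, shape (M, D)
    return s, frame_sent, word_video
```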

3.2 Word-Conditioned Visual Graph Network

Given the sets of visual representations, word representations, and their corresponding word-specific video representations, the WCVG aims to learn temporally-aware multimodal representations by integrating visual-semantic and contextual information into the visual features. To begin, each word representation is updated with its corresponding video representation to create a new visual-semantic representation m_j, obtained by concatenating the word w_j with its word-specific video representation v_j. Intuitively, the visual-semantic representations contain not only the semantic context of each word but also a summary of the video with respect to that word. A fully connected graph is then constructed with the visual features and the visual-semantic representations as nodes.

Iterative Word-Conditioned Message-Passing. The iterative message-passing process introduces a second round of FBW interaction, similar to that in Section 3.1.1, to infer the latent temporal correspondence between each frame f_i and each visual-semantic representation m_j. The goal is to update the representation of each frame with video context information from every word-specific video representation. To realize this, on every message-passing iteration we apply a learned projection W_m followed by a ReLU to obtain a new word representation m'_j = ReLU(W_m m_j), which replaces w_j in Eq. (2) to compute a new similarity matrix s^{(t)}.

Updates of Visual Representations. During the update process, each visual-semantic node sends its message (its representation) to each visual node, weighted by their edge weight. The representations of the visual nodes at the t-th iteration are updated by summing the incoming messages as follows:

f_i^{(t)} = f_i^{(t-1)} + \sum_{j=1}^{M} \alpha_{ij}^{(t)} \, W_v m_j \qquad (7)

where \alpha_{ij}^{(t)} is obtained by applying Eq. (3) to the newly computed FBW similarity matrix s^{(t)}, and W_v is a learned projection that maps the messages to the same dimensions as the frame-specific sentence representations (refer to Eq. (4)), which are finally used to compute a sentence-segment similarity score.
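A minimal sketch of one Word-Conditioned Visual Graph message-passing iteration is shown below; the two projection layers and the residual-style update are assumptions consistent with the description above, not a verbatim reproduction of our implementation.

```python
# Minimal sketch of one WCVG message-passing iteration (Eq. 7).
import torch
import torch.nn as nn
import torch.nn.functional as F

class WCVGStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj_word = nn.Linear(2 * dim, dim)  # W_m: projects [w_j ; v_j] to a new word rep
        self.proj_msg = nn.Linear(2 * dim, dim)   # W_v: projects messages back to the frame dim

    def forward(self, frames, words, word_video):
        # frames: (N, D); words: (M, D); word_video: (M, D) word-specific video reps
        msgs = torch.cat([words, word_video], dim=-1)   # visual-semantic nodes m_j
        new_words = F.relu(self.proj_word(msgs))        # replacement for w_j in Eq. (2)

        # New FBW similarity matrix and attention (Eq. 3) for this iteration.
        s = F.normalize(frames, dim=-1) @ F.normalize(new_words, dim=-1).t()
        attn = torch.softmax(s, dim=1)

        # Each frame aggregates the projected visual-semantic messages (Eq. 7).
        return frames + attn @ self.proj_msg(msgs)
```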

3.3 Multimodal Similarity Inference

The final updated visual representations are used to compute the relevance of each frame to its attended sentence representation. A segment is defined as any arbitrary contiguous sequence of visual features. We denote a segment as S_k = {f_1, ..., f_K}, where K is the number of frame features contained within the segment S_k. We adopt the LogSumExp (LSE) pooling similarity metric used in SCAN to determine the relevance each proposal segment has to the query:

Sim(S_k, q) = \frac{1}{\lambda} \log \left( \sum_{i=1}^{K} \exp\big(\lambda \, R(f_i, a_i)\big) \right) \qquad (8)

Here R(f_i, a_i) denotes the cosine similarity between the updated frame representation f_i and its frame-specific sentence representation a_i, as in SCAN, and \lambda is a hyperparameter that weighs the relevance of the most salient parts of the video segment to the corresponding frame-specific sentence representations. Finally, following mithun2019weakly, given a triplet (V^{+}, V^{-}, q), where (V^{+}, q) is a positive video-sentence pair and (V^{-}, q) a negative pair, we use a margin-based ranking loss to train our model, which ensures that the positive pair's similarity score exceeds the negative pair's by at least a margin \Delta. Our model's loss is then defined as:

\mathcal{L} = \max\big(0, \; \Delta - Sim(V^{+}, q) + Sim(V^{-}, q)\big) \qquad (9)

The LSE-pooled similarity from Eq. (8) is used as the similarity metric between positive and negative pairs. At test time, the pooled similarity scores are also used to rank the candidate temporal segments generated by sliding windows, and the top-scoring segments are returned as the localized segments corresponding to the input query sentence.
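The following minimal PyTorch sketch illustrates the LSE-pooled segment-query similarity of Eq. (8) and the margin-based ranking loss of Eq. (9); the values of \lambda and the margin used here are placeholder assumptions.

```python
# Minimal sketch of LSE pooling (Eq. 8) and the margin-based ranking loss (Eq. 9).
import torch
import torch.nn.functional as F

def segment_query_similarity(frames, frame_sent, lam=6.0):
    # frames: (K, D) updated frame reps within a segment;
    # frame_sent: (K, D) their frame-specific sentence representations a_i.
    r = F.cosine_similarity(frames, frame_sent, dim=-1)   # per-frame relevance R(f_i, a_i)
    return torch.logsumexp(lam * r, dim=0) / lam           # LSE pooling

def ranking_loss(sim_pos, sim_neg, margin=0.1):
    # Positive video-sentence pair should score higher than the negative by `margin`.
    return F.relu(margin - sim_pos + sim_neg)
```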

4 Experiments

We evaluate the capability of wMAN to accurately localize video moments from natural language queries without temporal annotations on two datasets, DiDeMo and Charades-STA. On the DiDeMo dataset, we adopt the mean Intersection-Over-Union (mIoU) and Recall@N at a given IoU threshold. Recall@N is the percentage of test samples for which at least one of the top-N ranked sliding-window segments has an overlap of at least the IoU threshold with the ground-truth segments. mIoU is the average IoU between the highest-ranking segment for each query and the ground-truth segments. On the Charades-STA dataset, only the Recall@N metric is used for evaluation.
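The metrics above can be computed as in the minimal sketch below; the (start, end) segment format in seconds is an illustrative assumption.

```python
# Minimal sketch of temporal IoU and Recall@N at a given IoU threshold.
def temporal_iou(pred, gt):
    # pred, gt: (start, end) tuples in seconds.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_n(ranked_segments, gt, n, threshold):
    # Hit if any of the top-n ranked segments overlaps the ground truth by >= threshold;
    # mIoU averages temporal_iou of the top-1 segment over all queries.
    return float(any(temporal_iou(seg, gt) >= threshold for seg in ranked_segments[:n]))
```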

4.1 Datasets

Charades-STA. The Charades-STA dataset is built upon the original Charades dataset (sigurdsson2016hollywood), which contains video-level paragraph descriptions and temporal annotations for activities. Charades-STA is created by breaking the paragraphs down into sentence-level annotations and aligning the sentences with their corresponding video segments. In total, it contains 12,408 and 3,720 query-moment pairs in the training and test sets, respectively. For a fair comparison with the weakly-supervised TGA model (mithun2019weakly), we use the same non-overlapping sliding windows of 128 and 256 frames to generate candidate temporal segments.
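A minimal sketch of this non-overlapping sliding-window candidate generation is given below; the handling of the final partial window is an assumption.

```python
# Minimal sketch: non-overlapping sliding-window candidates of 128 and 256 frames.
def charades_candidates(num_frames, window_sizes=(128, 256)):
    cands = []
    for w in window_sizes:
        for start in range(0, num_frames, w):            # non-overlapping windows
            cands.append((start, min(start + w, num_frames)))
    return cands
```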

DiDeMo. The videos in the Distinct Describable Moments (DiDeMo) dataset are collected from Flickr. The training, validation, and test sets contain 8,395, 1,065, and 1,004 videos, respectively. Each query has temporal annotations from at least four different annotators. Each video is limited to a maximum duration of 30 seconds and is divided equally into six segments of five seconds each. With the five-second segment as the basic temporal unit, there are 21 possible candidate temporal segments for each video. These 21 segments are used to compute similarities with the input query, and the top-scoring segment is returned as the localization result.
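The 21 DiDeMo candidates are simply all contiguous spans of the six five-second units (6+5+4+3+2+1 = 21), as the minimal sketch below enumerates; the (start, end) representation in seconds is illustrative.

```python
# Minimal sketch: enumerate the 21 DiDeMo candidate segments.
def didemo_candidates(num_units=6, unit_len=5):
    cands = []
    for start in range(num_units):
        for end in range(start + 1, num_units + 1):
            cands.append((start * unit_len, end * unit_len))  # (start, end) in seconds
    return cands

assert len(didemo_candidates()) == 21
```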

Method Training Supervision IoU=0.3: R@1 R@5 R@10 IoU=0.5: R@1 R@5 R@10 IoU=0.7: R@1 R@5 R@10
(a) CTRL (gao2017tall) Strong - - - 23.63 58.92 - 8.89 29.52 -
MLVI (xu2019multilevel) Strong 54.7 95.6 99.2 35.6 79.4 93.9 15.8 45.4 62.2
MAN (zhang2019man) Strong - - - 46.53 86.23 - 22.72 53.72 -
(b) TGA (mithun2019weakly) Weak 29.68 83.87 98.41 17.04 58.17 83.44 6.93 26.80 44.06
wMAN (ours) Weak 48.04 89.01 99.57 31.74 72.17 86.58 13.71 37.58 45.16
Upper Bound - - - 99.84 - - 88.17 - - 46.80
Table 1: Moment retrieval performance comparison on the Charades-STA test set. (a) contains representative results of strongly-supervised methods reported in prior works while (b) reports the performance of weakly-supervised methods including our approach.

4.2 Implementation Details

For a fair comparison, we utilize the same input features as the state-of-the-art method (mithun2019weakly). Specifically, the word representations are initialized with GloVe embeddings and fine-tuned during training. For the experiments on DiDeMo, we use the provided mean-pooled visual frame and optical flow features. The visual frame features are extracted from the fc7 layer of VGG-16 (simonyan2014very) pretrained on ImageNet (deng2009imagenet). The input visual features for our experiments on Charades-STA are C3D (tran2015learning) features. We adopt a fixed initial learning rate and a margin \Delta for our model's triplet loss (Eq. 9). In addition, we use three iterations of the message-passing process. Our model is trained end-to-end using the ADAM optimizer.

4.3 Results

4.3.1 Charades-STA

The results in Table 1 show that our full model outperforms the TGA model by a significant margin on all metrics. In particular, the Recall@1 accuracy obtained by our model at IoU = 0.7 is almost double that of TGA. Notably, we observe a consistent trend of the Recall@1 accuracies improving the most across all IoU values. This demonstrates not only the importance of richer joint visual-semantic representations for accurate localization but also the superior capability of our model to learn them. Our model also performs comparably to the strongly-supervised MAN model on several metrics.

To better understand the contributions of each component of our model, we present a comprehensive set of ablation experiments in Table 2. Note that our combined wMAN model is comprised of the FBW and WCVG components as well as the incorporation of PEs. The results obtained by our FBW variant demonstrate that capturing fine-grained frame-by-word interactions is essential to inferring the latent temporal alignment between the two modalities. More importantly, the results in the second row (FBW-WCVG) show that the second stage of multimodal attention, introduced by the WCVG module, further strengthens the learned intermodal relationships. Finally, we observe that incorporating positional encodings into the visual representations (FBW-WCVG + PE) is especially helpful in improving Recall@1 accuracies for all IoU values. We also provide results for a model variant that includes TEFs, which encode the location of each video segment. As shown in Table 2, TEFs actually hurt performance slightly: our model variant with PEs (FBW-WCVG + PE) outperforms the variant with TEFs (FBW-WCVG + TEF) on all metrics. We theorize that the positional encodings aid in integrating temporal context and relative positions into the learned visual-semantic representations, which is particularly useful for Charades-STA since its videos are generally much longer.

To gain insight into the fine-grained interactions between frames and words, we provide visualizations in Figure 3. Our model determines the most salient frames with respect to each word relatively well. In both examples, we observe that the top three salient frames for each word are generally distributed over the same subset of frames. This suggests that our model leverages contextual information from all video frames as well as all words when determining the salience of each frame to a specific word.

4.3.2 DiDeMo

Table 3 reports the results on the DiDeMo dataset. In addition to the state-of-the-art weakly-supervised results, we also include the results obtained by strongly-supervised methods. Our model outperforms the TGA model by a significant margin, more than tripling its Recall@1 accuracy. This demonstrates the effect of learning richer joint visual-semantic representations on the accurate localization of video moments. In fact, our full model outperforms the strongly-supervised TGN and MCN models on the Recall@1 metric by approximately 10%.

The ablation studies on DiDeMo (Table 4) show a trend consistent with those on Charades-STA. In particular, by comparing the ablation models FBW and FBW-WCVG, we demonstrate the effectiveness of the multi-level co-attention mechanism in WCVG, which improves the Recall@1 accuracy by a significant margin. Similar to our observations in Table 2, PEs help to encourage accurate latent alignment between the visual and language modalities, while TEFs fail in this aspect.

Method IoU=0.3: R@1 R@5 R@10 IoU=0.5: R@1 R@5 R@10 IoU=0.7: R@1 R@5 R@10
FBW 41.41 93.79 99.23 26.91 72.19 85.97 10.83 34.85 45.20
FBW-WCVG 43.99 90.85 99.19 28.24 70.70 86.14 11.64 34.85 45.20
FBW-WCVG + TEF 43.99 88.03 98.99 28.01 69.19 86.01 11.20 35.29 44.45
FBW-WCVG + PE (wMAN) 46.05 91.25 99.19 29.00 69.46 86.26 13.30 36.99 45.32
Table 2: Charades-STA ablation experiment results on a held-out validation set.
Method Training Supervision R@1 R@5 mIoU
(a) MCN (hendricks17iccv) Strong 28.10 78.21 41.08
TGN (chen2018temporally) Strong 28.23 79.26 42.97
(b) TGA (mithun2019weakly) Weak 12.19 39.74 24.92
wMAN Weak 38.07 63.94 38.37
Upper Bound - 74.75 100.00 96.05
Table 3: Moment retrieval performance comparison on the DiDeMo test set. (a) contains representative results of strongly-supervised methods reported in prior works while (b) reports the performance of weakly-supervised methods including our approach.
Method R@1 R@5 mIoU
FBW 30.19 66.74 39.06
FBW-WCVG 39.93 66.53 39.19
FBW-WCVG + TEF 37.55 66.36 39.11
FBW-WCVG + PE (wMAN) 41.62 66.57 39.20
Table 4: DiDeMo ablation experiment results on the validation set.


Figure 3: Visualization of the final relevance weights of each word in the query with respect to each frame. Here, we display the top three weights assigned to the frames for each phrase. The colors of the three numbers (1,2,3) indicate the correspondence to the words in the query sentence. We also show the ground truth (GT) temporal annotation as well as our predicted weakly localized temporal segments in seconds. The highly correlated frames to each query word generally fall into the GT temporal segment in both examples.

5 Conclusion

In this work, we propose our weakly-supervised Moment Alignment Network with a Word-Conditioned Visual Graph, which exploits a multi-level co-attention mechanism to infer the latent alignment between visual and language representations at the fine-grained word and frame level. Learning context-aware visual-semantic representations helps our model reason about the temporal occurrence of an event as well as the relationships between entities described in the natural language query. Finally, our experimental results empirically demonstrate the effectiveness of such representations for the accurate localization of video moments.

Acknowledgements: This work is supported in part by DARPA and NSF awards IIS-1724237, CNS-1629700, CCF-1723379.

References