Jointly Cross- and Self-Modal Graph Attention Network for Query-Based Moment Localization

08/04/2020 ∙ by Daizong Liu, et al. ∙ Dalian University of Technology, Huazhong University of Science & Technology, Columbia University

Query-based moment localization is a new task that localizes the best matched segment in an untrimmed video according to a given sentence query. In this localization task, one should pay more attention to thoroughly mining the visual and linguistic information. To this end, we propose a novel Cross- and Self-Modal Graph Attention Network (CSMGAN) that recasts this task as a process of iterative message passing over a joint graph. Specifically, the joint graph consists of a Cross-Modal interaction Graph (CMG) and a Self-Modal relation Graph (SMG), where frames and words are represented as nodes, and the relations between cross- and self-modal node pairs are described by an attention mechanism. Through parametric message passing, CMG highlights relevant instances across video and sentence, and then SMG models the pairwise relations inside each modality for frame (word) correlation. With multiple layers of such a joint graph, our CSMGAN is able to effectively capture high-order interactions between the two modalities, thus enabling more precise localization. Besides, to better comprehend the contextual details in the query, we develop a hierarchical sentence encoder to enhance the query understanding. Extensive experiments on two public datasets demonstrate the effectiveness of our proposed model, and CSMGAN significantly outperforms the state-of-the-art methods.


1. Introduction

Localizing activities in videos (Regneri et al., 2013; Yuan et al., 2016; Gavrilyuk et al., 2018; Feng et al., 2018, 2019) is an important topic in multimedia information retrieval. However, in realistic scenarios, YouTube videos normally contain complicated background content and cannot be directly indicated by a pre-defined list of action classes. To address this problem, query-based moment localization has been proposed recently (Gao et al., 2017; Anne Hendricks et al., 2017) and attracts increasing interest from the multimedia community (Liu et al., 2018b; Zhang et al., 2019b). It aims to ground the most relevant video segment according to a given sentence query. This task is challenging because most of the video content is irrelevant to the query while only a short segment matches the sentence. Therefore, video and sentence information need to be deeply incorporated to distinguish the fine-grained details of different video segments and perform accurate segment localization.

Figure 1. Given a query and an untrimmed video, CSMGAN considers cross-modal relation for highlighting relevant instances (brown rectangles), and self-modal relation for correlating sequential elements (red rectangle) and distinguishing components near the boundary (green rectangle).

Most existing methods (Liu et al., 2018a; Chen et al., 2018; Zhang et al., 2019c; Yuan et al., 2019b, a) for this task focus on learning the cross-modal relations between video and sentence. Specifically, they develop attention based interaction mechanisms to enhance the video representation with sentence information. Meanwhile, a few algorithms (Chen et al., 2017a; Zhang et al., 2019c) attempt to learn the self-modal relations. For example, Zhang et al. (Zhang et al., 2019c) leverage self-attention to capture long-range semantic dependencies, but only within the video encoding. However, the cross- and self-modal relations have never been investigated jointly in a unified framework for this task. As shown in Figure 1, for the video modality, each frame should not only obtain information from its associated words in the query to highlight relevant frames (brown rectangles), but also needs to correlate these highlighted frames to infer the sequential activity (red rectangle). At the same time, as the adjacent frame (green rectangle) near the boundary shows a different visual appearance, such self-modal relations also contribute to distinguishing the segment boundaries for more precise localization. Similarly, for the query modality, a better understanding of the sentence can be acquired in conjunction with both frames and other words. Such cases motivate us to propose a joint framework for modelling both cross- and self-modal relations.

In this paper, we develop a novel cross- and self-modal graph attention network (CSMGAN) for query-based moment localization, which recasts this task as an end-to-end, message passing based joint graph information fusion procedure. The joint graph consists of a cross-modal relation graph (CMG) and a self-modal relation graph (SMG), and represents both video frames and sentence words as nodes. Specifically, in each joint graph layer, the CMG first establishes edges between each word-frame pair for cross-modal information passing, where the directed pair-wise relations are efficiently captured by a heterogeneous attention mechanism. Subsequently, the SMG is designed to capture the complex self-modal relations by establishing edges within each modality. The combination of CMG and SMG makes it possible to obtain more contextual representations by correlating highlighted cross-modal instances with sequential elements. Moreover, by stacking multiple layers to recursively propagate messages over the joint graph, our CSMGAN can capture higher-level relationships among multi-modal representations and comprehensively integrate the localization information for precise moment retrieval.

Besides, traditional methods (Zhang et al., 2019a; Yuan et al., 2019a; Wang et al., 2020; Mithun et al., 2019; Yuan et al., 2019b; Chen and Jiang, 2019) adopt RNNs for sentence query embedding. However, they fail to explicitly consider multi-granular textual information, such as specific phrases, which is crucial to understanding the sentence. To capture fine-grained query representations, we build a hierarchical structure that understands the query at three levels: word-, phrase- and sentence-level. These hierarchical representations are then merged to form a more informative understanding of the sentence query.

In summary, the main contributions of our work are:

  • We present a cross- and self-modal graph attention network (CSMGAN), which combines a cross-modal graph and a self-modal graph for localizing the desired moment. To the best of our knowledge, this is the first time a joint framework considering both cross- and self-modal relations has been proposed for query-based moment localization.

  • We design a hierarchical structure to capture the fine-grained sentence representation at three different levels: word-level, phrase-level and sentence-level.

  • We conduct experiments on the Activity Caption and TACoS datasets, and our CSMGAN outperforms the state-of-the-art methods by clear margins.

2. Related works

Query-based localization in images. Early works on the localization task mainly focus on localizing the image region corresponding to a language query. They first generate candidate image regions using an image proposal method (Ren et al., 2015), and then find the matched one with respect to the given query. Some works (Mao et al., 2016; Hu et al., 2016; Rohrbach et al., 2016) try to extract target image regions based on description reconstruction error or probabilities. There are also several studies (Yu et al., 2016; Chen et al., 2017b, a; Zhang et al., 2018) that incorporate contextual information of the region-phrase relationship into the localization model, and Wang et al. (Wang et al., 2016) further model region-region and phrase-phrase structures. Some other methods exploit attention modeling over queries, images, or object proposals (Endo et al., 2017; Deng et al., 2018; Yu et al., 2018).

Figure 2. Illustration of our proposed CSMGAN. We first utilize a self-attention based video encoder and a hierarchical sentence encoder to extract the corresponding features. Then, a jointly cross- and self-modal graph is devised for multi-modal interaction. In the joint graph, both words and frames are represented as nodes: they first construct the CMG to mine cross-modal relations and update their states through a ConvGRU, and are then reorganized into the SMG to model self-modal relations and updated as the input of the next graph layer. At last, we conduct multi-modal integration and perform moment localization.

Query-based moment localization in videos. This is a task introduced recently (Gao et al., 2017; Anne Hendricks et al., 2017), which aims to localize the most relevant segment of a video given a text description. Traditional methods (Liu et al., 2018a; Gao et al., 2017) first sample candidate segments from a video, and subsequently integrate the query with the segment representations via a matrix operation. To mine the cross-modal interaction more effectively, some works (Xu et al., 2019; Chen and Jiang, 2019; Ge et al., 2019; Zhang et al., 2020) integrate the sentence representation with each video segment individually, and then evaluate their matching relationships through the integrated features. For instance, Xu et al. (Xu et al., 2019) introduce a multi-level model to integrate visual and textual features earlier and further re-generate queries as an auxiliary task. Ge et al. (Ge et al., 2019) and Chen et al. (Chen et al., 2018) capture the evolving fine-grained frame-by-word interactions between video and query to enhance the video representation. Recently, other works (Chen et al., 2018; Wang et al., 2020; Zhang et al., 2019c, a; Yuan et al., 2019a; Mithun et al., 2019) propose to directly integrate sentence information with each fine-grained video clip unit, and predict the temporal boundary of the target segment by gradually merging the fused feature sequence over time. Zhang et al. (Zhang et al., 2019a) model relations among candidate segments produced by a convolutional neural network under the guidance of the query information. Yuan et al. (Yuan et al., 2019a) and Mithun et al. (Mithun et al., 2019) introduce the sentence information as a critical prior to modulate temporal convolution operations that compose and correlate video contents. Although these methods achieve relatively superior performance by capturing cross-modal information, they neglect the self-modal relations that are complementary to the cross-modal relations. Different from them, we propose a cross- and self-modal graph attention network to jointly consider both cross- and self-modal relations. The successive cross- and self-modal graphs enable our model to capture much higher-level interactions.

Graph neural networks. The graph neural network (GNN) (Scarselli et al., 2008) extends recursive neural networks and random walk based models to graph-structured data. As a follow-up work, Gilmer et al. (Gilmer et al., 2017) further adapt GNNs to sequential outputs with a learnable message passing module. As GNNs are widely used in sequential information processing, in this paper we design a novel GNN module for mining cross- and self-modal relations. Different from the original GNN, we represent edge weights by an attention mechanism and aggregate messages with a gate function. Moreover, we utilize a ConvGRU (Ballas et al., 2015) layer for node state updating.

3. The proposed CSMGAN framework

3.1. Overview

Given an untrimmed video $V$ and a sentence query $Q$, the task aims to determine the start and end timestamps of the specific video segment referred to by the sentence query. Formally, we represent the video frame-by-frame as $V=\{v_t\}_{t=1}^{T}$, where $v_t$ is the $t$-th frame and $T$ is the total number of frames. We also denote the given sentence query word-by-word as $Q=\{q_n\}_{n=1}^{N}$, where $q_n$ is the $n$-th word and $N$ is the number of words. With the training set of video-query pairs and their annotated segments, we aim to learn to predict the most relevant video segment boundary that conforms to the sentence query information.

We present our method CSMGAN in Figure 2. First of all, a self-attention based video encoder and a hierarchical sentence encoder are utilized to extract contextual video and sentence embeddings. Then, in order to better fuse the multi-modal features, we capture both cross- and self-modal relations by developing a jointly cross- and self-modal graph network. Specifically, in the joint graph, a cross-modal relation graph (CMG) establishes weighted edges between frame-word pairs to pass messages across the modalities. Following it, a self-modal relation graph (SMG) reorganizes the previous nodes and edges to model the relationships within each modality. With self-modal relations complementing cross-modal relations, the joint graph can perform richer multi-modal interaction. Moreover, with multiple layers of such joint graphs, the CSMGAN can capture higher-order relationships. At last, the enhanced representations of the two modalities are integrated to score different candidate video segments by a moment localization module.

Figure 3. Illustration of our cross-modal relation graph and self-modal relation graph. Both graphs compute attention matrices that serve as the weights on the corresponding edges. Each node aggregates the messages from its neighbor nodes in an edge-weighted manner and updates its state from both the aggregated message and its current state through a ConvGRU. We apply a gate function in the cross-modal graph and consider the temporal position of all nodes in the self-modal graph.

3.2. Video and Sentence Encoder

Video encoder. Following (Zhang et al., 2019c), we first extract frame-wise features with a pre-trained C3D network (Tran et al., 2015), and then employ a self-attention (Vaswani et al., 2017) module to capture the long-range dependencies among video frames. Considering the sequential characteristic of video, a bi-directional GRU (Chung et al., 2014) is further utilized to capture the contextual information in the time series. We denote the encoded video representation as $\{h^v_t\}_{t=1}^{T}$, where $h^v_t$ is the feature of the $t$-th frame.
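As a concrete illustration, this encoder can be sketched roughly as below, assuming a PyTorch implementation with a single-head attention module and a BiGRU whose two directions together give the hidden size; these choices (and all module names) are our assumptions, not the released code.

```python
import torch
import torch.nn as nn

class VideoEncoder(nn.Module):
    """Self-attention over pre-extracted C3D features followed by a BiGRU (illustrative sketch)."""
    def __init__(self, c3d_dim=500, hidden_dim=512):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(c3d_dim, num_heads=1, batch_first=True)
        self.gru = nn.GRU(c3d_dim, hidden_dim // 2, batch_first=True, bidirectional=True)

    def forward(self, frame_feats):                # (B, T, c3d_dim) PCA-reduced C3D features
        attn_out, _ = self.self_attn(frame_feats, frame_feats, frame_feats)
        video_repr, _ = self.gru(attn_out)         # (B, T, hidden_dim) contextual frame features
        return video_repr

v = torch.randn(2, 200, 500)                       # 200 frames per video, as in our setting
print(VideoEncoder()(v).shape)                     # torch.Size([2, 200, 512])
```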
Sentence encoder.

Most previous works generally adopt recurrent neural networks to model the contextual information of each word during sentence encoding. However, considering the query "He continues playing the instrument", it is reasonable to focus on the phrase "continues playing" instead of each single word to obtain more detailed temporal clues for precise localization. Therefore, to fully mine the guiding information, we develop a hierarchical structure with word-, phrase-, and sentence-level feature extraction for sentence query encoding.

We first generate the word-level features for the query using the GloVe word2vec embeddings (Pennington et al., 2014), and denote them as $\{w_n\}_{n=1}^{N}$, where $N$ is the number of words in the sentence and each $w_n$ has the GloVe embedding dimension. To discover the potential phrase-level features, we apply 1D convolutions on the word-level features with different window sizes. Specifically, at each word location, we compute the inner product of the word feature vectors with convolution filters of three window sizes, which captures unigram, bigram, and trigram features. To maintain the sequence length after the convolution, we zero-pad the sequence vectors when the convolution window size is larger than one. The output at the $n$-th word location with window size $k$ is formulated as follows:

(1)

where $\mathrm{Conv1d}_k$ convolves the windowed features with its kernels, and $p_n^k$ is the phrase-level feature corresponding to the $n$-th word location with window size $k$. To find the most contributive phrase at each word location, we then apply max-pooling to obtain the final phrase-level feature $p_n$ by:

$p_n = \max(p_n^1, p_n^2, p_n^3)$  (2)

After obtaining the phrase-level feature vectors, we encode them with a bi-directional GRU network to produce the sentence-level features. At last, we concatenate these three-level features and leverage another bi-directional GRU network to integrate them by:

(3)

Here the contextual query representation $\{h^q_n\}_{n=1}^{N}$ is projected to the same dimension as the video representation. After the hierarchical embedding structure, the given query obtains a comprehensive understanding and provides a robust representation for later localization.
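To make the hierarchy concrete, a minimal PyTorch-style sketch is given below; the kernel sizes (1, 2, 3), the padding scheme, and the dimensions are illustrative choices consistent with the description above, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class HierarchicalSentenceEncoder(nn.Module):
    """Word -> phrase (1/2/3-gram convs + max-pool) -> sentence (BiGRU), then fuse all three levels."""
    def __init__(self, glove_dim=300, hidden_dim=512):
        super().__init__()
        # unigram / bigram / trigram convolutions over the word sequence
        self.convs = nn.ModuleList([
            nn.Conv1d(glove_dim, glove_dim, kernel_size=k, padding=k // 2) for k in (1, 2, 3)
        ])
        self.sent_gru = nn.GRU(glove_dim, hidden_dim // 2, batch_first=True, bidirectional=True)
        self.fuse_gru = nn.GRU(glove_dim + glove_dim + hidden_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)

    def forward(self, word_feats):                     # (B, N, glove_dim) GloVe embeddings
        x = word_feats.transpose(1, 2)                 # (B, glove_dim, N)
        grams = [conv(x)[..., :x.size(-1)] for conv in self.convs]   # trim padding of even kernels
        phrase = torch.stack(grams, dim=-1).max(dim=-1).values       # max over the three window sizes
        phrase = phrase.transpose(1, 2)                # (B, N, glove_dim) phrase-level features
        sent, _ = self.sent_gru(phrase)                # (B, N, hidden_dim) sentence-level features
        fused, _ = self.fuse_gru(torch.cat([word_feats, phrase, sent], dim=-1))
        return fused                                   # (B, N, hidden_dim) final query representation

q = torch.randn(2, 8, 300)
print(HierarchicalSentenceEncoder()(q).shape)          # torch.Size([2, 8, 512])
```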

3.3. Jointly Cross- and Self-modal Graph

As shown in Figure 3, we develop a jointly cross- and self-modal graph to capture both cross- and self-modal information for multi-modal representation interaction. Specifically, the joint graph consists of two subgraphs: the cross-modal relation graph (CMG) and the self-modal relation graph (SMG). In the CMG, each frame (word) integrates information from the other modality according to the cross-modal attentive relations. Subsequently, in the SMG, the self-attentive contexts within each modality are further captured. By stacking multiple layers of such joint graphs, we can comprehensively perform the interaction between the two modalities. Next, we describe the detailed process of each subgraph in the $l$-th joint graph layer.

3.3.1. Cross-Modal Relation Graph


Graph construction. In the CMG, we build a directed graph $\mathcal{G}_c=(\mathcal{V}, \mathcal{E}_c)$, where the node set $\mathcal{V}$ contains all frames and words, and $\mathcal{E}_c$ is the edge set between all word-frame node pairs: an edge from a word node to a frame node represents the cross-modal interaction in that direction, and the edge from the frame node back to the word node denotes the reverse interaction. To initialize the input features of the nodes, we set the encoded video representations $\{h^v_t\}$ of the frame nodes and the encoded query representations $\{h^q_n\}$ of the word nodes as their initial hidden states in the CMG.
Cross-modal attention. To update the CMG, the first step is to compute the attention weights between the frame and word nodes, which represent their pair-wise relations. As shown in Figure 3 (a), the attention weight on each pair-wise edge can be computed as below:

(4)

where $h^q_n$ and $h^v_t$ are the feature vectors of the word and frame nodes in the $l$-th layer. As $h^q_n$ and $h^v_t$ come from different feature distributions, linear projections are used to embed the heterogeneous nodes (Hu et al., 2020) into a joint latent space instead of computing the attention directly in the node embedding space. Each row of the resulting attention matrix denotes the similarity of all frame nodes to a specific word node, and each column represents the similarity of all word nodes to a specific frame node.
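For illustration, a minimal sketch of this heterogeneous attention is given below, assuming a PyTorch implementation with a single linear projection per modality; the shapes, function, and variable names are our own, not the released code.

```python
import torch
import torch.nn as nn

def cross_modal_attention(words, frames, proj_w, proj_f):
    """Pairwise word-frame relation scores computed in a shared latent space (illustrative sketch).

    words:  (N, d) word node states; frames: (T, d) frame node states.
    proj_w / proj_f: nn.Linear layers embedding the heterogeneous nodes into a joint space.
    Returns an (N, T) matrix: row n holds the relations of word n to every frame,
    column t holds the relations of every word to frame t.
    """
    return proj_w(words) @ proj_f(frames).t()

d = 512
proj_w, proj_f = nn.Linear(d, d), nn.Linear(d, d)
A = cross_modal_attention(torch.randn(8, d), torch.randn(200, d), proj_w, proj_f)
frame_update_weights = A.softmax(dim=0)   # per frame: normalized weights over its word neighbors
word_update_weights = A.softmax(dim=1)    # per word: normalized weights over its frame neighbors
print(A.shape, frame_update_weights[:, 0].sum(), word_update_weights[0].sum())
```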
Node message aggregation. For message aggregation, each node aggregates the assigned features from its neighbors in an edge-weighted manner (Veličković et al., 2017). Figure 3 (b) illustrates the aggregation of all neighboring word nodes into a frame node, and Figure 3 (c) shows the reverse aggregation. For a word node in the neighborhood of a frame node, the feature it assigns to the frame node is:

(5)

The softmax procedure makes the attention weights of all word nodes sum to one. However, not all neighborhood nodes share the same semantic importance; several neighborhood nodes contribute less to the target node. For example, the word "the" in the query is not informative enough for a frame, and frames containing only a stationary basketball should have less significance for highlighting "playing basketball". To emphasize informative neighborhood nodes and weaken inessential ones, we apply a learnable gate function to measure the confidence of each neighbor message by:

$g = \sigma(W_g m + b_g)$  (6)

where $\sigma$ is the sigmoid function, $m$ is the edge-weighted neighbor message, and $W_g$ and $b_g$ are the trainable weight parameter and bias. Then, we can aggregate the gated messages for each node by:

(7)

With the help of such a gate mechanism, the irrelevant aggregated messages are filtered out and the messages from relevant node pairs are further enhanced.
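The aggregation of Eqs. (5)-(7) can be sketched as follows for a single frame node, written in PyTorch; applying the gate to each edge-weighted message (rather than, say, a concatenation with the node state) is our assumption.

```python
import torch
import torch.nn as nn

def aggregate_gated_messages(attn_col, word_states, gate_fc):
    """Aggregate word-node messages into one frame node (sketch of the gated aggregation).

    attn_col:    (N,) attention scores from all N word nodes to this frame node.
    word_states: (N, d) hidden states of the word nodes.
    gate_fc:     nn.Linear(d, 1) producing a confidence gate for each neighbor message.
    """
    weights = attn_col.softmax(dim=0)                 # neighbor attention normalized to sum to one
    messages = weights.unsqueeze(-1) * word_states    # (N, d) edge-weighted messages
    gates = torch.sigmoid(gate_fc(messages))          # (N, 1) confidence of each message
    return (gates * messages).sum(dim=0)              # (d,) aggregated message for the frame node

d = 512
gate_fc = nn.Linear(d, 1)
m = aggregate_gated_messages(torch.randn(8), torch.randn(8, d), gate_fc)
print(m.shape)   # torch.Size([512])
```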
Node representation update. After aggregating the information from all its neighbors, each node obtains a new state by taking into account both its current state and the received messages. To preserve the sequential information conveyed in the prior state and the messages, we do not use a simple element-wise addition of the two. Instead, we leverage a ConvGRU (Ballas et al., 2015) layer to update the node state from these two inputs by:

(8)

This ConvGRU was proposed as a convolutional counterpart to the original fully connected GRU (Cho et al., 2014). In the same way, the representations of the nodes of both modalities can be updated.
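As a rough illustration of such an update, a 1D convolutional GRU cell that takes the aggregated messages and the previous node states as its two inputs could look like the sketch below; the kernel size and gating layout are our assumptions, not necessarily those of Ballas et al. (2015).

```python
import torch
import torch.nn as nn

class ConvGRUCell1D(nn.Module):
    """GRU update whose gates are 1D convolutions over the node sequence (illustrative sketch)."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.gates = nn.Conv1d(2 * dim, 2 * dim, kernel_size, padding=pad)   # reset + update gates
        self.cand = nn.Conv1d(2 * dim, dim, kernel_size, padding=pad)        # candidate state

    def forward(self, msg, state):            # both (B, L, d): aggregated messages, previous states
        x = torch.cat([msg, state], dim=-1).transpose(1, 2)                  # (B, 2d, L)
        r, z = torch.sigmoid(self.gates(x)).chunk(2, dim=1)                  # (B, d, L) each
        x_cand = torch.cat([msg.transpose(1, 2), r * state.transpose(1, 2)], dim=1)
        h_tilde = torch.tanh(self.cand(x_cand))                              # (B, d, L) candidate
        h_new = (1 - z) * state.transpose(1, 2) + z * h_tilde                # gated interpolation
        return h_new.transpose(1, 2)                                         # (B, L, d) updated states

cell = ConvGRUCell1D(512)
print(cell(torch.randn(2, 200, 512), torch.randn(2, 200, 512)).shape)        # torch.Size([2, 200, 512])
```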

3.3.2. Self-Modal Relation Graph


Graph construction. Following the CMG, our SMG aims to capture the complex self-modal relations within each modality. It only connects edges between word-word or frame-frame node pairs. Like the CMG, we denote this graph as $\mathcal{G}_s=(\mathcal{V}, \mathcal{E}_s)$, where $\mathcal{V}$ is the node set containing all frames and words, and each edge in the edge set $\mathcal{E}_s$ indicates a self-modal relation.
Self-modal attention. Figure 3 (d) and (e) depict the process of self-modal information passing. Given a frame node in Figure 3 (d), for example, we first compute a self-attention matrix that represents the relations from its neighbor nodes to itself. To better correlate relevant nodes, we consider both the semantic information and the temporal position of each node in the sequence. We argue that the temporal index of a node is critical to our localization task, as less attention should be given to distant frame (word) nodes even if they are semantically similar to the current node. Inspired by the Transformer's positional encoding (Vaswani et al., 2017), we denote the position encoding for each node as:

$pe(t, 2i) = \sin\left(t / 10000^{2i/d}\right), \qquad pe(t, 2i+1) = \cos\left(t / 10000^{2i/d}\right)$  (9)

where $t$ is the temporal position of the node, $d$ is the feature dimension, and the index $i$ varies over the dimensions. With the positional and semantic information combined, the self-attention matrix can be calculated by:

(10)

where the matrix captures the similarity of all frame nodes to the current node, and a balance weight controls the trade-off between the two types of information.
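A sketch of this positionally-aware self-attention is given below, assuming the standard sinusoidal encoding of Vaswani et al. (2017) and a scalar `pos_weight` playing the role of the balance weight mentioned above; combining the two similarities additively is our assumption.

```python
import math
import torch

def sinusoidal_pe(length, dim):
    """Standard Transformer positional encoding (one plausible form of the encoding used in the SMG)."""
    pos = torch.arange(length, dtype=torch.float).unsqueeze(1)                            # (L, 1)
    div = torch.exp(torch.arange(0, dim, 2, dtype=torch.float) * (-math.log(10000.0) / dim))
    pe = torch.zeros(length, dim)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe                                                                             # (L, dim)

def self_modal_attention(node_states, pos_weight=1.0):
    """Combine semantic similarity with positional similarity (illustrative; pos_weight balances them)."""
    pe = sinusoidal_pe(node_states.size(0), node_states.size(1))
    scores = node_states @ node_states.t() + pos_weight * (pe @ pe.t())                   # (L, L)
    return scores.softmax(dim=-1)          # row i: normalized weights of all nodes toward node i

attn = self_modal_attention(torch.randn(200, 512))
print(attn.shape, attn[0].sum())           # torch.Size([200, 200]) ~1.0
```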
Node message aggregation. The aggregation process can be formulated as following:

(11)

Here we only aggregate the semantic node features, as the positional information is designed only to assist the similarity computation.
Node representation update. Similar to the CMG, we also exploit a ConvGRU layer to update the node state and obtain the output as:

(12)

At last, following the same procedure, we obtain the final representations of all nodes of each modality in the $l$-th jointly cross- and self-modal graph layer. Subsequently, these two modal features are fed to the $(l+1)$-th joint graph layer as input.

3.4. Multi-Modal Integration Module

After the joint graph layers, we obtain a mutual sentence-aware video representation and a video-aware sentence representation. To integrate the two representations, we first compute the cosine similarity between each pair of word and frame features as:

(13)

where the similarity is computed with a learnable linear projection, which further extracts the implicit query clues for each frame. Once we obtain the similarity scores between all the words and a specific frame, we integrate the query information for that frame as:

(14)

where the result aggregates the query representation relevant to the $t$-th frame. We concatenate these aggregated query feature vectors with the frame features to obtain the final multi-modal semantic representations.
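A compact sketch of Eqs. (13)-(14), assuming PyTorch and a single linear projection as the learnable parameter; the concatenation at the end yields the multi-modal features used for localization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def integrate_modalities(frames, words, proj):
    """Aggregate query information for each frame and concatenate it with the frame feature (sketch).

    frames: (T, d) sentence-aware frame features; words: (N, d) video-aware word features.
    proj:   nn.Linear(d, d), the learnable parameter inside the similarity of Eq. (13).
    """
    sim = F.normalize(frames, dim=-1) @ F.normalize(proj(words), dim=-1).t()   # (T, N) cosine scores
    attn = sim.softmax(dim=-1)                                                 # weights over words per frame
    query_per_frame = attn @ words                                             # (T, d) aggregated query clues
    return torch.cat([frames, query_per_frame], dim=-1)                        # (T, 2d) multi-modal features

d = 512
fused = integrate_modalities(torch.randn(200, d), torch.randn(8, d), nn.Linear(d, d))
print(fused.shape)    # torch.Size([200, 1024])
```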

3.5. Moment Localization

We first apply a bi-directional GRU network to the multi-modal representations to further absorb the contextual evidence in the temporal domain. To predict the target video segment, we pre-define a set of candidate moments with multi-scale windows (Yuan et al., 2019a) at each time step, where the number of candidate moments at each time step equals the number of window scales. Then, we score these candidate moments and predict their offsets relative to the ground truth. In detail, we produce the confidence scores of these moments at each time step by a Conv1d layer:

(15)

where $\sigma$ is the sigmoid function. The temporal offsets are predicted by another Conv1d layer:

(16)

Therefore, the final predicted moments at each time step are obtained by refining the pre-defined candidate boundaries with the predicted offsets.
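The localization head can be sketched as below; the BiGRU, the two 1x1 Conv1d heads, and the per-scale output layout are our reading of the text, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MomentLocalizer(nn.Module):
    """Score multi-scale candidate moments and regress their boundary offsets (illustrative head)."""
    def __init__(self, dim=1024, hidden=512, num_scales=4):
        super().__init__()
        self.gru = nn.GRU(dim, hidden // 2, batch_first=True, bidirectional=True)
        self.score_head = nn.Conv1d(hidden, num_scales, kernel_size=1)        # confidence per scale
        self.offset_head = nn.Conv1d(hidden, 2 * num_scales, kernel_size=1)   # (start, end) offsets per scale

    def forward(self, fused):                      # (B, T, dim) multi-modal features
        h, _ = self.gru(fused)                     # absorb temporal context
        h = h.transpose(1, 2)                      # (B, hidden, T)
        scores = torch.sigmoid(self.score_head(h)) # (B, num_scales, T) confidence of each candidate
        offsets = self.offset_head(h)              # (B, 2*num_scales, T) boundary refinements
        return scores, offsets

loc = MomentLocalizer()
s, o = loc(torch.randn(2, 200, 1024))
print(s.shape, o.shape)    # torch.Size([2, 4, 200]) torch.Size([2, 8, 200])
```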
Training. We first compute the IoU (Intersection over Union) score between each candidate moment and the ground truth. If the IoU is larger than a threshold, we treat this candidate moment as a positive sample. We adopt an alignment loss $\mathcal{L}_{align}$ to learn the confidence scoring rule for candidate moments, where the moments with higher IoUs will get higher confidence scores. The alignment loss function can be formulated as follows:

(17)

Since parts of the pre-defined candidates have coarse boundaries, we only fine-tune the localization offsets of the positive moment samples with a boundary loss $\mathcal{L}_{b}$:

(18)

where the loss is normalized by the number of positive moments and the distance is measured by the smooth L1 loss. Therefore, the joint loss can be represented as:

$\mathcal{L} = \mathcal{L}_{align} + \lambda \mathcal{L}_{b}$  (19)

where $\lambda$ is utilized to control the balance between the two terms.
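One plausible instantiation of this joint objective is sketched below; the exact alignment loss is not reproduced here, so we use a binary cross-entropy against the candidate IoUs, which satisfies the stated requirement that higher-IoU moments receive higher confidences. All names and defaults are illustrative.

```python
import torch
import torch.nn.functional as F

def csmgan_loss(scores, offsets, ious, gt_offsets, iou_thresh=0.45, balance=0.001):
    """Sketch of the joint loss: IoU-supervised alignment + smooth-L1 boundary refinement.

    scores:     (M,) predicted confidence of each candidate moment (in [0, 1]).
    offsets:    (M, 2) predicted (start, end) offsets of each candidate.
    ious:       (M,) IoU of each candidate with the ground-truth segment.
    gt_offsets: (M, 2) offsets from each candidate to the ground-truth boundaries.
    balance:    the balance hyper-parameter (0.001 for Activity Caption in our setting).
    """
    # alignment: push confidences toward the IoUs, so higher-IoU moments score higher
    align = F.binary_cross_entropy(scores, ious)
    # boundary refinement: only on positive candidates (IoU above the threshold)
    pos = ious > iou_thresh
    boundary = F.smooth_l1_loss(offsets[pos], gt_offsets[pos]) if pos.any() else scores.new_zeros(())
    return align + balance * boundary

loss = csmgan_loss(torch.rand(10), torch.randn(10, 2), torch.rand(10), torch.randn(10, 2))
print(loss.item())
```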
Inference. We first rank all candidate moments according to their predicted confidence scores, and then adopt non-maximum suppression (NMS) to select the top-n moments as the prediction.
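This inference step corresponds to standard 1-D non-maximum suppression over candidate moments, for example:

```python
import torch

def nms_moments(moments, scores, top_k=5, iou_thresh=0.5):
    """Rank candidate moments by confidence and suppress heavily overlapping ones (standard 1-D NMS)."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0 and len(keep) < top_k:
        i = order[0].item()
        keep.append(i)
        s1, e1 = moments[i]
        s2, e2 = moments[order[1:], 0], moments[order[1:], 1]
        inter = (torch.min(e1, e2) - torch.max(s1, s2)).clamp(min=0)
        union = (e1 - s1) + (e2 - s2) - inter
        order = order[1:][inter / union <= iou_thresh]   # drop moments overlapping the kept one too much
    return moments[keep], scores[keep]

m = torch.tensor([[0.0, 10.0], [1.0, 11.0], [20.0, 30.0], [5.0, 9.0]])
s = torch.tensor([0.9, 0.8, 0.7, 0.6])
print(nms_moments(m, s, top_k=2))   # keeps [0, 10] and [20, 30]
```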

4. Experiments

4.1. Datasets and Evaluation Metrics

Activity Caption. Activity Caption (Krishna et al., 2017) contains 20k untrimmed videos with 100k descriptions from YouTube. The videos are 2 minutes long on average, and the annotated video clips have much larger variation, ranging from several seconds to over 3 minutes. Since the test split is withheld for competition, following the public split, we adopt "val 1" as the validation subset and "val 2" as our test subset.

TACoS. TACoS (Regneri et al., 2013) is widely used for this task and contains 127 videos. The videos in TACoS are collected from cooking scenarios and thus lack diversity. They are around 7 minutes long on average. We use the same split as (Gao et al., 2017), which includes 10,146, 4,589, and 4,083 query-segment pairs for training, validation, and testing, respectively.

Evaluation Metrics. Following previous works (Gao et al., 2017; Yuan et al., 2019a), we adopt "R@n, IoU=m" as our evaluation metric, defined as the percentage of queries for which at least one of the top-n selected moments has an IoU with the ground truth larger than m. Following (Liu et al., 2018a; Yuan et al., 2019a; Wang et al., 2020), we report "R@n, IoU=m" with n in {1, 5} and m in {0.3, 0.5, 0.7} for Activity Caption, and with n in {1, 5} and m in {0.1, 0.3, 0.5} for TACoS.
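For reference, the metric can be computed as follows; this is a straightforward implementation of the definition above, and the array layouts are illustrative.

```python
import numpy as np

def recall_at_n_iou_m(pred_moments, gt_moments, n=1, m=0.5):
    """Percentage of queries whose top-n predictions contain at least one moment with IoU > m.

    pred_moments: list over queries, each an array of shape (K, 2) ranked by confidence.
    gt_moments:   array of shape (num_queries, 2) with ground-truth (start, end) times.
    """
    hits = 0
    for preds, (gs, ge) in zip(pred_moments, gt_moments):
        top = np.asarray(preds)[:n]
        inter = np.clip(np.minimum(top[:, 1], ge) - np.maximum(top[:, 0], gs), 0, None)
        union = (top[:, 1] - top[:, 0]) + (ge - gs) - inter
        if (inter / union > m).any():
            hits += 1
    return 100.0 * hits / len(gt_moments)

preds = [np.array([[2.0, 9.0], [15.0, 20.0]]), np.array([[0.0, 4.0], [5.0, 9.0]])]
gts = np.array([[3.0, 10.0], [6.0, 12.0]])
print(recall_at_n_iou_m(preds, gts, n=1, m=0.5))   # 50.0
```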

Method R@1,IoU=0.3 R@1,IoU=0.5 R@1,IoU=0.7 R@5,IoU=0.3 R@5,IoU=0.5 R@5,IoU=0.7
MCN (Anne Hendricks et al., 2017) 39.35 21.36 6.43 68.12 53.23 29.70
TGN (Chen et al., 2018) 45.51 28.47 - 57.32 43.33 -
CTRL (Gao et al., 2017) 47.43 29.01 10.34 75.32 59.17 37.54
ACRN (Liu et al., 2018a) 49.70 31.67 11.25 76.50 60.34 38.57
QSPN (Xu et al., 2019) 52.13 33.26 13.43 77.72 62.39 40.78
CBP (Wang et al., 2020) 54.30 35.76 17.80 77.63 65.89 46.20
SCDM (Yuan et al., 2019a) 54.80 36.75 19.86 77.29 64.99 41.53
ABLR (Yuan et al., 2019b) 55.67 36.79 - - - -
GDP (Chen et al., 2020) 56.17 39.27 - - - -
CMIN (Zhang et al., 2019c) 63.61 43.40 23.88 80.54 67.95 50.73
CSMGAN 68.52 49.11 29.15 87.68 77.43 59.63
Table 1. Performance compared with previous methods on the Activity Caption dataset.
Method R@1,IoU=0.1 R@1,IoU=0.3 R@1,IoU=0.5 R@5,IoU=0.1 R@5,IoU=0.3 R@5,IoU=0.5
MCN (Anne Hendricks et al., 2017) 3.11 1.64 1.25 3.11 2.03 1.25
CTRL (Gao et al., 2017) 24.32 18.32 13.30 48.73 36.69 25.42
ABLR (Yuan et al., 2019b) 34.70 19.50 9.40 - - -
ACRN (Liu et al., 2018a) 24.22 19.52 14.62 47.42 34.97 24.88
QSPN (Xu et al., 2019) 25.31 20.15 15.23 53.21 36.72 25.30
TGN (Chen et al., 2018) 41.87 21.77 18.90 53.40 39.06 31.02
GDP (Chen et al., 2020) 39.68 24.14 13.50 - - -
CMIN (Zhang et al., 2019c) 32.48 24.64 18.05 62.13 38.46 27.02
SCDM (Yuan et al., 2019a) - 26.11 21.17 - 40.16 32.18
CBP (Wang et al., 2020) - 27.31 24.79 - 43.64 37.40
CSMGAN 42.74 33.90 27.09 68.97 53.98 41.22
Table 2. Performance compared with previous methods on the TACoS dataset.
Components Module | Activity Caption: R@1,IoU=0.3 R@1,IoU=0.5 R@1,IoU=0.7 R@5,IoU=0.3 R@5,IoU=0.5 R@5,IoU=0.7 | TACoS: R@1,IoU=0.1 R@1,IoU=0.3 R@1,IoU=0.5 R@5,IoU=0.1 R@5,IoU=0.3 R@5,IoU=0.5
Reference full 68.52 49.11 29.15 87.68 77.43 59.63 42.74 33.90 27.09 68.97 53.98 41.22
Encoder w/o HS 66.32 46.80 26.54 85.97 74.43 56.49 39.56 30.42 24.40 65.50 51.39 39.38
Joint Graph w/o CSG 64.13 44.47 25.49 84.35 72.97 54.47 36.91 28.45 22.27 63.18 49.19 37.11
Cross-Modal Graph w/o EM 67.48 47.94 28.09 86.02 75.27 56.32 41.11 32.24 25.93 66.67 53.05 40.17
w/o MG 67.28 47.39 28.13 86.46 74.91 56.47 40.48 31.66 25.73 66.39 52.56 40.21
Self-Modal Graph w/o SMG 66.53 46.62 27.57 85.68 73.97 55.68 39.64 30.86 24.61 65.10 50.90 39.11
w/o PE 67.41 48.45 28.56 86.51 75.20 57.20 40.23 31.32 25.30 66.17 51.41 39.43
Node Update w/o CG 67.37 47.51 28.07 86.06 75.66 56.96 40.97 31.96 25.66 66.42 52.00 40.04
Table 3. Ablation study on the Activity Caption and TACoS datasets, where the reference is our full model.

4.2. Implementation Details

For training our CSMGAN, we first resize every video frame to a fixed resolution as input, and then apply a pre-trained C3D network (Tran et al., 2015) to obtain 4096-dimensional features. After that we apply PCA to reduce the feature dimension from 4096 to 500 to decrease the number of model parameters, and use these 500-d features as the frame features in our model. Since some videos are overlong, we set the length of the video feature sequences to 200 for both the Activity Caption and TACoS datasets. For sentence encoding, we utilize GloVe word2vec (Pennington et al., 2014) to embed each word into a 300-dimensional feature. The hidden state dimension of the BiGRU networks is set to 512. We set the balance weight of the positional encoding to 1. During moment localization, we adopt convolution kernel sizes of [16, 32, 64, 96, 128, 160, 192] for Activity Caption and [8, 16, 32, 64] for TACoS, and set their strides to 0.5 and 0.125, respectively. We then set the high-score IoU threshold to 0.45, and the loss balance hyper-parameter $\lambda$ to 0.001 for Activity Caption and 0.005 for TACoS. The number of joint graph layers is set to 2. We train our model with an Adam optimizer, with the learning rate set separately for Activity Caption and TACoS. The batch size is set to 128 and 64 for the two datasets, respectively.

Figure 4. Effect of the number of graph layers on the Activity Caption and TACoS datasets.

4.3. Performance Comparison and Analysis

Activity Caption. Table 1 shows the performance evaluation results of our method and all compared methods on the Activity Caption dataset. Compared to the state-of-the-art methods, our model surpasses them by clear margins on all R@1 and R@5 metrics. Specifically, our method brings 5.27% and 8.90% absolute improvements on the strict metrics "R@1, IoU=0.7" and "R@5, IoU=0.7".

TACoS. Table 2 shows the performance results of our method and all baselines on the TACoS dataset. On this challenging dataset, our method still achieves significant improvements. In detail, our method brings 2.30% and 3.82% improvements on the strict metrics "R@1, IoU=0.5" and "R@5, IoU=0.5", respectively.

Analysis. Specifically, the compared methods can be divided into two classes: 1) Sliding window based methods: MCN (Anne Hendricks et al., 2017), CTRL (Gao et al., 2017), and ACRN (Liu et al., 2018a) first sample candidate video segments using sliding windows, and directly integrate query representations with window-based segment representations via a matrix operation. They do not employ a comprehensive structure for effective cross-modal interaction, leading to relatively lower performance than the other methods. 2) Cross-modal interaction based methods: TGN (Chen et al., 2018), QSPN (Xu et al., 2019), CBP (Wang et al., 2020), SCDM (Yuan et al., 2019a), ABLR (Yuan et al., 2019b), GDP (Chen et al., 2020), and CMIN (Zhang et al., 2019c) integrate query representations with the whole video representation in an attention-guided manner, and can generate contextual query-guided video representations for precise boundary localization. However, they neglect the self-modal relations that help correlate relevant instances within each modality. Compared to them, our method emphasizes the importance of capturing both cross- and self-modal relations during the effective integration of multi-modal features. Our jointly cross- and self-modal graph can mine much richer and higher-level interactions, thus achieving better results than both kinds of methods.

4.4. Ablation Study

In this section, we perform ablation studies to examine the effectiveness of our proposed CSMGAN. Specifically, we re-train our model with the following settings:

  • w/o HS: We first remove the hierarchical structure from the sentence encoder, and only take a bi-directional GRU to encode the sentence query.

  • w/o CSG: We then discard the jointly cross- and self-modal graph to validate the importance of capturing cross- and self-modal relations in multi-modal interaction.

  • w/o EM: To explore the effect of the heterogeneous attention in the CMG, we remove the embedding matrices in Eq. 4 and compute the attention matrix directly in the node embedding space.

  • w/o MG: To further analyze the gate mechanism in CMG, we remove the gate function during the message passing in the cross-modal graph layer.

  • w/o SMG: To evaluate the effect of SMG, we remove the self-modal graph, and only apply cross-modal graph for multi-modal interaction.

  • w/o PE: To assess the component of SMG, we remove the positional encoding from the SMG.

  • w/o CG: Finally, we replace the ConvGRU with a simple matrix element-wise addition during the node updating.

  • full: The full model.

Figure 5. Visualization of the edge weights of both the cross-modal graph (CMG) and the self-modal graph (SMG). Left: column-wise weights of the attention matrix of different CMG layers, where the weights stand for the relations from all words to a specific frame. Right: self-attention weights of different SMG layers, where the relevant frames have higher context weights.

The ablation studies conducted on the Activity Caption and TACoS datasets are shown in Table 3. By analyzing the results, we draw the following conclusions:

  • First of all, our full model outperforms all the ablation models on both datasets, which demonstrates that each component is helpful for this task.

  • Compared to the other ablation models, the w/o CSG model performs worst on both datasets. This means that our jointly cross- and self-modal graph plays an important role in effective multi-modal feature interaction. Besides, the hierarchical structure (HS) for sentence embedding also contributes significantly to the full model.

  • At last, almost all ablation models still yield better results than all state-of-the-art methods. This demonstrates that the excellent performance of our graph based framework does not rely on any single key component, and that our full model is robust for this task.

To further investigate the influence of the number of joint graph layers, we show the impact of different layer counts on the two datasets in Figure 4. We observe that our model achieves its best results when the number of layers is set to 2, and the performance drops as more layers are added. With more graph layers, the propagated messages between instances within and across modalities accumulate, resulting in the over-smoothing problem (Li et al., 2018), i.e., the representations of both video and sentence converge to the same values.

Figure 6. Qualitative visualization on both two datasets (top: Activity Caption, bottom: TACoS).

4.5. Qualitative Results

To qualitatively validate the effectiveness of our method, we show examples from the two datasets in Figure 6. Although the sentences are very diverse, our full model can still localize more accurate boundaries than CMIN. Among the two variant models, w/o CSG has the coarsest boundaries because it lacks the detailed interaction of multi-modal features. The w/o HS variant fails to capture the contextual sentence-guiding clues for localization, leading to relatively coarse boundaries. In comparison, our full model achieves the most precise localization.

We further provide a deeper visualization of the cross- and self-modal relations in each joint graph layer. Specifically, we first visualize the relations from all words to a specific frame in the cross-modal graph. As shown in Figure 5 (left), the sentence "she begins brushing the horse while still speaking" contains two activities. For a non-relevant frame, the attention weights over these eight words tend toward an even distribution. But for a relevant frame, the contributing words like "begins", "brushing", "still", and "speaking" obtain higher attention weights since the described action indeed happens there. Moreover, as the number of GNN layers increases, the distribution of these word weights becomes sharper and more distinguishable. However, too many GNN layers result in the over-smoothing problem, where each frame-word pair has almost the same activation. We also plot the context weights in the self-modal graph, as shown in Figure 5 (right). The weights are calculated by a softmax function and represent the relations from surrounding frames to one specific frame. We find that the frame containing "brushing while speaking" is more relevant to the frame "begin brushing". Although the frames near the segment boundaries are visually similar to the frames inside the segment, the self-modal relation can effectively distinguish them and produces lower attention weights for such noisy frames.

5. Conclusion

In this paper, we propose a jointly cross- and self-modal graph attention network (CSMGAN) for query-based moment localization in videos. We consider both cross- and self-modal relations in a joint framework to capture much higher-level interactions. Specifically, the cross-modal relations highlight the relevant components across video and sentence, and the self-modal relations then model the pairwise correlations inside each modality for associating frames (words). Besides, we also develop a hierarchical structure for more contextual sentence understanding in a word-phrase-sentence process. The experimental results on two public datasets demonstrate the effectiveness of our proposed method.

References

  • L. Anne Hendricks, O. Wang, E. Shechtman, J. Sivic, T. Darrell, and B. Russell (2017) Localizing moments in video with natural language. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 5803–5812. Cited by: §1, §2, §4.3, Table 1, Table 2.
  • N. Ballas, L. Yao, C. Pal, and A. Courville (2015) Delving deeper into convolutional networks for learning video representations. In Proceedings of the International Conference on Learning Representations (ICLR), Cited by: §2, §3.3.1.
  • J. Chen, X. Chen, L. Ma, Z. Jie, and T. Chua (2018) Temporally grounding natural sentence in video. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 162–171. Cited by: §1, §2, §4.3, Table 1, Table 2.
  • K. Chen, R. Kovvuri, J. Gao, and R. Nevatia (2017a) MSRC: multimodal spatial regression with semantic context for phrase grounding. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, pp. 23–31. Cited by: §1, §2.
  • K. Chen, R. Kovvuri, and R. Nevatia (2017b) Query-guided regression network with context policy for phrase grounding. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 824–832. Cited by: §2.
  • L. Chen, C. Lu, S. Tang, J. Xiao, D. Zhang, C. Tan, and X. Li (2020) Rethinking the bottom-up framework for query-based video localization. In Proceedings of the AAAI Conference on Artificial Intelligence. Cited by: §4.3, Table 1, Table 2.
  • S. Chen and Y. Jiang (2019) Semantic proposal for activity localization in videos via sentence query. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 8199–8206. Cited by: §1, §2.
  • K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Cited by: §3.3.1.
  • J. Chung, C. Gulcehre, K. Cho, and Y. Bengio (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. In Advances in Neural Information Processing Systems (NIPS), Cited by: §3.2.
  • C. Deng, Q. Wu, Q. Wu, F. Hu, F. Lyu, and M. Tan (2018) Visual grounding via accumulated attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7746–7755. Cited by: §2.
  • K. Endo, M. Aono, E. Nichols, and K. Funakoshi (2017) An attention-based regression model for grounding textual phrases in images.. In IJCAI, pp. 3995–4001. Cited by: §2.
  • Y. Feng, L. Ma, W. Liu, and J. Luo (2019) Spatio-temporal video re-localization by warp lstm. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1288–1297. Cited by: §1.
  • Y. Feng, L. Ma, W. Liu, T. Zhang, and J. Luo (2018) Video re-localization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 51–66. Cited by: §1.
  • J. Gao, C. Sun, Z. Yang, and R. Nevatia (2017) Tall: temporal activity localization via language query. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 5267–5275. Cited by: §1, §2, §4.1, §4.1, §4.3, Table 1, Table 2.
  • K. Gavrilyuk, A. Ghodrati, Z. Li, and C. G. Snoek (2018) Actor and action video segmentation from a sentence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5958–5966. Cited by: §1.
  • R. Ge, J. Gao, K. Chen, and R. Nevatia (2019) Mac: mining activity concepts for language-based temporal localization. In IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 245–253. Cited by: §2.
  • J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning (ICML), pp. 1263–1272. Cited by: §2.
  • R. Hu, H. Xu, M. Rohrbach, J. Feng, K. Saenko, and T. Darrell (2016) Natural language object retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4555–4564. Cited by: §2.
  • Z. Hu, Y. Dong, K. Wang, and Y. Sun (2020) Heterogeneous graph transformer. arXiv preprint arXiv:2003.01332. Cited by: §3.3.1.
  • R. Krishna, K. Hata, F. Ren, L. Fei-Fei, and J. Carlos Niebles (2017) Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 706–715. Cited by: §4.1.
  • Q. Li, Z. Han, and X. Wu (2018) Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §4.4.
  • M. Liu, X. Wang, L. Nie, X. He, B. Chen, and T. Chua (2018a) Attentive moment retrieval in videos. In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pp. 15–24. Cited by: §1, §2, §4.1, §4.3, Table 1, Table 2.
  • M. Liu, X. Wang, L. Nie, Q. Tian, B. Chen, and T. Chua (2018b) Cross-modal moment localization in videos. In Proceedings of the 26th ACM international conference on Multimedia, pp. 843–851. Cited by: §1.
  • J. Mao, J. Huang, A. Toshev, O. Camburu, A. L. Yuille, and K. Murphy (2016) Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11–20. Cited by: §2.
  • N. C. Mithun, S. Paul, and A. K. Roy-Chowdhury (2019) Weakly supervised video moment retrieval from text queries. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11592–11601. Cited by: §1, §2.
  • J. Pennington, R. Socher, and C. D. Manning (2014) Glove: global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Cited by: §3.2, §4.2.
  • M. Regneri, M. Rohrbach, D. Wetzel, S. Thater, B. Schiele, and M. Pinkal (2013) Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics 1, pp. 25–36. Cited by: §1, §4.1.
  • S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS), pp. 91–99. Cited by: §2.
  • A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele (2016) Grounding of textual phrases in images by reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 817–834. Cited by: §2.
  • F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini (2008) The graph neural network model. IEEE Transactions on Neural Networks 20 (1), pp. 61–80. Cited by: §2.
  • D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri (2015) Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 4489–4497. Cited by: §3.2, §4.2.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems (NIPS), pp. 5998–6008. Cited by: §3.2, §3.3.2.
  • P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio (2017) Graph attention networks. In Proceedings of the International Conference on Learning Representations (ICLR), Cited by: §3.3.1.
  • J. Wang, L. Ma, and W. Jiang (2020) Temporally grounding language queries in videos by contextual boundary-aware prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Cited by: §1, §2, §4.1, §4.3, Table 1, Table 2.
  • M. Wang, M. Azab, N. Kojima, R. Mihalcea, and J. Deng (2016) Structured matching for phrase localization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 696–711. Cited by: §2.
  • H. Xu, K. He, B. A. Plummer, L. Sigal, S. Sclaroff, and K. Saenko (2019) Multilevel language and vision integration for text-to-clip retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 9062–9069. Cited by: §2, §4.3, Table 1, Table 2.
  • L. Yu, Z. Lin, X. Shen, J. Yang, X. Lu, M. Bansal, and T. L. Berg (2018) Mattnet: modular attention network for referring expression comprehension. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1307–1315. Cited by: §2.
  • L. Yu, P. Poirson, S. Yang, A. C. Berg, and T. L. Berg (2016) Modeling context in referring expressions. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 69–85. Cited by: §2.
  • J. Yuan, B. Ni, X. Yang, and A. A. Kassim (2016) Temporal action localization with pyramid of score distribution features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3093–3102. Cited by: §1.
  • Y. Yuan, L. Ma, J. Wang, W. Liu, and W. Zhu (2019a) Semantic conditioned dynamic modulation for temporal sentence grounding in videos. In Advances in Neural Information Processing Systems (NIPS), pp. 534–544. Cited by: §1, §1, §2, §3.5, §4.1, §4.3, Table 1, Table 2.
  • Y. Yuan, T. Mei, and W. Zhu (2019b) To find where you talk: temporal sentence localization in video with attention based location regression. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 9159–9166. Cited by: §1, §1, §4.3, Table 1, Table 2.
  • D. Zhang, X. Dai, X. Wang, Y. Wang, and L. S. Davis (2019a) Man: moment alignment network for natural language moment retrieval via iterative graph adjustment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1247–1257. Cited by: §1, §2.
  • H. Zhang, Y. Niu, and S. Chang (2018) Grounding referring expressions in images by variational context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4158–4166. Cited by: §2.
  • S. Zhang, H. Peng, J. Fu, and J. Luo (2020) Learning 2d temporal adjacent networks for moment localization with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, Cited by: §2.
  • S. Zhang, J. Su, and J. Luo (2019b) Exploiting temporal relationships in video moment localization with natural language. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 1230–1238. Cited by: §1.
  • Z. Zhang, Z. Lin, Z. Zhao, and Z. Xiao (2019c) Cross-modal interaction networks for query-based moment retrieval in videos. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pp. 655–664. Cited by: §1, §2, §3.2, §4.3, Table 1, Table 2.