Attend and Interact: Higher-Order Object Interactions for Video Understanding

11/16/2017, by Chih-Yao Ma et al.

Human actions often involve complex interactions across several inter-related objects in the scene. However, existing approaches to fine-grained video understanding or visual relationship detection often rely on single-object representations or pairwise object relationships. Furthermore, learning interactions across multiple objects over hundreds of video frames is computationally infeasible, and performance may suffer since a large combinatorial space has to be modeled. In this paper, we propose to efficiently learn higher-order interactions between arbitrary subgroups of objects for fine-grained video understanding. We demonstrate that modeling object interactions significantly improves accuracy for both action recognition and video captioning, while saving more than 3 times the computation over traditional pairwise relationships. The proposed method is validated on two large-scale datasets: Kinetics and ActivityNet Captions. Our SINet and SINet-Caption achieve state-of-the-art performance on the respective datasets, even though the videos are sampled at a maximum of 1 FPS. To the best of our knowledge, this is the first work to model object interactions on open-domain large-scale video datasets, and we additionally show that modeling higher-order object interactions further improves performance at low computational cost.


1 Introduction

Figure 1: Higher-order object interactions are progressively detected based on selected inter-relationships. ROIs with the same color (weighted r, g, b) indicate that inter-object relationships exist, e.g. eggs in the same bowl, hand breaks egg, and bowl on top of campfire (interaction within the same color). Groups of inter-relationships then jointly model the higher-order object interaction of the scene (interaction between different colors). Right: ROIs are highlighted with their attention weights for higher-order interactions. The model further reasons about the interactions through time and predicts cooking on campfire and cooking egg. Images are generated from SINet (best viewed in color).

Video understanding tasks such as activity recognition and caption generation are crucial for various applications in surveillance, video retrieval, human behavior understanding, etc. Recently, datasets for video understanding such as Charades [43], Kinetics [21], and ActivityNet Captions [22] contain diverse real-world examples and represent complex human and object interactions that can be difficult to model with state-of-the-art video understanding methods [43]. Consider the example in Figure 1. To accurately predict cooking on campfire and cooking egg among other similar action classes requires understanding of fine-grained object relationships and interactions. For example, a hand breaks an egg, eggs are in a bowl, the bowl is on top of the campfire, campfire is a fire built with wood at a camp, etc. Although recent state-of-the-art approaches for action recognition have demonstrated significant improvements over datasets such as UCF101 [46], HMDB51 [23], Sports-1M [20], THUMOS [18], ActivityNet [5], and YouTube-8M [1], they often focus on representing the overall visual scene (coarse-grained) as a sequence of inputs that are combined with temporal pooling, e.g. CRF, LSTM, 1D Convolution, attention, and NetVLAD [4, 30, 31, 42], or use 3D Convolution for the whole video sequence [6, 38, 47]. These approaches ignore the fine-grained details of the scene and do not infer interactions between various objects in the video. On the other hand, in video captioning tasks, although prior approaches use spatial or temporal attention to selectively attend to fine-grained visual content in both space and time, they too do not model object interactions.

Understanding visual relationships in the image domain has recently emerged as a prominent research problem, e.g. scene graph generation [27, 54] and visual relationship detection [7, 8, 14, 17, 59, 60]. However, it is unclear how these techniques can be adapted to open-domain video tasks, given that video is intrinsically more complicated in terms of temporal reasoning and computational demands. More importantly, a video may contain a large number of objects over time. Prior approaches to visual relationship detection typically model the full set of pairwise (or triplet) relationships. While this may be feasible for images, videos often contain hundreds or thousands of frames. Learning relationships across multiple objects alongside the temporal information is computationally infeasible on modern GPUs, and performance may suffer because a finite-capacity neural network is used to model a large combinatorial space. Furthermore, prior work in both the image and video domains [32, 33] often focuses on pairwise relationships or interactions, whereas interactions over groups of inter-related objects (higher-order interactions) are not explored, as shown in Figure 2.

Toward this end, we present a generic recurrent module for fine-grained video understanding, which dynamically discovers higher-order object interactions via an efficient dot-product attention mechanism combined with temporal reasoning. Our work is applicable to various open-domain video understanding problems. In this paper, we validate our method on two video understanding tasks with new challenging datasets: action recognition on Kinetics [21] and video captioning on ActivityNet Captions [22] (with ground truth temporal proposals). By combining both coarse- and fine-grained information, our SINet (Spatiotemporal Interaction Network) for action recognition and SINet-Caption for video captioning achieve state-of-the-art performance on both tasks while using RGB video frames sampled at a maximum of only 1 FPS. To the best of our knowledge, this is the first work to model object interactions on open-domain large-scale video datasets, and we also show that modeling higher-order object interactions can further improve performance at low computational cost.

2 Related work

We discuss existing work on video understanding based on action recognition and video captioning as well as related work on detecting visual relationships in images and videos.

Action recognition:

Recent work on action recognition using deep learning involves learning compact (coarse) representations over time and using pooling or other aggregation methods to combine information from each video frame, or even across different modalities [10, 13, 31, 42, 44]. The representations are commonly obtained by forward-passing a single video frame or a short video snippet through a 2D or 3D ConvNet [6, 38, 47]. Another branch of work uses Region Proposal Networks (RPNs) to jointly train action detection models [15, 25, 37]. These methods use an RPN to extract object features (ROIs), but they do not model or learn interactions between objects in the scene. Distinct from these models, we explore the human action recognition task using coarse-grained context information and fine-grained higher-order object interactions. Note that we focus on modeling object interactions for understanding video in a fine-grained manner, and we consider other modalities, e.g. optical flow and audio information, to be complementary to our method.

Figure 2: Typically, object interaction methods focus on pairwise interactions (left). We efficiently model higher-order interactions between arbitrary subgroups of objects for video understanding, in which the inter-object relationships in one group are detected and objects with significant relationships (i.e. those that serve to improve action recognition or captioning in the end) are attentively selected (right). The higher-order interaction between groups of selected object relationships is then modeled after concatenation.

Video captioning: Similar to other video tasks using deep learning, initial work on video captioning learns compact representations combined over time. This single representation is then used as input to a decoder, e.g. an LSTM, at the beginning of or at each word generation step to generate a caption for the target video [34, 50, 51]. Other work additionally uses spatial and temporal attention mechanisms to selectively focus on visual content across space and time during caption generation [39, 45, 55, 56, 58]. Similar to using spatial attention during caption generation, another line of work additionally incorporates semantic attributes [11, 35, 41, 57]. However, these semantic or attribute detection methods, with or without attention mechanisms, do not consider object relationships and interactions, i.e. they treat the detected attributes as a bag of words. Our work, SINet-Caption, uses higher-order object relationships and their interactions as visual cues for caption generation.

Interactions/Relationships in images: Recent advances in detecting visual relationships in images use separate branches in a ConvNet to explicitly model objects, humans, and their interactions [7, 14]. Visual relationships can also be realized by constructing a scene graph which uses a structured representation for describing object relationships and their attributes [19, 26, 27, 54]. Other work on detecting visual relationships explore relationships by pairing different objects in the scene [8, 17, 40, 59]. While these models can successfully detect visual relationships for images, a scene with many objects may have only a few individual interacting objects. It would be inefficient to detect all relationships across all individual object pairs [60], making these methods intractable for the video domain.

Interactions/Relationships in videos: Compared to the image domain, there is limited work exploring relationships for video understanding. Ni et al. [32] use a probabilistic graphical model to track interactions, but their model is insufficient for interactions involving multiple objects. To overcome this issue, Ni et al. [33] propose using a set of LSTM nodes to incrementally refine the object detections. In contrast, Lea et al. [24] propose to decompose the input image into several spatial units of a feature map, which then captures the object locations, states, and their relationships using shared ConvNets. However, due to the lack of appropriate datasets, existing work focuses on indoor or cooking settings where the human subject and the objects being manipulated are at the center of the image. Also, these methods only handle pairwise relationships between objects. However, human actions can be complex and often involve higher-order object interactions. Therefore, we propose to attentively model object inter-relationships and discover higher-order interactions on large-scale and open-domain videos for fine-grained understanding.

Figure 3: Overview of SINet for action recognition. Coarse-grained: each video frame is encoded into a feature vector $v_{c,t}$. The sequence of vectors is then pooled via temporal SDP-Attention into a single vector representation $\bar{v}_c$. Fine-grained: each object (ROI) obtained from the RPN is encoded into a feature vector $o_{t,n}$. We detect the higher-order object interactions using the proposed generic recurrent Higher-Order Interaction (HOI) module. Finally, the coarse-grained (image context) and fine-grained (higher-order object interactions) information is combined to perform action prediction.

3 Model

Despite the recent successes in video understanding, there has been limited progress in understanding relationships and interactions that occur in videos in a fine-grained manner. To do so, methods must not only understand the high-level video representations but also be able to explicitly model the relationships and interactions between objects in the scene. Toward this end, we propose to exploit both overall image context (coarse) and higher-order object interactions (fine) in the spatiotemporal domain for general video understanding tasks.

In the following section, we first describe the SINet on action recognition followed by extending it to SINet-Caption for the video captioning task.

3.1 Action Recognition Model

3.1.1 Coarse-grained image context

As recent studies have shown, using an LSTM to aggregate a sequence of image representations often results in limited performance, since image representations can be similar to each other and thus lack temporal variance [1, 21, 30]. As shown in Figure 3 (top), we thus begin by attending to key image-level representations to summarize the whole video sequence via Scaled Dot-Product Attention (SDP-Attention) [48]:

$\alpha_c = \mathrm{softmax}\left(\frac{X_c X_c^{\top}}{\sqrt{d_{\phi}}}\right), \quad X_c = g_{\phi}(V_c)$   (1)
$\bar{v}_c = \frac{1}{T}\sum_{t=1}^{T}\big(\alpha_c X_c\big)_t$   (2)

where $V_c = \{v_{c,1}, v_{c,2}, \dots, v_{c,T}\}$ is a set of image features: $v_{c,t}$ is the image feature representation encoded via a ConvNet at time $t$, and $t$ ranges from $1$ to $T$ for a given video length. $g_{\phi}$ is a Multi-Layer Perceptron (MLP) with parameters $\phi$, $d_{\phi}$ is the dimension of the last fully-connected (FC) layer of $g_{\phi}$, $X_c$ is the projected image feature matrix, $\sqrt{d_{\phi}}$ is a scaling factor, and $\alpha_c$ is the attention weight applied to the (projected) sequence of image representations $X_c$. The weighted image representations are then mean-pooled to form the video representation $\bar{v}_c$.
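To make the temporal aggregation concrete, below is a minimal PyTorch-style sketch of temporal SDP-Attention over per-frame features, following Eq. (1)-(2). The class name, the two-layer form of $g_\phi$, and the layer sizes are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalSDPAttention(nn.Module):
    """Sketch of Eq. (1)-(2): project frame features, self-attend over time, mean-pool."""
    def __init__(self, in_dim=2048, d_phi=2048):
        super().__init__()
        # g_phi: MLP projection of the per-frame ConvNet features (depth is an assumption)
        self.g_phi = nn.Sequential(nn.Linear(in_dim, d_phi), nn.ReLU(),
                                   nn.Linear(d_phi, d_phi), nn.ReLU())
        self.scale = d_phi ** 0.5

    def forward(self, v_c):                 # v_c: (T, in_dim) image features of one video
        x_c = self.g_phi(v_c)               # (T, d_phi) projected features
        alpha_c = F.softmax(x_c @ x_c.t() / self.scale, dim=-1)   # (T, T) temporal attention
        weighted = alpha_c @ x_c            # (T, d_phi) attended frame representations
        return weighted.mean(dim=0)         # (d_phi,) mean-pooled video representation

# v_bar_c = TemporalSDPAttention()(torch.randn(10, 2048))  # e.g. 10 frames sampled at 1 FPS
```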

3.1.2 Fine-grained higher-order object interactions

Traditional pairwise object interactions consider only how each object interacts with another object. We instead model inter-relationships between arbitrary subgroups of objects, the members of which are determined by a learned attention mechanism, as illustrated in Figure 2. Note that this covers pairwise or triplet object relationships as a special case, in which the learned attention focuses on a single object.

Problem statement: We define objects to be certain regions in the scene that might be used to determine visual relationships and interactions. Each object representation can be directly obtained from an RPN and further encoded into an object feature. Note that we do not encode object class information from the detector into the feature representation, since there exists a cross-domain problem and we may miss objects that are not detected by the pre-trained object detector. Also, we do not know the correspondence between objects across time, since linking objects through time can be computationally expensive for long videos. As a result, we have variable-length sets of objects residing in a high-dimensional space that spans time. Our objective is to efficiently detect higher-order interactions from these rich yet unordered object representation sets across time.

In the simplest setting, an interaction between objects in the scene can be represented via a summation over individual object information. For example, one method is to add learnable representations and project them into a high-dimensional space where the object interactions can be exploited by simply summing up the object representations. Another approach, which has been widely used with images, is to pair all possible object candidates (or subject-object pairs) [7, 8, 17, 40, 59]. However, this is infeasible for video, since a video typically contains hundreds or thousands of frames and the set of object-object pairs is too large to fully represent. Detecting object relationships frame by frame is computationally expensive, and the temporal reasoning of object interactions is not used.
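As a rough illustration of why exhaustive pairing scales poorly, the following arithmetic counts the candidate combinations per video; the per-frame object cap and frame count are taken from the settings used later in the paper and are otherwise only illustrative.

```python
from math import comb

N, T = 15, 30                        # e.g. 15 ROIs per frame, up to 30 sampled frames
pairs_per_frame = comb(N, 2)         # 105 unordered object pairs in a single frame
triplets_per_frame = comb(N, 3)      # 455 object triplets
print(pairs_per_frame * T)           # 3150 pairwise combinations to encode per video
print(triplets_per_frame * T)        # 13650 triplet combinations per video
```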

Recurrent Higher-Order Interaction (HOI): To overcome these issues, we propose a generic recurrent module for detecting higher-order object interactions for fine-grained video understanding problems, as shown in Figure 4. The proposed recurrent module dynamically selects the object candidates which are important for discriminating the human actions. The selected objects are then concatenated to model higher-order interactions between groups (e.g. pairs or triplets of groups) of objects.

First, we introduce learnable parameters for the incoming object features via an MLP projection $g_{\theta_k}$, since the object features are pre-trained on another domain and may not necessarily reflect the interactions relevant to action recognition. The projected object features are then combined with the overall image content and the previous object interaction to generate $K$ sets of weights that select $K$ groups of objects. (The number $K$ depends on the complexity of the visual scene and the requirements of the task, in this case action recognition; we leave dynamically selecting $K$ to future work.) Objects with inter-relationships are selected via an attention weight, which defines a probability distribution over all object candidates. The attention is computed using the current (projected) object features, the overall image visual representation, and the previously discovered object interactions (see Figure 4), which provide the attention mechanism with maximum context.

$\alpha_k = \mathrm{Attend}\big(g_{\theta_k}(O_t),\, v_{c,t},\, h_{t-1}\big)$   (3)

where the input $O_t = \{o_{t,1}, o_{t,2}, \dots, o_{t,N}\}$ is a set of objects: $o_{t,n}$ is the $n$-th object feature representation at time $t$. $g_{\theta_k}$ is an MLP with parameters $\theta_k$; the parameters $\theta_k$ are learnable synaptic weights shared across all objects and through time $t$. $v_{c,t}$ denotes the encoded image feature at the current time $t$, and $h_{t-1}$ is the previous output of the LSTM cell, which represents the previously discovered object interactions. Formally, given an input sequence, an LSTM network computes the hidden vector sequence $(h_1, h_2, \dots, h_T)$. Lastly, $\alpha_k$ is an attention weight computed from the proposed attention module.

Figure 4: The Recurrent Higher-Order Interaction module dynamically selects groups of arbitrary objects with detected inter-object relationships via a learnable attention mechanism. This attentive selection module uses the overall image context representation $v_{c,t}$, the current set of (projected) objects $g_{\theta_k}(O_t)$, and the previous object interactions $h_{t-1}$ to generate weights for the selections. The higher-order interaction between groups of selected objects is then modeled via concatenation and the following LSTM cell.

Attentive selection module: Here we discuss two possible choices for the attention module, as shown in Figure 5. Dot-product attention considers inter-relationships when selecting the objects, whereas $\alpha$-attention does not.

- Dot-product attention: In order to model higher-order interactions, which involve inter-object relationships within each group of selected objects, we use dot-product attention, since the attention weight computed for each object is a combination of all objects.

Formally, the current image representation $v_{c,t}$ and the last object interaction representation $h_{t-1}$ are first projected to introduce learnable weights. The projected $v_{c,t}$ and $h_{t-1}$ are then repeated and expanded $N$ times (the number of objects in $O_t$). We directly combine this information with the projected objects via matrix addition and use it as input to the dot-product attention. We add a scaling factor as in [48]. The input to the first matrix multiplication and the attention weights over all objects can be defined as:

$X_k = \mathrm{repeat}\big(W_{h_k} h_{t-1} + W_{c_k} v_{c,t}\big) + g_{\theta_k}(O_t)$   (4)
$\alpha_k = \mathrm{softmax}\left(\frac{X_k X_k^{\top}}{\sqrt{d_{\theta_k}}}\right)$   (5)

where $W_{h_k}$ and $W_{c_k}$ are learned weights for $h_{t-1}$ and $v_{c,t}$, $d_{\theta_k}$ is the dimension of the last fully-connected layer of $g_{\theta_k}$, $X_k$ is the input to the attention module, $\sqrt{d_{\theta_k}}$ is a scaling factor, and $\alpha_k$ is the computed attention. We omit the bias term for simplicity. The attended object feature at time $t$ is then calculated as mean-pooling over the weighted objects:

$\bar{o}_{k,t} = \frac{1}{N}\sum_{n=1}^{N}\big(\alpha_k\, g_{\theta_k}(O_t)\big)_n$   (6)

where the output $\bar{o}_{k,t}$ is a single feature vector representation which encodes the object inter-relationships of a video frame at time $t$.
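The following is a minimal sketch of one attentive selection head with dot-product attention, mirroring Eq. (4)-(6). The module and parameter names (e.g. `DotProductSelection`, `W_c`, `W_h`) and the exact MLP depth are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DotProductSelection(nn.Module):
    """Sketch of Eq. (4)-(6): select inter-related objects for one group k."""
    def __init__(self, obj_dim=2048, img_dim=2048, lstm_dim=2048, d_theta=2048):
        super().__init__()
        self.g_theta = nn.Sequential(nn.Linear(obj_dim, d_theta), nn.ReLU(),
                                     nn.Linear(d_theta, d_theta), nn.ReLU())
        self.W_c = nn.Linear(img_dim, d_theta, bias=False)   # projects v_{c,t}
        self.W_h = nn.Linear(lstm_dim, d_theta, bias=False)  # projects h_{t-1}
        self.scale = d_theta ** 0.5

    def forward(self, O_t, v_ct, h_prev):
        # O_t: (N, obj_dim) object features, v_ct: (img_dim,), h_prev: (lstm_dim,)
        proj_obj = self.g_theta(O_t)                               # (N, d_theta)
        context = self.W_c(v_ct) + self.W_h(h_prev)                # (d_theta,)
        X_k = proj_obj + context.unsqueeze(0)                      # Eq. (4): broadcast ("repeat") over N
        alpha_k = F.softmax(X_k @ X_k.t() / self.scale, dim=-1)    # Eq. (5): (N, N)
        return (alpha_k @ proj_obj).mean(dim=0)                    # Eq. (6): mean-pooled, (d_theta,)
```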

Figure 5: Attention modules: dot-product attention and $\alpha$-attention. Both attention mechanisms take as input the overall image representation $v_{c,t}$, the current set of (projected) objects $g_{\theta_k}(O_t)$, and the previous object interactions $h_{t-1}$ computed from the LSTM cell at time $t-1$.

- $\alpha$-attention: The $\alpha$-attention uses the same input $X_k$ as the dot-product attention, but the attention is computed using a tanh function and a fully-connected layer:

$\alpha_k = \mathrm{softmax}\big(w_k^{\top} \tanh(X_k)\big)$   (7)

where $w_k$ is a learned weight and $\alpha_k$ is the computed attention. The attended object feature at time $t$ is then calculated as a convex combination:

$\bar{o}_{k,t} = \sum_{n=1}^{N} \alpha_{k,n}\, \big(g_{\theta_k}(O_t)\big)_n$   (8)

We use the $\alpha$-attention as a baseline to show how considering the inter-relationships of objects (dot-product attention) further improves the accuracy over selecting ROIs independently.
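For comparison, a sketch of the $\alpha$-attention baseline of Eq. (7)-(8) is shown below; it reuses the same input $X_k$ and projected objects as the dot-product head above, and the names are again assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlphaAttentionSelection(nn.Module):
    """Sketch of Eq. (7)-(8): attend to objects individually (no inter-object term)."""
    def __init__(self, d_theta=2048):
        super().__init__()
        self.w_k = nn.Linear(d_theta, 1, bias=False)     # FC layer giving one score per object

    def forward(self, X_k, proj_obj):
        # X_k, proj_obj: (N, d_theta), as defined in Eq. (4)
        scores = self.w_k(torch.tanh(X_k)).squeeze(-1)   # (N,)
        alpha_k = F.softmax(scores, dim=-1)              # Eq. (7): distribution over objects
        return alpha_k @ proj_obj                        # Eq. (8): convex combination, (d_theta,)
```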

Finally, for both attention mechanisms, the $K$ attended object features are concatenated and used as the input to an LSTM cell. The output $h_t$ is then defined as the higher-order object interaction representation at the current time $t$:

$h_t = \mathrm{LSTMCell}\big(\bar{o}_{1,t} \,\|\, \bar{o}_{2,t} \,\|\, \cdots \,\|\, \bar{o}_{K,t}\big)$   (9)

where $\|$ denotes concatenation between feature vectors. The last hidden state $h_T$ of the LSTM cell is the representation of the overall object interactions for the entire video sequence.

Note that by concatenating selected inter-object relationships into a single higher-order interaction representation, the selective attention module tends to select different groups of inter-relationships, since concatenating duplicate inter-relationships does not provide extra information and will be penalized. For an analysis of what inter-relationships are selected, please refer to Sec. 7.1.
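Putting the pieces together, a hedged sketch of the recurrent HOI module (Eq. 9) follows: $K$ selection heads with the interface of `DotProductSelection` above feed a single LSTM cell that is unrolled over the sampled frames. Batch handling and dimensions are simplified assumptions.

```python
import torch
import torch.nn as nn

class RecurrentHOI(nn.Module):
    """Sketch of the recurrent HOI module (Eq. 9): K selection heads + LSTM cell over time."""
    def __init__(self, select_heads, d_theta=2048, lstm_dim=2048):
        super().__init__()
        self.heads = nn.ModuleList(select_heads)                  # K attentive selection modules
        self.cell = nn.LSTMCell(len(select_heads) * d_theta, lstm_dim)

    def forward(self, objects_per_frame, image_feats):
        # objects_per_frame: list of T tensors (N_t, obj_dim); image_feats: (T, img_dim)
        h = image_feats.new_zeros(1, self.cell.hidden_size)
        c = image_feats.new_zeros(1, self.cell.hidden_size)
        for O_t, v_ct in zip(objects_per_frame, image_feats):
            groups = [head(O_t, v_ct, h.squeeze(0)) for head in self.heads]
            x_t = torch.cat(groups, dim=-1).unsqueeze(0)          # concatenate the K groups
            h, c = self.cell(x_t, (h, c))                         # Eq. (9)
        return h.squeeze(0)                                       # h_T: overall interaction representation
```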

3.1.3 Late fusion of coarse and fine

Finally, the attended context information obtained from the image representation provides coarse-grained understanding of the video, and the object interactions discovered through the video sequences provide fine-grained understanding of the video. We concatenate them as the input to the last fully-connected layer, and train the model jointly to make a final action prediction.

$p(y) = \mathrm{softmax}\big(W_p\,(\bar{v}_c \,\|\, h_T) + b_p\big)$   (10)

where $W_p$ and $b_p$ are learned weights and biases.
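The late fusion of Eq. (10) then amounts to a single linear classifier over the concatenated coarse and fine representations; a minimal sketch, with the class count set to Kinetics' 400 as an example:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Sketch of Eq. (10): concatenate v_bar_c and h_T, predict action classes."""
    def __init__(self, coarse_dim=2048, fine_dim=2048, num_classes=400):
        super().__init__()
        self.fc = nn.Linear(coarse_dim + fine_dim, num_classes)

    def forward(self, v_bar_c, h_T):
        logits = self.fc(torch.cat([v_bar_c, h_T], dim=-1))
        return logits   # softmax / cross-entropy is applied outside
```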

3.2 Video Captioning Model

We now describe how SINet can be extended from a sequence-to-one to a sequence-to-sequence problem for video captioning. Our goal in providing fine-grained information for video captioning is that, for each word prediction, the model is aware of the previously generated word, the previous output, and the summary of the video content. At each word generation step, it has the ability to selectively attend to various parts of the video content in both space and time, as well as to the detected object interactions.

Our SINet-Caption is inspired by prior work using hierarchical LSTM for captioning tasks [2, 45], and we extend and integrate it with SINet so that the model can leverage the detected higher-order object interactions. We use a two-layered LSTM integrated with the coarse- and fine-grained information, as shown in Figure 6. The two LSTM layers are: Attention LSTM and Language LSTM. The Attention LSTM identifies which part of the video in spatiotemporal feature space is needed for Language LSTM to generate the next word. Different from prior work, which applied attention directly over all image patches in the entire video [56], i.e. attended to objects individually, our attentive selection module attends to object interactions while considering their temporal order.

Attention LSTM: The Attention LSTM fuses the previous hidden state of the Language LSTM $h^{2}_{t_w-1}$, the overall representation of the video, and the input word at timestep $t_w$ to generate the hidden representation for the following attention module. Formally, the input to the Attention LSTM can be defined as:

$x^{1}_{t_w} = h^{2}_{t_w-1} \,\|\, \overline{g_{\phi}(V_c)} \,\|\, W_e\,\Pi_{t_w}$   (11)

where $\overline{g_{\phi}(V_c)}$ is the projected and mean-pooled image feature, $g_{\phi}$ is an MLP with parameters $\phi$, $W_e$ is a word embedding matrix for a vocabulary of size $\Sigma$, and $\Pi_{t_w}$ is the one-hot encoding of the input word at timestep $t_w$. Note that $t$ is the video time, and $t_w$ is the timestep for each word generation.

Temporal attention module: We adapt the same $\alpha$-attention module shown in Figure 5 to attend over the projected image features $g_{\phi}(V_c)$. The two inputs to this temporal attention module are the output of the Attention LSTM and the projected image features:

$\alpha_{temp} = \mathrm{softmax}\big(w^{\top} \tanh\big(W_{c}\, g_{\phi}(V_c) + W_{h}\, h^{1}_{t_w}\big)\big)$   (12)

where $h^{1}_{t_w}$ is the output of the Attention LSTM, $W_{c}$ and $W_{h}$ are learned weights for $g_{\phi}(V_c)$ and $h^{1}_{t_w}$, and $d_{\phi}$ is the dimension of the last FC layer of $g_{\phi}$.

Co-attention: We directly apply the temporal attention $\alpha_{temp}$ obtained from the image features to the object interaction representations $h_1, h_2, \dots, h_T$ (see Sec. 3.1.2 for details).

Language LSTM: Finally, the Language LSTM takes as input the concatenation of the output of the Attention LSTM $h^{1}_{t_w}$, the attended video representation $\hat{v}_{c,t_w}$, and the co-attended object interactions $\hat{h}_{t_w}$ at timestep $t_w$:

$x^{2}_{t_w} = h^{1}_{t_w} \,\|\, \hat{v}_{c,t_w} \,\|\, \hat{h}_{t_w}$   (13)

The output of the Language LSTM is then used to generate each word via a conditional probability distribution defined as:

$p(y_{t_w} \mid y_{1:t_w-1}) = \mathrm{softmax}\big(W_p\, h^{2}_{t_w}\big)$   (14)

where $y_{1:t_w-1}$ is the sequence of previously generated words and $W_p$ is a learned weight for $h^{2}_{t_w}$. All bias terms are omitted for simplicity.
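A compact sketch of one word-generation step of SINet-Caption (Eq. 11-14) is given below. The dimensions, the vocabulary size, and the way co-attention simply reuses the temporal weights on the interaction sequence are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SINetCaptionStep(nn.Module):
    """Sketch of Eq. (11)-(14): Attention LSTM -> temporal alpha-attention -> Language LSTM."""
    def __init__(self, feat_dim=1024, hoi_dim=1024, embed_dim=512, hidden=512, vocab=10000):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed_dim)
        self.att_lstm = nn.LSTMCell(hidden + feat_dim + embed_dim, hidden)  # Eq. (11) input
        self.w_a = nn.Linear(feat_dim, hidden, bias=False)   # projects frame features
        self.w_h = nn.Linear(hidden, hidden, bias=False)     # projects Attention LSTM output
        self.w_t = nn.Linear(hidden, 1, bias=False)          # scores each video timestep
        self.lang_lstm = nn.LSTMCell(hidden + feat_dim + hoi_dim, hidden)   # Eq. (13) input
        self.w_p = nn.Linear(hidden, vocab, bias=False)                     # Eq. (14)

    def forward(self, word_id, frame_feats, hoi_feats, state1, state2):
        # frame_feats: (T, feat_dim) projected image features; hoi_feats: (T, hoi_dim) interactions
        # state1 / state2: (h, c) tuples of shape (1, hidden) for the two LSTM cells
        x1 = torch.cat([state2[0].squeeze(0), frame_feats.mean(0), self.embed(word_id)], dim=-1)
        h1, c1 = self.att_lstm(x1.unsqueeze(0), state1)
        scores = self.w_t(torch.tanh(self.w_a(frame_feats) + self.w_h(h1))).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                    # Eq. (12): temporal attention
        v_hat = alpha @ frame_feats                          # attended video representation
        h_hat = alpha @ hoi_feats                            # co-attention on object interactions
        x2 = torch.cat([h1.squeeze(0), v_hat, h_hat], dim=-1)
        h2, c2 = self.lang_lstm(x2.unsqueeze(0), state2)
        return F.log_softmax(self.w_p(h2), dim=-1), (h1, c1), (h2, c2)   # word scores + states
```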

4 Datasets and Implementations

4.1 Datasets:

Kinetics dataset: To evaluate SINet on a sequence-to-one problem for video, we use the Kinetics dataset for action recognition [21]. The Kinetics dataset contains 400 human action classes and has approximately 300k video clips (833 video hours). Most importantly, different from previous datasets which mostly cover sports actions [20, 23, 46], Kinetics includes human-object interactions and human-human interactions. We sampled videos at 1 FPS only, as opposed to sampling at 25 FPS reported for Kinetics [21].

Figure 6: Overview of the proposed SINet-Caption for video captioning. The Attention LSTM with $\alpha$-attention is used to selectively attend to temporal video frame features. The computed temporal attention is then used to attend to the temporal object interactions (see Figure 4). The concatenation of the output of the Attention LSTM, the attended video frame features, and the attended object interactions is then used as input to the language decoder LSTM.

ActivityNet Captions dataset: To evaluate SINet-Caption on a sequence-to-sequence problem for video, we use ActivityNet Captions for video captioning. The ActivityNet Captions dataset contains 20k videos with a total of 849 video hours and 100k descriptions. To demonstrate our proposed idea, we focus on providing fine-grained understanding of the video to describe video events with natural language, as opposed to identifying the temporal proposals. We thus use the ground truth temporal segments and treat each temporal segment independently. We use this dataset over others because ActivityNet Captions is action-centric, as opposed to object-centric [22]. This fits our goal of detecting higher-order object interactions for understanding human actions. All sentences are capped at a maximum length of 30 words. We sample predictions using beam search of size 5 for captioning. While previous work samples C3D features every 8 frames [22], we only sample video at a maximum of 1 FPS. Video segments longer than 30 seconds are evenly sampled with at most 30 samples.

4.2 Implementation Details:

We now discuss how to extract image and object features for both Kinetics and ActivityNet Captions.

Image feature: We fine-tune a pre-trained ResNeXt-101 [53] on Kinetics sampled at 1 FPS (approximately 2.5 million images). We use SGD with Nesterov momentum as the optimizer; the initial learning rate drops by 10x when the validation loss saturates for 5 epochs. Weight decay is applied, the momentum is 0.9, and the batch size is 128. We use standard data augmentation by randomly cropping and horizontally flipping video frames during training. When extracting image features, the smaller edge of the image is scaled to 256 pixels and we crop the center of the image as input to the fine-tuned ResNeXt-101. Each image feature is a 2048-d feature vector.
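For reference, a hedged sketch of the corresponding preprocessing and optimizer setup is shown below; the 224 crop size, the choice of random-crop variant, and the learning-rate and weight-decay values are placeholders, since the exact numbers are not given in this text.

```python
import torch
import torchvision.transforms as transforms

# Training-time augmentation and feature-extraction preprocessing for the ResNeXt-101.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),      # random crop (variant and size are assumptions)
    transforms.RandomHorizontalFlip(),      # horizontal flip
    transforms.ToTensor(),
])
extract_tf = transforms.Compose([
    transforms.Resize(256),                 # scale the smaller edge to 256 pixels
    transforms.CenterCrop(224),             # crop the center of the image
    transforms.ToTensor(),
])

def make_sgd(params, base_lr=1e-2, weight_decay=1e-4):
    # SGD with Nesterov momentum; base_lr and weight_decay are placeholders.
    # The learning rate later drops by 10x when the validation loss saturates.
    return torch.optim.SGD(params, lr=base_lr, momentum=0.9,
                           nesterov=True, weight_decay=weight_decay)
```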

Object feature: We generate the object features by first obtaining the coordinates of ROIs from a Deformable R-FCN [9] (pre-trained on MS-COCO) with ResNet-101 [16] as the backbone architecture. We set the IoU threshold for NMS to 0.2. Empirically, we found that it is important to maintain a balance of image and object features, especially when the image features were obtained from a network fine-tuned on the target dataset. Thus, for each of the ROIs, we extract features using its coordinates and adaptive max-pooling from the same model (ResNeXt-101) that was fine-tuned on Kinetics. The resulting object feature for each ROI is a 2048-d feature vector. ROIs are ranked according to their ROI scores. We select the top 30 objects for Kinetics and the top 15 for ActivityNet Captions. Note that we have a varying number of ROIs for each video frame, and video length can also differ. We do not use the object class information, since we may miss objects that were not detected due to the cross-domain problem. For the same reason, bounding-box regression is not performed here, since we do not have ground-truth bounding boxes.
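A minimal sketch of this ROI feature extraction step follows: each ROI is cropped from the ResNeXt feature map and adaptive-max-pooled to a single vector. The coordinate mapping and the normalized-box convention are assumptions.

```python
import torch
import torch.nn.functional as F

def roi_features(feature_map, boxes):
    """Crop each ROI from a CNN feature map and adaptive-max-pool it to one vector.

    feature_map: (C, H, W) activations from the fine-tuned ResNeXt-101.
    boxes: (N, 4) ROI coordinates normalized to [0, 1] as (x1, y1, x2, y2).
    """
    C, H, W = feature_map.shape
    feats = []
    for x1, y1, x2, y2 in boxes.tolist():
        # map normalized box coordinates onto the feature-map grid
        c1, c2 = int(x1 * W), max(int(x1 * W) + 1, int(x2 * W))
        r1, r2 = int(y1 * H), max(int(y1 * H) + 1, int(y2 * H))
        crop = feature_map[:, r1:r2, c1:c2]
        feats.append(F.adaptive_max_pool2d(crop, 1).view(C))   # (C,) per-object feature
    return torch.stack(feats)                                  # (N, C), C = 2048 here
```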

Training: We train SINet and SINet-Caption with the ADAM optimizer. The initial learning rates are set separately for Kinetics and ActivityNet Captions, and both automatically drop by 10x when the validation loss saturates. The batch sizes are likewise set per dataset.
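A hedged sketch of this training setup is given below; the base learning rate and the scheduler patience are placeholders, since the exact values are not stated in this text.

```python
import torch

def make_adam_training(model, base_lr=1e-4, patience=5):
    # base_lr and patience are placeholder assumptions.
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
    # drop the learning rate by 10x when the validation loss stops improving
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode='min', factor=0.1, patience=patience)
    return optimizer, scheduler

# after each validation pass: scheduler.step(val_loss)
```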

Method Top-1 Top-5
I3D (25 FPS) [6] (test set; results obtained from https://github.com/deepmind/kinetics-i3d) 71.1 89.3
TSN (Inception-ResNet-v2) (2.5 FPS) [4, 52] 73.0 90.9
Ours (1 FPS)
Img feature + LSTM (baseline) 70.6 89.1
Img feature + temporal SDP-Attention 71.1 89.6
Obj feature (mean-pooling) 72.2 90.2
Img + obj feature (mean-pooling) 73.1 91.1
SINet ($\alpha$-attention) 73.9 91.5
SINet (dot-product attention) 74.2 91.7
Table 1: Prediction accuracy on the Kinetics validation set. All of our results use only RGB videos sampled at 1 FPS. Maximum number of objects per frame is set to be 30.

5 Evaluation

5.1 Action recognition on Kinetics:

In this section, we conduct an ablation study of SINet on Kinetics.

Does temporal SDP-Attention help? Several studies have pointed out that temporal mean-pooling or LSTMs may not be the best way to aggregate a sequence of image representations for videos [4, 30, 31]. To overcome this issue, we use temporal SDP-Attention instead of an LSTM. As we can see from Table 1, temporal SDP-Attention is superior to a traditional LSTM and already performs comparably with a 3D ConvNet that uses a much higher video sampling rate.

Does object interaction help? We first evaluate how much higher-order object interactions help in identifying human actions. Considering mean-pooling over the object features to be the simplest form of object interaction, we show that mean-pooling over the object features per frame and using an LSTM for temporal reasoning already outperforms single compact image representations, which are currently the trend for video classification methods. Directly combining image features with temporal SDP-Attention and object features over an LSTM further reaches 73.1% top-1 accuracy. This already outperforms the state-of-the-art TSN [52] method, which uses a deeper ConvNet with a higher video sampling rate. Beyond using mean-pooling as the simplest form of object interaction, our proposed method for dynamically discovering and modeling higher-order object interactions further achieves 74.2% top-1 and 91.7% top-5 accuracy. The selection module with dot-product attention, in which we exploit the inter-relationships between objects within the same group, outperforms $\alpha$-attention, where the inter-relationships are ignored.

Method Top-1 Top-5 FLOP (x1e9)
Obj (mean-pooling) 73.1 90.8  1.9
Obj pairs (mean-pooling) 73.4 90.8  18.3
Obj triplet (mean-pooling) 72.9 90.7  77.0
SINet (K = 1) 73.9 91.3  2.7
SINet (K = 2) 74.2 91.5  5.3
SINet (K = 3) 74.2 91.7  8.0
Table 2: Comparison of pairwise (or triplet) object interactions with the proposed higher-order object interactions using the dot-product attentive selection method on Kinetics. The maximum number of objects is set to 15. FLOPs are calculated per video. For details on calculating FLOPs, please refer to Sec. 7.5.
Method B@1 B@2 B@3 B@4 ROUGE-L METEOR CIDEr-D
Test set
LSTM-YT [51] (C3D) 18.22 7.43 3.24 1.24 - 6.56 14.86
S2VT [50] (C3D) 20.35 8.99 4.60 2.62 - 7.85 20.97
H-RNN [56] (C3D) 19.46 8.78 4.34 2.53 - 8.02 20.18
S2VT + full context [22] (C3D) 26.45 13.48 7.21 3.98 - 9.46 24.56
LSTM-A + policy gradient + retrieval [12] (ResNet + P3D ResNet [38]) - - - - - 12.84 -
Validation set (Avg. 1st and 2nd)
LSTM-A (ResNet + P3D ResNet) [12] 17.5 9.62 5.54 3.38 13.27 7.71 16.08
LSTM-A + policy gradient + retrieval [12] (ResNet + P3D ResNet [38]) 17.27 9.70 5.39 3.13 14.29 8.73 14.75
SINet-Caption — img (C3D) 17.18 7.99 3.53 1.47 18.78 8.44 38.22
SINet-Caption — img (ResNeXt) 18.81 9.31 4.27 1.84 20.46 9.56 43.12
SINet-Caption — obj (ResNeXt) 19.07 9.48 4.38 1.92 20.67 9.56 44.02
SINet-Caption — img + obj — no co-attention (ResNeXt) 19.93 9.82 4.52 2.03 21.08 9.79 44.81
SINet-Caption — img + obj — co-attention (ResNeXt) 19.78 9.89 4.52 1.98 21.25 9.84 44.84
Table 3: METEOR, ROUGE-L, CIDEr-D, and BLEU@N scores on the ActivityNet Captions test and validation sets. All methods use ground truth proposals except LSTM-A [12]. Our results with ResNeXt spatial features use videos sampled at a maximum of 1 FPS only.

Does attentive selection help? Prior work on visual relationships and VQA concatenates pairwise object features for detecting object relationships. In this experiment, we compare the traditional way of creating object pairs or triplets with our proposed attentive selection method. We use temporal SDP-Attention for image features and dot-product attention for selecting object interactions. As shown in Table 2, concatenating pairwise features marginally improves over the simplest form of object interaction while increasing the computational cost drastically. By further concatenating three object features, the space of meaningful object interactions becomes so sparse that prediction accuracy instead decreases, and the number of operations (FLOPs) increases drastically. On the other hand, our attentive selection method improves upon these methods while saving significant computation. Empirically, we also found that reducing the number of objects per frame from 30 to 15 yields no substantial difference in prediction accuracy. This indicates that the top 15 objects with the highest ROI scores are sufficient to represent the fine-grained details of the video. For a detailed qualitative analysis of how objects are selected at each timestep and how SINet reasons over a sequence of object interactions, please see Sec. 7.1.

We are aware that integrating optical flow or audio information with RGB video can further improve action recognition accuracy [4, 6]. We instead focus on modeling object interactions for understanding video in a fine-grained manner, and we consider other modalities to be complementary to our higher-order object interactions.

5.2 Video captioning on ActivityNet Captions:

We focus on understanding human actions for video captioning rather than on temporal proposals. Hence, we use ground truth temporal proposals for segmenting the videos and treat each video segment independently. All methods in Table 3 use ground truth temporal proposals, except LSTM-A [12]. Our performance is reported with four language metrics: BLEU [36], ROUGE-L [28], METEOR [3], and CIDEr-D [49].

For fair comparison with prior methods using C3D features, we report results with both C3D and ResNeXt spatial features. Since there is no prior result reported on the validation set, we compare against LSTM-A [12], which reports results on both the validation and test sets. This allows us to indirectly compare with methods reported on the test set. As shown in Table 3, while LSTM-A clearly outperforms other methods on the test set by a large margin, our method shows better results on the validation sets across nearly all language metrics. We do not claim our method to be superior to LSTM-A because of two fundamental differences. First, they do not rely on ground truth temporal proposals. Second, they use features extracted from a ResNet fine-tuned on Kinetics and an additional P3D ResNet [38] fine-tuned on Sports-1M, whereas we use a ResNeXt-101 fine-tuned on Kinetics sampled at a maximum of 1 FPS. Utilizing more powerful feature representations has been shown to improve prediction accuracy by a large margin on video tasks. This also corresponds to our experiments with C3D and ResNeXt features, where the proposed method with ResNeXt features performs significantly better than with C3D features.

Does object interaction help?

SINet-Caption without any object interaction already outperforms prior methods reported on this dataset. Additionally, by introducing an efficient selection module for detecting object interactions, SINet-Caption further improves across nearly all evaluation metrics, with or without co-attention. We also observed that introducing the co-attention from image features consistently shows improvement on the first validation set, but having a separate temporal attention for object interaction features shows better results on the second validation set (please see Sec. 7.4 for results on each validation set).

6 Conclusion

We introduce a computationally efficient fine-grained video understanding approach for discovering higher-order object interactions. Our work on large-scale action recognition and video captioning datasets demonstrates that learning higher-order object relationships improves accuracy over existing methods at low computational cost. We achieve state-of-the-art performance on both tasks with only RGB videos sampled at a maximum of 1 FPS.

Acknowledgments

Zsolt Kira was partially supported by the National Science Foundation and National Robotics Initiative (grant # IIS-1426998).

7 Supplementary

7.1 Qualitative analysis on Kinetics

To further validate the proposed method, we qualitatively show how SINet selectively attends to various regions with relationships and interactions across time. We show several examples in Figures 9, 10, and 11. In each figure, the top row of each video frame generally shows multiple ROIs in three colors: red, green, and blue. ROIs with the same color indicate that inter-relationships exist among them. We then model the interaction between groups of ROIs across different colors. The color of each bounding box is weighted by the attention generated by the proposed method. Thus, if some ROIs are not important, they have smaller weights and are not shown on the image. The same weights are then used to set the transparency of each ROI: the brighter the region, the more important the ROI.

Focus on object semantics: Recent state-of-the-art methods for action recognition rely on a single compact representation of the scene. We show that the proposed SINet can focus on the details of the scene and neglect visual content that may be irrelevant, such as background information. For example, in Figure 9, the model consistently focuses on the rope above the water and the person riding the wakeboard. The same goes for Figure 10: the background scenes with ice and snow are ignored throughout the video, since they are ambiguous and easily confused with other classes involving snow.

Adjustable inter-relationship selection: We notice that our SINet tends to explore the whole scene early in the video, i.e. the attention tends to be distributed across ROIs that cover a large portion of the video frame, and the attention becomes more focused after this exploration stage.

7.2 Qualitative analysis on ActivityNet Captions

In addition to the qualitative analysis on the action recognition task, we now present an analysis on video captioning. Several examples are shown in Figures 12, 13, and 14. At each word generation step, SINet-Caption uses the weighted sum of the video frame representations and the weighted sum of object interactions at the corresponding timesteps (co-attention). Note that, since we aggregate the detected object interactions via the LSTM cell through time, the feature representation of the object interactions at each timestep can be seen as a fusion of interactions at the present and past times. Thus, if the temporal attention has its highest weight at time $t$, it may actually attend to the interaction aggregated from time $1$ to $t$. Nonetheless, we only show the video frame with the highest temporal attention for convenience. We use red and blue to represent the two selected sets of objects ($K = 2$).

In each of the figures, the video frames (with maximum temporal attention) at different timesteps are shown along with each word generation. All ROIs in the top and bottom images are weighted with their attention weights. In the top image, ROIs with weighted bounding box edges are shown, whereas in the bottom image we set the transparency equal to the weight of each ROI: the brighter the region, the more important the ROI. Therefore, less important ROIs (with smaller attention weights) disappear in the top image and appear completely black in the bottom image. When generating a word, we traverse the selections of beam search at each timestep.

As shown in Figure 12, SINet-Caption successfully identifies the person and the wakeboard. The selection of these two most important objects implies that the person is riding on the wakeboard, i.e. water skiing. We also observe that, in Figure 13, the proposed method focuses on the bounding boxes containing both the person and the camel, suggesting that this is a video of people sitting on a camel. However, it failed to identify that there are in fact multiple people in the scene and two camels. On the other hand, SINet-Caption is able to identify that there are two people playing racquetball in Figure 14.

Figure 7: Which interactions (verbs) are learned for video captioning. We verify how SINet-Caption distinguishes various types of interactions with a common object, a horse. (a) People are riding horses. (b) A woman is brushing a horse. (c) People are playing polo on a field. (d) The man ties up the calf.

7.2.1 Distinguish interactions when common objects presented

A common problem with state-of-the-art captioning models is that they often lack an understanding of the relationships and interactions between objects, and this is oftentimes the result of dataset bias. For instance, when the model detects both a person and a horse, the caption prediction is very likely to be: A man is riding on a horse, regardless of whether the person actually has a different type of interaction with the horse.

We are thus interested in finding out whether the proposed method has the ability to distinguish different types of interactions when common objects are present in the scene. In Figure 7, each video shares a common object in the scene, a horse. We show the verb (interaction) extracted from the complete sentence generated by our proposed method.

  • People are riding horses.

  • A woman is brushing a horse.

  • People are playing polo on a field.

  • The man ties up the calf.

While all videos involve horses in the scene, our method successfully distinguishes the interactions of the human and the horse.

7.2.2 Discussion on ActivityNet Captions

We observed that while higher-order object interactions did contribute to higher performance on ActivityNet Captions, the contributions were not as significant as on the Kinetics dataset (quantitatively or qualitatively). We hereby discuss some potential reasons and challenges in applying SINet-Caption to the ActivityNet Captions dataset.

Word-by-word caption generation: In line with work on question answering, machine translation, and captioning, we generate a sentence describing a video one word at a time. At each word generation step, SINet-Caption uses the last generated word, the video frame representations, and their corresponding object interactions. As we can see from the qualitative results on both Kinetics and ActivityNet Captions, our proposed method is able to identify the interactions within very few video frames. However, taking Figure 13 as an example, at the first word "a" our model has already successfully selected the persons (both in light blue and red) on top of the camel (bright red). Yet, during the subsequent caption generation, SINet-Caption is forced to look at the visual content again and again. Introducing a gated mechanism [29] may mitigate this issue, but our preliminary results do not show improvement. Further experiments in this direction may be needed.

Semantically different captions exist: Each video in the ActivityNet Captions dataset consists of 3.65 different temporal video segments on average, each with its own ground truth caption [22]. These video captions have different semantic meanings but oftentimes share very similar video content, i.e. the same or similar video content has several different ground truth annotations. As a result, this may create confusion during the training of the model. Again, taking Figure 13 as an example, we observed that SINet-Caption often focuses on the person who leads the camels. We conjecture that this is because, within the same video, there exists another video segment with the annotation: A short person that is leading the camels turns around. Although both describe the same video content, one ground truth focuses on the persons sitting on the camels, while another focuses on the person leading the camels. This seems to be the reason why the trained network focuses on that particular person. Based on this observation, we believe that future work on re-formulating these semantically different annotations of similar video content for network training is needed, and it may be a better way to fully take advantage of the fine-grained object interactions detected by SINet-Caption. One possibility is to associate semantically different video captions with different region-sequences within a video [41].

7.3 Performance improvement analysis on Kinetics

The proposed SINet shows more than 5% improvement in top-1 accuracy on 136 of the 400 classes and more than 10% improvement on 46 classes over the baseline. We show the classes that improved by more than 10% in top-1 accuracy in Figure 8. In addition to these classes, the proposed SINet, by modeling fine-grained interactions, specifically improved many closely related classes.

  • 7 classes related to hair that are ambiguous among each other: braiding hair, brushing hair, curling hair, dying hair, fixing hair, getting a haircut, and washing hair. We show 21% top-1 improvement on washing hair; 16% improvement on getting a haircut.

  • 4 classes related to basketball require the model to identify how the basketball is being interacted with. These classes are: playing basketball, dribbling basketball, dunking basketball, and shooting basketball. We observed 18%, 10%, 6%, and 8% improvement respectively.

  • Among the 3 classes related to juggling actions (juggling fire, juggling balls, and contact juggling), we obtained 16%, 14%, and 13% improvement respectively.

  • Our model significantly improved the eating classes, which are considered to be the hardest [21] because they require distinguishing what is being eaten (interacted with). We show improvement on all eating classes, including eating hot dog, eating chips, eating doughnuts, eating carrots, eating watermelon, and eating cake, obtaining 16%, 16%, 14%, 8%, 4%, and 4% improvement respectively.

Figure 8: Top-1 accuracy improvement of SINet over the baseline. The 46/400 classes that improved by more than 10% are shown.

7.4 ActivityNet Captions on 1st and 2nd val set

We report the performance of SINet-Caption on the 1st and 2nd validation sets in Table 4. We can see that using fine-grained (higher-order) object interactions for caption generation consistently shows better performance than using the coarse-grained image representation, though the difference is relatively minor compared to the results on Kinetics. We discuss potential reasons in Sec. 7.2. Combining both coarse- and fine-grained information improves the performance across all evaluation metrics. Interestingly, using co-attention on detected object interactions shows better performance on the 1st validation set but similar performance on the 2nd validation set.

7.5 Model architecture and FLOP

We now describe the model architecture of the proposed recurrent higher-order module and how the FLOP is calculated.

SINet architecture: We first project the image representations to introduce learnable feature representations. The MLP $g_{\phi}$ consists of two sets of fully-connected layers, each with batch normalization and ReLU, and maintains the same dimension (2048) as the input image feature. Thus, the coarse-grained representation of the video is a 2048-d feature vector. Inside the Recurrent HOI module, each MLP $g_{\theta_k}$ has three sets of batch normalization layers, fully-connected layers, and ReLUs. In the experiments with two attentive selection modules ($K = 2$), we set the dimension of the fully-connected layers to 2048. The concatenation of $\bar{o}_{1,t}$ and $\bar{o}_{2,t}$ is then used as the input to the following LSTM cell. Empirically, we find that it is important to maintain high dimensionality for the input to the LSTM cell. We adjust the dimension of the hidden layers in $g_{\theta_k}$ according to the number of selection modules $K$, e.g. we reduce the dimension of the hidden layers if $K$ increases, so that the inputs to the LSTM cell have the same or similar feature dimension for a fair experimental comparison. The hidden dimension of the LSTM cell is set to 2048. Before concatenating the coarse-grained ($\bar{v}_c$) and fine-grained ($h_T$) video representations, we re-normalize each feature vector with a separate batch normalization layer. The final classifier then projects the concatenated feature representation to 400 action classes.

Figure 9: Water skiing: Our SINet is able to identify several object relationships and reason about these interactions through time: (1) the rope above the water, (2) the wakeboard on the water, (3) the human riding on the wakeboard, and (4) the rope connecting to the person on the wakeboard. From the distribution of the three different attention weights (red, green, blue), we can also see that the proposed attention method is not only able to select objects with different inter-relationships but can also use a common object to discover different relationships around that object when needed. We observed that our method tends to explore the whole scene at the beginning of the video and then focus on new information that differs from the past. For example, while the first few video frames are similar, the model focuses on different aspects of the visual representation.
Figure 10: Tobogganing: Identifying Tobogganing essentially needs three elements: the toboggan, the snow scene, and a human sitting on top. The three key elements are accurately identified, and their interactions are highlighted across the video. Note that the model is able to continue tracking the person and toboggan throughout the whole video, even though they appear very small towards the end. We can also notice that our SINet completely ignores the background scene in the last several video frames, as it is not informative and can easily be confused with the other 18 action classes involving snow and ice, e.g. Making snowman, Ski jumping, Skiing crosscountry, Snowboarding, etc.
Figure 11: Abseiling is challenging since similar classes exist: Climbing a rope, Diving cliff, and Rock climbing, which involve ropes, rocks, and cliffs. To achieve this, the model progressively identifies interactions and relationships such as: the human sitting on the rock, the human holding the rope, and the presence of both rope and rock. This information proves to be sufficient for predicting Abseiling over the other ambiguous action classes.
Figure 12: The man is then shown on the water skiing. We can see that the proposed SINet-Caption often focuses on the person and the wakeboard, and most importantly it highlights the interaction between the two, i.e. the person stepping on the wakeboard.
Figure 13: A man is sitting on a camel. SINet-Caption is able to detect the ROIs containing both the persons and the camel. We can also observe that it highlights both the ROIs for the persons who sit on the camel and the camel itself at frames 3 and 9. However, the proposed method failed to identify that there are multiple people sitting on two camels. Furthermore, in some cases, it selects the person who leads the camels. This seems to be because the same video is also annotated with another caption focusing on that particular person: A short person that is leading the camels turns around.
Figure 14: Two people are seen playing a game of racquetball. SINet-Caption is able to identify that two people are playing racquetball and highlights the corresponding ROIs in the scene.
Method B@1 B@2 B@3 B@4 ROUGE-L METEOR CIDEr-D
1st Validation set
SINet-Caption — img (C3D) 16.93 7.91 3.53 1.58 18.81 8.46 36.37
SINet-Caption — img (ResNeXt) 18.71 9.21 4.25 2.00 20.42 9.55 41.18
SINet-Caption — obj (ResNeXt) 19.00 9.42 4.29 2.03 20.61 9.50 42.20
SINet-Caption — img + obj — no co-attention (ResNeXt) 19.89 9.76 4.48 2.15 21.00 9.62 43.24
SINet-Caption — img + obj (ResNeXt) 19.63 9.87 4.52 2.17 21.22 9.73 44.14
2nd Validation set
SINet-Caption — img (C3D) 17.42 8.07 3.53 1.35 18.75 8.41 40.06
SINet-Caption — img (ResNeXt) 18.91 9.41 4.28 1.68 20.49 9.56 45.05
SINet-Caption — obj (ResNeXt) 19.14 9.53 4.47 1.81 20.73 9.61 45.84
SINet-Caption — img + obj — no co-attention (ResNeXt) 19.97 9.88 4.55 1.90 21.15 9.96 46.37
SINet-Caption — img + obj (ResNeXt) 19.92 9.90 4.52 1.79 21.28 9.95 45.54
Table 4: METEOR, ROUGE-L, CIDEr-D, and BLEU@N scores on the ActivityNet Captions 1st and 2nd validation sets. All methods use ground truth temporal proposals, and our results are evaluated using the code provided in [22]. Our results with ResNeXt spatial features use videos sampled at a maximum of 1 FPS only.

SINet-Caption architecture: We first use a single fully-connected layer with batch normalization, dropout, and ReLU to project the pre-saved image features; this projection $g_{\phi}$ maps the feature vector from 2048 to 1024 dimensions. We use two attentive selection modules for the video captioning task ($K = 2$). Each $g_{\theta_k}$ consists of a batch normalization layer, a fully-connected layer, a dropout layer, and a ReLU, and maps the input object feature vector from 2048 to 512 dimensions. The dropout ratio for both $g_{\phi}$ and $g_{\theta_k}$ is set to 0.5. The concatenation of $\bar{o}_{1,t}$ and $\bar{o}_{2,t}$ is used as input to the LSTM cell inside the Recurrent HOI module, whose hidden dimension is set to 1024. The dimension of the word embedding is 512. We use a ReLU and a dropout layer after the embedding layer with dropout ratio 0.25. The hidden dimensions of both the Attention LSTM and the Language LSTM are set to 512.

The FLOPs are computed per video, and the maximum number of objects per frame is set to 15. We compare the computed FLOPs with traditional object interactions obtained by pairing all possible objects. The results are shown in Table 5.

Proposed method (K = 2) | FLOP | Object pairs | FLOP
Project obj features:
  MLP: 15 x 2048 x 2048 x 2 | 0.13e9 | MLP: 105 x 4096 x 2048 | 0.9e9
  15 x 2048 x 2048 x 2 | 0.13e9 | 105 x 2048 x 2048 | 0.4e9
  15 x 2048 x 2048 x 2 | 0.13e9 | 105 x 2048 x 2048 | 0.4e9
Recurrent unit:
  Recurrent HOI (SDP-Attention):
  2048 x 2048 x 2 | 8.4e6 | - | -
  2048 x 2048 x 2 | 8.4e6 | - | -
  MatMul: 15 x 15 x 2048 x 2 | 0.9e6 | - | -
  MatMul: 15 x 15 x 2048 x 2 | 0.9e6 | - | -
  LSTM Cell: 8 x 2 x 2 x 2048 x 2048 | 134.2e6 | LSTM Cell: 8 x 2 x 2048 x 2048 | 67e6
Total (T = 10 timesteps):
  10 x (MLP + Recurrent) | 5.3e9 | 10 x (MLP + Recurrent) | 18.3e9
Table 5: FLOP calculation on Kinetics sampled at 1 FPS. The calculation is based on a forward pass of one video.
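The per-video totals in Table 5 can be reproduced with the following arithmetic, counting multiply-accumulates in the same way as the table; the grouping of terms is our reading of the table.

```python
# Proposed method (K = 2), per timestep
mlp      = 3 * (15 * 2048 * 2048 * 2)        # three FC layers, 15 objects, two heads
sdp_proj = 2 * (2048 * 2048 * 2)             # project v_{c,t} and h_{t-1} for both heads
sdp_mm   = 2 * (15 * 15 * 2048 * 2)          # the two matrix multiplications of SDP-Attention
lstm     = 8 * 2 * 2 * 2048 * 2048           # LSTM cell with concatenated (2 x 2048)-d input
print(10 * (mlp + sdp_proj + sdp_mm + lstm)) # ~5.3e9 over T = 10 timesteps

# Object pairs baseline, per timestep
mlp_pairs  = 105 * 4096 * 2048 + 2 * (105 * 2048 * 2048)   # C(15, 2) = 105 concatenated pairs
lstm_pairs = 8 * 2 * 2048 * 2048
print(10 * (mlp_pairs + lstm_pairs))         # ~18.3e9
```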

References

  • [1] S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadarajan, and S. Vijayanarasimhan. Youtube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675, 2016.
  • [2] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. Bottom-up and top-down attention for image captioning and vqa. arXiv preprint arXiv:1707.07998, 2017.
  • [3] S. Banerjee and A. Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, volume 29, pages 65–72, 2005.
  • [4] Y. Bian, C. Gan, X. Liu, F. Li, X. Long, Y. Li, H. Qi, J. Zhou, S. Wen, and Y. Lin. Revisiting the effectiveness of off-the-shelf temporal modeling approaches for large-scale video classification. arXiv preprint arXiv:1708.03805, 2017.
  • [5] F. Caba Heilbron, V. Escorcia, B. Ghanem, and J. Carlos Niebles. ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961–970, 2015.
  • [6] J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [7] Y.-W. Chao, Y. Liu, X. Liu, H. Zeng, and J. Deng. Learning to detect human-object interactions. arXiv preprint arXiv:1702.05448, 2017.
  • [8] B. Dai, Y. Zhang, and D. Lin. Detecting visual relationships with deep relational networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [9] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, 2017.
  • [10] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1933–1941, 2016.
  • [11] Z. Gan, C. Gan, X. He, Y. Pu, K. Tran, J. Gao, L. Carin, and L. Deng. Semantic compositional networks for visual captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [12] B. Ghanem, J. C. Niebles, C. Snoek, F. C. Heilbron, H. Alwassel, R. Khrisna, V. Escorcia, K. Hata, and S. Buch. Activitynet challenge 2017 summary. arXiv preprint arXiv:1710.08011, 2017.
  • [13] R. Girdhar, D. Ramanan, A. Gupta, J. Sivic, and B. Russell. Actionvlad: Learning spatio-temporal aggregation for action classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [14] G. Gkioxari, R. Girshick, P. Dollár, and K. He. Detecting and recognizing human-object interactions. arXiv preprint arXiv:1704.07333, 2017.
  • [15] G. Gkioxari, R. Girshick, and J. Malik. Contextual action recognition with r* cnn. In Proceedings of the IEEE international conference on computer vision, pages 1080–1088, 2015.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [17] R. Hu, M. Rohrbach, J. Andreas, T. Darrell, and K. Saenko. Modeling relationships in referential expressions with compositional modular networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [18] H. Idrees, A. R. Zamir, Y.-G. Jiang, A. Gorban, I. Laptev, R. Sukthankar, and M. Shah. The thumos challenge on action recognition for videos “in the wild”. Computer Vision and Image Understanding, 155:1–23, 2017.
  • [19] J. Johnson, R. Krishna, M. Stark, L.-J. Li, D. Shamma, M. Bernstein, and L. Fei-Fei. Image retrieval using scene graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3668–3678, 2015.
  • [20] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
  • [21] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
  • [22] R. Krishna, K. Hata, F. Ren, L. Fei-Fei, and J. C. Niebles. Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
  • [23] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. Hmdb: a large video database for human motion recognition. In 2011 International Conference on Computer Vision, pages 2556–2563. IEEE, 2011.
  • [24] C. Lea, A. Reiter, R. Vidal, and G. D. Hager. Segmental spatiotemporal cnns for fine-grained action segmentation. In European Conference on Computer Vision, pages 36–52. Springer, 2016.
  • [25] Y. Li, C. Huang, C. C. Loy, and X. Tang. Human attribute recognition by deep hierarchical contexts. In European Conference on Computer Vision, pages 684–700. Springer, 2016.
  • [26] Y. Li, W. Ouyang, B. Zhou, K. Wang, and X. Wang. Scene graph generation from objects, phrases and region captions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1261–1270, 2017.
  • [27] X. Liang, L. Lee, and E. P. Xing. Deep variation-structured reinforcement learning for visual relationship and attribute detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [28] C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004.
  • [29] J. Lu, C. Xiong, D. Parikh, and R. Socher. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2017.
  • [30] C.-Y. Ma, M.-H. Chen, Z. Kira, and G. AlRegib. Ts-lstm and temporal-inception: Exploiting spatiotemporal dynamics for activity recognition. arXiv preprint arXiv:1703.10667, 2017.
  • [31] A. Miech, I. Laptev, and J. Sivic. Learnable pooling with context gating for video classification. arXiv preprint arXiv:1706.06905, 2017.
  • [32] B. Ni, V. R. Paramathayalan, and P. Moulin. Multiple granularity analysis for fine-grained action detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 756–763, 2014.
  • [33] B. Ni, X. Yang, and S. Gao. Progressively parsing interactional objects for fine grained action detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1020–1028, 2016.
  • [34] Y. Pan, T. Mei, T. Yao, H. Li, and Y. Rui. Jointly modeling embedding and translation to bridge video and language. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4594–4602, 2016.
  • [35] Y. Pan, T. Yao, H. Li, and T. Mei. Video captioning with transferred semantic attributes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [36] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics, 2002.
  • [37] X. Peng and C. Schmid. Multi-region two-stream r-cnn for action detection. In European Conference on Computer Vision, pages 744–759. Springer, 2016.
  • [38] Z. Qiu, T. Yao, and T. Mei. Learning spatio-temporal representation with pseudo-3d residual networks. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • [39] V. Ramanishka, A. Das, J. Zhang, and K. Saenko. Top-down visual saliency guided by captions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [40] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap. A simple neural network module for relational reasoning. In Advances in Neural Information Processing Systems, 2017.
  • [41] Z. Shen, J. Li, Z. Su, M. Li, Y. Chen, Y.-G. Jiang, and X. Xue. Weakly supervised dense video captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [42] G. A. Sigurdsson, S. Divvala, A. Farhadi, and A. Gupta. Asynchronous temporal fields for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [43] G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev, and A. Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In European Conference on Computer Vision, pages 510–526. Springer, 2016.
  • [44] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, pages 568–576, 2014.
  • [45] J. Song, Z. Guo, L. Gao, W. Liu, D. Zhang, and H. T. Shen. Hierarchical lstm with adjusted temporal attention for video captioning. In International Joint Conference on Artificial Intelligence, 2017.
  • [46] K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. In CRCV-TR-12-01, 2012.
  • [47] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. C3d: Generic features for video analysis. arXiv preprint arXiv:1412.0767, 2014.
  • [48] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017.
  • [49] R. Vedantam, C. Lawrence Zitnick, and D. Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575, 2015.
  • [50] S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney, T. Darrell, and K. Saenko. Sequence to sequence-video to text. In Proceedings of the IEEE International Conference on Computer Vision, pages 4534–4542, 2015.
  • [51] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and K. Saenko. Translating videos to natural language using deep recurrent neural networks. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1495–1504, 2015.
  • [52] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision, pages 20–36. Springer, 2016.
  • [53] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [54] D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei. Scene graph generation by iterative message passing. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [55] L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In Proceedings of the IEEE International Conference on Computer Vision, pages 4507–4515, 2015.
  • [56] H. Yu, J. Wang, Z. Huang, Y. Yang, and W. Xu. Video paragraph captioning using hierarchical recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4584–4593, 2016.
  • [57] Y. Yu, H. Ko, J. Choi, and G. Kim. End-to-end concept word detection for video captioning, retrieval, and question answering. In CVPR, volume 3, page 7, 2017.
  • [58] M. Zanfir, E. Marinoiu, and C. Sminchisescu. Spatio-temporal attention models for grounded video captioning. In Asian Conference on Computer Vision, pages 104–119. Springer, 2016.
  • [59] H. Zhang, Z. Kyaw, S.-F. Chang, and T.-S. Chua. Visual translation embedding network for visual relation detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [60] J. Zhang, M. Elhoseiny, S. Cohen, W. Chang, and A. Elgammal. Relationship proposal networks. In CVPR, volume 1, page 2, 2017.