Cooperative Cross-Stream Network for Discriminative Action Representation

08/27/2019 ∙ by Jingran Zhang, et al.

Spatial and temporal stream models have achieved great success in video action recognition. Most existing works concentrate on designing effective feature fusion methods and train the two streams separately. However, it is hard to ensure discriminability and to explore complementary information between different streams in such a setting. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information across multiple modalities. Feature extraction for the spatial and temporal stream networks is learned jointly in an end-to-end manner. The complementary information between the modalities is extracted by a connection block that explores correlations between the features of the different streams. Furthermore, unlike conventional ConvNets that learn deep separable features with only a cross-entropy loss, our model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint consists of an intra-modality discriminative embedding and an inter-modality triplet constraint, and it reduces both intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that, by cooperatively learning appearance and motion features, our method achieves state-of-the-art or competitive performance compared with existing results.


1 Introduction

Fig. 1: Traditional two-stream network classifying “drink” and “eat” from the HMDB-51 [15] dataset. The input consists of video frames and optical flow images. It is hard to tell the classes apart when the frames or the optical flow fields are fed to TSN [38] separately. The scores are taken from the confusion matrix of a TSN experiment on the HMDB-51 dataset.

Video analysis has attracted significant attention from the computer vision community, partly due to the rapidly growing number of videos shared on the Internet. As one of the fundamental tasks in video analysis, human action recognition is a well-studied problem. However, traditional CNN-based representations [14] have not yet had as transformative an impact on action representation as they have on still images, because of the significant variations and complexity of temporal video sequences [20]. Unlike still images, actions are characterized by specific spatial patterns as well as long-term temporal structure. Temporal modeling is a critical aspect of action recognition, since actions are defined by the temporal evolution of appearance governed by motion. It is therefore crucial to design models that can exploit long-range temporal information.

Most recent works on action recognition originate from three kinds of architectures: (1) 2D ConvNets with temporal modeling on top, such as LSTMs [5]; (2) 3D spatiotemporal convolutions [30] [13]; and (3) two-stream based architectures [24] [7] [38]. Long-term temporal modeling encodes temporal relationships over frame-level features but has a poor capacity for capturing finer temporal structure. Limited by the complex spatiotemporal dependencies of actions and by computational cost, 3D ConvNets have so far been hard to scale in terms of recognition performance. Two-stream ConvNets [24], which consist of motion and appearance streams, typically train each stream separately and fuse the outputs at the end. They have been shown to outperform both 3D convolutions and 2D ConvNets with temporal modeling, because they can easily exploit deep architectures pre-trained for still-image recognition [10] and have an excellent motion source from which to extract features.

Nevertheless, some motion features of different classes extracted by the two-stream framework are prone to confusion due to the similar structure of their optical flow fields, leading to wrong classifications, for example when discriminating “eat” and “drink” from “smoking” (see Figure 1). Moreover, simply fusing the clip scores of the RGB ConvNet and the flow ConvNet does not yield a large improvement. Experiments show that existing two-stream frameworks often fail on such easily confused action labels. Figure 1 gives an example, with experimental data obtained from the baseline TSN model [38] on HMDB-51 [15]. Humans can easily distinguish the above actions, partly because we attend not only to motion cues but also to appearance cues when judging an action. Hence, the failure likely arises because two-stream methods extract spatial and motion features separately and thus lack mutual spatial-temporal learning. A good framework should capture both kinds of information simultaneously: the RGB ConvNet should help the optical flow ConvNet in feature extraction, and the features learned by the two distinct networks should reinforce each other so that features of the same class become compact while those of different classes remain dissimilar. In fact, there is a subtle connection between the spatial and temporal ConvNets that has not been well explored.

We introduce an architecture that simultaneously extracts discriminative features and jointly trains the spatial and temporal networks in an end-to-end manner to address this issue. To efficiently explore the relation between the RGB stream and the optical flow stream, we propose a cross-modality feature extraction paradigm that jointly learns spatiotemporal features for the two heterogeneous modalities, integrating a modality information complementarity block and a cross-modality ranking constraint to bridge the gap between the modalities and enhance the modality invariance of the learned representation. As illustrated in Figure 2, the core of the framework is spatial and temporal feature learning together with interaction between the two parts. ConvNets perform the spatial and temporal feature extraction. Inspired by the non-local block, which computes dependencies among features of the same modality, we design a new block that takes the spatial and temporal features as input and computes their correlation. This connection block, designed to capture dependencies and relationships between spatial and temporal features, enhances the interaction between the spatial and temporal ConvNets and provides complementary information to each stream. To fully exploit this complementarity, in the shared block of our framework we use a triplet constraint to force the spatial and temporal features to preserve the similarity structure and to weaken the modality discrepancy.

The key contributions of our work are three-fold: (1) We propose a model that learns spatial and temporal features cooperatively in an end-to-end manner. Compared to existing two-stream networks, our model can uniquely cope with the incoordination between spatial and temporal features that are extracted separately. (2) The proposed network enhances the interaction and correlation between spatial and temporal features by placing a connection block between the spatial and temporal streams. (3) We aggregate the identity loss with a cross-modality ranking constraint to ensure discriminability by exploiting the relation between the spatial and temporal streams.

2 Related work

Fig. 2: The overall architecture of our proposed cooperative cross-stream network (CCS). The model consists of the feature extraction ConvNets, the connection block, and the shared block. The feature extraction ConvNets capture appearance and motion features. The connection block enhances the interaction between appearance and motion features. The shared block reduces the undesired modality discrepancy. The whole model is trained under the inter-modality triplet and discriminative embedding constraints. The class scores of all modalities are then fused for prediction.

As one of the core video analysis tasks, action recognition has been studied for decades. It is hard partly because of the large inter-class similarity of the temporal features of different actions and the intra-class variability of the spatial features of the same action. In this paper, we learn spatial and temporal features jointly in a discriminative fashion to strengthen the connections between the two, which in turn yields more compact features. Previous work related to this problem falls into two categories in terms of feature learning: (1) hand-crafted feature design, and (2) ConvNets for automatic feature extraction.

Hand-crafted features for action recognition. Before deep learning became popular, most traditional computer vision approaches applied shallow hand-crafted features to action recognition. Improved Dense Trajectories (IDT) [34], which use densely sampled trajectory features, showed that temporal information can be processed differently from spatial information. Instead of extending the Harris corner detector into 3D, IDT uses the warped optical flow field to obtain trajectories and to eliminate the effect of camera motion in the video sequence. For each tracked corner, hand-crafted features such as HOF, HOG, and MBH are extracted along the trajectory. Despite their excellent performance, IDT and its improvements [18], [17], [37] remain computationally formidable and become intractable on large-scale datasets.

ConvNets for automatic feature extraction. An active line of research on deep networks for video representation learning has sought effective ConvNet architectures [14] [33] [31] [5]. Karpathy et al. [14] designed a deep network that stacks CNN-based frame-level features over a fixed window and then applies spatiotemporal convolutions for video-level feature learning. However, the results were unsatisfying and implied that such CNNs have difficulty capturing the motion information of videos. Later works in this genre leverage ConvNets trained on frames to extract low-level features and then perform high-level temporal integration of those features using pooling [36] [35], high-dimensional feature encoding [8] [4], or recurrent neural networks [5] [41] [33] [44]. Recently, CNN-LSTM frameworks [5] [41], which use stacked LSTM networks to connect frame-level representations and explore long-term temporal relationships in videos, have improved the modeling of temporal dynamics of convolutional features. However, using a CNN as the encoder and an RNN as the decoder of a video loses low-level temporal context, which is essential for action recognition.

These works implied the importance of temporal information for action recognition and the inability of plain CNNs to capture it. To exploit temporal information, some studies resort to 3D convolution kernels. Tran et al. [30] [31] apply 3D CNNs in which appearance and motion features are learned with 3D convolutions, simultaneously encoding spatial and temporal cues. Several works explored 3D convolutions over long-range temporal structure [39] [43]. Unfortunately, these networks accept a predefined number of frames as input, and the right choice of temporal span is unclear. Moreover, 3D convolution kernels inevitably introduce more network parameters. Therefore, recent work has proposed factorizing a 3D filter into a combination of a 2D and a 1D filter, including R(2+1)D [32], the Pseudo-3D network [20], and factorized spatiotemporal convolutional networks [28].

Another efficient way to extract temporal features is to precompute optical flow [29] with traditional optical flow estimation methods and train a separate CNN to encode the precomputed flow; this sidesteps explicit temporal modeling but is effective for extracting motion features. The well-known two-stream architecture [24] applies two CNNs separately to visual frames and stacked optical flows to extract spatiotemporal features and then fuses the classification scores. Further improvements based on this architecture include multi-granular structures [26] [47], convolutional fusion [7] [39], key-volume mining [48], temporal segment networks [38], and ActionVLAD [8] for video representation learning. Remarkably, a recent work (I3D) [2], which combines two-stream processing and 3D convolutions, holds state-of-the-art action recognition results, reflecting the power of ultra-deep architectures and pre-trained models.

Two-stream methods generally achieve the best performance among these works. Nevertheless, two-stream backbones usually train the spatial and temporal ConvNets separately, which breaks the connection between appearance and motion information. Recently, many works have used cross-modality learning, which can improve feature discriminability, to tackle computer vision tasks such as image retrieval [3, 22, 23] and person Re-ID [42]. In our framework, we jointly train the spatial and temporal streams by cross-stream learning. Moreover, to capture the complementary information between appearance and motion across videos and to encode the correlations between the streams in a compact form, we propose a connection block and combine the inter-modality triplet and intra-modality discriminative embedding constraints with the identity loss.

3 Proposed Method

In this section, we describe the framework of our proposed architecture, shown in Figure 2. In our cross-stream network, the spatial stream focuses on learning appearance features from sparsely sampled frames, and the temporal stream focuses on motion features captured from multiple optical flow fields. Since the two parts should complement each other, a connection block is designed to improve the interaction between the two modality-specific features. The subsequent cross-modality feature learning then learns a shared multi-modality space to bridge the gap between the two heterogeneous modalities.

3.1 Feature Extraction

We adopt off-the-shelf feature extractors to obtain features from the two heterogeneous modalities. Both the spatial and temporal ConvNets employ similar backbone structures in our feature extraction block.

Suppose we have a video $V$ containing $T$ frames, equipped with a label $y \in \{1, \dots, C\}$, where $C$ is the total number of action classes. Given $V$, we first obtain snippet-level action features; an end-to-end deep neural network then performs effective video-level representation learning. Here, we use a two-stream framework [24] to extract appearance and motion features. The input of the $t$-th snippet is $(x^{rgb}_t, x^{flow}_t)$, where $x^{rgb}_t$ is a frame of video $V$ and $x^{flow}_t$ is a stacked optical flow field derived around $x^{rgb}_t$ (typically 5 horizontal and 5 vertical flow images). The two-stream network consists of a spatial network and a temporal network, which operate on the single video frame and the stacked optical flow field, respectively, and produce $f^{rgb}_t$, the learned feature of the frame, and $f^{flow}_t$, the learned feature of the stacked optical flow.
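To make the snippet-level extraction concrete, the following is a minimal PyTorch sketch of the two feature extractors. It assumes ResNet-18 backbones as stand-ins for the BN-Inception/I3D backbones used in the experiments; the class name, the flow stack length of 5, and the 512-dimensional output are illustrative choices of this sketch, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamExtractor(nn.Module):
    """Sketch of the snippet-level feature extractors of the two modalities.

    The spatial stream takes a single RGB frame (3 channels); the temporal
    stream takes a stack of optical flow fields (2*L channels, L = 5 here).
    ResNet-18 is only a stand-in for the backbones used in the experiments.
    """
    def __init__(self, flow_stack=5):
        super().__init__()
        # Spatial stream: 2D ConvNet with the classification layer removed
        # (ImageNet-pretrained weights would be loaded in practice).
        rgb_net = models.resnet18()
        self.spatial = nn.Sequential(*list(rgb_net.children())[:-1])
        # Temporal stream: same backbone, first conv adapted to 2*L flow channels.
        flow_net = models.resnet18()
        flow_net.conv1 = nn.Conv2d(2 * flow_stack, 64, kernel_size=7,
                                   stride=2, padding=3, bias=False)
        self.temporal = nn.Sequential(*list(flow_net.children())[:-1])

    def forward(self, frame, flow_stack):
        # frame: (B, 3, H, W); flow_stack: (B, 2*L, H, W)
        f_rgb = self.spatial(frame).flatten(1)         # appearance features
        f_flow = self.temporal(flow_stack).flatten(1)  # motion features
        return f_rgb, f_flow


if __name__ == "__main__":
    net = TwoStreamExtractor()
    f_rgb, f_flow = net(torch.randn(2, 3, 224, 224), torch.randn(2, 10, 224, 224))
    print(f_rgb.shape, f_flow.shape)  # torch.Size([2, 512]) for both streams
```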

3.2 Connection Block

Consider the output feature sequences of the ConvNets for a video: the appearance sequence $F^{rgb} = \{f^{rgb}_1, \dots, f^{rgb}_K\}$ and the motion sequence $F^{flow} = \{f^{flow}_1, \dots, f^{flow}_K\}$. We assume the two sequences have the same size $K \times C \times D$, where $K$, $C$, and $D$ denote the sequence length, the filter number, and the output feature dimension, respectively. The goal of the connection block is to produce a vector that represents the correlation between $F^{rgb}$ and $F^{flow}$, which can be further fed into a neural network to compute the similarity.

Inspired by the non-local operation for capturing long-range dependencies [40], the relational reasoning module [21], and video temporal reasoning [45], we present a pairwise spatial-temporal correlation function:

$r\big(F^{rgb}, F^{flow}\big) = h_{\psi}\Big(\sum_{i,j} f\big(f^{rgb}_i, f^{flow}_j\big)\Big), \qquad (1)$

where the inputs are the feature sequences extracted by standard CNNs from video frames and optical flows, $F^{rgb} = \{f^{rgb}_i\}$ is the frame feature sequence of the video, $F^{flow} = \{f^{flow}_j\}$ is the optical flow feature sequence, and $f(\cdot,\cdot)$ and $h_{\psi}(\cdot)$ are functions typically implemented by multilayer perceptrons, with the pairwise similarity $f$ specified in Eq. (2) and $h_{\psi}$ parameterized by $\psi$.

Following the non-local module [40], which computes relations among elements of an object, we adopt the embedded Gaussian to measure the similarity of object pairs from the two different modalities. The similarity measure function $f$ is

$f\big(f^{rgb}_i, f^{flow}_j\big) = e^{\theta(f^{rgb}_i)^{\top} \phi(f^{flow}_j)}, \qquad (2)$

where $\theta(\cdot)$ and $\phi(\cdot)$ are two embeddings implemented by multilayer perceptrons.

We further wrap the spatial-temporal correlation reasoning of Eq. (1) into an interaction operation:

$\tilde{F}^{rgb} = F^{rgb} + W_{r}\, r\big(F^{rgb}, F^{flow}\big), \qquad \tilde{F}^{flow} = F^{flow} + W_{f}\, r\big(F^{rgb}, F^{flow}\big), \qquad (3)$

where $r(\cdot, \cdot)$ is given in Eq. (1) and $W_{r}$, $W_{f}$ are interaction functions implemented by a convolution operation. Figure 3 shows the details of the connection block.

Fig. 3: Cross-stream connection block. The appearance features $F^{rgb}$ and the motion features $F^{flow}$ are fed into the connection block. “$\otimes$” denotes matrix multiplication and “$\oplus$” denotes element-wise sum; $\theta$, $\phi$, and $W$ denote convolution operations.
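The following is a minimal PyTorch sketch of one plausible realization of the connection block, assuming an embedded-Gaussian affinity between the two feature sequences (Eq. (2)) followed by a residual interaction (Eq. (3)). The module name, the 1D-convolution embeddings, and the layer sizes are assumptions of this sketch rather than the authors' exact design; the inputs are the stacked snippet features of each stream.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConnectionBlock(nn.Module):
    """Sketch of a cross-stream connection block in the spirit of Eqs. (1)-(3).

    An embedded-Gaussian affinity between the appearance and motion feature
    sequences weights the motion features, and the result is added back to the
    appearance sequence residually, so one stream receives complementary
    information from the other. Layer sizes are illustrative assumptions.
    """
    def __init__(self, dim, embed_dim=None):
        super().__init__()
        embed_dim = embed_dim or dim // 2
        self.theta = nn.Conv1d(dim, embed_dim, kernel_size=1)  # embeds appearance
        self.phi = nn.Conv1d(dim, embed_dim, kernel_size=1)    # embeds motion
        self.g = nn.Conv1d(dim, embed_dim, kernel_size=1)      # value projection
        self.out = nn.Conv1d(embed_dim, dim, kernel_size=1)    # maps back to dim

    def forward(self, f_rgb, f_flow):
        # f_rgb, f_flow: (B, C, K) feature sequences of the two streams.
        q = self.theta(f_rgb)                                      # (B, E, K)
        k = self.phi(f_flow)                                       # (B, E, K)
        v = self.g(f_flow)                                         # (B, E, K)
        # Embedded-Gaussian similarity (Eq. (2)), normalized over flow positions.
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (B, K, K)
        cross = torch.bmm(v, attn.transpose(1, 2))                 # (B, E, K)
        # Residual interaction in the spirit of Eq. (3): enriched appearance features.
        return f_rgb + self.out(cross)
```

The symmetric direction, motion features enriched by appearance, is obtained by calling the block with its arguments swapped.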

3.3 Shared Block

Based on the output features of the ConvNets for the segments of a video, a video-level feature vector can be obtained through an aggregation operation. Suppose there is a collection of $N$ instance video features $\{(v^{rgb}_n, v^{flow}_n)\}_{n=1}^{N}$, where $v^{rgb}_n$ is the aggregated feature of the frame-stream ConvNet for video $n$ and $v^{flow}_n$ is the aggregated feature of the optical-flow-stream ConvNet for video $n$. We build the learning scheme by selecting triplets from this collection.

Inspired by the triplet loss for learning discriminative embeddings, we propose a triplet constraint for extracting spatial and motion features on top of the two-stream backbone ConvNets. Moreover, to efficiently explore the relation between the RGB stream and the optical flow stream, we propose cross-modality feature extraction to jointly learn spatial-temporal features. Most works train the spatial and temporal ConvNets separately within the two-stream architecture; in fact, the RGB ConvNet should help the optical flow ConvNet in feature extraction, i.e., the features learned by the two distinct networks should reinforce each other so that features of the same class are compact while those of different classes are dissimilar. The underlying idea is to compare the distance of a positive appearance-motion pair with the minimum distance over all related negative appearance-motion pairs, rather than with each negative pair. More specifically, we sample frames from the entire video and extract appearance and motion features jointly, using cross-modality training to enhance the connection between appearance and motion. The extracted features are then fed into a classifier that outputs classification scores, and the final results are obtained by score fusion.

In this section, we propose a cross-modality learning scheme based on selecting triplets, together with a discriminative embedding scheme on each modality, to reduce both intra-modality and cross-modality variations. Online triplet sampling on each mini-batch [11] is employed. The joint effect of these two constraints is illustrated in Figure 4. The discriminative embedding loss forces the learned features of the same class to be compact and those of different classes to be dissimilar, while the cross-modality triplet loss forces the appearance and motion streams to project into a common feature space.

Fig. 4: Illustration of the joint effect of the inter-modality triplet and discriminative embedding constraints. Different colors represent different modalities, while the same color indicates class-related cross-modality items; different shapes represent different classes.

We build a set of cross-modality triplets $\{(v^{rgb}_a, v^{flow}_p, v^{flow}_n)\}$ and $\{(v^{flow}_a, v^{rgb}_p, v^{rgb}_n)\}$, where $a$, $p$, and $n$ denote the anchor, positive, and negative members of a triplet, the superscript denotes the modality, and the anchor and positive share the same class while the negative comes from a different class. The inter-modality loss is

$\mathcal{L}_{inter} = \sum \big[\, d(v^{rgb}_a, v^{flow}_p) - d(v^{rgb}_a, v^{flow}_n) + m_1 \,\big]_{+} + \sum \big[\, d(v^{flow}_a, v^{rgb}_p) - d(v^{flow}_a, v^{rgb}_n) + m_1 \,\big]_{+}, \qquad (4)$

where $d(\cdot, \cdot)$ is the Euclidean distance, $[\cdot]_{+} = \max(\cdot, 0)$, and $m_1$ is a margin that is enforced between positive and negative pairs.
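A hedged PyTorch sketch of Eq. (4) is given below. It assumes Euclidean distances and the batch-hard mining strategy of [11] that the text alludes to; the function name, the hardest-positive choice, and the default margin (taken from the best setting in Table IV) are assumptions of this sketch.

```python
import torch

def inter_modality_triplet_loss(f_rgb, f_flow, labels, margin=0.3):
    """Sketch of the cross-modality ranking constraint of Eq. (4).

    For each anchor in one modality, the positive is a same-class sample in
    the other modality and the negative is the hardest (closest) other-class
    sample there, following the batch-hard strategy of [11].
    """
    dist = torch.cdist(f_rgb, f_flow, p=2)                    # (N, N) cross-modal distances
    same_class = labels.unsqueeze(0).eq(labels.unsqueeze(1))  # (N, N) label agreement

    # RGB anchors against flow positives/negatives (batch-hard mining).
    pos = dist.masked_fill(~same_class, 0).max(dim=1).values            # hardest positive
    neg = dist.masked_fill(same_class, float("inf")).min(dim=1).values  # hardest negative
    loss_rgb = torch.clamp(pos - neg + margin, min=0).mean()

    # Flow anchors against RGB positives/negatives (roles transposed).
    pos_t = dist.masked_fill(~same_class, 0).max(dim=0).values
    neg_t = dist.masked_fill(same_class, float("inf")).min(dim=0).values
    loss_flow = torch.clamp(pos_t - neg_t + margin, min=0).mean()

    return 0.5 * (loss_rgb + loss_flow)
```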

Since the above ranking loss constrains feature learning only through relationships across the heterogeneous modalities, it is hard to learn a robust representation that reduces intra-class variations by exploiting these relationship cues alone. Inspired by linear discriminant analysis [1], we introduce a discriminative embedding constraint to enhance the robustness of the learned representation and address intra-modality variations. The discriminative embedding loss is

$\mathcal{L}_{intra} = \sum_{i} \big[\, \| v_i - c_{y_i} \|_2 - m_2 \,\big]_{+} + \sum_{j \ne k} \big[\, m_3 - \| c_j - c_k \|_2 \,\big]_{+}, \qquad (5)$

where $c_j$ is the mean feature of class $j$, with $j, k \in \{1, \dots, C\}$ and $C$ the number of classes, $m_2$ is a margin that forces features of the same class to be compact, and $m_3$ is a margin enforced between different classes.
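Below is a sketch of one plausible implementation of Eq. (5), assuming class means computed per mini-batch and at least two classes per batch (which the balanced sampler in Section 4.1 provides); the function name and the default margins (taken from the best setting in Table IV) are illustrative.

```python
import torch

def discriminative_embedding_loss(feats, labels, m_intra=0.3, m_inter=0.8):
    """Sketch of the intra-modality discriminative embedding constraint, Eq. (5).

    Samples are pulled to within m_intra of their class mean, and distinct
    class means are pushed at least m_inter apart.
    """
    classes = labels.unique()
    centers = torch.stack([feats[labels == c].mean(dim=0) for c in classes])  # (C, D)

    # Compactness term: distance of every sample to its own class center.
    own_center = centers[torch.searchsorted(classes, labels)]
    intra = torch.clamp((feats - own_center).norm(dim=1) - m_intra, min=0).mean()

    # Separability term: pairwise distances between distinct class centers.
    center_dist = torch.cdist(centers, centers, p=2)
    off_diag = ~torch.eye(len(classes), dtype=torch.bool, device=feats.device)
    inter = torch.clamp(m_inter - center_dist[off_diag], min=0).mean()

    return intra + inter
```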

For feasibility and effectiveness in classification, the standard cross-entropy loss $\mathcal{L}_{id}$ is also used, treating each action as a class. In this manner, identity-specific information is integrated to enhance robustness.

Based on the above, the overall loss of the proposed network is formulated as the combination of the identity (cross-entropy) loss, the inter-modality triplet constraint, and the intra-modality discriminative embedding constraint:

$\mathcal{L} = \mathcal{L}_{id} + \lambda_{1} \mathcal{L}_{inter} + \lambda_{2} \mathcal{L}_{intra}, \qquad (6)$

where $\lambda_{1}$ and $\lambda_{2}$ control the contributions of the two constraint terms. Algorithm 1 illustrates the steps of the proposed cooperative cross-stream network. From the backward pass, we can see that the connection block acts as a bridge across the streams, allowing appearance and motion information to flow to each other.

  Input: $N$ videos $\{V_n\}$ with class labels $\{y_n\}$, where $y_n$ is the label of video $V_n$, and maximum iteration number $T$.
  Output: The predicted action label $\hat{y}$ for each video.
  Initialization: spatial-stream parameters $W^{rgb}$, temporal-stream parameters $W^{flow}$, iteration $t = 0$.
  repeat
    1. Forward pass:
      1.1 compute the appearance features and motion features within the connection block;
      1.2 predict the video label after the shared block;
    2. Backward pass: use $\partial \mathcal{L} / \partial W^{rgb}$ and $\partial \mathcal{L} / \partial W^{flow}$ as parameter gradients, where $W^{rgb}$ are the parameters of the spatial-stream model and $W^{flow}$ are the parameters of the temporal-stream model;
    3. $t \leftarrow t + 1$;
  until $t \ge T$ or convergence
Algorithm 1: Optimization steps of CCS.
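As an illustration of how Algorithm 1 and Eq. (6) fit together, the following sketch shows one training step. Here `model` and `classifier` are hypothetical placeholders for the full CCS network (both streams, the connection block, and snippet aggregation) and the shared classification head, the loss helpers are the sketches given after Eqs. (4) and (5), and the default weights of 1.0 stand in for the unspecified trade-off parameters.

```python
import torch
import torch.nn.functional as F

def training_step(model, classifier, optimizer, batch, lambda1=1.0, lambda2=1.0):
    """One optimization step following Algorithm 1 and Eq. (6) (sketch)."""
    frames, flows, labels = batch
    # 1. Forward pass through both streams and the connection block.
    f_rgb, f_flow = model(frames, flows)
    loss_id = (F.cross_entropy(classifier(f_rgb), labels)
               + F.cross_entropy(classifier(f_flow), labels))
    loss_inter = inter_modality_triplet_loss(f_rgb, f_flow, labels)
    loss_intra = (discriminative_embedding_loss(f_rgb, labels)
                  + discriminative_embedding_loss(f_flow, labels))
    loss = loss_id + lambda1 * loss_inter + lambda2 * loss_intra  # Eq. (6)

    # 2. Backward pass: one computation graph spans both streams, so gradients
    #    from either modality's loss reach the other stream via the connection block.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```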

4 Experiments

In this section, we evaluate our method for action recognition. First, we introduce the benchmark datasets and the implementation details of the proposed method. We then compare our method with state-of-the-art methods on standard action datasets. Next, we explore the effectiveness of the different components of the proposed model. Finally, we investigate the effect of ConvNet architectures and hyperparameters and visualize the regions of snippet videos that our model attends to.

4.1 Experimental Setup

Datasets. We conduct experiments on three challenging action datasets, namely UCF-101 [27], HMDB-51 [15], and Something-Something-V2 [16], to evaluate the overall performance. UCF-101, one of the most popular action recognition datasets, consists of 101 action classes with 13,320 short video clips; videos in this dataset have a fixed spatial resolution. The HMDB-51 dataset has 6,766 video clips in 51 categories. Something-Something-V2, an interesting temporal relationship reasoning dataset, contains 220,847 video clips with 174 action classes. We follow the standard evaluation protocol and adopt the official training/testing splits, and we report accuracy on the split-1 test sets of UCF-101 and HMDB-51.

Implementation details.

We use the PyTorch framework to build the networks, and all networks are trained on two GeForce GTX Titan X GPUs with 24 GB of memory in total. We compute optical flow with the TV-L1 algorithm [19]. All input images are resized to 224 x 224, following the data processing strategy of [38]. We train with mini-batch stochastic gradient descent; the initial learning rate is 0.001 and is reduced by a factor of 10 after 50 epochs, with momentum 0.9 and weight decay used to update the network parameters. The maximum number of epochs is 400. The trade-off parameters $\lambda_1$ and $\lambda_2$ are both set to the same value. We set the cross-modality margin $m_1 = 0.3$ and the intra-class and inter-class margins of the same modality to $m_2 = 0.3$ and $m_3 = 0.8$, the best setting in Table IV.
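A minimal sketch of this optimization schedule with standard PyTorch components follows; `model`, `classifier`, and `train_loader` stand for the full CCS network, its shared head, and the balanced data loader described next, and the weight-decay value is an assumed placeholder since it is not given above.

```python
import torch

# SGD with momentum 0.9, initial learning rate 0.001 decayed by 10x after
# 50 epochs, trained for at most 400 epochs.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9,
                            weight_decay=5e-4)  # weight decay: assumed value
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

for epoch in range(400):
    for batch in train_loader:
        training_step(model, classifier, optimizer, batch)  # see Section 3.3 sketch
    scheduler.step()
```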

We adopt a balanced mini-batch sampling strategy for the inter-modality triplet constraint and the discriminative embedding constraint. Specifically, we randomly select $P$ action categories and then randomly select $K$ instances of each selected category from the two different modalities to construct the mini-batch, so that in total $P \times K$ instances are fed into the network for training; a minimal sampling sketch is given below.
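The sketch below assumes hypothetical defaults of P = 8 and K = 4, since the paper's values are not stated in the extracted text; both modalities of each chosen video would then be loaded for the batch.

```python
import random
from collections import defaultdict

def balanced_batch_indices(labels, num_classes_per_batch=8, num_instances=4):
    """Sketch of the balanced mini-batch sampling described above.

    Randomly pick P action categories, then K instances of each, so every
    mini-batch contains P*K videos. P=8 and K=4 are hypothetical defaults.
    """
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    chosen = random.sample(list(by_class), num_classes_per_batch)
    batch = []
    for c in chosen:
        pool = by_class[c]
        # Sample with replacement when a class has fewer than K instances.
        picks = (random.sample(pool, num_instances) if len(pool) >= num_instances
                 else random.choices(pool, k=num_instances))
        batch.extend(picks)
    return batch
```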

4.2 Comparison with Existing Methods

We compare with state-of-the-art action recognition methods and report the results in Table I for UCF-101, HMDB-51 (split 1), and Something-Something-V2. For a fair comparison, we list important factors such as the pre-training dataset, and we use RGB images and optical flow fields as input modalities. We use CCS as described above and predict the action in a single forward pass with full network testing. Here, we extract three segments per video and randomly sample a snippet of 10 frames from each segment as input for training; during testing, 25 frames are sampled per video. The comparison against single models without ensemble techniques, such as the LSTM-on-ConvNet approach of [5] and the spatiotemporal C3D network [31], is instructive: our gains over the best one-stream approaches underline the importance of the two-stream framework. Compared to the original two-stream method [24], we obtain clear improvements on both UCF-101 and HMDB-51 (Table I). Even though the original two-stream approach already has an advantage over one-stream methods, the benefit of our cooperative cross-stream network with interaction between heterogeneous features is still greater. Together with TSN or I3D, our cooperative two-stream architecture widens the advantage over previous models considerably, bringing overall performance to 97.4% on UCF-101 and 81.9% on HMDB-51 with Kinetics pre-training. We observe that the combination of RGB images and optical flow images boosts recognition performance and that cooperatively training the two kinds of input yields a further improvement, indicating that RGB images and optical flow images encode complementary information.

These relatively large performance increments again show that our approach better captures the available dynamic information. Overall, our result on HMDB-51 sets a new state of the art on this widely used action recognition dataset. This corroborates that information from different modalities, enhanced by the modality connection block and cross-modality training, is crucial for a better understanding of actions in videos. Moreover, Table I also reflects the power of pre-trained models for action recognition.

Something-Something-V2 is a dataset for human-object interaction recognition, in which activities are characterized more by temporal relations and transformations of objects than by the appearance and motion of the objects themselves [45]. Table II reports the accuracies on Something-Something-V2. Compared with the baseline methods [9], our method improves top-1 validation accuracy to 61.2%, and the combination of two-stream TRN [45] with our CCS achieves the best results. This performance demonstrates the importance of both temporal reasoning pooling and the correlation of appearance and motion features on the Something-Something dataset.

Methods Pre-train dataset UCF-101 HMDB-51
RGB Flow RGB+Flow RGB Flow RGB+Flow
ConvNets+LSTM [5] ImageNet 68.2 - - - - -
Two-stream Network [24] ImageNet 73.0 83.7 88.0 40.5 54.6 59.4
ConvNet fusion [7] ImageNet 82.6 86.2 90.6 47.0 55.2 58.2
ST-resNet [6] ImageNet 82.3 79.1 93.4 43.2 55.5 66.4
DTPP [47] ImageNet 89.7 89.1 94.9 61.5 66.3 75.0
TLE+Two-stream [4] ImageNet - - 95.6 - - 71.1
ActionVLAD [8] ImageNet   - - 92.7 49.8 59.1 66.9
C3D [30] sports-1M 82.3 - - 51.6 - -
C3D [31] sports-1M 85.8 - - 54.9 - -
R(2+1)D [32] sports-1M 93.6 93.3 95.0 66.6 70.1 72.7
TSN [38] ImageNet 85.7 87.9 93.5 - - 68.5
I3D [2] ImageNet 84.5 90.6 93.4 49.8 61.9 66.4
R(2+1)D [32] ImageNet+Kinetics 96.8 95.5 97.3 74.5 76.4 78.7
TSN [38] ImageNet+Kinetics 91.1 95.2 97.0 - - -
CCS + TSN ImageNet 87.2 87.4 95.3 60.5 62.1 77.2
CCS + TSN ImageNet+Kinetics 94.2 95.0 97.4 69.4 71.2 81.9
CCS + I3D ImageNet 86.7 87.1 93.8 60.1 62.3 68.2
TABLE I: Comparison with state-of-the-art methods on the UCF-101 and HMDB-51 datasets (split 1). We report the accuracy of the RGB modality, the optical flow modality, and the combination of both modalities.
Methods val Test
top-1 top-5 top-1 top-5
Baseline 51.3 80.6 - -
MultiScale TRN 48.8 77.6 50.9 79.3
two-stream + TRN 55.5 83.1 56.2 83.2
CCS + two-stream + TRN 61.2 89.3 60.5 87.9
TABLE II: Results on something-something-V2.

4.3 Further Analysis

Importance of each component of the proposed model. With all the design choices set, we now apply the cooperative cross-stream network (CCS) to action recognition with different variants; the results are given in Table III. A component-wise analysis in terms of recognition accuracy is also presented.

Base Model Methods UCF-101 HMDB51
RGB Flow RGB+Flow RGB Flow RGB+Flow
TSN Baseline 85.7 87.9 93.5 - - 68.5
CB 86.3 87.2 93.9 60.5 62.1 76.3
CS 84.9 85.1 91.7 54.4 61.6 67.3
All 87.2 87.4 95.3 61.7 65.1 77.2
I3D Baseline 84.5 90.6 93.4 49.8 61.9 66.4
CB 86.1 86.9 92.7 53.0 56.2 67.6
CS 82.4 83.1 91.8 50.9 52.3 64.7
All 86.7 87.1 93.8 60.1 62.3 68.2
TABLE III: Ablation studies: results with different components on the UCF-101 and HMDB-51 datasets. “Methods” denotes the components used in the model. “CB”: with only the connection block. “CS”: without the connection module, using only cross-stream training in the shared block. “All”: with both the connection block and cross-stream training in the shared block.

We combine CCS with TSN [38] and I3D [2] to verify the importance of modality information complementarity. Instead of training the spatial and temporal streams separately, CCS trains the two streams jointly to improve the interaction of deep spatiotemporal features, so that the model captures not only the co-occurring but also the modality-specific patterns in the features. We keep all training conditions the same and vary the connection block and the loss functions used by the two models.

We investigate the effectiveness of each component of the proposed model by conducting a series of ablation studies. We treat TSN [38] and I3D [2] as backbone frameworks in this section. We first study the effectiveness of our modality feature connection module by comparing it against simple feature concatenation or averaging. Training the TSN and I3D frameworks with the connection module (denoted TSN+CB and I3D+CB), the accuracy of the RGB and optical flow ensemble increases on both UCF-101 and HMDB-51 (Table III), which demonstrates that modality information interaction through the connection block helps the deep modality features complement each other and enhances performance. To validate the effectiveness of the shared feature projection layer following the connection module, we remove the shared layer and employ only the cross-entropy loss; that is, we directly take the outputs from TSN or I3D and feed them into the two-layer feed-forward networks mentioned above to obtain the similarity confidence (denoted TSN+CS and I3D+CS). The performance even becomes worse than that of plain TSN and I3D. However, combined with the connection block, our original CCS network achieves the best results.

We observe that the reported baselines generally underperform the proposed model. Both TSN [38] and I3D [2] produce reasonable performance, but combining them with our original design still yields improvements. We speculate that this is because the connection block thoroughly explores the correlation information of the heterogeneous modalities, and therefore the network can store more complementary information for cross-modal feature learning.

We also consider the additional parameters introduced by the connection and shared blocks. They consist of two 1x1 convolutions and a shared fully connected operation. The extra computation is small relative to the whole network and is worth the cost, given its contribution to model performance.

Effect of the sequence feature aggregation function. The two most commonly used aggregation methods are the element-wise maximum and the element-wise average of the sequence. Here, we also evaluate (1) element-wise multiplication of the sequence and (2) concatenation of the sequence. The comparison of the four aggregation methods is shown in Figure 5 (a). The element-wise average of the sequence achieves the best result on the HMDB-51 dataset, verifying its effectiveness for improving the final accuracy.
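For concreteness, the four aggregation variants can be sketched as follows; `aggregate` is an illustrative helper, with `seq` holding the per-snippet features of one stream.

```python
import torch

def aggregate(seq, mode="avg"):
    """Sketch of the sequence aggregation variants compared in Fig. 5 (a).

    `seq` holds the per-snippet features of one stream, shape (K, D).
    Element-wise average performed best on HMDB-51 in the comparison above.
    """
    if mode == "avg":
        return seq.mean(dim=0)        # element-wise average
    if mode == "max":
        return seq.max(dim=0).values  # element-wise maximum
    if mode == "mul":
        return seq.prod(dim=0)        # element-wise multiplication
    if mode == "concat":
        return seq.flatten()          # concatenation of the K snippet features
    raise ValueError(f"unknown aggregation mode: {mode}")
```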

Effect of model parameters. We study the hyperparameters $m_1$, $m_2$, and $m_3$ of the ranking loss. The parameter $m_1$ is the margin between the anchor/positive and negative samples; $m_2$ is the margin between a sample and its class center, and $m_3$ is the margin between different centers. A small value enforces less separation between the anchor/positive and the negative but leads to faster convergence; a large value may lead to a better-performing network but slows convergence during training. We conduct an experiment on UCF-101 to illustrate the effect of these parameters, with results shown in Table IV. It can be seen that the best accuracy is achieved when $m_1$ and $m_2$ are set to 0.3 and $m_3$ is set to 0.8. This suggests that the hyperparameters should be chosen carefully, and that relatively small values are advisable for reasonable results.

$m_1$ $m_2$ $m_3$ Accuracy (%)
0.2 0.3 0.8 96.3
0.3 0.3 0.8 97.4
0.3 0.5 1.0 97.1
0.5 0.5 1.0 95.4
0.8 0.5 1.2 95.5
TABLE IV: The performance of CCS with different model parameters values on UCF-101 dataset.
Fig. 5: The performance of (a) different feature aggregation functions and (b) different backbone architectures.

Effect of ConvNet structure. Furthermore, to investigate the effect of different ConvNet structures, we also explore conventional CNN models, namely VGG [25], ResNet [10], and BN-Inception [12], all pre-trained on ImageNet, as backbones of the two-stream ConvNets. All these ConvNets are trained with TSN and our CCS network on UCF-101. The results are shown in Figure 5 (b); among these structures, BN-Inception achieves the best accuracy.

4.4 Visualization

To verify how our model helps action classification, we seek further insight into what it has learned. As shown in [46], ConvNets are adept at capturing basic visual concepts, but it is difficult to identify the importance of different units for classifying different categories. Here, we use CAM (Class Activation Mapping) [46] to visualize the most discriminative parts found by the proposed model; the output after a number of iterations can thus be viewed as a class-specific visualization based on the class knowledge inside the ConvNet. To understand the primitives our model uses to represent actions and to visualize class information in CCS models, we randomly select three classes from the UCF-101 dataset, “Apply Eye Makeup”, “Archery”, and “Blow Dry Hair”, as visualization examples. For ease of visualization, we only consider the spatial stream in this example. The results are shown in Figure 6. The highlighted regions, which correspond to the receptive field, give some insight into what the model attends to; for example, the proposed model pays more attention to regions such as the eyes and hands in the “ApplyEyeMakeUp” video.

Fig. 6: Visualization of CAM [46] generated by our CCS model when the appearance and motion streams are jointly trained. The maps highlight the discriminative regions for action classification.
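For reference, the CAM computation [46] behind these maps can be sketched as follows, assuming access to the last convolutional feature maps of the spatial stream and the weights of its final linear classifier; the function name and the bilinear upsampling to the input resolution are implementation choices of this sketch.

```python
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weights, class_idx, out_size=(224, 224)):
    """Sketch of the CAM computation [46] behind Figure 6.

    feature_maps: (C, H, W) activations from the last conv layer of the spatial
    stream; fc_weights: (num_classes, C) weights of the final linear classifier.
    The map is the class-specific weighted sum of channels, upsampled to the
    input resolution and normalized to [0, 1] for display.
    """
    weights = fc_weights[class_idx]                         # (C,)
    cam = torch.einsum("c,chw->hw", weights, feature_maps)  # weighted channel sum
    cam = F.relu(cam)                                       # keep positive evidence only
    cam = F.interpolate(cam[None, None], size=out_size,
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```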

5 Conclusion

In this paper, we proposed a novel CCS network for video action recognition. It cooperatively exploits the information in RGB appearance features and optical flow motion features by inserting a connection block and jointly optimizing a ranking loss and a cross-entropy loss. The CCS network enhances the discriminative power of the deeply learned heterogeneous features, explores their complementary information, and weakens the modality discrepancy. Furthermore, it can be applied to both homogeneous and heterogeneous modality-based action recognition tasks. The ranking loss consists of an inter-modality triplet constraint and a discriminative embedding constraint, and it reduces both intra-modality and cross-modality feature variations. Experimental results on three datasets demonstrate and justify the effectiveness of the proposed method.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under grants No. 61502081, 61602089, 61632007 and the Sichuan Science and Technology Program 2018GZDZX0032, 2019ZDZX0008 and 2019YFG0003.

References

  • [1] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis & Machine Intelligence, 19(7):711–720, 2002.
  • [2] J. Carreira and A. Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299–6308, 2017.
  • [3] M. Carvalho, R. Cadène, D. Picard, L. Soulier, N. Thome, and M. Cord. Cross-modal retrieval in the cooking context: Learning semantic text-image embeddings. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 35–44. ACM, 2018.
  • [4] A. Diba, V. Sharma, and L. Van Gool. Deep temporal linear encoding networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, 2017.
  • [5] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2625–2634, 2015.
  • [6] C. Feichtenhofer, A. Pinz, and R. Wildes. Spatiotemporal residual networks for video action recognition. In Advances in neural information processing systems, pages 3468–3476, 2016.
  • [7] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1933–1941, 2016.
  • [8] R. Girdhar, D. Ramanan, A. Gupta, J. Sivic, and B. Russell. Actionvlad: Learning spatio-temporal aggregation for action classification. In IEEE Conference on Computer Vision & Pattern Recognition, 2017.
  • [9] R. Goyal, S. E. Kahou, V. Michalski, J. Materzynska, S. Westphal, H. Kim, V. Haenel, I. Fruend, P. Yianilos, M. Mueller-Freitag, et al. The “something something” video database for learning and evaluating visual common sense. In ICCV, volume 2, page 8, 2017.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [11] A. Hermans, L. Beyer, and B. Leibe. In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737, 2017.
  • [12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • [13] S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221–231, 2013.
  • [14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
  • [15] H. Kuehne, H. Jhuang, R. Stiefelhagen, and T. Serre. Hmdb51: A large video database for human motion recognition. In High Performance Computing in Science and Engineering ‘12, pages 571–582. Springer, 2013.
  • [16] F. Mahdisoltani, G. Berger, W. Gharbieh, D. Fleet, and R. Memisevic. Fine-grained video classification and captioning. arXiv preprint arXiv:1804.09235, 2018.
  • [17] Z. Pan, P. Jin, J. Lei, Y. Zhang, X. Sun, and S. Kwong. Fast reference frame selection based on content similarity for low complexity hevc encoder. Journal of Visual Communication and Image Representation, 40:516–524, 2016.
  • [18] X. Peng, L. Wang, X. Wang, and Y. Qiao. Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice. Computer Vision and Image Understanding, 150:109–125, 2016.
  • [19] J. S. Pérez, E. Meinhardt-Llopis, and G. Facciolo. Tv-l1 optical flow estimation. Image Processing On Line, 2013:137–150, 2013.
  • [20] Z. Qiu, T. Yao, and T. Mei. Learning spatio-temporal representation with pseudo-3d residual networks. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 5534–5542. IEEE, 2017.
  • [21] A. Santoro, D. Raposo, D. G. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap. A simple neural network module for relational reasoning. In Advances in neural information processing systems, pages 4967–4976, 2017.
  • [22] F. Shen, X. Gao, L. Liu, Y. Yang, and H. T. Shen. Deep asymmetric pairwise hashing. In Proceedings of the 25th ACM international conference on Multimedia, pages 1522–1530. ACM, 2017.
  • [23] F. Shen, C. Shen, W. Liu, and H. Tao Shen. Supervised discrete hashing. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 37–45, 2015.
  • [24] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in neural information processing systems, pages 568–576, 2014.
  • [25] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [26] X. Song, C. Lan, W. Zeng, J. Xing, X. Sun, and J. Yang. Temporal-spatial mapping for action recognition. IEEE Transactions on Circuits and Systems for Video Technology, 2019.
  • [27] K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
  • [28] L. Sun, K. Jia, D.-Y. Yeung, and B. E. Shi. Human action recognition using factorized spatio-temporal convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 4597–4605, 2015.
  • [29] S. Sun, Z. Kuang, L. Sheng, W. Ouyang, and W. Zhang. Optical flow guided feature: a fast and robust motion representation for video action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1390–1399, 2018.
  • [30] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 4489–4497, 2015.
  • [31] D. Tran, J. Ray, Z. Shou, S.-F. Chang, and M. Paluri. Convnet architecture search for spatiotemporal feature learning. arXiv preprint arXiv:1708.05038, 2017.
  • [32] D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri. A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6450–6459, 2018.
  • [33] G. Varol, I. Laptev, and C. Schmid. Long-term temporal convolutions for action recognition. IEEE transactions on pattern analysis and machine intelligence, 40(6):1510–1517, 2018.
  • [34] H. Wang and C. Schmid. Action recognition with improved trajectories. In Proceedings of the IEEE international conference on computer vision, pages 3551–3558, 2013.
  • [35] J. Wang and A. Cherian. Learning discriminative video representations using adversarial perturbations. In Proceedings of the European Conference on Computer Vision (ECCV), pages 685–701, 2018.
  • [36] J. Wang, A. Cherian, F. Porikli, and S. Gould. Video representation learning using discriminative pooling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1149–1158, 2018.
  • [37] L. Wang, Y. Qiao, and X. Tang. Action recognition with trajectory-pooled deep-convolutional descriptors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4305–4314, 2015.
  • [38] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision, pages 20–36. Springer, 2016.
  • [39] X. Wang, L. Gao, P. Wang, X. Sun, and X. Liu. Two-stream 3-d convnet fusion for action recognition in videos with arbitrary size and length. IEEE Transactions on Multimedia, 20(3):634–644, 2018.
  • [40] X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7794–7803, 2018.
  • [41] Z. Wu, X. Wang, Y.-G. Jiang, H. Ye, and X. Xue. Modeling spatial-temporal clues in a hybrid deep learning framework for video classification. In Proceedings of the 23rd ACM international conference on Multimedia, pages 461–470. ACM, 2015.
  • [42] D. Xu, W. Ouyang, E. Ricci, X. Wang, and N. Sebe. Learning cross-modal deep representations for robust pedestrian detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5363–5371, 2017.
  • [43] L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In Proceedings of the IEEE international conference on computer vision, pages 4507–4515, 2015.
  • [44] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4694–4702, 2015.
  • [45] B. Zhou, A. Andonian, A. Oliva, and A. Torralba. Temporal relational reasoning in videos. In Proceedings of the European Conference on Computer Vision (ECCV), pages 803–818, 2018.
  • [46] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2921–2929, 2016.
  • [47] J. Zhu, Z. Zhu, and W. Zou. End-to-end video-level representation learning for action recognition. In 2018 24th International Conference on Pattern Recognition (ICPR), pages 645–650. IEEE, 2018.
  • [48] W. Zhu, J. Hu, G. Sun, X. Cao, and Y. Qiao. A key volume mining deep framework for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1991–1999, 2016.