
Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition

Motion representation plays a vital role in human action recognition in videos. In this study, we introduce a novel compact motion representation for video action recognition, named Optical Flow guided Feature (OFF), which enables the network to distill temporal information through a fast and robust approach. The OFF is derived from the definition of optical flow and is orthogonal to the optical flow. By directly calculating pixel-wise spatio-temporal gradients of the deep feature maps, the OFF can be embedded in any existing CNN-based video action recognition framework with only a slight additional cost. It enables the CNN to extract spatial and temporal information simultaneously, especially the temporal information between frames. This simple but powerful idea is validated by experimental results. The network with OFF fed only by RGB inputs achieves a competitive accuracy of 93.3%, which is comparable with the result obtained by two streams (RGB and optical flow) but is 15 times faster in speed. Experimental results also show that OFF is complementary to other motion modalities such as optical flow. When the proposed method is plugged into the state-of-the-art video action recognition framework, it achieves 96.0% on UCF-101 and 74.2% on HMDB-51.



1 Introduction

Video action recognition has received long-standing attention in the computer vision community for decades. It aims at automatically recognizing human actions from video sequences. Since CNNs have achieved great success in image classification and other related tasks [20, 30, 34, 15, 49, 51, 25], many CNN-based methods have been proposed that treat video action recognition as a classification task [5, 43, 23, 50, 11, 10, 9, 41, 42, 33, 29]. Compared to image classification, temporal information is the key additional ingredient of video action recognition.

Figure 1: The Optical Flow guided Feature (OFF). Left column: input frames. Middle two columns: standard deep features before applying OFF onto two frames. Right column: temporal difference in OFF. The colors red and cyan are used respectively for positive and negative values. The feature difference between two frames is valid and comprehensive in representing motion information. Best seen in color and zoomed in.

Optical flow is found to be a useful motion representation in video action recognition, including in Two-Stream-based [29, 43] and 3D convolution-based methods [5]. However, extracting dense optical flow is still inefficient: it costs over 90% of the whole run-time of a two-stream pipeline at both the training and testing phases. Moreover, while 3D convolutions on RGB input can also capture temporal information, an RGB-based 3D CNN still does not perform on par with its two-stream counterpart. Other motion descriptors, e.g., 3DHOG [19], improved Dense Trajectory [40], and motion vectors [50], are either inefficient or not as effective as optical flow.

How can we design a motion representation that is both fast and robust? To this end, the required computation should be economical, and the representation should be sufficiently guided by the motion information. Taking these requirements into consideration, we propose the Optical Flow guided Feature (OFF), which is fast to compute and comprehensively represents the motion dynamics in a video clip.

In this paper, we define a new feature representation in the orthogonal space of optical flow at the feature level [16]. This definition brings the guidance of optical flow into the representation; therefore, we name it the Optical Flow guided Feature (OFF). The feature consists of the spatial gradients of feature maps in the horizontal and vertical directions, and the temporal gradient obtained from the difference between feature maps of different frames. Since all the operations in OFF are differentiable, the whole process is end-to-end trainable when OFF is plugged into a CNN architecture. The OFF unit only consists of pixel-wise operators on CNN features. These operators are fast to apply, and they enable a network with RGB input to capture spatial and temporal information simultaneously.

One vital component of OFF is the difference between features from different images/segments. As shown in Fig. 1, the difference between the features of two images provides representative motion information that can be conveniently employed by CNNs. The negative values in the difference image depict where the body parts/objects disappear, while the positive values represent where they emerge. This pattern of disappearing at one location and emerging at another can easily be treated as a specific motion pattern and captured by later CNN layers. The temporal difference can be further combined with the spatial gradients so that the constituted OFF is guided by the optical flow at the feature level, according to our derivation in a later section. Moreover, calculating the motion dynamics at the feature level is faster and also more robust because 1) it enables the spatial and temporal networks to share weights, and 2) deeply learned features convey more semantic and discriminative representations that reliably suppress local and background noise in the raw frames.
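To make the sign pattern concrete, the following toy NumPy sketch (our own illustration, not code from the paper) builds a single-channel "feature map" containing one blob and the same blob shifted to the right, standing in for two consecutive frames, and prints the temporal difference at the old and new positions: negative where the blob disappears, positive where it emerges.

```python
import numpy as np

# Toy illustration (not from the paper): one Gaussian blob standing in for the
# feature response of an object, and the same blob shifted to the right,
# standing in for the next frame's feature.
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]

def blob(cx, cy, sigma=4.0):
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

feat_t0 = blob(cx=24, cy=32)        # object at its original position
feat_t1 = blob(cx=30, cy=32)        # object after moving 6 pixels to the right

diff = feat_t1 - feat_t0            # temporal difference used inside OFF

# Negative at the old centre (the object has left), positive at the new centre
# (the object has arrived): together they encode a motion pattern.
print("old centre:", round(float(diff[32, 24]), 3))   # < 0
print("new centre:", round(float(diff[32, 30]), 3))   # > 0
```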

Our work has two main contributions.

First, OFF is a fast and robust motion representation. OFF is fast to compute, enabling over 200 frames per second with only RGB as the input, and it is derived from and guided by the optical flow. Taking only RGB from videos, experimental results show that a CNN with OFF is close in performance to state-of-the-art optical flow based algorithms. The CNN with OFF achieves 93.3% on UCF-101 with only RGB as the input, which is currently state-of-the-art among RGB-based action recognition methods. When plugging OFF into the state-of-the-art action recognition method [43] in a Two-Stream manner (RGB + Optical Flow), our algorithm reaches 96.0% on UCF-101 and 74.2% on HMDB-51.

Second, an OFF equipped network can be trained in an end-to-end fashion. In this way, the spatial and motion representations are jointly learned through a single network. This property is attractive for video tasks on large-scale datasets, as the network does not need to pre-compute and store motion modalities for training. Besides, the OFF can be used between images/segments in a video clip at both the image level and the feature level.

The rest of this paper is organized as follows. Section 2 introduces recent methods that are related to our work. Section 3 gives the definition of OFF and details our proposed method. Section 4 explains our implementation in CNNs. Our experimental results are summarized in Section 5, with concluding remarks in Section 6.

2 Related Work

Traditional methods extracted hand-crafted local visual features such as 3DHOG [19], Motion Boundary Histograms (MBH) [8], and improved Dense Trajectories (iDT) [40, 39], and then encoded them into sparse or compact feature vectors which were fed into classifiers [27, 26]. Deeply learned features were later found to perform better than hand-crafted features for action recognition [29, 41].

As a significant breakthrough in action recognition, Two-Stream based frameworks used deep CNNs to learn from hand-crafted motion features like optical flow and iDT [29, 41, 50, 43, 9, 47, 5, 35, 11, 12]. These attempts have achieved remarkable progress in improving recognition accuracy, but they still rely on pre-computed optical flow or iDT, which constrains the speed of the whole framework.

To obtain the motion modality in a fast way, recent works used optical flow only at the training stage [23] or proposed motion vectors as a simplified version of optical flow [50]. These attempts produced degraded versions of optical flow and still did not perform on par with approaches that use traditional optical flow as the input stream.

Many approaches learn to capture motion information directly from input frames using 3D CNNs [35, 37, 5, 36, 9, 38]. Boosted by temporal convolution and pooling operations, 3D CNNs can distill the temporal information between consecutive frames without segmenting them into short snippets. Compared with learning filters to capture motion information, our OFF is a principled representation mathematically derived from optical flow. A 3D CNN, constrained by network design, training samples, and parameter regularization like weight decay, may not be able to learn a motion representation as good as OFF. Therefore, current state-of-the-art 3D CNN based algorithms still rely on traditional optical flow to help the networks capture motion patterns. In comparison, our OFF 1) captures motion patterns well, so that an RGB stream with OFF performs on par with two-stream methods, and 2) is also complementary to other motion representations like optical flow.

To capture long-term temporal information from videos, one intuitive approach is to introduce a Long Short-Term Memory (LSTM) module as an encoder of the relationship between the deep features of a frame sequence [47, 32, 28]. LSTMs can still be applied on top of the OFF; therefore, our OFF is complementary to these methods.

Concurrent with our work, another state-of-the-art method applies a strategy called rank pooling [13] that generates a fast video-level descriptor, namely the dynamic image [3]. However, the design and implementation of dynamic images and of our method are fundamentally different: dynamic images are designed to summarize a series of frames, while our method is designed to capture the motion information related to optical flow.

3 Optical Flow Guided Feature

Figure 2: Network architecture overview. The feature generation sub-network extracts features for each frame sampled from the video. Based on the features of two adjacent frames extracted by the feature generation sub-networks, an OFF sub-network is applied to generate the OFF for further classification. The scores from all sub-networks are fused to obtain the final result.

Our proposed OFF is inspired by the famous brightness constancy constraint used in traditional optical flow [16]. It is formulated as follows:

$I(x, y, t) = I(x + \Delta x, y + \Delta y, t + \Delta t),$   (1)

where $I(x, y, t)$ denotes the pixel value at location $(x, y)$ of the frame at time $t$. For the frames at times $t$ and $t + \Delta t$, $\Delta x$ and $\Delta y$ are the spatial pixel displacements along the $x$ and $y$ axes respectively. The constraint assumes that, for any point that moves from $(x, y)$ at frame $t$ to $(x + \Delta x, y + \Delta y)$ at frame $t + \Delta t$, its brightness remains unchanged over time. When we apply this constraint at the feature level, we have

$f(I; \mathbf{w})(x, y, t) = f(I; \mathbf{w})(x + \Delta x, y + \Delta y, t + \Delta t),$   (2)

where $f$ is a mapping function for extracting features from the image $I$, and $\mathbf{w}$ denotes the parameters of the mapping function. The mapping function $f$ can be any differentiable function; in this paper, we employ trainable CNNs consisting of stacks of convolution, ReLU, and pooling operations. According to the definition of optical flow, we assume that the displacements $\Delta x$, $\Delta y$, and $\Delta t$ are small, so that a first-order expansion of Equation 2 yields:

$\dfrac{\partial f(I; \mathbf{w})}{\partial x}\,\Delta x + \dfrac{\partial f(I; \mathbf{w})}{\partial y}\,\Delta y + \dfrac{\partial f(I; \mathbf{w})}{\partial t}\,\Delta t = 0.$   (3)

By dividing both sides of Equation 3 by $\Delta t$, we obtain

$\dfrac{\partial f(I; \mathbf{w})}{\partial x}\,v_x + \dfrac{\partial f(I; \mathbf{w})}{\partial y}\,v_y + \dfrac{\partial f(I; \mathbf{w})}{\partial t} = 0,$   (4)

where $(v_x, v_y) = (\Delta x / \Delta t, \Delta y / \Delta t)$ denotes the two-dimensional velocity of the feature point at $(x, y)$. $\partial f(I; \mathbf{w})/\partial x$ and $\partial f(I; \mathbf{w})/\partial y$ are the spatial gradients of $f(I; \mathbf{w})$ along the $x$ and $y$ axes respectively, and $\partial f(I; \mathbf{w})/\partial t$ is the temporal gradient along the time axis.

As a special case, when $f(I; \mathbf{w}) = I$, $f(I; \mathbf{w})(x, y, t)$ simply represents the pixel at $(x, y, t)$. In this special case, $(v_x, v_y)$ is called the optical flow, which is obtained by solving an optimization problem under the constraint in Equation 4 for each point [1, 4, 2]. In this case, the term $\partial f(I; \mathbf{w})/\partial t$ represents the difference between RGB frames. Previous works have shown that the temporal difference between frames is useful in video related tasks [43]; however, there has been little theoretical evidence explaining why this simple idea works so well. Here, we can relate it to spatial features and optical flow.

We generalize the representation of optical flow from the pixel $I(x, y, t)$ to the feature $f(I; \mathbf{w})$. In this general case, $(v_x, v_y)$ is called the feature flow. We can see from Equation 4 that the vector $\vec{F}(I; \mathbf{w}) = \big[\partial f(I; \mathbf{w})/\partial x,\ \partial f(I; \mathbf{w})/\partial y,\ \partial f(I; \mathbf{w})/\partial t\big]$ is orthogonal to the vector $[v_x, v_y, 1]$ containing the feature-level optical flow. $\vec{F}(I; \mathbf{w})$ changes as the feature-level optical flow changes; therefore, it is guided by the feature-level optical flow, and we call $\vec{F}(I; \mathbf{w})$ the Optical Flow guided Feature (OFF). The OFF encodes spatio-temporal information orthogonally and complementarily to the feature-level optical flow $(v_x, v_y)$. In the next section, the detailed implementation of OFF and its usage for action recognition are introduced.
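The orthogonality in Equation 4 can be checked numerically with a small NumPy sketch. This is our own illustration under simplifying assumptions (a smooth single-channel feature translated by an integer flow with $\Delta t = 1$); it is not the authors' code.

```python
import numpy as np

# Numerical check of Equation 4: for a smooth feature map translated by an
# integer feature-level flow (vx, vy) between "frames" (Delta t = 1), the
# vector [df/dx, df/dy, df/dt] is approximately orthogonal to [vx, vy, 1].
h, w = 128, 128
ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
f_t0 = np.exp(-((xs - 64) ** 2 + (ys - 64) ** 2) / (2 * 12.0 ** 2))  # smooth feature map

vx, vy = 2, 1                                        # assumed feature-level flow
f_t1 = np.roll(f_t0, shift=(vy, vx), axis=(0, 1))    # next frame: content shifted by (vx, vy)

df_dy, df_dx = np.gradient(f_t0)                     # spatial gradients (axis 0 is y)
df_dt = f_t1 - f_t0                                  # temporal gradient

residual = df_dx * vx + df_dy * vy + df_dt           # left-hand side of Equation 4
print("max |residual| :", np.abs(residual)[20:-20, 20:-20].max())
print("max |df/dt|    :", np.abs(df_dt).max())       # residual is much smaller
```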

Figure 3: Network architecture overview for two segments. The inputs are two segments in blue and green colors that are separately fed into the feature generation sub-network to obtain basic features. In our experiment, the backbone for each feature generation sub-network is the BN-Inception [34]. Here K represents the largest side length of the square feature map selected to undergo the OFF sub-network for obtaining the OFF features. The OFF sub-network consists of several OFF units, and several residual blocks [15] are connected between OFF units from different levels of resolution. These residual blocks constitute a ResNet-20 when seen as a whole. The scores obtained by different sub-networks are supervised independently. Detailed structure of the OFF unit is shown in Figure 4.

4 Using Optical Flow Guided Feature in Convolutional Neural Network

4.1 Network Architecture

Network Architecture Overview. Figure 2 shows an overview of the whole network architecture. The network consists of three sub-networks with different purposes: the feature generation sub-network, the OFF sub-network, and the classification sub-network. The feature generation sub-network generates basic features using common CNN structures. In the OFF sub-network, OFF features are extracted from the features produced by the feature generation sub-network, and several residual blocks are then stacked to refine them. The features from the previous two sub-networks are then used by the classification sub-network to obtain the action recognition results. Figure 3 shows the detailed network structure for two input segments. As shown in Figure 3, we extract features from multiple layers at a given resolution level, concatenate them together, and feed them into one OFF unit. The whole network has 3 OFF units at different scales. The structure of each sub-network is discussed as follows.
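The following PyTorch sketch shows one way to wire the sub-networks together for a pair of segments. It is our own simplified reading of Figures 2 and 3: `TinyBackbone`, `OFFActionNet`, and the linear heads are stand-ins (the paper uses BN-Inception features at three resolution levels and a ResNet-20-style OFF sub-network with per-level supervision), and the `off_subnet` argument is assumed to be a module like the OFF unit sketched later in this section.

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for the BN-Inception feature generation sub-network."""
    def __init__(self, out_ch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):              # x: (B, 3, H, W), one segment
        return self.features(x)        # basic feature f(I; w)

class OFFActionNet(nn.Module):
    def __init__(self, off_subnet, off_ch, num_classes=101, feat_ch=64):
        super().__init__()
        self.backbone = TinyBackbone(feat_ch)
        self.off_subnet = off_subnet   # OFF units + residual blocks (sketched below)
        self.rgb_head = nn.Linear(feat_ch, num_classes)
        self.off_head = nn.Linear(off_ch, num_classes)

    def forward(self, seg_a, seg_b):
        fa, fb = self.backbone(seg_a), self.backbone(seg_b)
        off_feat = self.off_subnet(fa, fb)          # OFF between adjacent segments
        gap = lambda f: f.mean(dim=(2, 3))          # global average pooling
        rgb_scores = self.rgb_head(gap(fa)) + self.rgb_head(gap(fb))
        off_scores = self.off_head(gap(off_feat))
        return rgb_scores, off_scores               # supervised separately, fused at test time
```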

Feature Generation Sub-network. The basic features (equivalent to the representation $f(I; \mathbf{w})$ in the previous section) are extracted from the input image using several convolutional layers with Rectified Linear Units (ReLU) as the non-linearity and max-pooling for down-sampling. We select BN-Inception [34] as the network structure to extract feature maps. The feature generation sub-network can be replaced by any other network architecture.

OFF Sub-network. The OFF sub-network consists of several OFF units. Different units use basic features from different depths. As shown in Figure 4, an OFF unit contains an OFF layer to generate the OFF. Each OFF layer contains one convolutional layer for each piece of feature, and a set of operators including the Sobel operator and element-wise subtraction for OFF generation. After the OFF is obtained, the OFF unit concatenates it with features from the lower level, and the combined features are fed to the following residual blocks.

The OFF layer is responsible for generating the OFF from the basic features $f$. Figure 4 shows the detailed implementation of the OFF layer. According to Equation 3, the OFF should consist of both the spatial and the temporal gradients of the feature. Denote by $f_c$ the $c$-th channel of the basic feature $f$, and by $F_x$ and $F_y$ the OFF components for the gradients in the $x$ and $y$ directions respectively, which correspond to the spatial gradients. We apply the Sobel operator for spatial gradient generation as follows:

$F_x^{(c)} = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} * f_c, \quad c = 1, \ldots, N_c,$   (5)

$F_y^{(c)} = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} * f_c, \quad c = 1, \ldots, N_c,$   (6)

where $*$ denotes the convolution operation and the constant $N_c$ indicates the number of channels of the feature $f$. Denote by $F_t$ the OFF component for the gradient in the temporal direction. The temporal gradient is obtained by element-wise subtraction as follows:

$F_t^{(c)} = f_c(t + \Delta t) - f_c(t), \quad c = 1, \ldots, N_c.$   (7)

With the features $F_x$, $F_y$, and $F_t$ obtained above, we concatenate them together with the features from the lower level as the output of the OFF layer. We use a 1x1 convolutional layer before the Sobel and subtraction operations to reduce the number of channels. In our experiments, the channel dimension is reduced to 128 regardless of the number of input channels. The reduced feature is then used by the OFF unit to calculate the OFF defined in the previous section. After the OFF is obtained, several residual blocks designed as in [15] are connected between the OFF units at different resolution levels for refinement. The dimensionality of the OFF is further reduced in the residual block adjacent to the OFF unit to save computation and parameters. The residual blocks at the different resolution levels together constitute a ResNet-20. Note that no Batch Normalization [17] operation is applied in our residual network, in order to avoid over-fitting.
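A minimal PyTorch sketch of one OFF unit, following Equations 5-7 and the description above, is given below. It is our interpretation rather than the released implementation: the 1x1 reduction to 128 channels, the fixed depthwise Sobel filters, and the element-wise subtraction follow the text, while the class name and the handling of the lower-level OFF are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OFFUnit(nn.Module):
    """Sketch of one OFF unit: 1x1 reduction, Sobel spatial gradients (Eq. 5-6),
    element-wise temporal subtraction (Eq. 7), concatenation with lower-level OFF."""
    def __init__(self, in_channels, reduced_channels=128):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, reduced_channels, kernel_size=1)
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        # one fixed 3x3 Sobel kernel per channel, applied depthwise (groups = channels)
        self.register_buffer("kx", sobel_x.reshape(1, 1, 3, 3).repeat(reduced_channels, 1, 1, 1))
        self.register_buffer("ky", sobel_x.t().reshape(1, 1, 3, 3).repeat(reduced_channels, 1, 1, 1))
        self.c = reduced_channels

    def forward(self, feat_t, feat_t1, lower_off=None):
        f_t = self.reduce(feat_t)        # feature of segment t, reduced to 128 channels
        f_t1 = self.reduce(feat_t1)      # feature of segment t + delta t
        fx = F.conv2d(f_t, self.kx, padding=1, groups=self.c)   # Eq. 5
        fy = F.conv2d(f_t, self.ky, padding=1, groups=self.c)   # Eq. 6
        ft = f_t1 - f_t                                         # Eq. 7
        off = torch.cat([fx, fy, ft], dim=1)
        if lower_off is not None:        # OFF from the previous level, assumed to have been
            off = torch.cat([off, lower_off], dim=1)  # brought to this resolution already
        return off                       # fed into the following residual blocks

# usage: basic features of two adjacent segments from the same backbone level
f_a, f_b = torch.randn(2, 256, 28, 28), torch.randn(2, 256, 28, 28)
print(OFFUnit(256)(f_a, f_b).shape)      # torch.Size([2, 384, 28, 28])
```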

The OFF unit can be applied to CNN layers at different levels. The inputs of one OFF unit are the basic deep features from two segments, plus the feature from the OFF unit at the previous feature level, if it exists. In this way, the OFF at the previous semantic level can be used to refine the OFF at the current semantic level.

Classification Sub-network. The classification sub-network takes features from different sources and uses multiple inner-product classifiers to obtain multiple classification scores. The classification scores of all sampled frames are then combined by averaging for each feature generation sub-network or OFF sub-network. The OFF at each semantic level is used to produce a classification score at the training stage, which is learned with its own loss. Such a strategy has been proven useful in many tasks [34, 45, 22]. In the testing phase, scores from different sub-networks can be assembled for better performance.

Figure 4: Detailed architecture of the OFF unit. A 1x1 convolution layer is applied to the input basic feature for dimension reduction. After that, we utilize the Sobel operator and element-wise subtraction to calculate the spatial and temporal gradients respectively. The combination of gradients constitutes the OFF, and the Sobel operator, the subtraction operator, and the convolution layers before them constitute an OFF layer.

4.2 Network Training

Action recognition is treated as a multi-class classification problem. Following the settings in TSN, since multiple classification scores are produced by the segments, we need to fuse them in each sub-network separately to generate a video-level score for loss calculation. Here, for the OFF sub-networks, the feature produced by the OFF sub-network for the $i$-th segment at level $l$ is denoted by $f_{i,l}$, the classification score for segment $i$ at level $l$ obtained from $f_{i,l}$ is denoted by $s_{i,l}$, and the aggregated video-level score at level $l$ is denoted by $S_l$. The video-level action classification score is obtained by

$S_l = \mathcal{G}\big(s_{1,l}, s_{2,l}, \ldots, s_{N,l}\big),$   (8)

where $N$ denotes the number of frames (segments) used for extracting features. The aggregation function $\mathcal{G}$ summarizes the scores predicted from different segments along time. Following the investigations in TSN, $\mathcal{G}$ is implemented by average pooling for better performance [43]. The above equation also applies to the feature generation sub-network; since we do not need intermediate supervision for the feature generation sub-network, its feature at level $l$ for segment $i$ is simply the final feature output of that sub-network.

To update the parameters of the whole network, the loss is set to be the standard categorical cross-entropy loss. As the sub-network at each feature level is supervised independently, a loss function is used for each level:

$\mathcal{L}_l = -\sum_{c=1}^{C} y_c \log \dfrac{\exp(S_{l,c})}{\sum_{k=1}^{C} \exp(S_{l,k})},$   (9)

where $C$ is the number of action categories, $S_{l,c}$ is the estimated score for class $c$ from the features at level $l$, and $y_c$ represents the (one-hot) ground-truth class label. By using this loss function, we can optimize the network parameters through back-propagation. The detailed implementation of training is described as follows.
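As a concrete reference, the sketch below implements the video-level aggregation of Equation 8 (with average pooling as the aggregation function) and the per-level cross-entropy loss of Equation 9 in PyTorch. It is a minimal illustration with assumed tensor shapes, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def video_level_loss(segment_scores, target):
    """segment_scores: (N, num_classes) class scores of the N segments at one level;
    target: scalar long tensor with the ground-truth class index."""
    video_score = segment_scores.mean(dim=0, keepdim=True)  # Eq. 8, G = average pooling
    return F.cross_entropy(video_score, target.view(1))     # Eq. 9

# toy usage: 3 training segments, 101 classes, ground-truth class 7;
# in training, each OFF level (and the feature generation sub-network)
# gets its own loss of this form.
scores = torch.randn(3, 101)
print(video_level_loss(scores, torch.tensor(7)))
```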

Two-stage Training Strategy. Training of the whole network consists of two stages. The first stage applies an existing approach, e.g. TSN [43], to train the feature generation sub-network. In the second stage, we train the OFF sub-networks and the classification sub-network with all the weights of the feature generation sub-network frozen; the weights of the OFF and classification sub-networks are learned from scratch. The whole network could be further fine-tuned in an end-to-end manner, but we did not find a significant gain from this stage. To simplify the training process, we only train the network using the first two stages.

Intermediate Supervision during Training. Intermediate supervision has been proven to be a practical training strategy in many other computer vision tasks [22, 45, 46, 24, 6]. As the OFF sub-networks are fed by intermediate inputs, we add intermediate supervision at each level to obtain better OFFs at each resolution level.

Reducing the Memory Cost. As our framework consists of several sub-networks, it costs more memory than the original TSN framework, which extracts and stores motion frames before training CNNs and trains several networks independently. In order to reduce the computational and memory cost, we sample fewer frames in the training phase than in the testing phase and still obtain satisfactory results.

However, the time duration between segments may vary if we sample different numbers of segments during training and testing. According to our derivation, Equation 4 can be derived from Equation 3 only when $\Delta t$ is a fixed constant. If we sample different numbers of frames during training and testing, the time interval may become inconsistent, which would invalidate our definition and influence the final performance. In order to keep the time interval consistent between training and testing, we design the sampling scheme carefully. During training, we sample frames from a video as follows:

Let $N_{train}$ be the number of frames sampled for training, and $N_{test}$ be the number sampled for testing. In the training phase, a video of length $L$ is divided into $N_{test}$ segments, each of length $L/N_{test}$. We randomly select a position $P$ from the first segment, where $P$ is treated as a frame seed. The whole training set of frame indices is then constructed as $\{P,\ P + L/N_{test},\ \ldots,\ P + (N_{train}-1) \cdot L/N_{test}\}$, which has interval $L/N_{test}$. In the testing phase, we sample images using the same interval as in the training phase.
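A small Python sketch of this constant-interval sampling, under our reading of the scheme above (variable names and the exact seed range are ours), is given below.

```python
import random

def sample_training_frames(video_length, n_train, n_test):
    """Sample n_train frame indices with the same interval (video_length / n_test)
    that is used when n_test segments are sampled at test time."""
    interval = max(video_length // n_test, 1)
    max_seed = max(video_length - (n_train - 1) * interval - 1, 0)
    seed = random.randint(0, max_seed)          # random frame seed
    return [seed + i * interval for i in range(n_train)]

print(sample_training_frames(video_length=250, n_train=7, n_test=25))
# e.g. [s, s+10, s+20, ..., s+60]: the spacing of 10 frames matches test time
```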

4.3 Network Testing

As multiple classification scores are produced by different sub-networks, we fuse them in the testing phase for better performance. In this study, we assemble the scores from the feature generation sub-network and the last level of the OFF sub-network by a simple summation. We test our model based on the state-of-the-art TSN framework [43]. The testing setting under the TSN framework is as follows:

Testing under TSN Framework. In the testing stage of TSN, 25 segments are sampled from RGB, RGB difference, and optical flow. However, the number of frames in each segment differs among these modalities. We use the original TSN settings and sample 1, 5, and 5 frames per segment for RGB, RGB difference, and optical flow respectively. The input of our network is 25 segments, where each segment plays the role of a frame in Figure 3. In this case, the features extracted by a separate branch of our feature generation sub-network correspond to a segment instead of a frame when using TSN. Other settings are kept the same as in TSN.

5 Experiments and Evaluations

In this section, datasets and implementation details used in experiments will be first introduced. Then we will explore the OFF and compare it with other modalities under current state-of-the-art frameworks. Moreover, as our method can be extended to other modalities such as RGB difference and optical flow, we will show how such a simple operation could improve the performance for input with different modalities. Finally, we will discuss the meaning and difference between the OFF and other motion modalities such as optical flow and RGB difference.

5.1 Datasets and Implementation Details

Evaluation Datasets. The experimental results are evaluated on two popular video action datasets, UCF-101 [31] and HMDB-51 [21]. The UCF-101 dataset has 13,320 videos divided into 101 classes, while HMDB-51 contains 6,766 videos and 51 classes. Our experiments follow the official evaluation scheme, which divides each dataset into 3 training/testing splits, and we report the average accuracy over all 3 splits. We prepare the optical flow between frames before training by directly using the algorithm implemented in OpenCV [48].

Implementation Details. We train our model on 4 NVIDIA TITAN X GPUs, using an implementation based on Caffe [18] and OpenMPI. We first train the feature generation sub-networks using the same strategy as the corresponding method [43]. At the second stage, we train the OFF sub-networks from scratch with all parameters of the feature generation sub-networks frozen. Mini-batch stochastic gradient descent is adopted to learn the network parameters. When the feature generation sub-networks are fed with RGB frames, the whole training procedure for the OFF sub-network takes 20,000 iterations to converge, with the learning rate initialized at 0.02 and multiplied by 0.1 using a multi-step policy at iterations 10,000, 15,000, and 18,000. When the input changes to a temporal modality like optical flow, the learning rate is initialized at 0.05, and the other policies are kept the same as for RGB. The batch size is set to 128, and all the training strategies described in previous sections are applied. When evaluating on UCF-101 and HMDB-51, we add dropout modules on the spatial stream of OFF. There is no difference in training parameters between modalities; however, when the input is RGB difference or optical flow, training and testing cost more time as more frames are read into the network.
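For reference, the following PyTorch sketch mirrors the reported schedule for the RGB-input OFF sub-network (base learning rate 0.02, multiplied by 0.1 at iterations 10k, 15k, and 18k, 20k iterations in total). The original implementation is a Caffe solver; the optimizer object, the stand-in model, and the momentum value here are our assumptions.

```python
import torch

# Stand-in model; momentum 0.9 is our assumption, the rest follows the text above.
model = torch.nn.Linear(384, 101)
optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[10000, 15000, 18000], gamma=0.1)  # 0.02 -> 0.002 -> ...

for iteration in range(20000):
    optimizer.zero_grad()
    # loss = video_level_loss(...)   # per-level losses from Section 4.2
    # loss.backward()
    optimizer.step()
    scheduler.step()                 # schedule is stepped per iteration, not per epoch
```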

5.2 Experimental Investigations on OFF.

In this section, we investigate the performance of OFF under the TSN framework. We analyze the performance of single and multiple modalities and compare with the state-of-the-art. All results for OFF based networks are obtained with the same network backbone and training strategies described in previous sections, for fair comparison.

 

Method | Speed (fps) | Acc.
TSN (RGB) [43] | 680 | 85.5%
TSN (RGB + RGB Diff) [43] | 340 | 91.0%
TSN (Flow) [43] | 14 | 87.9%
TSN (RGB + Flow) [43] | 14 | 94.0%
RGB + EMV-CNN [50] | 390 | 86.4%
MDI + RGB [3] | <131 | 76.9%
Two-Stream I3D (RGB + Flow) [5] | <14 | 93.4%
RGB + OFF(RGB) + RGB Diff + OFF(RGB Diff) (ours) | 206 | 93.3%

Table 1: Accuracy and efficiency of different real-time video action recognition methods on UCF-101 over three splits. Here the notation Flow represents the motion modality Optical Flow. Note that our OFF based algorithm achieves state-of-the-art performance among real-time algorithms.

Efficiency Evaluation. In this experiment, we compare the efficiency of the OFF based method with other state-of-the-art methods. The efficiency and accuracy results of the different algorithms are summarized in Table 1. OFF(RGB) denotes the use of OFF on the network with RGB input; in this case the OFF is computed from spatial deep features. As a special case, the notation RGB Diff represents the OFF calculated directly from consecutive RGB frames at the input level instead of the feature level. After applying the OFF calculation to RGB frames, the processed inputs can be fed into the feature generation sub-network, and the generated feature maps can again be used to calculate their corresponding OFF features at the feature level. The other methods compared here include TSN [43] with different inputs, the motion vector based RGB+EMV-CNN [50], the dynamic image based CNN [3], and the current state-of-the-art two-stream 3D CNN [5]. As Table 1 shows, by applying the OFF to the spatial features and the RGB inputs, we achieve a competitive accuracy of 93.3% with only RGB inputs on UCF-101 over three splits, which is even comparable with some Two-Stream based methods [5, 43]. Besides, our method is still very efficient under this setting: the whole network runs at over 200 fps, while the other methods listed here are either inefficient or not as effective as the Two-Stream based approaches.

Modalities fused (among RGB, OFF(RGB), RGB Diff, OFF(RGB Diff), Flow, OFF(Flow)) | Speed (fps) | Acc.
680 | 85.5%
450 | 90.0%
340 | 90.7%
257 | 92.0%
206 | 93.0%
14 | 93.5%
14 | 95.1%
14 | 95.5%

Table 2: Experimental results for different modality combinations using the OFF on UCF-101 split 1. Here Flow denotes the optical flow. OFF(*) denotes the use of OFF for the input *; for example, OFF(RGB) denotes the use of OFF for RGB input. The speed indicates the time cost of the network forward pass. The results for RGB and RGB + Flow are from [43]. The OFF(RGB) provides a strong improvement when fused with RGB.

Effectiveness Evaluation. In this part, we investigate the robustness of OFF when applied to different kinds of input. According to the definition in Equation 4, we can replace the RGB image with an optical flow or RGB difference image and extract the OFF at the feature level for further experiments. Based on the scores predicted by different modalities, we can further improve the classification performance by fusing them together [29, 9, 43, 50]. We carry out experiments with various score fusion schemes on UCF-101 split 1 and summarize them in Table 2, which shows the results when different kinds of modalities are introduced as the network input. From each block of the table, we can see that the OFF is complementary to other kinds of modalities, e.g. RGB and optical flow, and brings a remarkable gain every time it is introduced. Interestingly, the OFF still works when the input modality already describes motion information. This phenomenon indicates that the acceleration information between frames may also help in describing temporal patterns.

 

Method | RGB | Hyp-Net + RGB | OFF(RGB) + RGB
Acc. | 85.5% | 86.0% | 90.0%

Table 3: Accuracy of the hypercolumn network compared with OFF on UCF-101 split 1. "Hyp-Net" denotes the output of the hypercolumn network.

Comparison with the Hypercolumns CNN. As our network extracts intermediate deep features from a pre-trained CNN, such a hypercolumn based network structure may by itself lead to additional gain on specific datasets [14]. We therefore conduct an experiment to investigate whether the OFF plays the key role in the improvement. The network architecture and all training strategies for the hypercolumn CNN are the same as for OFF except for the removal of the OFF unit; in other words, the hypercolumn network here has the same structure as the OFF sub-network without the OFF unit. In this case, the features from the feature generation sub-networks are directly fed into the OFF sub-networks without the OFF calculation.

From the experimental results shown in Table 3, it is clear that, although the hypercolumn network obtains a slight improvement on UCF-101 split 1, its final accuracy is still clearly lower than that obtained by OFF(RGB). We therefore conclude that it is the OFF calculation, rather than the hypercolumn structure, that plays the key role in the significant gain.

Comparison with the State-of-the-art. Finally, after the exploration and analysis of OFF, we present our final results. As in TSN, we assemble the classification scores obtained from different modalities by summing them, and report the final version in Table 4. All results are evaluated on UCF-101 and HMDB-51 over 3 splits. Our results are obtained by assembling the scores from RGB, OFF(RGB), optical flow, and its corresponding OFF(optical flow). When we add one more score from OFF(RGB Diff), a slight 0.3% gain is obtained compared to the version without it, finally resulting in 96.0% on UCF-101 and 74.2% on HMDB-51. Note that we do not introduce improved Dense Trajectories (iDT) [40] as an input to our network; the inputs we need to prepare in advance for our final result consist only of RGB and optical flow.

We compare our result with both traditional approaches and deep learning based approaches. We obtain 2.0% and 5.7% gains compared with the baseline Two-Stream TSN [43] on UCF-101 [31] and HMDB-51 [21] respectively. Note that the final version of TSN takes 3 modalities (RGB, Optical Flow, and iDT) as network input. The other compared methods listed in Table 4 include iDT [40], Two-Stream ConvNet [29], Two-Stream + LSTM [47], Temporal Deep-convolutional Descriptors (TDD) [41], Long-term Temporal Convolutions (LTC) [37], the Key Volume Mining Deep Framework (KVMDF) [52], and current state-of-the-art methods such as the Spatio-Temporal Pyramid (STP) [44], the Spatio-Temporal Multiplier Network (STMN) [12], the Spatio-Temporal Vector of Locally Max Pooled Features (ST-VLMPF) [7], Lattice LSTM [32], and I3D [5]. I3D achieves spectacular performance (98.0% on UCF-101 and 80.7% on HMDB-51 over 3 splits) when pre-trained on the newly proposed large-scale Kinetics dataset; without this pre-training, I3D achieves 93.4% on UCF-101 split 1. From the comparison with all the listed methods, we conclude that our OFF based method allows for state-of-the-art performance in video action recognition.

Method | UCF-101 | HMDB-51
iDT [40] | 86.4% | 61.7%
Two-Stream [29] | 88.0% | 59.4%
Two-Stream TSN [43] | 94.0% | 68.5%
Three-Stream TSN [43] | 94.2% | 69.4%
Two-Stream + LSTM [47] | 88.6% | -
TDD + iDT [41] | 91.5% | 65.9%
LTC + iDT [37] | 91.7% | 64.8%
KVMDF [52] | 93.1% | 63.3%
STP [44] | 94.6% | 68.9%
STMN + iDT [12] | 94.9% | 72.2%
ST-VLMPF + iDT [7] | 94.3% | 73.1%
Lattice LSTM [32] | 93.6% | 66.2%
Two-Stream I3D [5] | 93.4% | 66.4%
Two-Stream I3D (with Kinetics 300k) [5] | 98.0% | 80.7%
Ours | 96.0% | 74.2%

Table 4: Performance comparison with the state-of-the-art methods on UCF-101 and HMDB-51 over 3 splits.

6 Conclusion

In this paper, we have presented the Optical Flow guided Feature (OFF), a novel motion representation derived from and guided by the optical flow. OFF is both fast and robust. By plugging OFF into a CNN framework, the result with only RGB as input on UCF-101 is comparable to that obtained by Two-Stream (RGB + Optical Flow) approaches, while the OFF-equipped network remains very efficient, running at over 200 frames per second. Besides, OFF has been shown to be complementary to other motion representations like optical flow. Based on this representation, we proposed a new CNN architecture for video action recognition. This architecture outperforms many state-of-the-art video action recognition methods on two popular video datasets, UCF-101 and HMDB-51, and can be used to accelerate video based tasks. In future work, we will validate our method on other video based tasks and datasets.

References