Rethinking Motion Representation: Residual Frames with 3D ConvNets for Better Action Recognition

by Li Tao, et al.
The University of Tokyo

Recently, 3D convolutional networks have yielded good performance in action recognition. However, an optical flow stream is still needed to ensure the best performance, and its computation cost is very high. In this paper, we propose a fast but effective way to extract motion features from videos by utilizing residual frames as the input data of 3D ConvNets. By replacing traditional stacked RGB frames with residual ones, improvements of 20.5 and 12.5 percentage points in top-1 accuracy can be achieved on the UCF101 and HMDB51 datasets when models are trained from scratch. Because residual frames contain little information about object appearance, we further use a 2D convolutional network to extract appearance features and combine them with the results from residual frames to form a two-path solution. On three benchmark datasets, our two-path solution achieves better or comparable performance to methods that use additional optical flow, and in particular outperforms the state-of-the-art models on the Mini-kinetics dataset. Further analysis indicates that better motion features can be extracted using residual frames with 3D ConvNets, and that our residual-frame-input path is a good supplement to existing RGB-frame-input models.






1 Introduction

For action recognition, an important challenge is motion representation: extracting motion features across multiple frames. Various methods have been designed to capture the movement. 2D ConvNet based methods use interactions along the temporal axis to include temporal information [12, 28, 15, 16, 29]. 3D ConvNet based methods improved recognition performance by extending the 2D convolution kernel to 3D, and the computations along the temporal axis in each convolutional layer are believed to handle the movements [24, 18, 32, 1, 9, 26]. State-of-the-art methods showed further improvements by increasing the number of frames used and the size of the input data, as well as by using deeper backbone networks [6, 2, 25].

In a typical implementation of 3D ConvNets, stacked RGB frames are used as the input data. However, this kind of input is considered insufficient for motion representation, because the features captured from stacked RGB frames may pay more attention to appearance, including background and objects, than to the movement itself, as shown in the top example in Fig. 1. Thus, combining with an optical flow stream is necessary to better represent the movement and improve performance, as in two-stream models [8, 7, 21]. However, computing optical flow greatly increases processing time (there are many implementations of optical flow, and we do not refer to any specific one, but the calculation is generally very expensive). Besides, the optical flow stream can be activated only after the optical flow data have been extracted, which causes high latency when obtaining two-stream results.

Figure 1: An example of our residual frames compared with normal 3D ConvNet inputs. The residual-input model focused on the moving part, while the RGB-input model paid more attention to the background, which leads to a lower prediction accuracy.

In this paper, we propose an effective strategy based on 3D convolutional networks that pre-processes RGB frames to generate replacement input data. Our method uses what we call residual frames, which contain more motion-specific features: still objects and background information are removed, leaving mainly the changes between frames. In this way, the movement can be extracted more clearly and recognition performance improves compared to stacked RGB inputs, as shown in the bottom example in Fig. 1. Our experiments reveal that our approach yields significant improvements in top-1 accuracy when ConvNets are trained from scratch on the UCF101 [22] and HMDB51 [14] datasets. One may think that our approach is naive and therefore cannot be applied to videos with global motion, but this is also addressed in Section 5.1.

For larger action recognition datasets such as Mini-kinetics [32] and Kinetics [13], the definitions of the actions become more complex: Yoga, for instance, contains various combinations of simple actions, and these datasets have many compound labels, such as playing guitar and playing ukulele, where the movement is almost the same and the difference lies mainly in the objects. In such cases, it is difficult to distinguish classes with motion representation alone, without sufficient appearance features. Therefore, we propose a two-path solution that combines the residual-input path with a simple 2D ConvNet that extracts appearance features from a single frame. Experiments show that our two-path method obtains better performance than some two-stream models on the UCF101, HMDB51, and Mini-kinetics datasets when using the same input shapes and similar or even shallower network architectures.

Our contributions are summarized as follows:

  • We propose a simple, fast, but effective way for 3D ConvNets to better extract motion features by using stacked residual frames as the model input.

  • The proposed two-path solution including a 3D ConvNet with residual input as the motion path and a 2D ConvNet as the appearance path can achieve better performance than other methods using similar settings.

  • Our proposal can avoid the requirement of high-cost computation for optical flow while ensuring high performance. Our analysis also suggests potential limitations in the current action recognition task.

We would like to clarify that we are proposing a new way to represent motion. For this purpose, we do not focus solely on outperforming approaches based on very deep and complex DNN architectures or on other training and parameter settings. Instead, we discuss why, and to what extent, our approach is reasonable compared to optical-flow-based and RGB-only approaches. We will release our code if the paper is accepted.

2 Related works

In this section, traditional action recognition networks are introduced first. Although temporal modeling usually exists within those networks, we use a separate subsection to introduce and discuss it in detail because temporal information is a key feature. Model combination is also given its own subsection to clarify the routes existing solutions take toward high accuracy.

2.1 Deep action recognition

2D solution. 2D ConvNet based methods mainly consist of frame-level feature representation and temporal modeling to fuse these features. When treating each frame of a video as a single image, 2D ConvNets that are effective for image classification can be applied directly to video recognition. Karpathy et al. [12] tried different ways to fuse features from a 2D ConvNet and then used the fused features to classify videos. Temporal Segment Networks (TSN) were designed to extract average features from stride-sampled frames. Two-stream ConvNets [21, 8, 7] used an additional optical flow stream, with 2D ConvNets for both the RGB stream and the optical flow stream. Recent works such as Temporal Bilinear Networks (TBN) [15] and Temporal Shift Module (TSM) [16] are variants of 2D ConvNets. Compared to their 3D counterparts, 2D methods are more efficient because fewer parameters are used, and their performance depends strongly on the temporal modeling. Our method uses a 2D network to extract appearance features because of the high efficiency of 2D models, and the proposed appearance path uses less input data than existing 2D ConvNets, which is more efficient.

3D solution. 3D ConvNet based methods directly use 3D convolution kernels to process input video frames. Computation between frames is carried out when the temporal kernel size is 2 or larger, and spatial-temporal features can be learned automatically by network optimization. Tran et al. [24] proposed C3D, which consists of 8 directly-connected convolutional layers and 2 fully-connected layers. Hara et al. [9] conducted many experiments on 3D versions of residual networks, including different depths and variants such as ResNeXt [31]. Carreira et al. [1] proposed I3D based on the Inception network. SlowFast [6] used two ResNet pathways to capture multi-scale information along the temporal axis. Besides different network architectures, the 3D convolutional kernel also has variants: one t × d × d kernel can be separated into two parts, a 1 × d × d spatial kernel and a t × 1 × 1 temporal kernel. Based on this, P3D [18], R(2+1)D [26], and S3D [32] were proposed. The backbones of mainstream networks are ResNets [10] and the Inception network [23]. Neural architecture search (NAS) is used in [17] to obtain efficient network architectures. However, because the number of parameters is larger than in 2D counterparts, 3D models are prone to overfitting when trained from scratch on small datasets such as UCF101 [22] and HMDB51 [14]. Fine-tuning models pre-trained on a very large dataset such as Kinetics [13] is one solution to acquire good performance on these small datasets. From another point of view, our proposed method focuses more on the movement itself and utilizes a 3D ConvNet with higher motion representation ability by using residual frames as input. In this way, we can reduce the tendency to overfit on small datasets compared to normal RGB inputs when using the same network architectures.

2.2 Temporal modeling

For 2D ConvNets, some models [12, 28] have been proposed that simply average frame features to represent videos. Donahue et al. [11] used 2D models to extract frame features and fused them with long short-term memory (LSTM) [5]. Zhou et al. [33] proposed the Temporal Relation Network to learn temporal dependencies. Temporal Bilinear Networks [15] use temporal bilinear modeling to embed temporal information. The Temporal Shift Module [16] shifts 2D feature maps along the temporal dimension.

For 3D ConvNets, temporal modeling is processed automatically by learning kernels along the temporal axis. Because 3D ConvNets use stacked RGB frames as input, the computation among frames is believed to learn motion features, while the spatial computation is for spatial feature embedding. Therefore, existing 3D models do not pay much attention to this part and instead trust the capabilities of the network. Recently, Crasto et al. [2] trained a student network with RGB-frame input by learning the feature representation of a teacher network that had been trained on optical flow data, thereby enhancing temporal modeling.

Our proposed two-path method consists of an appearance path using only a 2D ConvNet to extract appearance features, and a motion path using a 3D ConvNet to calculate motion features. Temporal modeling exists only in the motion path. Using residual frames here differs from using them in 2D ConvNets: because one residual frame is generated from two adjacent frames, motions exist not only in the temporal dimension of stacked residual frames but also in the spatial dimension.

2.3 Two-stream model

Two-stream models usually refer to methods that combine 2D features or results from an RGB stream with an optical flow stream [21, 8, 7]. Some researchers extended the concept by combining an RGB-frame-input path with another path that uses pre-computed motion features, such as trajectories [27] or SIFT-3D [19], as well as optical flow. Many existing methods can be extended by adding such a motion feature stream to further improve their performance [2, 1, 26]. To distinguish our proposal from the aforementioned methods, we refer to ours as ‘two-path’ rather than ‘two-stream’ because we do not use any pre-computed motion features.

3 Proposed method

In this section, we first introduce our proposal to use residual frames as a new form of input data for 3D ConvNets. Because residual frames lack sufficient object information, which is necessary for the compound phrases used as label definitions in most video recognition datasets, we further propose a two-path solution that utilizes appearance features as an effective complement to the motion features learned from residual inputs.

3.1 Residual frames

For 3D ConvNets, stacked frames are set as the input, and the input shape for one clip is 3 × L × H × W, where L frames of height H and width W are stacked together and the channel number is 3 for RGB images. The convolutional kernel of each 3D convolutional layer is also three-dimensional, of size t × d × d, so in each 3D convolutional layer the data are computed along the three dimensions simultaneously. However, this is based on a strong assumption that motion features and spatial features can be learned perfectly at the same time. To improve performance, many existing 3D models inflate weights from 2D ConvNets to initialize the 3D ConvNets, which has been proved to provide higher accuracies. Pre-training on larger datasets also enhances performance when fine-tuning on small datasets.
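As a concrete illustration, the clip layout described above can be sketched in a few lines of NumPy; the frame count and resolution are example values only, matching the 16 × 112 × 112 setting used later in the paper:

```python
import numpy as np

# Stack L = 16 decoded RGB frames (each H x W x 3) into one 3D-ConvNet
# input clip of shape (3, L, H, W), i.e. channels first.
L, H, W = 16, 112, 112
frames = [np.random.rand(H, W, 3) for _ in range(L)]  # decoded frames, (H, W, C)
clip = np.stack(frames, axis=0)                       # (L, H, W, 3)
clip = clip.transpose(3, 0, 1, 2)                     # (3, L, H, W)
print(clip.shape)  # (3, 16, 112, 112)
```

A t × d × d kernel then slides over the last three axes of this tensor simultaneously.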

When subtracting adjacent frames to obtain a residual frame, only the frame differences are kept. In a single residual frame, movement exists along the spatial axes. Using residual frames with 2D ConvNets has been attempted and proved somewhat effective [30]. However, because actions and activities are complex, with much longer durations, stacked frames are still necessary. In stacked residual frames, the movement exists not only along the spatial axes but also along the temporal axis, which is more suitable for 3D ConvNets because 3D convolution kernels process data in both. Stacked residual frames help the 3D convolutional kernels concentrate on capturing motion features, because the network no longer needs to consider the appearance of objects or backgrounds in the videos.

Here we use F_i to represent the data of the i-th frame, and F_{i:i+k} to denote the stacked frames from the i-th frame to the (i+k)-th frame. The process to obtain residual frames can be formulated as follows:

    Res_{i:i+k} = F_{i+1:i+k+1} - F_{i:i+k}

The computational cost of this subtraction is negligible compared with the network itself, let alone optical flow calculation.
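The residual computation above is a single elementwise subtraction over adjacent frames; a minimal NumPy sketch (array shapes are illustrative):

```python
import numpy as np

def residual_clip(frames):
    """frames: (L + 1, H, W, C) array of consecutive RGB frames.
    Returns L residual frames, each the difference of two adjacent frames."""
    frames = frames.astype(np.float32)   # avoid uint8 wrap-around on subtraction
    return frames[1:] - frames[:-1]      # F_{i+1} - F_i for every adjacent pair

# 17 consecutive frames yield one 16-frame residual clip.
frames = np.random.randint(0, 256, size=(17, 112, 112, 3), dtype=np.uint8)
res = residual_clip(frames)
print(res.shape)  # (16, 112, 112, 3)
```

Note the cast to float before subtracting: still regions become (near-)zero while moving regions keep signed differences.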

With this change, a 3D ConvNet can extract motion features by focusing on the movements in videos alone. However, by ignoring objects and backgrounds, some movements in similar actions become indistinguishable. For example, in the actions Apply Eye Makeup and Apply Lipstick, the main difference lies in whether the movement is located around the eyes or the mouth, rather than in the movement itself. 3D ConvNets may still distinguish them to some extent, but the loss of information does increase the difficulty. Therefore, we use a 2D ConvNet to recover the lost appearance information and combine it with a 3D ConvNet that uses residual frames as input, forming a two-path network.

3.2 Two-path network

Our two-path network is formed by a motion path and an appearance path, as illustrated in Fig. 2.

Figure 2:

Framework of our two-path network. The motion path and the appearance path are trained separately using cross-entropy loss, and action recognition is carried out within each path. During inference, the output probabilities of the two paths are averaged, so both motion and appearance features are utilized for the final classification.

Motion path. Because residual frames are used in this path, movements exist in both the spatial and the temporal axes; therefore, 3D convolution layers are used. Because many existing 3D convolution based architectures have been proved effective on action recognition datasets, we do not focus on designing a new network architecture in this paper. To verify the robustness and versatility of our proposal, we conduct experiments on various models and discuss ResNet-18-3D in particular because of its good performance. In the original ResNet-18-3D [9], convolution with stride is used at the beginning of several residual blocks to perform down-sampling. We attempt another version of the residual blocks that instead uses max-pooling layers at the end of the corresponding blocks. The two versions have almost the same number of network parameters.

Appearance path.

By using residual frames with 3D ConvNets, motion features can be better extracted, but background features, which contain object appearances, are lost. The lost part can be extracted by a 2D ConvNet that uses one RGB frame as input. The goal of our appearance path is to embed the object and background appearances that are mostly lost in the motion path. Therefore, in contrast to TSN or other complex models, a simple 2D ConvNet is sufficient: it treats action recognition as a simple image classification problem. During training, only one frame of a video is randomly selected per epoch.

To combine the two paths, we average their predictions for the same video sample. Early fusion methods may be more effective; we leave them as future work.
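The late fusion described above, averaging the per-path output probabilities, can be sketched as follows; the logits are hypothetical toy values, not outputs of our trained models:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-video logits from the two separately trained paths.
motion_logits = np.array([[3.0, 0.5, -1.0]])      # residual-input 3D ConvNet
appearance_logits = np.array([[0.2, 1.0, -0.5]])  # single-frame 2D ConvNet

# Late fusion: average the output probabilities of the two paths.
probs = 0.5 * (softmax(motion_logits) + softmax(appearance_logits))
pred = probs.argmax(axis=-1)
print(pred)  # index of the class with the highest averaged probability
```

Because each path produces a valid probability distribution, their average is again a distribution, so no re-normalization is needed.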

4 Experiments

4.1 Datasets and metrics


There are several commonly used datasets for video recognition tasks. Thanks to the large number of videos and labels in these datasets, deep learning methods can recognize a wide range of actions. We mainly focus on the following benchmarks: UCF101 [22], HMDB51 [14], and Kinetics400 [13]. UCF101 consists of 13,320 videos in 101 action categories. HMDB51 comprises 7,000 videos with a total of 51 action classes. Kinetics400 is much larger, consisting of 400 action classes with around 240k videos for training, 20k for validation, and 40k for testing. Because Kinetics400 is very large, we mainly perform our experiments on its subset, Mini-kinetics [32], which consists of 200 action classes with 80,000 videos for training and 5,000 videos for validation. The data actually used in our experiments may be slightly smaller because some videos were unavailable.

Metrics. We report top-1 and top-5 accuracies for all experiments. The performance on Mini-kinetics was evaluated on the validation split. We also use the correlation coefficient for deeper analysis between different models, which may indicate relationships between the knowledge learned by existing models.
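For reference, these evaluation metrics can be sketched as follows; the probability, label, and per-class accuracy arrays are toy values, not results from the paper:

```python
import numpy as np

def topk_accuracy(probs, labels, k=1):
    """probs: (N, num_classes); labels: (N,). Fraction of samples whose
    true label is among the k highest-scoring classes."""
    topk = np.argsort(-probs, axis=1)[:, :k]
    return np.mean([labels[i] in topk[i] for i in range(len(labels))])

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.3, 0.4, 0.3]])
labels = np.array([0, 2, 0])
print(topk_accuracy(probs, labels, k=1))  # 2 of 3 correct
print(topk_accuracy(probs, labels, k=2))  # all 3 correct

# Correlation between per-class accuracies of two models: a high coefficient
# suggests the models learned similar knowledge.
acc_a = np.array([0.9, 0.5, 0.7])
acc_b = np.array([0.8, 0.6, 0.75])
print(np.corrcoef(acc_a, acc_b)[0, 1])
```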

4.2 Scratch training and fine-tuning

There are generally two ways to train a network: training from scratch or fine-tuning from a pre-trained model, and there is an obvious gap between these two routes. Since the release of the Kinetics datasets, several 3D convolution based models have achieved better performance using pre-trained models. Therefore, many recent works report fine-tuned results for small datasets such as HMDB51 and UCF101, and train from scratch for larger datasets such as Kinetics400 and its subset, Mini-kinetics.

Models can benefit from larger datasets, but training on larger datasets significantly increases computation time, so repeatedly enlarging datasets to improve performance is not always a solution. In this paper, in addition to the default settings discussed above, we also look into the situation in which no additional knowledge is available. Specifically, we want to explore the limits of 3D ConvNets on UCF101 and HMDB51 without any additional datasets.

4.3 Implementation details

Motion path. In this path, stacked residual frames are set as the network input and are used identically to traditional RGB frame clips. For 3D ConvNets in action recognition, there are several choices of input setting. 3D ConvNets started from [24], which used a clip of 16 consecutive frames with a 112 × 112 slice cropped in the spatial axes. To achieve state-of-the-art results, larger clips (more frames at 224 × 224 resolution) were used in many recent works. With such a large input size, improvements can be achieved but are limited, while longer training time and larger memory footprints are required. Therefore, for our motion path, following [24], frames are resized to 128 × 171 and 16 consecutive frames are stacked to form one clip. Then, random spatial cropping is conducted to generate input data of size 16 × 112 × 112. Random horizontal flipping is performed before the clip is fed into the network, and jittering along the temporal axis is applied during training. The backbone in most of our experiments is ResNet-18-3D. We tried two variants of ResNet-18-3D, which differ in whether they use convolution with stride at the beginning of some residual blocks or max-pooling at the end of the corresponding blocks instead. R(2+1)D, I3D, and S3D are also tested to verify the robustness of our proposal. The batch size is set to 32. When models are trained from scratch, the initial learning rate is set to 0.1. We trained models for 100 epochs on UCF101 and HMDB51, and for 200 epochs on Mini-kinetics. When fine-tuning on UCF101 and HMDB51, model weights pre-trained on Kinetics400 are taken from [9] and the network architecture remains the same as in [9]; the initial learning rate becomes 0.001, and 50 epochs are sufficient.
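A minimal sketch of this clip-sampling pipeline, assuming frames have already been decoded and resized to 128 × 171 (the helper name sample_clip is ours, not part of any library):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_clip(video, clip_len=16, crop=112):
    """video: (T, H, W, 3) frames already resized to 128 x 171.
    Temporal jittering, random spatial crop, random horizontal flip."""
    t0 = rng.integers(0, video.shape[0] - clip_len + 1)   # temporal jitter
    clip = video[t0:t0 + clip_len]
    y = rng.integers(0, clip.shape[1] - crop + 1)         # random spatial crop
    x = rng.integers(0, clip.shape[2] - crop + 1)
    clip = clip[:, y:y + crop, x:x + crop]
    if rng.random() < 0.5:                                # random horizontal flip
        clip = clip[:, :, ::-1]
    return clip.transpose(3, 0, 1, 2)                     # (3, L, H, W)

video = rng.random((40, 128, 171, 3))
print(sample_clip(video).shape)  # (3, 16, 112, 112)
```

For the residual-input variant, one would sample clip_len + 1 frames and subtract adjacent pairs before cropping.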

Appearance path. In contrast to TSN, our appearance path uses a simpler model that treats action recognition as image classification, because appearances change little across consecutive frames and the goal of this path is to capture appearance features of the background and objects. Frames are first resized to 256 × 256, then random spatial cropping and random horizontal flipping are applied in sequence to generate input data of size 224 × 224. This process is standard in image classification and enables the use of many pre-trained models. ResNet-18, ResNet-34, ResNet-50, and ResNeXt-101 were used to test the impact of different model depths. In addition, models were also trained from scratch to examine performance when no additional knowledge is provided.
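The appearance-path sampling can be sketched similarly, assuming frames resized to 256 × 256; the helper name is again hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_appearance_frame(video, crop=224):
    """video: (T, 256, 256, 3). One randomly selected frame, randomly
    cropped and flipped, as in standard image classification."""
    frame = video[rng.integers(0, video.shape[0])]   # one frame per epoch
    y = rng.integers(0, frame.shape[0] - crop + 1)   # random spatial crop
    x = rng.integers(0, frame.shape[1] - crop + 1)
    frame = frame[y:y + crop, x:x + crop]
    if rng.random() < 0.5:                           # random horizontal flip
        frame = frame[:, ::-1]
    return frame.transpose(2, 0, 1)                  # (3, 224, 224)

video = rng.random((40, 256, 256, 3))
print(sample_appearance_frame(video).shape)  # (3, 224, 224)
```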

Training recipes.

We noticed that few works pay attention to training recipes in video recognition. We used several recipes to train our models; specifically, we tried different activation functions and different learning-rate decay methods. Experiments for this part are mainly carried out on the UCF101 dataset. For larger datasets, we find that some settings still work, and we think they can be regarded as a ‘bag of tricks’ for video recognition tasks.

Testing and Results Accumulation. There are two ways of testing for action recognition using 3D ConvNets. One is to sample video clips uniformly from one video, so that a fixed number of clips is generated and fed to the model regardless of the video length; the predictions are averaged over all clips to produce the final result. The other uses non-overlapping video clips, so longer videos produce more clips, and the final result is again the average over clips. We performed a small test of these two schemes and found the difference negligible, because all clip results are averaged in both. Thus, we used the uniform method in our experiments, and our appearance path used a fixed number of frames sampled from all video frames to match the motion path.
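The uniform testing scheme can be sketched as follows; the clip count and the 101-class output are illustrative values:

```python
import numpy as np

def uniform_clip_starts(num_frames, clip_len=16, num_clips=10):
    """Uniformly spaced clip start indices, independent of video length."""
    return np.linspace(0, num_frames - clip_len, num_clips).astype(int)

starts = uniform_clip_starts(num_frames=300)
print(starts)  # 10 start indices spread evenly over the video

# Video-level prediction: average the per-clip probabilities.
clip_probs = np.random.rand(10, 101)
clip_probs /= clip_probs.sum(axis=1, keepdims=True)  # normalize each clip
video_prob = clip_probs.mean(axis=0)
print(video_prob.argmax())
```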

5 Results and discussion

In this section, results from the single paths are introduced first. The motion path is used to investigate the effectiveness of stacked residual frames; then, results from the appearance path are reported. Further analysis explores the connections between models, especially the RGB 2D model and the RGB / residual 3D models. Finally, we compare our proposed two-path network with various existing models.

5.1 Single path

Motion path. For the motion path, different training recipes were investigated first. In contrast to existing 3D convolution based methods [9, 26, 32, 24, 18, 1], which use ReLU as the default activation function, we found that replacing ReLU with ELU improved the top-1 accuracy by 2.6 points (from 51.9% to 54.5%) and 3.3 points (from 58.0% to 61.3%) for the two experimental versions (convolution with stride and max-pooling) of ResNet-18 (Table 1). Similar results can be found for Mini-kinetics. To get better performance, we use ResNet-18 with max-pooling layers as our default model version.
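The difference between the two activation functions lies only in how negative inputs are treated; a NumPy sketch:

```python
import numpy as np

def relu(x):
    # ReLU zeroes out all negative inputs.
    return np.maximum(x, 0.0)

def elu(x, alpha=1.0):
    # ELU keeps a smooth, non-zero output for negative inputs,
    # which is what we swap in for ReLU in our 3D ConvNets.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))  # negatives become 0
print(elu(x))   # negatives map to alpha * (e^x - 1)
```

With residual inputs, roughly half the values are negative, so an activation that does not discard the negative half-plane is a plausible reason for the gain, though the paper reports it only empirically.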

Compared to RGB clips, stacked residual frames maintain movements in both the spatial and temporal axes, which takes greater advantage of 3D convolution. Results are shown in Table 1, and the following discussion is based on this table. By simply replacing RGB clips with our proposed residual clips, the ResNet-18-3D result improves from 51.9% to 72.4%. To the best of our knowledge, this outperforms the current state-of-the-art results for models trained from scratch on UCF101. In addition to directly using residual frames, we also tried feature differences: in model ResNet-18 (fea_diff), we used RGB clips as input, calculated feature differences along the temporal axis after the first convolutional layer, and fed the results into the rest of the network. However, this produced lower accuracy than directly using residual frames as the network input. R(2+1)D, I3D, and S3D were also tested, and an improvement of more than 10% points is achieved for each when replacing the original RGB input with our residual frames.

To sum up, our residual input is robust across different model architectures. Because ResNet-18 is lightweight and performs well, we use it as the default backbone in our motion path.

Model | type | recipe | residual | top-1 | top-5
ResNet-18 baseline [9] | - | - | - | 42.4 | -
STC-ResNet-101 [4] | - | - | - | 47.9 | -
NAS [17] | - | - | - | 58.6 | -
ResNet-18 | stride | ReLU | ✗ | 51.9 | 76.3
ResNet-18 | pool | ReLU | ✗ | 58.0 | 82.6
ResNet-18 | stride | ELU | ✗ | 54.5 | 80.6
ResNet-18 | pool | ELU | ✗ | 61.3 | 84.2
ResNet-18 (fea_diff) | pool | ELU | ✗ | 64.7 | 86.6
ResNet-18 | stride | ELU | ✓ | 66.4 | 88.0
ResNet-18 | pool | ELU | ✓ | 72.4 | 89.7
R(2+1)D [26] | stride | - | ✗ | 51.8 | 79.2
R(2+1)D [26] | stride | - | ✓ | 66.7 | 88.3
I3D [1] | - | - | ✗ | 56.5 | 81.3
I3D [1] | - | - | ✓ | 66.6 | 87.0
S3D [32] | - | - | ✗ | 51.1 | 77.4
S3D [32] | - | - | ✓ | 64.8 | 86.9
Table 1: Different settings on UCF101 split 1; all models are trained from scratch. The original ResNet-18 baseline did not search for better training recipes. Our implementation results are higher than the baseline using the same network architecture. We also implement R(2+1)D, I3D, and S3D, and more than 10% points improvement is achieved for each by using our residual input.
Model | type | pre-train | UCF101 | HMDB51 | Mini-kinetics
ResNet-18 | RGB | - | 51.9 | 22.2 | 65.0
ResNet-18 | residual | - | 72.4 | 34.7 | 64.4
ResNet-18 | RGB | Kinetics400 | 84.4 | 56.4 | -
ResNet-18 | residual | Kinetics400 | 89.0 | 54.7 | -
Table 2: Top-1 results for the motion path on three benchmark datasets. Because training on Kinetics400 costs too much time, for fine-tuned models we used the pre-trained models provided in [9]; the only difference is that we use our residual input. The reported results are on UCF101 split 1 and HMDB51 split 1.
Figure 3: Visualization using Grad-CAM [20]. The number is the prediction probability for each sample. The residual-input model focused more on the moving entity and the moving area, while the RGB-input model included more background information.

We also tested performance on HMDB51 and Mini-kinetics; results are shown in Table 2. On HMDB51 split 1, accuracy improves from 22.2% to 34.7% when replacing the original input with residual frames. However, no improvement is observed on Mini-kinetics, because its labels are more related to objects than to actions, which is the main reason for introducing our appearance path. The residual-input model can also benefit from pre-trained models when fine-tuning, yielding 89.0% on UCF101 split 1. The results on HMDB51 are not as good as the RGB model because the intra-class variation of actions is larger on this dataset: for example, the category Dive includes both bungee jumping and a movement by a scorekeeper on the ground. Many movements are inconsistent within one category while the samples are few, which greatly increases the difficulty for residual inputs.

Figure 4: Visualization of model weights. Models are trained from scratch on Mini-kinetics. Filters in the RGB-input model are similar along the temporal axis. In the residual-input model, on the other hand, the weights indicate that the model is more sensitive to changes in the temporal dimension.
Figure 5: Accuracy difference between models with residual inputs and RGB inputs on Mini-kinetics. Best-5 and worst-5 categories are illustrated.

For deeper analysis, we further use Grad-CAM [20] for visualization. As shown in Fig. 3, the residual-input model pays attention to the action entity, while the RGB-input model focuses more on the background. The prediction probability for BreastStroke is low because the RGB model gives a higher probability to another swimming style, FrontCrawl.

The first 16 of the 64 convolutional filters in the first layers of the RGB-input model and the residual-input model are illustrated in Fig. 4. Both models are trained from scratch on Mini-kinetics. The filters in the RGB-input model are similar along the temporal axis; for this reason, even naive 2D models using ImageNet [3] pre-training can achieve good performance, while with our residual inputs the performance is a little lower. The filters in the residual-input model differ from each other along the temporal axis, indicating that this model is more sensitive to changes over time. The accuracy differences between our residual-input model and the RGB-input model are illustrated in Fig. 5, showing the best-5 and worst-5 classes. The positive peak belongs to the class playing bagpipes; in this category there are global movements caused by lens shake and other irrelevant movements by bystanders, and our residual-input model can handle this kind of movement. Movements in throwing discus and hula hooping are highly consistent. In contrast, movements in yoga vary widely, and appearance information plays a more important role.

Based on our analysis, the ability of 3D ConvNets may be limited by the ambiguity of action labels. Additionally, RGB 3D models pay more attention to appearance than to movements.

Appearance path. For the appearance path, four ResNet architectures were used, namely ResNet-18, ResNet-34, ResNet-50, and ResNeXt-101. Scratch training as well as fine-tuning from ImageNet pre-trained models were both tried. The results are shown in Table 3.

Model | UCF101 (scratch / ImNet) | HMDB51 (scratch / ImNet) | Mini-kinetics (scratch / ImNet)
ResNet-18 | 37.7 / 79.6 | 25.0 / 42.6 | 57.7 / 64.4
ResNet-34 | 40.1 / 81.5 | 24.8 / 43.1 | 59.4 / 68.9
ResNet-50 | 33.7 / 83.8 | 21.3 / 43.4 | 58.6 / 69.7
ResNeXt-101 | 34.4 / 85.2 | 23.3 / 45.6 | 59.7 / 70.5
Table 3: Top-1 accuracies for the appearance path on UCF101 split 1, HMDB51 split 1, and Mini-kinetics. Models are trained either from scratch or by fine-tuning from ImageNet pre-trained weights.

We can clearly see that the gap between these two training schemes is large for 2D ConvNets, which is consistent with previous work on image classification. However, pre-training itself takes considerable time when no pre-trained models are available. As for performance, deeper networks generally give higher scores.

Regarding Mini-kinetics, ImageNet pre-trained models were used directly and achieved high accuracies. Among these 2D ConvNets, the best top-1 accuracy was 70.5%, which is quite high. However, in this setting the action recognition task is treated as simple image classification and does not benefit from any temporal information.

The performance of ResNet-18-2D with pre-trained weights is 79.6%, close to the 72.4% obtained by training ResNet-18-3D from scratch (Table 1). This comparison may be unfair, however, because the 2D version initializes its parameters with image classification knowledge while the 3D version does not. Duplicating ImageNet pre-trained parameters in 3D ConvNets could be a good solution, but such models are still prone to relying mainly on appearance features.
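The weight-duplication idea mentioned above, inflating a 2D ImageNet kernel along the temporal axis as popularized by I3D [1], can be sketched as follows. The function name and the 1/t rescaling (which keeps the 3D response on a static clip equal to the 2D response) are our illustrative choices:

```python
import numpy as np

def inflate_2d_to_3d(w2d, t):
    """Inflate a 2D conv kernel of shape (out, in, kH, kW) into a 3D
    kernel of shape (out, in, t, kH, kW) by repeating it t times along
    the temporal axis and dividing by t, so a clip of t identical
    frames produces the same activation as the original 2D filter."""
    return np.repeat(w2d[:, :, None, :, :], t, axis=2) / t

# Example: inflate a 7x7 RGB stem kernel to a temporal extent of 3.
w2d = np.random.rand(64, 3, 7, 7)
w3d = inflate_2d_to_3d(w2d, 3)
print(w3d.shape)  # (64, 3, 3, 7, 7)
```

Note that after inflation the kernel is, by construction, identical across the temporal axis, which is exactly the near-temporally-constant pattern observed in Fig. 4 for RGB-input models.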

Analysis among models. The difference between 2D and 3D convolution is that 3D convolution has an additional dimension intended to process temporal information. For consecutive frames, especially in the trimmed videos provided by video recognition datasets, the difference between frames is limited. Therefore, 3D convolution may not process temporal information efficiently. Duplicating ImageNet pre-trained parameters as initialization does provide improvements, but the spatio-temporal convolution may then become lazy during fine-tuning, because even for models trained from scratch, the weights tend to be similar along the temporal axis (Fig. 4).

Here we introduce the correlation coefficient to measure the relationship between different models. Both 2D and 3D models were tested; for 2D ConvNets, we also used an optical flow stream as a comparative model. Correlation coefficients between the per-category accuracies of pairs of models are reported in Table 4. The backbone networks are ResNeXt-101-2D and ResNet-18-3D, and all models were fine-tuned to ensure classification performance. From the table, we can see that the correlation between the RGB 2D and RGB 3D models is high, indicating that these two approaches may make judgements in a similar way, while the optical flow stream differs significantly. Our residual-input model correlates highly with the RGB 3D model because they share the same network architecture. Its correlation with the RGB 2D model is lower, however, because residual frames make the network rely more on motion than on appearance for classification.

Model Model Correlation
Input Type Input Type
RGB 2D RGB 3D 0.839
RGB 2D Residual 3D 0.663
RGB 2D Flow 2D 0.505
Flow 2D RGB 3D 0.569
Flow 2D Residual 3D 0.534
RGB 3D Residual 3D 0.791
Table 4: Correlation coefficient indexes on UCF101 split 1. Type means the type of convolution kernels used in the network.
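The index in Table 4 is the ordinary Pearson correlation coefficient computed over the per-category accuracy vectors of two models. A self-contained sketch, with toy accuracy values that are illustrative rather than taken from the paper:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists,
    e.g. per-category accuracies of two models on the same test split."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-class accuracies of two models over four classes.
acc_rgb2d = [0.9, 0.8, 0.7, 0.6]
acc_rgb3d = [0.85, 0.8, 0.75, 0.65]
r = pearson(acc_rgb2d, acc_rgb3d)
print(round(r, 3))
```

A value near 1 means the two models succeed and fail on the same categories, which is how Table 4 argues that RGB 2D and RGB 3D models behave similarly while optical flow behaves differently.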

5.2 Two-path network

By combining the motion path with the appearance path, both appearance and motion can be used for prediction. Because we have several models, we tried different pairings: for example, on the UCF101 dataset we combined pairs among the 2D ConvNet RGB model, the 2D ConvNet optical flow model, the 3D ConvNet RGB model, and the 3D ConvNet residual model. The results are listed in Table 5. In our implementation, the optical flow path used a ResNeXt-101 backbone, the same as our appearance path. However, combining optical flow with the other RGB models hurt the accuracies. The identical values in this table are the result of rounding: the accuracies happen to agree at the precision reported here.
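One simple way to combine two paths is to average their softmax scores and take the argmax. The paper does not spell out its fusion rule in this section, so the equal-weight average below is an assumption for illustration:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def two_path_predict(logits_app, logits_motion):
    """Average the softmax scores of the appearance path (RGB input)
    and the motion path (residual input), then take the argmax."""
    scores = 0.5 * (softmax(logits_app) + softmax(logits_motion))
    return scores.argmax(axis=-1)

# Toy logits for a batch of 2 clips over 3 classes.
app = np.array([[2.0, 1.0, 0.1], [0.2, 0.1, 3.0]])
mot = np.array([[1.5, 2.0, 0.3], [0.1, 0.0, 2.5]])
print(two_path_predict(app, mot))  # [0 2]
```

Averaging in score space (rather than picking one path's argmax) lets a confident path override an uncertain one, as in the first toy clip above where the appearance path's strong vote for class 0 outweighs the motion path's weaker preference for class 1.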

Model Model top-1 top-5
Input Type Input Type
RGB 3D Optical Flow 2D 75.7 92.1
RGB 3D Residual 3D 87.4 97.5
Residual 3D Optical flow 2D 75.7 92.1
RGB 2D Optical Flow 2D 75.7 92.1
RGB 2D RGB 3D 86.6 97.1
RGB 2D Residual 3D 90.3 98.5
Table 5: Results from different combinations of models on UCF101 split 1. Our combination yielded the best performance.
Method Optical flow UCF101 HMDB51
top-1 top-5 top-1 top-5
Two-stream [21] 86.9 - 58.0 -
Two-stream (+SVM) [21] 88.0 - 59.0 -
I3D [1] 98.0 - 80.7 -
TSN [28] 85.1 - 51.0 -
I3D-RGB [1] 84.5 - 49.8 -
TBN [15] 89.6 - 62.2 -
Motion path 87.0 97.9 55.4 85.4
Our two-path 90.6 98.6 55.4 86.6
Table 6: Two-path results on UCF101 and HMDB51. Accuracies are averaged over the 3 splits. The state-of-the-art method I3D uses larger input clips than our motion path and has substantially more network parameters. Our two-path network even outperforms the basic two-stream model, which requires optical flow features.
Method Optical flow top-1 top-5
TBN C2D [15] 69.0 89.8
TBN C3D [15] 67.2 88.3
MARS [2] 72.3 -
MARS + RGB [2] 72.8 -
MARS + RGB + Flow  [2] 73.5 -
Motion path 64.4 86.4
Our two-path 73.9 91.4
Table 7: Results on Mini-kinetics. Our two-path network outperforms MARS even when MARS is combined with additional RGB and optical flow streams. The depth of our motion path is 18, while that of MARS is 101.

Here, we do not focus on developing a new network architecture; therefore, we only compare our method with closely related methods, as shown in Table 6. Our single motion path outperforms TSN [28] and I3D-RGB [1], which use only RGB input. Without any additional computation for optical flow, and using only ResNet-18, we obtain better performance than the original two-stream model [21], which uses optical flow. On the other hand, our model does not beat the state of the art such as [1], but that comparison is out of the scope of this paper because many settings, including the input size and network architecture, are totally different.

For Mini-kinetics, results are shown in Table 7. We mainly compare our method with TBN [15] and MARS [2], which achieve good performance without using optical flow at inference time. TBN uses temporal bilinear modeling to process temporal information, which is insufficient for extracting motion features compared with ours. The backbone network for MARS is ResNeXt-101-3D. To apply its distillation method, a network must first be trained on optical flow inputs, and then another network is built to learn features from the optical flow stream; this process is complex and much more expensive than our proposed two-path method. The backbone network for our motion path is ResNet-18-3D, which is shallower than that used in MARS, so there is considerable room to improve our proposed solution with deeper networks and other feature fusion methods.

6 Conclusion

In this paper, we focused on extracting motion features without optical flow. 3D ConvNets are believed to be capable of capturing motion features when RGB frames are used as input, but we demonstrated that this is not always true. We improved the use of 3D convolution by feeding stacked residual frames to the network, at negligibly small computational overhead. With residual frames, the results of 3D ConvNets improved significantly when trained from scratch on the UCF101 and HMDB51 datasets. Beyond the residual-frame input, we proposed a two-path network in which the motion path extracts motion features while the appearance path uses RGB frames to capture the corresponding appearance. By combining the results of the two paths, we achieved the state of the art on the Mini-kinetics dataset and better or comparable results on UCF101 and HMDB51 compared with corresponding two-stream methods. Our results and analysis imply that residual frames are a fast but effective way for a network to capture motion features, and a good choice for avoiding the heavy computation of optical flow. In future work, we will focus on improving performance by investigating better combination methods for our two-path network.


  • [1] J. Carreira and A. Zisserman (2017) Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6299–6308. Cited by: §1, §2.1, §2.3, §5.1, §5.2, Table 1, Table 6.
  • [2] N. Crasto, P. Weinzaepfel, K. Alahari, and C. Schmid (2019) MARS: motion-augmented rgb stream for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7882–7891. Cited by: §1, §2.2, §2.3, §5.2, Table 7.
  • [3] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (CVPR), pp. 248–255. Cited by: §5.1.
  • [4] A. Diba, M. Fayyaz, V. Sharma, M. Mahdi Arzani, R. Yousefzadeh, J. Gall, and L. Van Gool (2018) Spatio-temporal channel correlation networks for action classification. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 284–299. Cited by: Table 1.
  • [5] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell (2015) Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 2625–2634. Cited by: §2.2.
  • [6] C. Feichtenhofer, H. Fan, J. Malik, and K. He (2018) Slowfast networks for video recognition. arXiv preprint arXiv:1812.03982. Cited by: §1, §2.1.
  • [7] C. Feichtenhofer, A. Pinz, and R. Wildes (2016) Spatiotemporal residual networks for video action recognition. In Advances in neural information processing systems (NeurIPS), pp. 3468–3476. Cited by: §1, §2.1, §2.3.
  • [8] C. Feichtenhofer, A. Pinz, and A. Zisserman (2016) Convolutional two-stream network fusion for video action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 1933–1941. Cited by: §1, §2.1, §2.3.
  • [9] K. Hara, H. Kataoka, and Y. Satoh (2018) Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18–22. Cited by: §1, §2.1, §3.2, §4.3, §5.1, Table 1, Table 2.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 770–778. Cited by: §2.1.
  • [11] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §2.2.
  • [12] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei (2014) Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), pp. 1725–1732. Cited by: §1, §2.1, §2.2.
  • [13] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. (2017) The kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Cited by: §1, §2.1, §4.1.
  • [14] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre (2011) HMDB: a large video database for human motion recognition. In International Conference on Computer Vision (ICCV), pp. 2556–2563. Cited by: §1, §2.1, §4.1.
  • [15] Y. Li, S. Song, Y. Li, and J. Liu (2019) Temporal bilinear networks for video action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vol. 33, pp. 8674–8681. Cited by: §1, §2.1, §2.2, §5.2, Table 6, Table 7.
  • [16] J. Lin, C. Gan, and S. Han (2018) Temporal shift module for efficient video understanding. arXiv preprint arXiv:1811.08383. Cited by: §1, §2.1, §2.2.
  • [17] W. Peng, X. Hong, and G. Zhao (2019) Video action recognition via neural architecture searching. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 11–15. Cited by: §2.1, Table 1.
  • [18] Z. Qiu, T. Yao, and T. Mei (2017) Learning spatio-temporal representation with pseudo-3d residual networks. In proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 5533–5541. Cited by: §1, §2.1, §5.1.
  • [19] P. Scovanner, S. Ali, and M. Shah (2007) A 3-dimensional sift descriptor and its application to action recognition. In Proceedings of the 15th ACM international conference on Multimedia (ACMMM), pp. 357–360. Cited by: §2.3.
  • [20] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 618–626. Cited by: Figure 3, §5.1.
  • [21] K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. In Advances in neural information processing systems (NeurIPS), pp. 568–576. Cited by: §1, §2.1, §2.3, §5.2, Table 6.
  • [22] K. Soomro, A. R. Zamir, and M. Shah (2012) UCF101: a dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402. Cited by: §1, §2.1, §4.1.
  • [23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 1–9. Cited by: §2.1.
  • [24] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri (2015) Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision (ICCV), pp. 4489–4497. Cited by: §1, §2.1, §4.3, §5.1.
  • [25] D. Tran, H. Wang, L. Torresani, and M. Feiszli (2019) Video classification with channel-separated convolutional networks. arXiv preprint arXiv:1904.02811. Cited by: §1.
  • [26] D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri (2018) A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6450–6459. Cited by: §1, §2.1, §2.3, §5.1, Table 1.
  • [27] H. Wang and C. Schmid (2013) Action recognition with improved trajectories. In Proceedings of the IEEE international conference on computer vision (ICCV), pp. 3551–3558. Cited by: §2.3.
  • [28] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool (2016) Temporal segment networks: towards good practices for deep action recognition. In European conference on computer vision (ECCV), pp. 20–36. Cited by: §1, §2.1, §2.2, §5.2, Table 6.
  • [29] X. Wang, R. Girshick, A. Gupta, and K. He (2018) Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7794–7803. Cited by: §1.
  • [30] C. Wu, M. Zaheer, H. Hu, R. Manmatha, A. J. Smola, and P. Krähenbühl (2018) Compressed video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6026–6035. Cited by: §3.1.
  • [31] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He (2017) Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 1492–1500. Cited by: §2.1.
  • [32] S. Xie, C. Sun, J. Huang, Z. Tu, and K. Murphy (2018) Rethinking spatiotemporal feature learning: speed-accuracy trade-offs in video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 305–321. Cited by: §1, §1, §2.1, §4.1, §5.1, Table 1.
  • [33] B. Zhou, A. Andonian, A. Oliva, and A. Torralba (2018) Temporal relational reasoning in videos. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 803–818. Cited by: §2.2.