End-to-end learning methods have achieved great improvements over previous hand-crafted features [15, 17, 34] and have become mainstream in the video recognition area. The design of video recognition models benefits greatly from prior art in still image recognition. On one hand, many works utilize successful 2D CNNs, such as the Inception and ResNet architectures, to extract spatial features of individual frames and then perform temporal aggregation using pooling strategies [13, 7, 36], feature encoding functions [6, 24, 45], recurrent networks [3, 41, 19], and even optical flow-guided methods [4, 30, 27]. These approaches incorporate a learnable module into 2D CNNs that captures temporal dependency and motion information. On the other hand, some works directly inflate 2D CNNs into 3D CNNs by replacing 2D convolution filters with 3D ones [1, 32], and then add non-local blocks to grasp long-range temporal dependency, or separate a 3D kernel into a 2D spatial kernel and a 1D temporal kernel [28, 40, 33] to reduce computational costs. The expanded temporal filters in 3D CNNs can thus conveniently model temporal information from videos. Although these improved 3D CNNs are effective for action recognition, they usually introduce additional computational cost, which may limit their usage in real-world applications requiring low latency.
In conventional 2D video models, each frame is independently fed into a 2D CNN to extract a feature, and then a temporal pooling function aggregates all frame features to the video level. We consider the possibility of performing temporal modeling by reallocating the inputs of 2D convolutions, instead of introducing extra temporal integration methods. To this end, we equip 2D convolutions with a spatio-temporal receptive field by employing the recent group [16, 40] and shuffle operations. Specifically, we propose video shuffle, an efficient and generic plug-in component for modeling temporal dependency in 2D CNNs at zero cost. As shown in Figure 1, video shuffle first divides the channels of each frame into several groups of equal size, and then aggregates all grouped features with the same group index into a new frame feature. The reallocated frame feature contains spatial information from all frames, so the following 2D convolutions can conveniently learn both spatial and temporal representations. Video shuffle is highly efficient since it introduces no additional parameters and no FLOPs (additions or multiplications). Its computation time comes only from data movement in memory, which hardly affects inference latency.
Video shuffle can be easily incorporated into 2D video models. In this work, we adopt temporal segment networks (TSN) as our basic model and take ResNet-50 and ResNet-101 as the backbones. In implementation, we plug video shuffle and its inverse operation, which restores the original spatial representation of each frame, into ResNet, before and after the 2D convolutions inside residual blocks respectively. To demonstrate the effectiveness of the proposed VSN, we conduct experiments on several popular video action recognition datasets, including the large-scale Kinetics and Moments in Time as well as the temporal-sensitive Something-Something. In experiments, VSN outperforms its counterpart on all datasets at the cost of zero extra parameters and zero extra FLOPs. Moreover, VSN surpasses the baseline by a large margin and achieves state-of-the-art performance on the challenging Something-Something-V1 and Something-Something-V2 datasets.
2 Related Work
2.1 Video Recognition Models
Conventional 2D video models learn video representations by using 2D CNNs as spatial feature extractors for individual frames and then performing temporal aggregation over frame features. Early works made use of 2D CNNs to extract features from individual frames and then integrated them into a fixed-size video representation using various fusion methods. Many works focused on designing temporal aggregation methods to improve recognition accuracy. Pooling approaches [24, 7, 36], feature encoding functions [6, 45] and recurrent neural networks [3, 41, 19] were usually performed on high-level features, while optical flow-guided methods computed motion information on low- and middle-level features [4, 30]. The two-stream framework is another widely-used approach to capture motion information: it fuses deep features extracted from optical flow with features computed from RGB inputs.
On the other hand, a video can be viewed as a cube of stacked images; that is to say, a 3D convolution can process a video directly. Previous works demonstrate that 3D CNNs can straightforwardly learn spatio-temporal features. In order to benefit from successful 2D CNNs and ImageNet pretraining, Carreira and Zisserman introduced the Inflated 3D ConvNets (I3D) based on the Inception architecture [31, 12] and showed its superior performance on a large human action recognition benchmark. Meanwhile, several 3D variants have been proposed [28, 40, 33, 9]. Qiu et al. introduce a Pseudo-3D ResNet architecture. Tran et al. present the R(2+1)D model based on ResNet. Xie et al. investigate where “deflating” 3D convolutions is more suitable and then present the separable-3D model built upon I3D. These mixed 2D and 3D networks are constructed by replacing a full 3D filter with a 2D spatial filter followed by a 1D temporal filter. Additionally, the non-local neural network and its improved version, as well as trajectory convolution [35], have also been introduced.
In parallel, some works focused on efficient model design for video understanding. The most related approaches are ECO and TSM. ECO employed a 3D-net stacked on 2D feature extractors to model temporal relationships, and further proposed an online video understanding algorithm for fast inference. Although ECO achieves a good runtime-accuracy trade-off, it still increases the computational cost compared to 2D CNNs. TSM introduced a zero-cost temporal shift module which shifts part of the channels along the temporal dimension by one frame to fuse temporal information. TSM not only has fewer FLOPs than the ECO family but also achieves better recognition performance, especially on temporal-sensitive datasets.
2.2 Group and Shuffle Operation
The idea of splitting channels into several groups was first presented in AlexNet for distributing the model over two GPUs to handle memory limits, and has since been widely used for designing small and efficient network architectures. ResNeXt further developed group convolution to reduce the number of parameters and the computational complexity: input channels are divided into several groups, a regular convolution is performed on each group, and all group results are concatenated as the outputs. Experiments demonstrated that group convolution can improve performance on image recognition tasks. However, when stacking multiple group convolution layers, the outputs of a certain channel are derived from only a small fraction of the input channels. To address this weakness, ShuffleNet utilized a channel shuffle operation, by which the resulting channels of each group are collected from all input groups, enabling information interaction across groups. It not only greatly reduced computational costs but also maintained accuracy. Besides, Zhang et al. proposed interleaved group convolutions, which consist of a primary group convolution for handling spatial correlation and a secondary group convolution for blending the channels, and showed their effectiveness. A special case of group convolution is channel-wise convolution, where the number of groups equals the number of channels; this is also very similar to separable convolution [2, 11]. The basic idea of group and shuffle operations is adopted in this work.
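The group-then-shuffle idea can be sketched in a few lines of PyTorch. The function name and shapes below are illustrative; the logic follows the ShuffleNet formulation of channel shuffle:

```python
import torch

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle for an (N, C, H, W) tensor.

    Channels are viewed as (groups, C // groups) and the two axes are
    transposed, so every output group gathers one channel slice from
    each input group, enabling cross-group information flow.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channels must be divisible by groups"
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)
```

Because the operation is a pure permutation of channels, it costs only a memory copy and adds no parameters.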
3 Video Shuffle Networks
In this section, we first introduce the design criteria of video shuffle, and show how to incorporate it into the building block of ResNet. Then, we present the overall network architecture of VSN, followed by implementation details.
3.1 The Design of Video Shuffle
The design motivation of video shuffle lies in the fact that although recent 3D CNNs have improved recognition performance, they can hardly be deployed in real-world video recognition systems due to their heavy computational cost. Conventional 2D CNNs enjoy low latency, but they learn spatial features from isolated frames without temporal modeling, leading to an accuracy gap against state-of-the-art models. In other words, there is an accuracy-speed trade-off in video recognition models. To address it, we propose to equip 2D CNNs with a temporal receptive field by reallocating the inputs of 2D convolutions. In order to enable a 2D convolution to learn both spatial and temporal features without modifying its structure, there are two prerequisites: 1) the input should contain spatial representations from all frames, and 2) the input size should not be changed. The group operation divides input channels into several groups so that each contains a partial representation, and the shuffle operation further facilitates information exchange across different groups. These properties make them perfect choices for this work.
Figure 2 shows a graphical representation of the proposed video shuffle. A video with T frame features is shown as an example, and each of them is a C × H × W tensor extracted by 2D convolutions, where C indicates the channel size and H, W are the spatial dimensions. For each frame feature, we first divide the channels into several groups of equal size. In this work, the number of groups is heuristically set to the number of frames T; consequently, the channel size of each grouped feature equals g = C/T. In this way, each grouped feature, with shape g × H × W, contains a part of the spatial feature. We aggregate all grouped features with the same group index into a new frame feature by a temporal shuffle operation, which allows spatial features to be exchanged across different frames. As illustrated in Figure 1 (right), the reallocated feature at the first frame is a stack of all first grouped features in Figure 1 (left, before video shuffle is applied).
Denoting the feature at the t-th frame as X_t ∈ R^{C×H×W}, video shuffle transforms the original X_t into a new feature X'_t by the following equation:

X'_t = Concat( X_1[(t−1)g : tg], X_2[(t−1)g : tg], …, X_T[(t−1)g : tg] ),

where g = C/T is the channel size of each group in this setting, and the symbol [· : ·] denotes the tensor slicing operation along the channel dimension. The index t ∈ {1, …, T} plays the role of both frame and group index. For instance, X_1[0 : g] indicates the first grouped feature at the first frame (masked green in Figure 1 left). As a result, the new frame feature X'_t contains the spatial information of all sequential frames and further serves as the input of the following 2D convolutions. The proposed video shuffle has three advantages: first, it allows spatial features to interact across different frames; second, the following 2D convolutions can handily perform both spatial and temporal modeling; and third, video shuffle is easy to implement via data movement in memory, introducing no additional parameters or theoretical FLOPs at all.
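Under the setting above, where the number of groups equals the number of frames, the whole operation can be sketched as a single tensor transpose in PyTorch. The function name and shapes are illustrative:

```python
import torch

def video_shuffle(x):
    """Video shuffle for T frame features stacked as (T, C, H, W).

    The channels of each frame are split into T groups of size g = C // T;
    the new feature for frame t stacks group t from every frame. With
    groups == frames this reduces to a (frame, group) transpose, so
    applying the function twice restores the input (it is its own inverse).
    """
    t, c, h, w = x.shape
    assert c % t == 0, "channels must be divisible by the number of frames"
    x = x.view(t, t, c // t, h, w)       # (frame, group, g, H, W)
    x = x.transpose(0, 1).contiguous()   # swap frame and group indices
    return x.view(t, c, h, w)
```

Because the shuffle is a pure permutation realized by data movement, it introduces no parameters and no FLOPs.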
3.2 Video Shuffle in Residual Block
Since we have obtained a parameter-free video shuffle that can model temporal information in cooperation with 2D convolutions, we consider inserting it into conventional 2D CNNs. In this work, we mainly study the ResNet architectures. Accordingly, we plug video shuffle and its inverse operation, which restores the original spatial representation of each frame, into the primary building block of ResNet. We investigated two positions to insert video shuffle units and obtain two variants: the headtail (residual) block and the compact (residual) block.
We first consider placing video shuffle at the head and tail of a residual block, namely the headtail block. Before any convolutional layer, the inputs go through the first video shuffle directly, and each new frame is consequently composed of partial spatial features from all sampled frames. The following convolutions in the bottleneck block thus learn spatio-temporal features end-to-end. After them, to guide subsequent convolutions to focus on spatial reasoning, we restore the original spatial feature of each frame by inserting an inverse video shuffle. As for the compact block, video shuffle units are similarly configured in a “paired” setting but compactly placed before and after the conv2d_3x3 layer (shown in Figure 3).
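A minimal sketch of the compact block, assuming the standard torchvision bottleneck layout (class and layer names are illustrative, and batch normalization is omitted for brevity):

```python
import torch
import torch.nn as nn

class CompactShuffleBottleneck(nn.Module):
    """Sketch of the compact residual block: video shuffle is applied
    immediately before the 3x3 convolution and its inverse right after,
    so only that convolution sees the temporally mixed features.
    Inputs have shape (T, C, H, W), one row per sampled frame.
    """
    def __init__(self, in_planes, planes, num_frames):
        super().__init__()
        assert planes % num_frames == 0
        self.t = num_frames
        self.conv1 = nn.Conv2d(in_planes, planes, 1, bias=False)
        self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
        self.conv3 = nn.Conv2d(planes, in_planes, 1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def shuffle(self, x):
        # groups == frames, so the shuffle is a (frame, group) transpose
        # and is its own inverse
        t, c, h, w = x.shape
        return x.view(t, t, c // t, h, w).transpose(0, 1).reshape(t, c, h, w)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.shuffle(out)           # mix channel groups across frames
        out = self.relu(self.conv2(out))  # learns spatio-temporal features
        out = self.shuffle(out)           # inverse: restore per-frame layout
        out = self.conv3(out)
        return self.relu(out + x)
```

In the headtail variant, by contrast, the two shuffle calls would wrap the whole block rather than the 3x3 convolution alone.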
Given that a bottleneck compresses the channel dimension, where spatial and temporal information blend, we argue that a weakness of the headtail residual block could be information loss along with the dimensional reduction. We empirically verify this assumption in the ablation studies: results demonstrate that the compact block is stronger than the headtail block in temporal modeling. Unless specified otherwise, we always use the compact block in the following experiments.
3.3 Network Architectures
Table 1: VSN architectures with ResNet-50 and ResNet-101 backbones, listing layer names, block configurations, strides, and output sizes.
Table 1 presents VSN with ResNet-50 and ResNet-101 backbones for video recognition. In this work, we do not add or modify any convolution or pooling layer. Instead of incorporating video shuffle into all building blocks, we heuristically insert it into the last building block of each ResNet layer. For example, in a three-block layer of ResNet-50, we replace the third block with the compact video shuffle block and retain the first two. That is, there are four blocks in total equipped with video shuffle in the overall ResNet architecture; more related ablation studies are presented in the experiments.
TSM inserts a temporal shift module into the residual block and shows its effectiveness in video recognition. We argue that TSM only allows temporal information to be interchanged between neighboring frames, so it models temporal information locally and fails to take advantage of long-range, non-local details. Compared with local-field TSM, video shuffle broadens its horizon to all frames and models temporal dependency in a global manner. As TSM is orthogonal to video shuffle, we further combine them. In terms of implementation, we use their residual temporal shift module with zero padding to replace the building blocks that have not been equipped with video shuffle, e.g., the first two blocks of a three-block layer in ResNet-50. Experiments show the superiority of such a combination in temporal modeling. In practice, we replace the last block of each ResNet layer with our compact block and add the temporal shift module to the other residual blocks. We denote our video model as VSN-ResNet-L (VSN-RL for short) if the backbone is ResNet-L, where L indicates the number of layers, e.g., VSN-ResNet-50 (VSN-R50) and VSN-ResNet-101 (VSN-R101).
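For reference, the temporal shift used in the non-video-shuffle blocks can be sketched as follows. The shift fraction `fold_div` is illustrative; the idea, following TSM, is that a fraction of the channels moves forward by one frame, another fraction moves backward, and vacated positions are zero padded:

```python
import torch

def temporal_shift(x, fold_div=8):
    """TSM-style temporal shift (sketch) for an (T, C, H, W) tensor.

    The first C // fold_div channels are shifted forward in time,
    the next C // fold_div backward, and the rest stay in place.
    Vacated frame positions are filled with zeros.
    """
    t, c, h, w = x.shape
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[1:, :fold] = x[:-1, :fold]                   # shift forward in time
    out[:-1, fold:2 * fold] = x[1:, fold:2 * fold]   # shift backward in time
    out[:, 2 * fold:] = x[:, 2 * fold:]              # untouched channels
    return out
```

Unlike video shuffle, which mixes all frames at once, this shift only exchanges information between adjacent frames per application.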
Table 2: Comparison with state-of-the-art methods on Something-Something-V1 and V2 (top-1 accuracy).

|Model|#Frame|#Params|FLOPs|Sth-V1 val|Sth-V1 test|Sth-V2 val|Sth-V2 test|
|Two-stream TRN|8+8|36.6M|-|42.0|40.7|55.5|56.2|
|NL-I3D + GCN|64|62.2M|605G|46.1|45.0|-|-|
|Two-stream TSM|16+16|48.6M|-|50.2|47.0|64.0|64.3|
3.4 Implementation Details
In this work, all models are implemented in PyTorch. The ResNet architectures and ImageNet pre-trained models are taken from the torchvision package.
We sample 8 frames from each entire video using sparse segment-based sampling. For data augmentation, our implementation follows common practice to alleviate overfitting: the images are first resized by their shorter side, and then corner cropping and scale jittering are applied. We also apply random left-right flipping consistently for all videos, except for horizontal-order-sensitive actions in Something-Something, e.g., Pushing something from left to right. Finally, the cropped images are resized to the network input resolution for training. We distribute a total of 64 videos across 8 TITAN Xp GPUs, so each GPU holds 8 videos in a mini-batch. We adopt SGD with momentum as the optimizer and set the initial learning rate to 0.01. We utilize both multi-step learning rate decay and the cosine learning rate schedule with warm-up, depending on the dataset. The momentum value, weight decay and dropout rate are set to 0.9, 5e-4 and 0.8 respectively. We freeze all batch normalization layers except the one following the first convolution layer.
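The sparse segment-based sampling mentioned above can be sketched as follows: the video is divided into equal-length segments and one frame index is drawn from each (function and parameter names are illustrative):

```python
import random

def sample_segment_indices(num_frames, num_segments=8, training=True):
    """TSN-style sparse sampling: split the video into equal segments
    and pick one frame per segment (a random offset when training,
    the segment center at test time)."""
    seg_len = num_frames / num_segments
    indices = []
    for s in range(num_segments):
        start = int(seg_len * s)
        end = max(start, int(seg_len * (s + 1)) - 1)
        if training:
            indices.append(random.randint(start, end))
        else:
            indices.append((start + end) // 2)
    return indices
```

This gives the model a view of the whole video with only 8 frames, instead of a dense clip from one location.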
In the inference phase, TSN takes the average prediction over 25×10 crops as the video prediction. I3D and S3D densely sample all frames and take center crops for evaluation. In this work, we adopt the same pre-processing as the non-local neural network, performing spatially fully-convolutional inference on videos whose shorter side is re-scaled to 256. For the temporal domain, we also sample 8 frames in total during evaluation.
4 Experiments
In this paper, extensive experiments are performed on four popular and challenging video recognition benchmarks. We first introduce these benchmarks and then show that the proposed VSN not only performs very well on Kinetics, but also achieves state-of-the-art performance on the Something-Something-V1, Something-Something-V2 and Moments in Time datasets.
4.1 Datasets
We conduct experiments on video datasets with great diversity: their sources range from YouTube to crowdsourced videos, their durations range from three seconds to tens of seconds, and their content covers daily human activities and actions as well as sports and events.
Kinetics is a large human action recognition dataset, which contains around 240k training videos and 20k validation videos, involving 400 human action classes.
Moments in Time (Moments) includes a collection of 1 million trimmed 3-second video clips, corresponding to 339 dynamic event categories.
Something-Something-V1 (Sth-V1)  is a temporal-sensitive dataset, containing 108,499 videos. The total 174 categories are basic actions with objects.
Something-Something-V2 (Sth-V2)  increases its number of videos to 220,847 and further improves the annotation quality and pixel resolution.
|Two-stream VSN-R101|ResNet-101|77.6|93.7|
4.2 Results on Something-Something
We first compare the performance of VSN with previous state-of-the-art methods in Table 2, on both the validation and test sets of Something-Something-V1. The top-1 accuracy as well as the computational cost statistics are reported.
Previous results are presented in the first group. The authors of TRN found that TSN fails to reason about temporal relations and thus proposed temporal relation networks (TRN) to learn temporal dependencies between video frames at multiple time scales. They show that TRN-multiscale improves TSN by 14.7% and that fusing optical flow gives another 7.6% improvement. ECO is an efficient video understanding model that stacks a 3D-net on 2D features; its best single model achieved an accuracy of 41.4%. Some works attempted to use pure 3D models. Both I3D and its improved version NL-I3D obtained good performance, but their computational costs (FLOPs) are too large for deployment. Furthermore, to explicitly model relationships between humans and objects, NL-I3D+GCN used an object detector to extract region proposals and composed these regions from different frames with a graph convolutional network. Although NL-I3D+GCN achieved a very competitive accuracy of 46.1%, the introduced computational cost is non-negligible. TrajectoryNet allows visual features to be aggregated along motion paths by a trajectory convolution, achieving a higher accuracy of 47.8%. The recent temporal shift module (TSM) achieved 43.4% when taking 8 RGB frames as input; increasing the number of frames to 16 gained another 1.4%. The previous state of the art was held by two-stream TSM, which fused a 16-frame RGB model with an optical flow stream.
Our results are presented in the last two groups. Taking 8 RGB frames as input, our VSN-R50 achieves 46.6% accuracy, outperforming TSN and TSM by 27.1% and 3.2% respectively. This demonstrates that video shuffle performs outstandingly for temporal modeling. When fused with an optical flow stream, two-stream VSN-R50 achieves 51.6% accuracy, which is 1.4% higher than two-stream TSM. Going deeper with the network architecture from ResNet-50 to ResNet-101 gives a notable 2.2% improvement. The best single model, VSN-R101, achieves an accuracy of 47.8%. Our ensemble model, which averages the predictions of VSN-R101 and VSN-R50, achieves a top-1 accuracy of 49.2%. Furthermore, two-stream VSN-R101 reaches a new state-of-the-art performance of 52.7% on Something-Something-V1. We also submit test predictions to the evaluation server and report test results. The trend of improvement is basically consistent with the validation set, and the best performance of 49.9% is held by our two-stream VSN-R101.
Sth-V2 is distinguished from the Sth-V1 dataset by more training examples, better annotation quality and higher video resolution. The comparison of VSN with the state-of-the-art methods is also listed in Table 2; the competing models have been introduced in the experiments above. As on Sth-V1, our models outperform both 2D and 3D counterparts. VSN-R50 and VSN-R101 achieve 60.6% and 61.6% respectively; as VSN goes deeper, the improvement climbs further. VSN-R101 outperforms TSM by 2.5%. In accordance with our expectation, the ensemble models show a large advantage over models fed with a single modality. Evaluated on the validation and test sets, our two-stream VSN-R101 establishes a new state of the art on both.
4.3 Results on Kinetics and Moments in Time
|Two-stream TSN|BN-Inception|25.32|50.10|
|Two-stream TRN|BN-Inception|28.27|53.87|
|Two-stream I3D|Inception-V1|29.51|56.06|

Kinetics and Moments.
Our VSN models are also evaluated on both Kinetics and Moments in Time, which feature huge scale and challenging tasks. Table 3 and Table 4 compare our VSN-R50 and VSN-R101 models against previous state-of-the-art models on Kinetics and Moments respectively. First, VSN outperforms TSN by a considerable margin, which verifies the effectiveness of the proposed video shuffle component. Second, compared with 2D attention-based models, such as Attention-Cluster and NL-C2D listed in Table 3, our VSN-R101 achieves very close performance, demonstrating that video shuffle can indeed act as a non-local feature integrator. Third, VSN-R101 outperforms 3D variants such as I3D, R(2+1)D and S3D-G, and even compared with large 3D counterparts, our two-stream VSN-R101 is competitive. Although neither 2D nor 3D models perform well on the challenging Moments, VSN outperforms these counterparts; VSN-R101 beats all other models and stands as a new state of the art.
4.4 Generalize to Optical Flow
We also verify whether VSN can generalize to optical flow. For these experiments, we follow the standard setup and extract optical flow with the TV-L1 algorithm. All models are trained on Kinetics and Sth-V1, and we report the top-1 accuracy. As with RGB, we sample 8 segments when training on optical flow. In [29, 36], 5 or 10 consecutive optical flow frames are stacked to capture long-term temporal dependency in videos. We consider that VSN has the ability to learn long-range temporal dependency, and verify it by training models using only 1 optical flow frame per segment.
The results are shown in Table 5. The first group presents the performance of the TSN baseline with different backbones, while the second presents ours. Trained with flow as input on Kinetics and Sth-V1, our VSN-R50 outperforms its counterpart by 5.5% and 6.7% respectively. By increasing the number of input flow frames from 1 to 5, both the baseline and our model yield considerable gains. Furthermore, our VSN-R50 trained on 1 flow frame per segment achieves performance close to that of TSN ResNet-50 trained on 5, whose inputs are five times larger than ours. Going deeper with VSN, the improvement grows considerably: VSN-R101 outperforms the VSN-R50 models by around 3%-4% accuracy.
|ECO (Zolfaghari et al. 2018)|30.6 ms|45.6 vps|41.4|
4.5 Inference Latency
To measure latency and throughput, we perform inference on one NVIDIA Tesla P100 GPU and report the average over 500 batch inferences with a batch size of 16. Following prior work, we provide the speed of VSN-R50 and VSN-R101; vps indicates videos per second. Table 6 clearly shows that our VSN models are superior not only in accuracy but also in latency and throughput. Compared to I3D, VSN obtains a 13× speedup along with higher accuracy. It also illustrates that video shuffle hardly hurts runtime speed: VSN has almost the same latency and throughput as TSN, but brings a 20%+ accuracy improvement.
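Our measurement protocol can be sketched with a simple timing harness; the warm-up count and function names below are illustrative, not the exact benchmarking script:

```python
import time
import torch

@torch.no_grad()
def average_latency(model, batch, num_iters=500, warmup=10):
    """Return the mean per-batch inference latency in milliseconds
    over num_iters timed runs, after a few warm-up iterations."""
    model.eval()
    for _ in range(warmup):
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # flush pending GPU work before timing
    start = time.perf_counter()
    for _ in range(num_iters):
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / num_iters * 1000.0
```

Throughput in vps then follows as batch size divided by the per-batch latency in seconds.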
4.6 Ablation Studies
Which residual block is better for temporal modeling?
Table 7 shows the top-1 accuracies of the compact and headtail residual blocks on both Kinetics and Sth-V1. In this setting, we train our models with RGB inputs and replace the last block of each ResNet layer with a video shuffle block. Although both variants outperform the baseline, the compact residual block clearly outperforms its headtail counterpart, whether testing on the temporal-sensitive dataset or using backbone networks of different depths.
How many blocks are replaced with video shuffle block?
As discussed above, the last block of each ResNet layer is replaced by our compact residual block. We conduct experiments to verify whether our model can capture temporal information more effectively with more video shuffle blocks. Since a video shuffle block can be placed at an arbitrary ResNet layer, we average the scores achieved by models with the same number of video shuffle blocks. The results are reported in Table 8. Increasing the number clearly leads to better accuracy; giving each ResNet layer one video shuffle block (four in total) performs best.
Comparison of temporal shift and video shuffle.
Table 9 presents the respective temporal modeling abilities of temporal shift and video shuffle, as well as their combination. Compared to the temporal shift module, video shuffle is more competitive on both Kinetics and Sth-V1. Furthermore, combining the two components yields higher scores and shows superior temporal modeling capability.
5 Conclusion
In this paper, we introduced the video shuffle network, an efficient video recognition model that conveniently learns spatio-temporal representations by inserting video shuffle into 2D CNNs. VSN not only enables 2D convolutions to perform temporal modeling, but also hardly increases overall latency. In experiments, VSN outperforms its counterparts by a large margin and achieves state-of-the-art performance on Something-Something-V1, Something-Something-V2 and Moments in Time. We hope that our insights will inspire new efficient network designs targeting the computation-accuracy trade-off in video recognition.
References
- [1] (2017) Quo vadis, action recognition? A new model and the Kinetics dataset. pp. 4724–4733.
- [2] Xception: deep learning with depthwise separable convolutions. In CVPR, pp. 1251–1258.
- [3] (2015) Long-term recurrent convolutional networks for visual recognition and description. In CVPR, pp. 2625–2634.
- [4] (2018) End-to-end learning of motion representation for video understanding. In CVPR, pp. 6016–6025.
- [5] (2018) SlowFast networks for video recognition. arXiv preprint arXiv:1812.03982.
- [6] (2017) ActionVLAD: learning spatio-temporal aggregation for action classification. In CVPR, pp. 971–980.
- [7] (2017) Attentional pooling for action recognition. In NeurIPS, pp. 34–45.
- [8] (2017) The "something something" video database for learning and evaluating visual common sense. In ICCV.
- [9] (2018) Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet? In CVPR, pp. 6546–6555.
- [10] (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778.
- [11] MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
- [12] (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
- [13] (2014) Large-scale video classification with convolutional neural networks. In CVPR, pp. 1725–1732.
- [14] (2017) The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950.
- [15] (2008) A spatio-temporal descriptor based on 3D-gradients. In BMVC, pp. 275–1.
- [16] (2012) ImageNet classification with deep convolutional neural networks. In NeurIPS, pp. 1097–1105.
- [17] (2005) On space-time interest points. International Journal of Computer Vision 64(2-3), pp. 107–123.
- [18] (2019) Collaborative spatio-temporal feature learning for video action recognition. arXiv preprint arXiv:1903.01197.
- [19] (2017) Temporal modeling approaches for large-scale YouTube-8M video understanding. arXiv preprint arXiv:1707.04555.
- [20] (2018) Temporal shift module for efficient video understanding. arXiv preprint arXiv:1811.08383.
- [21] (2019) Learning video representations from correspondence proposals. In CVPR.
- [22] (2018) Attention clusters: purely attention based local feature integration for video classification. In CVPR.
- [23] SGDR: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
- [24] (2017) Learnable pooling with context gating for video classification. arXiv preprint arXiv:1706.06905.
- [25] (2019) Moments in Time dataset: one million videos for event understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- [26] TV-L1 optical flow estimation. Image Processing On Line 2013, pp. 137–150.
- [27] (2019) Representation flow for action recognition. In CVPR.
- [28] (2017) Learning spatio-temporal representation with pseudo-3D residual networks. In ICCV, pp. 5534–5542.
- [29] (2014) Two-stream convolutional networks for action recognition in videos. In NeurIPS, pp. 568–576.
- [30] (2018) Optical flow guided feature: a fast and robust motion representation for video action recognition. In CVPR, pp. 1390–1399.
- [31] (2015) Going deeper with convolutions. In CVPR, pp. 1–9.
- [32] (2015) Learning spatiotemporal features with 3D convolutional networks. In ICCV, pp. 4489–4497.
- [33] (2018) A closer look at spatiotemporal convolutions for action recognition. In CVPR, pp. 6450–6459.
- [34] (2013) Action recognition with improved trajectories. In ICCV, pp. 3551–3558.
- [35] (2018) Appearance-and-relation networks for video classification. In CVPR, pp. 1430–1439.
- [36] (2016) Temporal segment networks: towards good practices for deep action recognition. In ECCV, pp. 20–36.
- [37] (2018) Non-local neural networks. In CVPR, pp. 7794–7803.
- [38] (2018) Videos as space-time region graphs. In ECCV, pp. 399–417.
- [39] (2017) Aggregated residual transformations for deep neural networks. In CVPR, pp. 1492–1500.
- [40] (2017) Rethinking spatiotemporal feature learning for video understanding. arXiv preprint arXiv:1712.04851.
- [41] (2015) Beyond short snippets: deep networks for video classification. In CVPR, pp. 4694–4702.
- [42] (2017) Interleaved group convolutions. In ICCV, pp. 4373–4382.
- [43] (2018) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In CVPR, pp. 6848–6856.
- [44] (2018) Trajectory convolution for action recognition. In NeurIPS, pp. 2208–2219.
- [45] (2018) Temporal relational reasoning in videos. In ECCV, pp. 803–818.
- [46] (2018) ECO: efficient convolutional network for online video understanding. In ECCV, pp. 695–712.