1 Introduction
The latest advances in Computer Vision are closely tied to the development of Deep Learning (DL) methods
[42, 45, 23], which show great results on many tasks such as Image Classification [14], Segmentation [12], and Object Detection [16, 32]. The current tendency is to build increasingly sophisticated pipelines [22, 57, 7] that combine complex components; these solve the task well but require a massive amount of computation and power. On the other hand, since the times of AlexNet [30] and VGG [42], where a vanilla convolution was used as the basic building block, new lightweight primitives have been proposed [27, 10, 26, 56] that reduce the theoretical complexity while retaining or even improving the final accuracy. However, video-level tasks such as Human Action Recognition, the subject of this work, require modeling the temporal structure of the input by aggregating information from multiple frames in order to resolve action ambiguities (e.g. opening vs. closing a door). This inevitably incurs extra computational cost at inference time. Nevertheless, few studies [8] pay attention to the complexity of the algorithm while maximizing accuracy. Creating a solution that achieves high accuracy at a fast inference speed is therefore a relevant task, especially for the low-power devices used in edge computing.

Following this idea, we propose a lightweight architecture for Action Recognition (AR) that runs in real time on a regular CPU while performing on par with heavy methods such as 3D CNNs [46, 9, 47]. In support of this, we provide a comparison (see Fig. 1 and Section 4.3) of our model with state-of-the-art methods and verify its accuracy on modern benchmarks such as Kinetics [28], UCF101 [43], and HMDB51 [31].
In short, our contributions can be summarized as follows:

A new lightweight CNN architecture for real-time Action Recognition that achieves results comparable to state-of-the-art methods.

A comparison of modern approaches to Action Recognition.

A method for improving the accuracy of an existing model by incorporating information from an additional modality without a discernible increase in complexity.
2 Related Work
Currently, there are multiple methods that solve the AR problem with reasonable quality.
One example is the two-stream framework that fuses information from spatial and temporal nets [41, 18]. The spatial net takes an RGB frame as input and is an ordinary classification CNN working at the frame level, whereas the temporal net receives multiple stacked optical flow (OF) frames. Calculating OF with traditional algorithms, such as TV-L1 [55], requires extra resources, but there are several ways to avoid it. For example, OF can be extracted with an additional subnetwork [44], or the RGB difference [51] can be used as an alternative motion representation.
Another popular group of methods is built on 3D primitives such as 3D Convolution, 3D Batch Normalization, and 3D Pooling. They generalize the original operations by introducing an additional dimension that indexes the sequence of frames. One of the first architectures to leverage these primitives for AR is C3D [46]. Another well-known 3D CNN, which saturated the UCF101 benchmark [43], is I3D [9]. It benefits from pretraining on the large-scale ImageNet [14] dataset by inflating trained 2D filters into 3D. Although methods based on 3D convolutions improve accuracy, their computational cost may reach dozens of GFLOPs. Another substantial drawback is that, at some levels of the network, only a small number of weights inside the convolutional kernels have a significant impact on the output signal (in terms of their contribution to the absolute value of activations), making resource utilization ineffective. This problem was noted in [47, 53], where the authors proposed decomposition techniques and mixed architectures that combine 3D and 2D operations at different levels of the network.

Recurrent neural networks, LSTMs [25], and GRUs [11] have been regarded as the default starting point for many sequence modeling problems, such as machine translation and language modeling [20]. Significant results have been achieved on several challenging tasks by combining recurrent networks with attention mechanisms [40, 4]. Not surprisingly, several approaches to video classification that model sequences with recurrent connections or gated units have been proposed [54, 39, 15]. These models, while showing comparable results on many benchmarks [9], seem more suitable for online prediction and thus real-time applications, because a feature vector computed for a frame can be reused when predicting classification labels for multiple time windows containing this frame.
Several viable alternatives for sequence modeling have been proposed recently. These approaches, for example convolutional [5] or fully-attentional (e.g. Transformer [48]) networks, achieve better results on many tasks while addressing significant shortcomings of RNNs, such as strictly sequential computation and vanishing gradients.
We adopt the recently proposed Transformer network in our work as a more elaborate way of sequence modeling. This allows us to attain high accuracy while retaining performance sufficient for real-time applications.
3 Approach
In this section, we describe the proposed approach to the AR problem in detail and discuss some improvements that boost the accuracy of our baseline architecture without significantly increasing its complexity.
3.1 Architecture overview
The Video Transformer Network (see Fig. 2) consists of two parts: an encoder that processes each frame of the input sequence independently with a 2D CNN in order to obtain frame embeddings, and a decoder that integrates inter-frame temporal information in a fully-attentional feed-forward fashion, producing the classification label for the given clip. ResNet34 [23] is used as the baseline encoder architecture in most of our experiments. We reuse the parameters of all convolutional layers to maximize the benefit of transfer learning from image classification. Global average pooling is then applied to the resulting feature maps to get frame embeddings of size $d_{model}$ (equal to 512 in our case), which are then transformed by the decoder by repeatedly applying multi-head self-attention and convolutional blocks. In the multi-head self-attention block, the temporal interrelationship between frames is modeled by informing each frame representation about the representations of the other frames via the attention mechanism. It consists of several sequential operations. First, the frame representation vectors are mapped to multiple key, value, and query spaces using different learned affine transformations. Each triple of query $Q \in \mathbb{R}^{T \times d_k}$, key $K \in \mathbb{R}^{T \times d_k}$, and value $V \in \mathbb{R}^{T \times d_v}$ matrices (where $T$ is the sequence size and $d_k$, $d_v$ are the dimensions of the key and value spaces accordingly) is then transformed into the corresponding head output using scaled multiplicative attention as follows:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V \tag{1}$$

The head outputs are then concatenated and passed to the convolutional block, which consists of two convolutions with kernel size 1 (position-wise feed-forward) and a residual connection. The resulting frame representations are then refined by applying the same procedure multiple times. As we found experimentally, four stacks of such decoder blocks are sufficient for maximizing classification accuracy, and a further increase in the number of blocks did not lead to improvement. To produce action confidences for the current clip, a fully-connected layer is applied to all elements of the sequence. The resulting scores are averaged and normalized with the softmax function, producing the clip prediction.
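The decoder block described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the actual implementation: shapes follow the text (16 frames, 512-dimensional embeddings), while the number of heads (8) and head dimension (64) are illustrative assumptions, and learned parameters are replaced with random matrices.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X, Wq, Wk, Wv):
    """One head of scaled multiplicative attention over frame embeddings X (T x d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # (T x d_k), (T x d_k), (T x d_v)
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))   # (T x T) frame-to-frame attention, Eq. (1)
    return weights @ V                          # (T x d_v) attended representations

def decoder_block(X, heads, W_ff1, b_ff1, W_ff2, b_ff2):
    """Multi-head self-attention followed by a position-wise feed-forward
    (two kernel-size-1 convolutions) with a residual connection."""
    attended = np.concatenate([attention_head(X, *h) for h in heads], axis=-1)
    hidden = np.maximum(0.0, attended @ W_ff1 + b_ff1)  # ReLU
    return X + hidden @ W_ff2 + b_ff2                   # residual connection

rng = np.random.default_rng(0)
T, d_model, n_heads, d_k = 16, 512, 8, 64   # 16-frame clip; 8 heads of size 64 are assumptions
X = rng.standard_normal((T, d_model))
heads = [tuple(rng.standard_normal((d_model, d_k)) * 0.02 for _ in range(3))
         for _ in range(n_heads)]
W_ff1 = rng.standard_normal((n_heads * d_k, d_model)) * 0.02
W_ff2 = rng.standard_normal((d_model, d_model)) * 0.02
out = decoder_block(X, heads, W_ff1, np.zeros(d_model), W_ff2, np.zeros(d_model))
print(out.shape)  # (16, 512)
```

In the full model this block is stacked four times, and a shared fully-connected layer over the refined sequence produces the clip scores.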
3.2 Multimodal knowledge distillation
As discussed above, fusing the outputs of models that receive inputs of different modalities is a common way to improve the accuracy of an Action Recognition algorithm. In most cases, however, it leads to a substantial increase in computational complexity, for two reasons. First, it requires computing the new modality, which may itself be expensive, especially for optical flow, where commonly used algorithms perform costly iterative energy minimization. Second, since the same architecture is used to make a prediction from the second modality, the complexity of the method doubles. Both issues make multimodal solutions hard to apply in real-world applications.
On the other hand, using the RGB difference in place of optical flow results in almost the same performance [51], which our experiments confirm. At the same time, it requires far fewer computational resources, which makes this modality more suitable for use alongside still RGB data.
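The RGB-difference modality is just the subtraction of consecutive normalized frames, which is why it is so much cheaper than optical flow. A minimal sketch (frame count and resolution are illustrative; how the differences are stacked into the network input is described in Section 4):

```python
import numpy as np

def rgb_difference(frames):
    """Motion representation as the difference of consecutive normalized RGB frames.

    frames: (T, H, W, 3) float array.
    Returns (T-1, H, W, 3) frame-to-frame differences.
    """
    return frames[1:] - frames[:-1]

clip = np.random.rand(16, 112, 112, 3).astype(np.float32)  # toy 16-frame clip
diff = rgb_difference(clip)
print(diff.shape)  # (15, 112, 112, 3)
```

Compared with iterative TV-L1 optical flow, this is a single vectorized subtraction per frame pair.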
Knowledge distillation [24] is a procedure designed to aid the optimization of a student network by providing extra supervision from a larger model or an ensemble of models (the teacher). This technique has been applied successfully to reduce the complexity of a larger teacher network [36] and to compress the performance of an ensemble into a single student [6, 24]. We instead ask whether it is possible to transfer knowledge from multiple models working on different modalities (a two-stream teacher) to a single student. To investigate this, we ran several experiments in which knowledge from two ResNet34-based VTN models, working on RGB and RGB difference, is distilled into a single RGB model and into a model that receives stacked RGB and RGB-difference inputs. We also trained a model operating on the stacked input without extra supervision from knowledge distillation. The results are summarized in Table 1. The model working on stacked inputs outperforms the single-modality model when trained with knowledge distillation. We suppose the main reason is that the motion representations learned by the RGB-difference subnetwork of the two-stream teacher are not discovered by the RGB-only model, yet they contribute significantly to performance. Note that this technique does not match the performance of the two-stream model; however, it significantly reduces the complexity compared with the original two-stream solution.
Model  Video@1  GMAC (billions of multiply-accumulate operations) 
Fused RGB + RGBdiff (teacher)  78.2  7.51 
RGB  75.2  3.77 
RGB with KD  75.2  3.77 
Stacked RGB + RGBdiff  75.2  3.88 
Stacked RGB + RGBdiff with KD  76.0  3.88 
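The distillation setup above can be sketched as a soft-target loss in the spirit of [24]: the student is trained against the ground-truth labels and against the temperature-softened distribution of the two-stream teacher. The temperature and blending weight below are illustrative assumptions, not values from our experiments.

```python
import numpy as np

def softmax(z, t=1.0):
    e = np.exp((z - z.max(-1, keepdims=True)) / t)
    return e / e.sum(-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, t=4.0, alpha=0.5):
    """Blend of hard cross-entropy with ground truth and soft cross-entropy
    against the temperature-softened teacher distribution [24]."""
    p_teacher = softmax(teacher_logits, t)
    log_p_student_t = np.log(softmax(student_logits, t) + 1e-12)
    soft = -(p_teacher * log_p_student_t).sum(-1).mean()
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p_student[np.arange(len(labels)), labels].mean()
    # t**2 keeps soft-target gradients comparable in scale across temperatures.
    return alpha * hard + (1 - alpha) * (t ** 2) * soft

logits_s = np.random.randn(4, 400)   # student logits: 4 clips, 400 Kinetics classes
logits_t = np.random.randn(4, 400)   # fused two-stream teacher logits
labels = np.array([0, 1, 2, 3])
loss = distillation_loss(logits_s, logits_t, labels)
print(np.isfinite(loss))
```

In our setting the teacher logits come from the fused RGB + RGB-difference pair, while the student is either the RGB-only or the stacked-input model.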
4 Experiments
In this section we present a study of the proposed method. Kinetics400 is considered the primary benchmark; however, the smaller MiniKinetics subset introduced in [53] is also used for faster experimentation. We additionally evaluate our models on UCF101 and HMDB51 and measure the inference speed on CPU.
4.1 Implementation details
We train and validate our models on 16-frame input sequences formed by sampling every second frame from the original video, so the total temporal receptive field of the model equals 32 frames. We tried longer sequences by adding or skipping more frames, but this only increased the clip accuracy, not the video accuracy. To calculate video classification accuracy (Video@1), we extract all non-overlapping 32-frame segments and average the predictions over these segments.
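The Video@1 protocol above amounts to averaging per-segment scores before taking the arg max. A toy sketch (class count and scores are made up for illustration):

```python
import numpy as np

def video_prediction(segment_scores):
    """Video-level prediction: average class scores over all non-overlapping
    segments of a video, then take the arg max."""
    return int(np.mean(segment_scores, axis=0).argmax())

# Scores of three non-overlapping 32-frame segments of one video, 5 classes.
segments = np.array([
    [0.1, 0.7, 0.1, 0.05, 0.05],
    [0.2, 0.5, 0.2, 0.05, 0.05],
    [0.3, 0.4, 0.2, 0.05, 0.05],
])
print(video_prediction(segments))  # 1
```

Averaging before the arg max lets segments that are individually ambiguous still contribute to the correct video-level label.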
Frames are scaled so that the shorter side equals 256. During training we randomly crop with four different scales, as described in [50], and use the central crop at test time. The Adam optimizer [29] with a momentum of 0.9 and a weight decay of 0.0001 is used. Training starts from an initial learning rate, which is decayed by a factor of 10 when the validation loss reaches a plateau. Models are trained until the validation loss stops decreasing, which usually happens within 50 epochs.
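The plateau-based decay can be sketched as follows; the patience value and initial learning rate here are illustrative assumptions (the text does not state them), and in practice a framework scheduler such as PyTorch's ReduceLROnPlateau plays this role.

```python
class PlateauDecay:
    """Divide the learning rate by 10 when the validation loss has not
    improved for `patience` consecutive epochs."""

    def __init__(self, lr, factor=0.1, patience=3):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr

sched = PlateauDecay(lr=0.01)          # illustrative initial learning rate
for loss in [1.0, 0.9, 0.9, 0.9, 0.9]:  # loss plateaus after epoch 2
    lr = sched.step(loss)
print(lr)  # decayed once, to ~0.001
```

Training then stops once further epochs no longer reduce the validation loss.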
4.2 Model hyperparameters
We varied the structure of our decoder block to find one that maximizes performance on the MiniKinetics dataset, assuming that the same parameter settings would also work well on other datasets.
First of all, we evaluated how the number of stacked decoder blocks affects accuracy. We trained models with 1, 3, 4, 5, and 6 blocks and found that 4 blocks yield the maximal accuracy; a higher number of blocks does not further boost the metric. We also experimented with sharing parameters between blocks by applying one block recurrently, as suggested in [13], but this did not improve performance. We further varied the number of heads in multi-head self-attention and the dimensions of the query, key, and value spaces, and selected the best-performing configuration. We also tried adding a trainable linear transformation after the concatenation of heads and using layer normalization in different locations, but these changes did not affect the accuracy.
4.3 Comparison with other methods
Model  MiniKinetics  UCF101  GMAC  FPS  Parameters 
3D CNN  72.9  86.4  50.2  5  63.5M 
Fused RGB and OF  74.3  89.8  8.5 (optical flow calculation is not included in the complexity estimation)  32  42.8M 
Fused RGB and RGBdiff  73.7  88.3  9.1  30  42.9M 
Stacked LSTMs  72.0  86.6  3.7  55  27.6M 
VTN (ours)  75.2  89.0  3.8  56  29.0M 
To better understand the capabilities of the proposed approach, we compare it with the methods described in Section 2. For a fair comparison, we take the ResNet34 architecture and extend it to the 3D-network and two-stream cases as described below.
The first model we compare with is 3D ResNet34, described in [21]. It repeats the common ResNet architecture, but instead of 2D convolution and pooling layers it uses their 3D analogs. A global average pooling operation over three dimensions is applied at the end of the network to obtain a representation vector, which is fed to a fully-connected layer producing the CNN output. A vanilla ResNet34 pretrained on ImageNet is used to initialize its 3D analog, with convolutional kernels repeated over the temporal dimension, as proposed in [9].
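The inflation step from [9] is a simple tensor operation: the 2D filter is repeated along a new temporal axis and rescaled so that the inflated network reproduces the 2D network's response on a temporally constant input. A sketch with the ResNet34 stem kernel shape:

```python
import numpy as np

def inflate_kernel(k2d, t):
    """Inflate a 2D convolutional kernel into 3D, as in I3D [9]:
    repeat the filter t times along a new temporal axis and divide by t."""
    return np.repeat(k2d[:, :, np.newaxis, :, :], t, axis=2) / t

k2d = np.random.randn(64, 3, 7, 7)   # (out_ch, in_ch, H, W) ResNet34 stem kernel
k3d = inflate_kernel(k2d, t=3)       # (out_ch, in_ch, T, H, W)
print(k3d.shape)  # (64, 3, 3, 7, 7)
```

Because the repeated copies sum back to the original filter, a stack of t identical frames produces the same activations as the 2D network, which is what makes ImageNet pretraining transferable.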
The next approach we consider is a two-stream model represented by a fusion of two ResNet34 CNNs trained on RGB and OF inputs. The OF model is almost the original ResNet34, but its first convolutional layer receives a 32-channel input formed by the x and y components of the precalculated optical flow for 16 sequential frames. To initialize this layer, we average the first convolutional kernel of the ImageNet-pretrained RGB model over the channel dimension and repeat it 32 times.
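This cross-modality initialization of the flow stream's first layer is also a one-liner in tensor terms; the sketch below assumes the standard (out_channels, in_channels, H, W) kernel layout:

```python
import numpy as np

def cross_modality_init(rgb_kernel, in_channels):
    """Initialize the first convolution of the flow stream from an
    ImageNet-pretrained RGB kernel: average over the RGB channel dimension,
    then repeat for each new input channel."""
    mean = rgb_kernel.mean(axis=1, keepdims=True)  # (out_ch, 1, H, W)
    return np.repeat(mean, in_channels, axis=1)    # (out_ch, in_ch, H, W)

rgb_kernel = np.random.randn(64, 3, 7, 7)                      # pretrained RGB stem kernel
flow_kernel = cross_modality_init(rgb_kernel, in_channels=32)  # x/y flow of 16 frames
print(flow_kernel.shape)  # (64, 32, 7, 7)
```

For the RGB-difference variant below, the same procedure is applied with 48 input channels.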
We also tried a two-stream model in which the two fused CNNs were trained on RGB and RGB-difference inputs, since the latter is much cheaper to compute than optical flow. In this case, the motion model receives a 48-channel input of RGB differences from 16 consecutive frames.
The last model in our comparison is ResNet34 followed by three stacked LSTM cells operating on independent frame embeddings. As before, we use the ImageNet-pretrained model for initialization but learn the LSTM parameters from scratch. We find this model simple yet representative. We also tried applying a visual attention mechanism, as suggested in [39], but it did not improve performance.
The comparison of the described models and our proposed method is shown in Table 2. For convenience, we also provide the theoretical complexity and inference time for all models. The input resolution is set to 224x224 and the sequence size to 16 frames for all models. The models were trained with the Adam optimizer until the validation loss reached a plateau. The results show that our VTN model outperforms the others on the MiniKinetics dataset and works on par with the two-stream method. We find this fact surprising, since one would expect the 3D convolutional model to perform better: it consists of operations that can learn temporal dependencies at every layer and has a higher capacity in terms of the number of parameters.
Another interesting result is that the two-stream RGB-difference model performs close to the OF-based model while saving a large number of calculations. These findings correspond to the results of [21, 35]. Overall, our VTN approach is attractive in terms of the speed/accuracy trade-off.
4.4 Comparison with stateoftheart
Method  Video@1 
BNInception+TSNRGB [51]  69.1 (the authors' implementation, https://github.com/yjxiong/tsnpytorch, uses 10-crop TTA during testing) 
I3DRGB [9]  72.1 
I3DTwoStream [9]  75.7 
S3DG [53]  74.7 
R(2+1)DTwoStream [47]  75.4 
R(2+1)DRGB [47]  74.3 
NLI3DResNet101RGB [52]  77.7 
MobileNetV2VTNRGB  62.5 
ResNet34VTNRGB  68.3 
ResNet34VTNRGB+RGBDiff  71.0 
SEResNeXt101VTNRGB  69.5 
SEResNeXt101VTNRGB+RGBDiff  73.5 
To compare with other state-of-the-art models, we assess our approach on the Kinetics400 dataset. In addition to the baseline ResNet34-VTN, we use a larger model employing the SEResNeXt101 (32x4d) architecture for the encoder, which is still very cheap in terms of the number of multiply-accumulate operations compared with 3D CNNs. Another interesting question is the potential of the proposed method for optimizing a model for mobile devices and the associated drop in accuracy. To address it, we test our approach with the lightweight MobileNetV2 [38] encoder.
Since fusing predictions from streams of different modalities (e.g. RGB and optical flow, or RGB and RGB difference) has improved results in many published works, we experimented with enhancing our RGB model by combining it with an analogous RGB-difference model. We subtracted normalized adjacent frames and trained a ResNet34-VTN model on this data. This improved the results of the ResNet34-VTN model by a margin of 2.4%.
The results on the Kinetics400 validation set are presented in Table 3. The breakthrough I3D model [9] outperforms ResNet34-VTN and SEResNeXt101 (32x4d)-VTN only by small margins of 3.5% and 2.1% respectively, so our method shows competitive results while being significantly cheaper computationally in online prediction scenarios.
We also provide results on the popular UCF101 and HMDB51 datasets. We fine-tuned models trained on Kinetics400 for 20 epochs with a smaller learning rate. Mean video accuracies over the three validation splits are presented in Table 4.
Computational complexity versus accuracy on Kinetics400 for several state-of-the-art methods and various VTN variants is shown in Fig. 1. Since we primarily focus on the online prediction scenario (i.e. a classification label is required for every subsequent frame), we count the operations needed to execute the encoder on one frame plus the operations of the whole decoder. 3D convolutional models, in contrast, extract features from adjacent frames and require executing the entire network for each new frame. Thus our method is more attractive in terms of accuracy/complexity for real-time applications.
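The per-frame accounting above can be made concrete with a back-of-the-envelope calculation. Using the GMAC figures from Table 2 (VTN 3.8, 3D CNN 50.2) with an assumed split of the VTN cost into encoder and decoder (the split itself is illustrative, not reported):

```python
def per_frame_cost(encoder_mac, decoder_mac):
    """Online-prediction cost of VTN per new frame: run the encoder on that
    frame only (earlier frame embeddings are cached and reused) plus the
    full decoder over the embedding sequence."""
    return encoder_mac + decoder_mac

# Assumed split of the VTN budget: ~3.6 GMAC encoder per frame, ~0.2 GMAC decoder.
vtn = per_frame_cost(encoder_mac=3.6e9, decoder_mac=0.2e9)
cnn3d = 50.2e9  # a 3D CNN re-runs its entire network for every new frame
print(cnn3d / vtn)  # roughly an order of magnitude cheaper per frame
```

The caching of frame embeddings is what makes the comparison favorable: the decoder is cheap relative to the encoder, so the marginal cost of a new frame stays close to a single 2D forward pass.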
Method  UCF101  HMDB51 
IDT [49]  86.4  61.7 
C3D [46]  85.2   
TwoStream [41]  88.0  59.4 
TwoStream Fusion + IDT [19]  93.5  69.2 
BNInception+TSNRGB [51]  91.1   
P3D [51]  88.6   
STResNet + IDT [17]  94.6  70.3 
I3DRGB [9]  95.6  74.8 
I3DTwoStream [9]  98.0  80.7 
S3DG [53]  96.8  75.9 
R(2+1)DTwoStream [47]  97.3  78.7 
ResNet34VTNRGB  90.8  63.5 
SEResNeXt101VTNRGB  92.2  67.2 
ResNet34VTNRGB+RGBDiff  95.0  71.3 
SEResNeXt101VTNRGB+RGBDiff  95.0  71.6 
4.5 Inference speed
Since theoretically faster models do not necessarily translate into higher inference speed [34, 33, 37], we also evaluate the actual inference time to prove the feasibility of the proposed method for real-time applications. Several frameworks are currently available, such as NVIDIA TensorRT [1] or the Intel® OpenVINO™ Toolkit [3], which can heavily optimize a DL model for particular hardware. Since we primarily focus on models suitable for edge computing, we chose OpenVINO and its DL Deployment Toolkit as the inference engine for our solution. OpenVINO can import models from many DL frameworks as well as the ONNX [2] representation, which we use to convert models from the PyTorch framework used in all our experiments.
Model  FPS  GMAC 
ResNet34VTNRGB  56  3.77 
  51  4.2 
ResNet50VTNRGB  49  4.25 
MobileNetV2VTNRGB  177  0.4 
Table 5 shows the CPU inference time of several models employing the proposed approach. Faster-than-real-time speed is achieved for all models, making this method promising for edge computing.
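Measuring FPS reliably requires warm-up iterations (to exclude one-time costs such as cache population and lazy initialization) before the timed runs. A framework-agnostic sketch, where `infer` stands in for a real model call such as an OpenVINO inference request:

```python
import time

def measure_fps(infer, n_warmup=10, n_runs=100):
    """Wall-clock throughput: warm up first, then average latency over
    the timed runs. `infer` is any callable performing one forward pass."""
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Stand-in workload instead of a real network forward pass.
def dummy_infer():
    sum(i * i for i in range(1000))

fps = measure_fps(dummy_infer)
print(fps > 0)
```

For batch-1 online inference, which is the scenario measured here, throughput and latency are directly related (FPS = 1 / mean latency).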
5 Conclusions
In this work, we have proposed the new Video Transformer Network architecture for real-time Action Recognition. We have shown that adopting methods from Natural Language Processing, together with an appropriate image classification CNN, helps to achieve accuracy on par with state-of-the-art methods. Moreover, the proposed approach compares favorably with alternatives such as 3D-convolution-based models and two-stream methods. Specifically, it utilizes computational resources more effectively by embedding each input frame into a lower-dimensional high-level feature vector and then reasoning about the action using only these embedding vectors by means of self-attention. This enables real-time inference on a general-purpose CPU, providing capabilities for using AR algorithms at the edge. Our research also demonstrates that the self-attention mechanism is quite universal and can be applied to many tasks, including Natural Language Processing, Speech Recognition, and Computer Vision.
References
 [1] NVIDIA TensorRT Programmable Inference Accelerator. https://developer.nvidia.com/tensorrt.
 [2] ONNX. https://onnx.ai/.
 [3] OpenVINO Toolkit. https://software.intel.com/enus/openvinotoolkit.
 [4] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
 [5] S. Bai, J. Z. Kolter, and V. Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.
 [6] C. Bucilua, R. Caruana, and A. NiculescuMizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535–541. ACM, 2006.
 [7] Z. Cao, T. Simon, S.E. Wei, and Y. Sheikh. Realtime multiperson 2d pose estimation using part affinity fields. arXiv preprint arXiv:1611.08050, 2016.
 [8] J. Carreira, V. Patraucean, L. Mazare, A. Zisserman, and S. Osindero. Massively parallel video networks. In The European Conference on Computer Vision (ECCV), September 2018.

 [9] J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 4724–4733. IEEE, 2017.
 [10] F. Chollet. Xception: Deep learning with depthwise separable convolutions. In CVPR, pages 1251–1258, 2017.
 [11] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

 [12] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213–3223, 2016.
 [13] M. Dehghani, S. Gouws, O. Vinyals, J. Uszkoreit, and Ł. Kaiser. Universal transformers. arXiv preprint arXiv:1807.03819, 2018.
 [14] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. FeiFei. Imagenet: A largescale hierarchical image database. In CVPR, 2009.
 [15] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Longterm recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2625–2634, 2015.
 [16] M. Everingham, L. V. Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
 [17] C. Feichtenhofer, A. Pinz, and R. Wildes. Spatiotemporal residual networks for video action recognition. In Advances in neural information processing systems, pages 3468–3476, 2016.
 [18] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional twostream network fusion for video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1933–1941, 2016.
 [19] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional twostream network fusion for video action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1933–1941, 2016.
 [20] I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio. Deep learning, volume 1. MIT press Cambridge, 2016.
 [21] K. Hara, H. Kataoka, and Y. Satoh. Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pages 18–22, 2018.
 [22] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask rcnn. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2980–2988. IEEE, 2017.
 [23] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
 [24] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network, 2015.
 [25] S. Hochreiter and J. Schmidhuber. Long shortterm memory. Neural computation, 9(8):1735–1780, 1997.
 [26] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arxiv:1704.04861, 2017.
 [27] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360, 2016.
 [28] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
 [29] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

 [30] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012.
 [31] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. Hmdb: a large video database for human motion recognition. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2556–2563. IEEE, 2011.
 [32] T.Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740–755, 2014.
 [33] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang. Learning efficient convolutional networks through network slimming. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2755–2763. IEEE, 2017.
 [34] N. Ma, X. Zhang, H.T. Zheng, and J. Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. arXiv preprint arXiv:1807.11164, 2018.
 [35] A. Piergiovanni and M. S. Ryoo. Representation flow for action recognition, 2018.
 [36] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
 [37] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.C. Chen. Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. arXiv preprint arXiv:1801.04381, 2018.
 [38] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.C. Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.
 [39] S. Sharma, R. Kiros, and R. Salakhutdinov. Action recognition using visual attention. 2016.
 [40] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean. Outrageously large neural networks: The sparselygated mixtureofexperts layer. arXiv preprint arXiv:1701.06538, 2017.
 [41] K. Simonyan and A. Zisserman. Twostream convolutional networks for action recognition in videos. In Advances in neural information processing systems, pages 568–576, 2014.
 [42] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition, 2014.
 [43] K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
 [44] S. Sun, Z. Kuang, L. Sheng, W. Ouyang, and W. Zhang. Optical flow guided feature: A fast and robust motion representation for video action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)(June 2018), 2018.
 [45] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
 [46] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 4489–4497, 2015.
 [47] D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri. A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6450–6459, 2018.
 [48] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
 [49] H. Wang and C. Schmid. Action recognition with improved trajectories. In Proceedings of the IEEE international conference on computer vision, pages 3551–3558, 2013.
 [50] L. Wang, Y. Xiong, Z. Wang, and Y. Qiao. Towards good practices for very deep twostream convnets. arXiv preprint arXiv:1507.02159, 2015.
 [51] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision, pages 20–36. Springer, 2016.
 [52] X. Wang, R. Girshick, A. Gupta, and K. He. Nonlocal neural networks. arXiv preprint arXiv:1711.07971, 10, 2017.
 [53] S. Xie, C. Sun, J. Huang, Z. Tu, and K. Murphy. Rethinking spatiotemporal feature learning: Speedaccuracy tradeoffs in video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 305–321, 2018.
 [54] J. YueHei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4694–4702, 2015.
 [55] C. Zach, T. Pock, and H. Bischof. A duality based approach for realtime tvl 1 optical flow. In Joint Pattern Recognition Symposium, pages 214–223. Springer, 2007.
 [56] X. Zhang, X. Zhou, M. Lin, and J. Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices, 2017.
 [57] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 2881–2890, 2017.