In recent years, convolutional neural networks (CNNs) have been applied to action recognition tasks and have achieved state-of-the-art performance over traditional methods. However, their wide application is hindered by video resolution: relatively low resolution makes the task more complex. Firstly, most datasets used for studying action recognition have a fixed resolution, e.g. UCF101 (about 320×240), HMDB51 (about 340×256), Sports-1M (about 640×360), and so on. But the resolution in the real world usually varies among different sources of video capture, and is inevitably low in e.g. surveillance scenarios. There are also situations where the region of interest (ROI) is quite small in relatively high-resolution (HR) videos. Secondly, current recognition networks are not scale invariant due to the fully-connected layers in their architectures. In other words, low-resolution (LR) videos cannot be directly fed into these well-trained CNNs.
Re-training a new classifier for LR videos is a straightforward solution. However, several issues limit its feasibility. Firstly, such a method needs well-labeled large-scale training data of similar quality. Secondly, it would be very time-consuming and laborious to train a video recognition network well. Moreover, we would need to train many models for different input resolutions. Another intuitive solution is to simply re-scale the input frames, e.g. using bicubic interpolation. Going further, super-resolution (SR) is an advanced alternative to simple re-scaling, which can benefit the classifier by adding details to LR inputs. Using SR as preprocessing eliminates the necessity to retrain classifiers. Previous work
has verified that image SR is generally helpful for computer vision tasks when dealing with LR input images. One work proposes to use traditional SR methods for LR video action recognition, and another investigates a CNN-based SR method for action recognition.
Almost all existing works on image or video SR concern the visual quality of the super-resolved image or video. During training, mean squared error (MSE) is extensively used, which corresponds to signal fidelity, i.e. PSNR. As it has been claimed that PSNR is not a good representative of visual quality, perceptual losses have been introduced in addition to MSE. Nonetheless, it is still not clear whether visual quality determines the quality of visual analytics results, e.g. action recognition accuracy. Intuitively, people may assume visual quality and "recognition quality" shall be consistent. But we argue that, since the analytics tasks are performed by computers instead of humans, there can be inconsistency. In addition, it has been shown that for inverse problems like SR, even signal fidelity and perceptual quality can be contradictory , which further challenges the intuitive assumption.
In this paper, we study the video SR problem not for visual quality but for recognition quality, i.e. we use SR as a preprocessing step before feeding an LR video into a trained action recognition network. Since we focus on SR, we regard action recognition networks as "black boxes" and do not adjust them for LR video. In other words, we investigate an SR method whose performance is evaluated by a computer algorithm rather than by humans. This work can also be viewed as bridging low-level and high-level vision tasks.
Oriented to the popular two-stream action recognition framework , which learns two separate networks, one for spatial color information and the other for temporal motion information, we propose two video SR methods for these two streams respectively. For the spatial stream, which can be regarded as image classification, we observe that the moving object is more related to recognition and should be paid more attention during SR enhancement. Thus, our Spatial-oriented SR (SoSR) takes a weighted mean squared error guided by optical flow as its loss to emphasize moving objects. For the temporal stream, we observe that video SR can result in temporal discontinuity between consecutive video frames, which may harm the quality of optical flow and incur a drop in recognition accuracy. Thus, in our Temporal-oriented SR (ToSR), we enhance consecutive frames together to ensure temporal consistency in a video clip.
Our contributions can be summarized as follows.
We investigate SR methods to facilitate action recognition, assuming well-trained two-stream networks as “black boxes.”
For the spatial stream, we propose an optical flow guided weighted MSE loss to guide our SoSR to pay more attention to regions with action.
For the temporal stream, we propose ToSR which enhances the consecutive frames together to achieve temporal consistency.
To verify the effectiveness of our methods, we perform experiments with two state-of-the-art recognition networks on two widely used datasets, UCF101  and HMDB51 . Comprehensive experimental results show that our SoSR and ToSR indeed improve the recognition accuracy significantly. In particular, our SoSR is built upon a single-frame SR network, yet outperforms advanced multi-frame SR methods in recognition accuracy on the spatial stream; our ToSR obtains accuracies of 61.24% and 58.73% on the HMDB51 dataset on the temporal stream, using the two recognition networks respectively, which are quite close to the performance with HR videos: 62.16% and 59.41%. Our code will be released.
2 Related Work
We review related works in two aspects: action recognition and image/video SR. In both fields, CNNs have become the mainstream and outperform traditional methods significantly. Thus we only mention several CNN-based approaches that are highly related to our work.
CNN for action recognition. In CNN-based action recognition, a key problem is how to properly incorporate spatial and temporal information into CNN architectures. Solutions can be divided into three categories: 3D convolution, RNN/LSTM, and two-stream. A 3D CNN that learns spatio-temporal features was first presented in . Later on, C3D features and 3D CNN architectures [33, 34, 36, 5] appeared. There were also several works [29, 24, 42] focusing on improvements of 3D CNNs. RNN/LSTM is believed to cope with temporal information better, and thus [6, 38, 37] attempted to incorporate LSTMs into action recognition. The two-stream CNN architecture was first proposed in . This architecture consists of two separate networks, one exploiting spatial information from individual frames and the other using temporal information from optical flow; the outputs of the two networks are then combined by late fusion. Several improvements to two-stream were presented in [8, 9, 35]. In this paper, we design SR methods specifically for two-stream networks for two reasons. First, the two-stream approach seems to lead to the best performance for action recognition on several benchmarks. Second, neither 3D convolution nor RNN/LSTM networks are easily decomposed, whereas two-stream networks have a clear decomposition, which facilitates the investigation of SR. Specifically, we use two state-of-the-art methods known as Temporal Segment Network (TSN)  and Spatio-Temporal Residual Network (ST-Resnet)  in our experiments.
CNN for image SR.
Almost all existing image SR methods are designed to enhance visual quality by adding more image details. In earlier years, PSNR was evaluated as a surrogate of visual quality, and thus mean squared error was extensively used as the loss function [7, 19, 18, 25, 30, 31, 22, 41]. More recently, visual quality has been considered directly, and several different kinds of loss functions have been proposed, such as perceptual loss  and the loss defined by a generative adversarial network (GAN) . For example, Ledig et al.  proposed SRGAN, which combines a GAN loss and a perceptual loss. It is also worth noting that PSNR and visual quality can even be contradictory .
CNN for video SR. Compared to single-image SR, the temporal dimension provides much more information in video SR, and various methods have been proposed to exploit it. A majority of these methods have an explicit motion compensation module to align different frames. For example, Kappeler et al.  slightly modified SRCNN  and extracted features from frames that were aligned by optical flow. Caballero et al.  proposed an end-to-end SR network to learn motions between input LR frames and generate SR frames in real time. Tao et al.  introduced a sub-pixel motion compensation (SPMC) layer to perform motion compensation and up-sampling jointly. Several other methods try to avoid explicit motion compensation; for example, Jo et al.  proposed a network that uses dynamic up-sampling filters. All the aforementioned works pursue higher PSNR for video SR. In this paper, by contrast, we consider video SR to improve action recognition accuracy, and we focus on the loss functions instead of the network structures.
3 Analyses and Methods
Figure 1 depicts the pipeline of using SR for action recognition by two-stream networks. Given an LR video, we split it into frames on which we perform SR enhancement. We propose Spatial stream-oriented SR (SoSR) and Temporal stream-oriented SR (ToSR) for the two streams respectively. In other words, we enhance the LR video twice. We then calculate optical flow from the ToSR resulting video, and feed the optical flow together with frames from the SoSR resulting video into the following recognition module.
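The pipeline above can be sketched as follows. The stub functions are placeholders for illustration only: a nearest-neighbor 4× upscale stands in for the SoSR/ToSR networks, and a zero-flow estimator stands in for the actual optical flow method; only the data flow between the stubs reflects the paper.

```python
import numpy as np

def sosr(frame):
    """Stub for Spatial-oriented SR: a plain 4x nearest-neighbor upscale."""
    return frame.repeat(4, axis=0).repeat(4, axis=1)

def tosr(frames):
    """Stub for Temporal-oriented SR applied to the whole clip."""
    return [f.repeat(4, axis=0).repeat(4, axis=1) for f in frames]

def optical_flow(prev, nxt):
    """Stub flow estimator (TVL1 in the paper); returns zero flow here."""
    return np.zeros(prev.shape + (2,))

def two_stream_pipeline(lr_frames):
    # Enhance the LR clip twice: once per stream.
    spatial_frames = [sosr(f) for f in lr_frames]       # fed to spatial stream
    temporal_frames = tosr(lr_frames)                   # used only to compute flow
    flows = [optical_flow(a, b)                         # fed to temporal stream
             for a, b in zip(temporal_frames[:-1], temporal_frames[1:])]
    return spatial_frames, flows

lr_clip = [np.zeros((60, 80)) for _ in range(3)]
frames, flows = two_stream_pipeline(lr_clip)
```

Note that the two enhanced versions of the video never mix: the SoSR output feeds the spatial stream directly, while the ToSR output is consumed only through the optical flow computed from it.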
3.1 Action Recognition Module
In this paper, our SR methods are specifically designed for two-stream action recognition networks. Two-stream is a popular framework because of its simple but effective structure to incorporate both spatial and temporal information. Specifically, we use TSN  and ST-Resnet  in our experiments. There are minor differences between the two networks: TSN uses a weighted average of the classification scores predicted from the two streams, while ST-Resnet trains a fusion sub-network together with the two streams in an end-to-end fashion. Nonetheless, we focus on the SR part and we directly use the well-trained models provided by the authors, without any tuning.
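As a toy illustration of the score-level fusion that TSN-style two-stream networks perform, the sketch below takes a weighted average of per-stream softmax scores. The 1:1.5 spatial-to-temporal weighting is a common choice in the two-stream literature but is an assumption here, not necessarily the authors' setting.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def late_fusion(spatial_logits, temporal_logits, w_spatial=1.0, w_temporal=1.5):
    """Weighted average of per-stream class probabilities (TSN-style fusion).
    The 1:1.5 weighting is illustrative only."""
    return (w_spatial * softmax(spatial_logits)
            + w_temporal * softmax(temporal_logits)) / (w_spatial + w_temporal)

spatial = np.array([2.0, 0.5, 0.1])    # spatial stream prefers class 0
temporal = np.array([0.3, 1.8, 0.2])   # temporal stream prefers class 1
fused = late_fusion(spatial, temporal)
pred = int(np.argmax(fused))           # temporal stream wins due to its weight
```

ST-Resnet replaces this fixed weighting with a learned fusion sub-network, but the black-box treatment in this paper is the same for both.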
3.2 Spatial-oriented SR
According to the two-stream architecture, the spatial stream performs recognition from individual frames by recognizing objects. That is, the spatial stream is equivalent to image classification. Inspired by previous work , we expect that SR can enhance LR frames and add image details that help recognition. Thus, we experiment with a representative image SR method, namely VDSR . However, we observed counterexamples in experiments. We calculated the recognition accuracy for individual classes and observed that VDSR sometimes performs even worse than simple bicubic interpolation; more interestingly, the original HR frames can be even worse than super-resolved or interpolated frames. Such examples are summarized in Table 1. We distinguish several cases according to the relative accuracy of the different methods: HR > VDSR > Bicubic for case (a), Bicubic > HR > VDSR for case (b), and VDSR > HR > Bicubic for case (c). Since LR frames lose details compared to HR frames, bicubic interpolation simply up-scales frames without adding details, and SR methods usually enhance interpolated frames with much more image detail, we conjecture that image details can be either helpful or harmful for action recognition, depending on the class.
Table 1: Case / Class / Recognition Accuracy (%).
In Figure 2, we visually analyze some frames to confirm our conjecture. In (a), which corresponds to HR > VDSR > Bicubic, we indeed observe that many details of the bow and arrow exist in the HR frame but are missing in the Bicubic frame; the SR frame adds some details on the bow (shown in the blue box), which is helpful for recognition since the bow is directly related to the class Archery. In (b), which corresponds to Bicubic > HR > VDSR, the SR frame contains more details than the Bicubic frame, but mostly on the background (shown in the blue box) rather than on the key object (shown in the red box); the added details are harmful for action recognition. In (c), which corresponds to VDSR > HR > Bicubic, the SR frame has more details on the human (directly related to the action) but fewer details on the background (due to the LR input), so the recognition accuracy is boosted even over HR.
Based on the observation, we propose an SR method to selectively enhance the image regions that are related to action recognition. These regions usually correspond to high-motion regions, such as the bow in Figure 2 (a), the rotating rope in Figure 2 (b), and the walking woman in Figure 2 (c). In this paper, we select these regions according to the optical flow between frames since optical flow is a commonly chosen representation for motion information.
Most SR networks use mean squared error (MSE) as the loss function, which assumes equal importance of every pixel. In contrast, we propose a weighted MSE (WMSE) based on optical flow to emphasize pixels that are more important than others. In short, the loss function we use here is
$$L_{WMSE} = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(|u_{i,j}| + |v_{i,j}|\right)\left(I^{HR}_{i,j} - I^{SR}_{i,j}\right)^2,$$
where $I^{HR}$ and $I^{SR}$ are the HR and SR frames respectively, and $|u_{i,j}|$ and $|v_{i,j}|$ represent the magnitudes of optical flow in the horizontal and vertical directions respectively. Here, the optical flow is calculated offline from the HR video using FlowNet 2.0 , which we observed is slightly better than using TVL1 . $H$ and $W$ are the height and width of the frames, and $i$, $j$ are pixel indexes. In this way, the loss guides the network in a pixel-wise manner: pixels with larger motion have larger optical flow magnitudes, which correspond to larger loss weights, and thus are paid more attention during SR enhancement.
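A minimal NumPy sketch of this weighted loss, assuming the per-pixel weight is simply the flow magnitude |u| + |v| as the description suggests (the exact normalization used in training may differ):

```python
import numpy as np

def wmse(sr, hr, flow_u, flow_v):
    """Weighted MSE: per-pixel weight is the optical-flow magnitude |u| + |v|,
    so high-motion pixels contribute more to the loss."""
    w = np.abs(flow_u) + np.abs(flow_v)
    return np.mean(w * (hr - sr) ** 2)

H, W = 4, 4
hr = np.ones((H, W))
sr = np.zeros((H, W))          # constant reconstruction error of 1 everywhere
u = np.zeros((H, W))
v = np.zeros((H, W))
u[0, 0] = 3.0                  # a single "moving" pixel
loss = wmse(sr, hr, u, v)      # only the moving pixel contributes: 3/16
```

With a uniform error, plain MSE would be 1 regardless of motion; WMSE instead concentrates the penalty on the single moving pixel.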
In addition to WMSE, we further adopt a perceptual loss, which has been widely used in recent SR methods for improving visual quality [16, 21]. Using a perceptual loss amounts to minimizing the difference of high-level image features between the SR image and the HR image. Since high-level features are closer to classification than low-level ones, it is also quite suitable for our task. In this paper, we use the outputs of 'conv3_3' of the VGG-16 network to calculate the perceptual loss, similar to .
As mentioned before, the spatial stream is equivalent to image classification. We anticipate that single-frame SR can perform well for the spatial stream while having lower complexity than video SR. Here, we adopt the network structure of VDSR  for our proposed SoSR. We retrain the network with our training data (details in Section 4) and the following loss function:
$$L_{SoSR} = L_{WMSE} + \lambda L_{perceptual},$$
where $\lambda$ is a weight.
We conduct an ablation study on the proposed loss function. As shown in Table 2, the proposed WMSE performs much better than the usual MSE in terms of final recognition accuracy, and the perceptual loss brings a further improvement for SoSR.
3.3 Temporal-oriented SR
We now switch to the temporal stream. As described in the two-stream architecture, the temporal stream takes optical flow as input to utilize temporal information. The effectiveness of this design is verified by many action recognition networks [35, 3]. Thus, we want to investigate how SR affects the quality of optical flow. We again experiment with the representative image SR method, VDSR . Figure 3 shows the optical flow maps calculated from the HR video, the SR video, and the bicubic-interpolated video, respectively. Here the optical flow is calculated by the TVL1 method . From Figure 3, we find that the optical flow from the bicubic video has many artifacts, and VDSR even worsens the optical flow. Thus, traditional SR methods lead to less appealing recognition accuracy.
Indeed, VDSR is an image SR network that enhances video frames individually and can cause temporal inconsistency. For high-quality optical flow, we need to ensure the temporal consistency between frames, which has also been studied in previous video SR works. For example, [2, 15] discussed the temporal consistency and its relation to visible flickering artifacts when displaying SR video. In Figure 4, we adopt the visualization method known as temporal profiles [2, 15] to display the flickering artifacts. As seen, VDSR indeed incurs more temporal discontinuity.
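A temporal profile of the kind shown in Figure 4 is simple to extract: fix one scanline and stack it across frames, so the result has axes (time, width) and temporal discontinuity appears as jitter along the time axis. A minimal sketch:

```python
import numpy as np

def temporal_profile(video, row):
    """Stack a fixed scanline from every frame: output axes are (time, width).
    Flickering shows up as jitter along the vertical (time) axis."""
    return np.stack([frame[row, :] for frame in video], axis=0)

T, H, W = 8, 32, 48
video = [np.full((H, W), float(t)) for t in range(T)]  # toy clip, one value per frame
profile = temporal_profile(video, row=H // 2)
```

For a temporally consistent video the columns of the profile vary smoothly over time; per-frame SR tends to break this smoothness.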
Based on the observation, we assume there is a relation between optical flow-based recognition accuracy and the temporal consistency in the SR video. Note that the existing video SR schemes usually perform SR frame by frame, which cannot guarantee the consistency between SR frames. Thus in this paper, we consider a siamese network for training video SR network.
The siamese network for training ToSR is shown in Figure 5. We use two copies of an SR network to enhance two consecutive frames respectively. First of all, we want the SR frames to have high quality, so we use two MSE losses for the two frames respectively, i.e. $L_{MSE}^{t-1} = \|I^{SR}_{t-1} - I^{HR}_{t-1}\|^2$ and $L_{MSE}^{t} = \|I^{SR}_{t} - I^{HR}_{t}\|^2$. Moreover, we want to ensure the temporal continuity between SR frames. As our objective is to achieve high-quality optical flow, it would be straightforward to calculate the optical flow between SR frames and compare it with that between HR frames. However, this would require an optical flow estimation network to support end-to-end training, and recent optical flow networks [13, 12, 28] are too deep to be efficiently trained (they complicate error back-propagation). In this paper, we instead take another approach to estimate the temporal continuity. We adopt the optical flow from the HR video, which can be calculated beforehand, to perform warping between two SR frames: using the HR flow, we warp the SR frame $I^{SR}_{t}$ back to the previous timestamp. The warped result is compared against both the SR and HR frames of the previous timestamp. Accordingly, we define two losses: $L_{warp\text{-}SR}$ and $L_{warp\text{-}HR}$.
In summary, the loss function for ToSR is
$$L_{ToSR} = \alpha\left(L_{MSE}^{t-1} + L_{MSE}^{t}\right) + \beta L_{warp\text{-}SR} + \gamma L_{warp\text{-}HR},$$
where $\alpha$, $\beta$, $\gamma$ are weights.
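The warping-based objective can be sketched as below. The nearest-neighbor warp and the weights a, b, c are illustrative placeholders: the paper does not state its interpolation scheme here, and its weight values are not reproduced in this sketch.

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp `frame` (time t) toward time t-1 with nearest-neighbor
    sampling: output[y, x] = frame[y + v, x + u], coordinates clipped to bounds."""
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, W - 1)
    return frame[src_y, src_x]

def tosr_loss(sr_prev, sr_curr, hr_prev, hr_curr, flow, a=1.0, b=0.5, c=0.5):
    """Sketch of the ToSR objective: per-frame MSE plus two warping terms that
    compare warp(sr_curr) against the previous SR and HR frames.
    The weights a, b, c are placeholders, not the paper's values."""
    mse = np.mean((sr_prev - hr_prev) ** 2) + np.mean((sr_curr - hr_curr) ** 2)
    warped = warp(sr_curr, flow)
    l_warp_sr = np.mean((warped - sr_prev) ** 2)
    l_warp_hr = np.mean((warped - hr_prev) ** 2)
    return a * mse + b * l_warp_sr + c * l_warp_hr

# A one-pixel rightward flow shifts each pixel's source one column to the right.
frame = np.arange(4.0).reshape(1, 4)
flow = np.zeros((1, 4, 2)); flow[..., 0] = 1.0
shifted = warp(frame, flow)
# Identical SR/HR frames under zero flow give zero total loss.
ones = np.ones((1, 4))
zero_loss = tosr_loss(ones, ones, ones, ones, np.zeros((1, 4, 2)))
```

Because the flow is precomputed from the HR video, the warp is a fixed gather and gradients only need to flow through the two SR outputs, which is what makes the siamese training tractable.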
ToSR can be implemented upon any existing image or video SR network. In this paper, we investigate two possibilities. The first (ToSR1) is based on the VDSR network , which performs SR on frames individually. The second (ToSR2) is based on VSR-DUF with 16 layers , which utilizes multiple LR frames for SR.
4 Experiments
4.1 Datasets
We perform experiments using three datasets: one natural video dataset, CDVL-134, and two action recognition datasets, UCF101 and HMDB51. For the video SR task, there is no commonly used dataset. CDVL-134 is a dataset collected by ourselves from CDVL (https://www.cdvl.org/) and contains 134 natural videos with various content, including landscapes, animals, activities, and so on. Because the resolution of these videos varies from 480×360 to 1920×1080, we resize them to around 320×240 with bicubic interpolation while maintaining their aspect ratios. As for UCF101 and HMDB51, they are popular action recognition datasets. The former contains 13,320 video clips belonging to 101 action categories, and the latter is composed of 51 action categories and 6,766 video clips. Both datasets provide three training/testing splits, and we only use the first split as a representative. For more details, please refer to  and  respectively.
4.2 Implementation Details
Spatial-oriented SR. All LR video clips are generated by 4× down-sampling with bicubic interpolation, and we use bicubic interpolation to generate interpolated frames. FlowNet 2.0  is applied on the HR frames to calculate optical flow, which is then processed into weight maps. The HR frames, interpolated LR frames, and weight maps are cut into 128×128 aligned patches to produce training samples. In particular, we randomly select 120 frames from each video of the CDVL-134 dataset and choose the top 10 crops with the largest area of motion. Excluding some obviously low-quality patches, there are in total 136,950 patches for training and 9,830 patches for validation. Besides, we use a VDSR model (https://github.com/huangzehao/caffe-vdsr) well trained on natural images as initialization and adopt the pre-trained VGG-16 model (http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel) for the perceptual loss. The learning rate is decreased to 1/10 every quarter of the maximal number of iterations. Batch size is set to 50. As for the loss weight, $\lambda$ is chosen to make the resulting WMSE and perceptual loss be at the same scale.
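The motion-based crop selection can be sketched as follows. The stride and the scoring rule (summed flow magnitude inside each candidate patch) are assumptions, since the text only states that the top crops with the largest area of motion are kept.

```python
import numpy as np

def top_motion_crops(flow_mag, patch=128, k=10, stride=64):
    """Rank candidate patch positions by summed optical-flow magnitude and keep
    the top-k, mimicking 'top crops with the largest area of motion'.
    The stride and sum-based score are illustrative assumptions."""
    H, W = flow_mag.shape
    scores = []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            scores.append((flow_mag[y:y + patch, x:x + patch].sum(), y, x))
    scores.sort(reverse=True)                  # largest motion first
    return [(y, x) for _, y, x in scores[:k]]

mag = np.zeros((256, 320))
mag[100:180, 150:260] = 1.0                    # synthetic high-motion region
crops = top_motion_crops(mag, k=3)             # best crop covers that region
```

In practice the same (y, x) offsets would be used to cut aligned patches from the HR frame, the interpolated LR frame, and the weight map.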
Temporal-oriented SR. All the LR video clips and patches are prepared similarly to SoSR, except that the TVL1  method is applied on HR frames to calculate optical flow for warping, and we have 143,250 patch pairs for training and 10,386 pairs for validation. ToSR1 is implemented in Caffe with the same configuration as SoSR (except that the batch size is 60). ToSR2 is implemented in TensorFlow; the initial learning rate is 0.01 and is decreased to 1/10 every 10 epochs as recommended in . We use batch size 16 and fine-tune from the model provided by the authors of . The loss weights are set differently for ToSR1 and ToSR2.
4.3 Recognition Results
Baseline methods are VDSR , SPMC , and VSR-DUF  trained with MSE; they comprise one single-image SR method and two video SR methods. We train our SoSR and ToSRs on CDVL-134 and apply them to the UCF101 and HMDB51 test splits. Then TSN  and ST-Resnet  are used to obtain the recognition accuracy, shown in Table 3 and Table 4 respectively.
On the spatial stream, we find Bicubic < VDSR < video SR in most cases. This result is quite intuitive: a more advanced SR method generates SR frames with more details and is, on average, more helpful to recognition. Further analysis follows. The Bicubic algorithm is independent of its inputs and does not require any training, so it can be regarded as a stable reference among the baseline methods. We draw a scatter plot to compare the accuracy of TSN between the different SR methods and Bicubic. From Figure 6, we find that for SR networks trained with MSE, the points disperse throughout the plane, which means these SR methods are unstable for the classifier. In other words, optimizing SR networks with MSE does not consistently improve recognition performance. Referring to Table 5, we find that these SR methods perform consistently well in some classes and consistently worse in others. The situation improves in the SoSR plot: according to the distribution and dispersion of points, our SoSR generates SR frames that are easier to classify and has better performance stability across classes. From Table 5, we find that in easy classes, our SoSR achieves better recognition performance than VDSR, the architecture it is based on, and in hard classes, it improves recognition performance significantly, even beating Bicubic.
Switching to the temporal stream, there is an obvious SPMC < VSR-DUF among the multi-frame SR methods: even VSR-DUF-16 beats SPMC by a large gap. This difference may be attributed to the network structure design. SPMC performs explicit warping with optical flow estimated from LR frames, which may introduce errors and undermine temporal consistency, while VSR-DUF uses 3D convolution directly operating on 7 consecutive LR frames to predict dynamic filters, which are then used to up-sample the central LR frame. This structure without explicit motion compensation may be the key to VSR-DUF's good performance. However, adding our optical flow restriction makes the performance even better. Comparing the accuracy of VDSR and our ToSR1 verifies that our training strategy can refine single-frame SR networks by maintaining temporal consistency, but its performance is limited by the lack of information from preceding and succeeding frames. Breaking this limitation, our ToSR2, an SR network based on 16-layer VSR-DUF, outperforms VSR-DUF-52 and approaches the performance of HR frames.
Finally, we take a view of both streams in Tables 3 and 4. Comparing rows with similar architectures (VDSR, SoSR, ToSR1; and VSR-DUF, ToSR2, respectively), we find a trade-off between the accuracies of the two streams. Besides, the fusion results also verify the advantage of performing SR separately for the two streams.
4.4 Visual Inspection
To obtain video-level accuracy, we sample 25 frames from each video clip and perform recognition as recommended in . Figure 7 shows the video-level accuracy of TSN and the visual quality of example frames. Generally, judging from the accuracy and visual quality of Bicubic, SoSR (zooming-in inspection of SoSR results shows visible blocking artifacts, which we have confirmed are caused by the joint effect of the perceptual loss and the, even invisible, blocking artifacts in the input video; refer to the supplementary material for more details), and HR frames, the recognition accuracy increases as the visual quality increases. But the situation differs slightly among SR methods, which may result from imperfections of the recognition CNN.
Switching to the temporal stream, Figure 8 shows TVL1 optical flow calculated from different frames, from which we find that the optical flow of both VSR-DUF and our ToSR2 has fewer artifacts, while our ToSR2 is more similar to HR than VSR-DUF, especially in the highlighted regions. From the viewpoint of video SR, temporal profiles are shown in Figure 9. We can observe that Bicubic and VDSR frames do not reconstruct enough image details, and SPMC as well as VSR-DUF are sharp but do not restore the handrail in each frame. In contrast, our ToSR2 has the best temporal consistency. Please refer to the supplementary materials for more visualization.
5 Conclusion
In this paper, we consider the video SR problem not for visual quality but for facilitating action recognition accuracy. Tailored for two-stream action recognition networks, we propose SoSR with an optical flow guided weighted MSE loss, and ToSR with a siamese network to emphasize temporal consistency, for the spatial and temporal streams respectively. Experimental results demonstrate the advantages of our proposed SoSR and ToSR methods. In the future, we plan to test our methods on real-world videos, and also to consider video SR and action recognition in a holistic network. The trade-off between PSNR or perceptual quality and recognition accuracy is also worth studying.
-  Y. Blau and T. Michaeli. The perception-distortion tradeoff. In CVPR, pages 6228–6237, 2018.
-  J. Caballero, C. Ledig, A. P. Aitken, A. Acosta, J. Totz, Z. Wang, and W. Shi. Real-time video super-resolution with spatio-temporal networks and motion compensation. In CVPR, volume 1, pages 4778–4787, 2017.
-  J. Carreira and A. Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. In CVPR, pages 4724–4733, 2017.
-  D. Dai, Y. Wang, Y. Chen, and L. Van Gool. Is image super-resolution helpful for other vision tasks? In WACV, pages 1–9, 2016.
-  A. Diba, V. Sharma, and L. Van Gool. Deep temporal linear encoding networks. In CVPR, pages 2329–2338, 2017.
-  J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, pages 2625–2634, 2015.
-  C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):295–307, 2016.
-  C. Feichtenhofer, A. Pinz, and R. Wildes. Spatiotemporal residual networks for video action recognition. In NIPS, pages 3468–3476, 2016.
-  C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In CVPR, pages 1933–1941, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
-  S. Herath, M. Harandi, and F. Porikli. Going deeper into action recognition: A survey. Image and Vision Computing, 60:4–21, 2017.
-  T.-W. Hui, X. Tang, and C. C. Loy. LiteFlowNet: A lightweight convolutional neural network for optical flow estimation. In CVPR, pages 8981–8989, 2018.
-  E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In CVPR, volume 2, pages 2462–2470, 2017.
-  S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221–231, 2013.
-  Y. Jo, S. W. Oh, J. Kang, and S. J. Kim. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In CVPR, pages 3224–3232, 2018.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, pages 694–711, 2016.
-  A. Kappeler, S. Yoo, Q. Dai, and A. K. Katsaggelos. Video super-resolution with convolutional neural networks. IEEE Transactions on Computational Imaging, 2(2):109–122, 2016.
-  J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In CVPR, pages 1646–1654, 2016.
-  J. Kim, J. K. Lee, and K. M. Lee. Deeply-recursive convolutional network for image super-resolution. In CVPR, pages 1637–1645, 2016.
-  H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: A large video database for human motion recognition. In ICCV, pages 2556–2563, 2011.
-  C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, number 3, pages 4681–4690, 2017.
-  B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee. Enhanced deep residual networks for single image super-resolution. In CVPRW, number 2, pages 136–144, 2017.
-  K. Nasrollahi, S. Escalera, P. Rasti, G. Anbarjafari, X. Baro, H. J. Escalante, and T. B. Moeslund. Deep learning based super-resolution for improved action recognition. In Image Processing Theory, Tools and Applications, pages 67–72, 2015.
-  Z. Qiu, T. Yao, and T. Mei. Learning spatio-temporal representation with pseudo-3D residual networks. In ICCV, pages 5534–5542, 2017.
-  W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, pages 1874–1883, 2016.
-  K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, pages 568–576, 2014.
-  K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
-  D. Sun, X. Yang, M.-Y. Liu, and J. Kautz. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In CVPR, pages 8934–8943, 2018.
-  L. Sun, K. Jia, D.-Y. Yeung, and B. E. Shi. Human action recognition using factorized spatio-temporal convolutional networks. In ICCV, pages 4597–4605, 2015.
-  Y. Tai, J. Yang, and X. Liu. Image super-resolution via deep recursive residual network. In CVPR, number 2, pages 3147–3155, 2017.
-  Y. Tai, J. Yang, X. Liu, and C. Xu. MemNet: A persistent memory network for image restoration. In CVPR, pages 4539–4547, 2017.
-  X. Tao, H. Gao, R. Liao, J. Wang, and J. Jia. Detail-revealing deep video super-resolution. In ICCV, pages 22–29, 2017.
-  D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, pages 4489–4497, 2015.
-  G. Varol, I. Laptev, and C. Schmid. Long-term temporal convolutions for action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1510–1517, 2018.
-  L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, pages 20–36, 2016.
-  X. Wang, L. Gao, P. Wang, X. Sun, and X. Liu. Two-stream 3D convnet fusion for action recognition in videos with arbitrary size and length. IEEE Transactions on Multimedia, 20(3):634–644, 2018.
-  Z. Wu, X. Wang, Y.-G. Jiang, H. Ye, and X. Xue. Modeling spatial-temporal clues in a hybrid deep learning framework for video classification. In ACM MM, pages 461–470, 2015.
-  J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, pages 4694–4702, 2015.
-  C. Zach, T. Pock, and H. Bischof. A duality based approach for realtime TV-L1 optical flow. In Joint Pattern Recognition Symposium, pages 214–223, 2007.
-  H. Zhang, D. Liu, and Z. Xiong. Convolutional neural network-based video super-resolution for action recognition. In FG, pages 746–750, 2018.
-  Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu. Image super-resolution using very deep residual channel attention networks. In ECCV, pages 1–16, 2018.
-  Y. Zhou, X. Sun, Z.-J. Zha, and W. Zeng. MiCT: Mixed 3D/2D convolutional tube for human action recognition. In CVPR, pages 449–458, 2018.