High-resolution videos are widely used in various computer vision tasks. However, given the increased storage burden and the high computational cost, it is usually necessary to first downscale the high-resolution videos. The resulting low-resolution videos can then either be compressed to save storage or fed to downstream tasks to reduce computation. Although this paradigm is prevalent, it has two disadvantages. First, it is non-trivial to restore the original high-resolution videos from the (compressed) low-resolution videos, even with the latest super-resolution methods [30, 65, 48, 59]. Second, it is also challenging for downstream tasks to achieve high performance based on these low-resolution videos. This raises the question of whether the downscaling operation can facilitate the reconstruction of the high-resolution videos while also preserving the most meaningful information for the downstream tasks.
Recently, this question has been partially studied as a single image rescaling problem [24, 27, 47, 62], in which the image downscaling and upscaling operators are learned jointly. However, how to adapt these methods from the image to the video domain and how to leverage the rich temporal information within videos remain open problems. More importantly, modeling the information lost during downscaling is non-trivial. Current methods either ignore the lost information [24, 27, 47] or assume it follows an independent distribution in the latent space, neglecting the internal relationship between the downscaled image and the lost information. Besides, none of the literature mentioned above has explored how to apply the rescaling technique to lossy image/video compression.
In this paper, we focus on building a video rescaling framework and propose a self-conditioned probabilistic learning approach to learn a pair of video downscaling and upscaling operators by exploiting the information dependency within the video itself. Specifically, we first design a learnable frequency analyzer to decompose the original high-resolution video into its downscaled version and the corresponding high-frequency component. Then, a Gaussian mixture distribution is leveraged to model the high-frequency component by conditioning on the downscaled video. For accurate estimation of the distribution parameters, we further introduce local and global temporal aggregation modules to fuse the spatial information from adjacent downscaled video frames. Finally, the original video can be restored by a frequency synthesizer from the downscaled video and the high-frequency component sampled from the distribution. We integrate the components above into a novel self-conditioned video rescaling framework termed SelfC and optimize it by minimizing the negative log-likelihood of the distribution.
Furthermore, we apply our proposed SelfC to two practical applications, i.e., lossy video compression and video action recognition. In particular, to integrate our framework with existing non-differentiable video codecs (e.g., H.264 and H.265), we propose an efficient and effective one-pass optimization strategy based on the control variates method and approximate the gradients of traditional codecs in the back-propagation procedure, which forms an end-to-end optimization system.
Experimental results demonstrate that the proposed framework achieves state-of-the-art performance on the video rescaling task. More importantly, we further demonstrate the effectiveness of the framework in practical applications. For the lossy video compression task, compared with directly compressing the high-resolution videos, the video compression system based on our SelfC framework cuts the storage cost significantly (up to 30% reduction). For the video action recognition task, our framework reduces more than 60% computational complexity with negligible performance degradation.
In summary, our main contributions are:
We propose a probabilistic learning framework dubbed SelfC for the video rescaling task, which models the lost information during downscaling as a dynamic distribution conditioned on the downscaled video.
Our approach exploits rich temporal information in downscaled videos for an accurate estimation of the distribution parameters by introducing the specified local and global temporal aggregation modules.
We propose a gradient estimation method for non-differentiable lossy codecs based on the control variates method and the Monte Carlo sampling technique, extending the framework to a video compression system.
2 Related Work
Video Upscaling after Downscaling. Traditional video downscaling approaches subsample the input high-resolution (HR) videos with a handcrafted kernel, such as Bilinear or Bicubic. For restoration, video super-resolution (SR) methods are utilized. Since the SR task is inherently ill-posed, previous SR works [30, 65, 48, 59, 16, 23] mainly leverage heavy neural networks to hallucinate the lost details, yet still achieve unsatisfactory results. Taking the video downscaling method into consideration may help mitigate the ill-posedness of the video upscaling procedure.
There are already a few works on the single image rescaling task in a similar spirit, which consider the downscaling and upscaling of the image jointly. For example, Kim proposed a task-aware downscaling model based on an auto-encoder framework. Later, Li proposed to use a convolutional neural network (CNN) to estimate the downscaled low-resolution images for a given super-resolution method. For the stereo matching task, Yang proposed a superpixel-based downsampling/upsampling scheme to effectively preserve object boundaries and fine details. More recently, Xiao proposed to leverage an invertible neural network (INN) to model the two reciprocal steps, which relies on a very deep INN to map the complex distribution of the lost information to an independent and fixed normal distribution.
However, these methods neither leverage the temporal information between adjacent frames, which is important for video-related tasks, nor consider the fact that the components of different frequencies in natural images and videos are conditionally dependent [56, 45, 38, 55].
Video Compression. Several traditional video compression algorithms have been proposed and widely deployed, such as H.264 and H.265. Most of them follow the predictive coding architecture and rely on sophisticated hand-crafted transformations to analyze the redundancy within videos. Recently, fully end-to-end video codecs such as DVC have been proposed by considering the rate-distortion trade-off of the whole compression system. They demonstrate promising performance and may be further improved by training on more ubiquitous videos in the wild. However, they have not been widely adopted by industry and lack hardware implementations. In contrast, our framework can be readily integrated with the best traditional video codecs and further reduces the storage cost of the compressed video significantly.
Video Action Recognition. Simonyan first proposed the two-stream framework. Feichtenhofer then improved it. Later, Wang proposed a new sparse frame sampling strategy. Recently, 3D networks also show promising performance. Our work can accelerate off-the-shelf action CNNs by 3-4 times while preserving comparable performance. We mainly conduct experiments on light-weight 2D action CNNs (e.g., TSM) based on 2D-ResNet50 for efficiency.
3 Proposed Method
An overview of our proposed SelfC framework is shown in Fig. 1 (a). During the downscaling procedure, given a high-resolution (HR) video, a frequency analyzer (FA) (Section 3.1) first converts it into video features, where the first channels form the low-frequency (LF) component and the remaining channels form the high-frequency (HF) component, with the split determined by the downscaling ratio. Then, the LF component is quantized to an LR video for storage, while the HF component is discarded in this procedure.
During the upscaling procedure, given the LR video, the spatial-temporal prior network (STP-Net) (Section 3.3) predicts the probability density function of the HF component as a continuous mixture of parametric Gaussian distributions (Section 3.2). Then, an instance of the HF component related to the LR video is drawn from the distribution. Finally, we reconstruct the HR video from the concatenation of the HF component and the LR video by the frequency synthesizer (FS).
3.1 Frequency Analyzer and Synthesizer
As shown in Fig. 1 (b), we first decompose the HR input video into the downscaled low-frequency component and the residual high-frequency component, using bicubic downscaling and upscaling operations with the given scaling ratio, together with the inverse of the pixel shuffling operation proposed in , applied at the same scaling ratio. Then, we use a learnable transformation to map the channel concatenation of these components to the output features, which consist of the LF component and the HF component. The network architecture of this transformation is very flexible in our framework, and we use multiple stacked Dense2D-T blocks by default. The architecture of the Dense2D-T block is shown in Fig. 2, where we extend the vanilla Dense2D block with temporal modeling ability.
The architecture of the frequency synthesizer is symmetric to that of the analyzer, as shown in Fig. 1 (c). Specifically, we use channel splitting, bicubic upscaling, and pixel shuffling operations to synthesize the final high-resolution videos from the reconstructed video feature.
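The analyzer/synthesizer pair can be sketched in a few lines of NumPy. The kernels below are simplified stand-ins (average pooling and nearest-neighbour upsampling instead of bicubic, and no learnable transformation), but the lossless round-trip property they illustrate holds regardless of the kernel choice:

```python
import numpy as np

def downscale(x, r=2):
    """Stand-in for bicubic downscaling: average pooling by factor r."""
    h, w = x.shape
    return x.reshape(h // r, r, w // r, r).mean(axis=(1, 3))

def upscale(x, r=2):
    """Stand-in for bicubic upscaling: nearest-neighbour repeat by factor r."""
    return x.repeat(r, axis=0).repeat(r, axis=1)

def pixel_unshuffle(x, r=2):
    """Inverse pixel shuffle: fold each r x r spatial block into r*r channels."""
    h, w = x.shape
    return x.reshape(h // r, r, w // r, r).transpose(1, 3, 0, 2).reshape(r * r, h // r, w // r)

def pixel_shuffle(c, r=2):
    """Pixel shuffle: unfold r*r channels back into an r x r spatial block."""
    _, h, w = c.shape
    return c.reshape(r, r, h, w).transpose(2, 0, 3, 1).reshape(h * r, w * r)

x = np.arange(16.0).reshape(4, 4)       # toy 4x4 frame
lf = downscale(x)                        # low-frequency component (downscaled frame)
hf = pixel_unshuffle(x - upscale(lf))    # high-frequency residual, folded into channels
recon = upscale(lf) + pixel_shuffle(hf)  # synthesizer: exact inverse of the analyzer
assert np.allclose(recon, x)             # round-trip is lossless
```

The same round trip holds in the actual framework before quantization; the information loss only comes from quantizing the LF component and discarding the HF component.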
3.2 A Self-conditioned Probabilistic Model
Directly optimizing Eq. (1) through gradient descent is unstable due to the unsmooth gradient of the quantization module. Thus, we optimize a surrogate objective during the training procedure. Specifically, we represent the high-frequency component as a continuous multi-modal probability distribution conditioned on the low-frequency component, which is formulated as:
where the subscript denotes the spatial-temporal location. We use a continuous Gaussian Mixture Model (GMM) with a fixed number of components to approximate this distribution. The distributions are defined by learnable mixture weights, means,
and log variances. With these parameters, the distributions can be accurately determined as:
where the subscript again denotes the spatial-temporal location.
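The per-location density can be sketched as follows, assuming scalar values and K components (a toy NumPy version of the conditional GMM; the parameter values are hypothetical stand-ins for STP-Net outputs):

```python
import numpy as np

def gmm_log_pdf(y, weights, means, log_vars):
    """Log-density of a K-component 1-D Gaussian mixture at value y.

    weights sum to 1; means/log_vars are per-component parameters, as
    would be predicted per spatial-temporal location by the STP-Net."""
    var = np.exp(log_vars)
    log_comp = -0.5 * (np.log(2 * np.pi * var) + (y - means) ** 2 / var)
    a = np.log(weights) + log_comp
    m = a.max()                       # log-sum-exp over components, for stability
    return m + np.log(np.exp(a - m).sum())

# a hypothetical 2-component mixture; the NLL loss is just the negative of this
w = np.array([0.7, 0.3])
mu = np.array([0.0, 1.0])
lv = np.array([0.0, -1.0])
density = np.exp(gmm_log_pdf(0.5, w, mu, lv))
assert density > 0
```

Minimizing the negative of this log-density over all locations is exactly the maximum-likelihood objective used later in training.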
3.3 Spatial-temporal Prior Network (STP-Net)
As shown in Fig. 1 (d), to estimate the parameters of the distribution above, we propose the STP-Net to model both the local and global temporal information. We first utilize the Dense2D-T block to extract short-term spatial-temporal features for each input frame. In this stage, only information from local frames, i.e., the previous or the next frames, is aggregated into the current frame, while the temporally long-range dependencies in videos are neglected. Therefore, we further introduce an attention mechanism to model the global temporal information. More specifically, the spatial dimension of the short-term spatial-temporal features is first reduced by a spatial aggregator, implemented as an average pooling operation followed by a fully-connected (FC) layer. The output scale of the pooling operation is 32×32. Then we use a dot-product operation to generate the attention map, which represents the similarity scores between every two frames. Finally, we refine the local spatial-temporal features based on the similarity scores. We repeat this procedure twice to extract better video features. After that, a 3-layer multilayer perceptron (MLP) is used to estimate the parameters of the GMM distribution, where the linear layers are implemented as 3D convolutions.
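The global aggregation step amounts to frame-to-frame dot-product attention over the pooled per-frame descriptors. A minimal NumPy sketch (hypothetical feature sizes; the real module also includes the spatial aggregator and FC layers described above):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_temporal_attention(feats):
    """Refine per-frame features with frame-to-frame attention.

    feats: (T, D) pooled descriptors, one per frame.  The T x T attention
    map holds similarity scores between every two frames."""
    attn = softmax(feats @ feats.T / np.sqrt(feats.shape[1]), axis=-1)
    return attn @ feats, attn

rng = np.random.default_rng(0)
frames = rng.normal(size=(7, 16))          # 7 frames, 16-dim descriptors (toy sizes)
refined, attn = global_temporal_attention(frames)
assert attn.shape == (7, 7)
assert np.allclose(attn.sum(axis=1), 1.0)  # each row is a distribution over frames
```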
3.4 Quantization and Storage Medium
We use a rounding operation as the quantization module and store the output LR videos in a lossless format, i.e., the H.265 lossless mode. The gradient of this module is computed with the Straight-Through Estimator. We also discuss how to adapt the framework to more practical lossy video formats such as H.264 and H.265 in Section 3.6.
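As a sketch of how the Straight-Through Estimator behaves (hypothetical helper names; real frameworks implement this inside autograd), the forward pass rounds while the backward pass copies gradients unchanged:

```python
import numpy as np

def quantize_forward(x):
    """Forward pass: the rounding used to store the LR video."""
    return np.round(x)

def quantize_backward(grad_out):
    """Straight-Through Estimator: pretend d(round)/dx = 1, so upstream
    gradients pass through the quantizer unchanged instead of vanishing."""
    return grad_out

x = np.array([0.2, 1.7, 2.6])
y = quantize_forward(x)                 # -> [0., 2., 3.]
g = quantize_backward(np.ones_like(x))  # identity gradient, not zero a.e.
assert np.allclose(g, 1.0)
```

Without the STE, the true gradient of rounding is zero almost everywhere, which would block learning in the downscaling branch.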
3.5 Training Strategy
Building a learned video rescaling framework is non-trivial, especially since the generated low-resolution videos are expected to benefit both the upscaling procedure and the downstream tasks. We consider the following objectives.
Self-conditioned Probability Learning. First, to make sure the STP-Net obtains an accurate estimation of the HF component, we directly minimize the negative log-likelihood of the distribution in Eq. (2):
where the normalization is over the number of training samples.
Mimicking Bicubic downscaling. The downscaled video should be similar to the original video, making its deployment for the downstream tasks easier. Therefore, we regularize the downscaled video before quantization to mimic the bicubic-downsampled video:
Penalizing the trivial solution. Without any extra constraint, Eq. (5) can be easily minimized by collapsing the HF component to one constant tensor for any input video. Thus, to avoid this trivial solution, the CNN parts of the frequency analyzer and synthesizer are penalized by a photometric loss between the video directly reconstructed from the analyzer output and the original input:
Minimizing reconstruction difference. Finally, the expected difference between the reconstructed video sampled from the model and the original video should be minimized:
where a photometric loss and the channel-wise concatenation operation are used. In each training iteration, the HF component is sampled from the distribution constructed from the parameters output by the STP-Net, conditioned on the LR video. To enable end-to-end optimization, we apply the “reparametrization trick” [26, 41, 13] to make the sampling procedure differentiable. More details are provided in the supplementary material.
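A minimal sketch of such a sampling step, assuming per-location scalar GMM parameters (hypothetical helper; the Gaussian part is reparametrized as mean plus scaled noise, while the discrete component choice is left non-reparametrized here, with full details deferred to the paper's supplementary material):

```python
import numpy as np

def sample_gmm_reparam(weights, means, log_vars, rng):
    """Draw one HF-component value with the reparametrization trick:
    pick a mixture component, then return mean + std * eps with
    eps ~ N(0,1), so gradients can flow into means and log_vars."""
    k = rng.choice(len(weights), p=weights)  # component choice
    eps = rng.standard_normal()
    return means[k] + np.exp(0.5 * log_vars[k]) * eps

rng = np.random.default_rng(0)
w = np.array([0.7, 0.3])
mu = np.array([0.0, 1.0])
lv = np.array([-2.0, -2.0])
samples = [sample_gmm_reparam(w, mu, lv, rng) for _ in range(2000)]
# the sample mean should approach the mixture mean 0.7*0 + 0.3*1 = 0.3
assert abs(np.mean(samples) - 0.3) < 0.1
```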
The total loss is then given by:
where the balancing parameters weight the individual loss terms and the scaling ratio is as defined above. The loss function of our framework may seem complicated; however, its performance is not sensitive to these hyper-parameters, and directly setting all of them to 1 already achieves reasonable performance.
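As a toy illustration (all tensors and the placeholder NLL/penalty values below are hypothetical, not the paper's actual quantities), the total objective is just a weighted sum of the four terms, with every balancing weight set to 1:

```python
import numpy as np

def mse(a, b):
    """Photometric (mean squared error) loss between two tensors."""
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(1)
hr = rng.normal(size=(8, 8))                       # toy HR frame
lr_ours = hr.reshape(4, 2, 4, 2).mean(axis=(1, 3)) # our downscaled frame (stand-in)
lr_bicubic = lr_ours + 0.01                        # bicubic reference (stand-in)
hr_recon = hr + 0.05                               # reconstruction from sampled HF

loss_guide = mse(lr_ours, lr_bicubic)  # mimic bicubic downscaling, Eq. (5)
loss_rec = mse(hr_recon, hr)           # reconstruction difference
loss_nll = 0.42                        # NLL of the probabilistic model (placeholder)
loss_pen = 0.10                        # analyzer/synthesizer penalty (placeholder)

# all balancing weights set to 1, which the paper reports already works well
total = loss_nll + loss_guide + loss_pen + loss_rec
assert total > 0
```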
3.6 Application I: Video Compression
In this section, we extend the proposed SelfC framework into a lossy video compression system and aim to demonstrate the effectiveness of our approach in reducing storage size. The whole system is shown in Fig. 3. Specifically, we first use the SelfC framework to generate the downscaled video, which is then compressed by an existing codec, e.g., H.265. At the decoder side, the compressed video is decompressed and upscaled to the full-resolution video.
Considering that traditional video codecs are non-differentiable, we further propose a novel optimization strategy. Specifically, we introduce a differentiable surrogate video perturbator, implemented as a deep neural network (DNN) consisting of 6 Dense2D-T blocks. During the back-propagation stage, the gradient of the codec is approximated by that of the surrogate, which is tractable. During the test stage, the surrogate DNN is removed and we directly use the H.265 codec for compression and decompression.
According to control variates theory [10, 12], the surrogate can serve as a low-variance gradient estimator for the video codec when (1) the differences between the outputs of the two functions are minimized and (2) the correlation coefficient of the two output distributions is maximized.
Therefore, we introduce these two constraints into the optimization procedure of the proposed SelfC-based video compression system. The loss function for the surrogate video perturbator is formulated as:
where the weight of the second term is set to a small value, i.e., 0.001, and the correlation coefficient is estimated within each batch by Monte Carlo sampling:
where the batch size determines the number of Monte Carlo samples. Finally, the total loss function for the SelfC-based video compression system is given by:
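The intuition behind conditions (1) and (2) can be checked numerically. Below is a toy NumPy sketch (not the actual perturbator network): a coarse quantizer stands in for the non-differentiable codec and a correlated differentiable function stands in for the surrogate; subtracting the centered surrogate as a control variate keeps the estimate unbiased while sharply shrinking its variance.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)

f = np.round(x * 4) / 4   # "codec": a non-differentiable quantizer
g = x                      # differentiable surrogate, highly correlated with f

# control variates: subtract c*(g - E[g]); the optimal c is Cov(f,g)/Var(g)
c = np.cov(f, g)[0, 1] / np.var(g)
plain = f                  # plain Monte Carlo estimator of E[f]
cv = f - c * (g - 0.0)     # E[g] = 0 is known exactly here

assert abs(plain.mean() - cv.mean()) < 2e-2  # same expectation (unbiased)
assert cv.var() < 0.1 * plain.var()          # far lower variance
```

The better the surrogate matches and correlates with the codec, the closer c is to 1 and the smaller the residual variance, which is exactly what the two constraints in the perturbator loss encourage.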
3.7 Application II: Efficient Action Recognition
We further apply the proposed SelfC framework to the video action recognition task. Specifically, we adopt the LR videos downscaled by our framework as the input of action recognition CNNs for efficient action recognition. Since the downscaler of our approach preserves meaningful information for the downstream tasks and its own complexity can be rather low, inserting the downscaler before off-the-shelf action CNNs reduces their huge computational complexity with negligible performance drop. Moreover, the light weight of the rescaling framework makes joint optimization tractable. In fact, compared with the bicubic downscaling operation, our downscaler in the SelfC framework generates more informative low-resolution videos for the action recognition task even without the joint training procedure. Please see Section 4.5 for more experimental results.
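To see why a 2× downscaler cuts the backbone cost so sharply: the FLOPs of a 2D convolution scale with the spatial area of its input, so halving both dimensions divides the per-layer cost by four. A back-of-the-envelope sketch with hypothetical layer sizes:

```python
def conv2d_flops(h, w, c_in, c_out, k=3):
    """Multiply-accumulate count of one k x k convolution layer."""
    return h * w * c_in * c_out * k * k

# hypothetical layer of an action CNN on a 224x224 vs. a 112x112 input
full = conv2d_flops(224, 224, 64, 64)
down = conv2d_flops(112, 112, 64, 64)
assert full == 4 * down  # halving resolution cuts per-layer FLOPs by 4x
```

The light downscaler adds back only a small fixed cost, so the end-to-end system keeps most of that 4× saving, consistent with the >60% complexity reduction reported earlier.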
4 Experiments
4.1 Datasets
We use the Vimeo90K dataset as our training data, which is also adopted by recent video super-resolution methods [21, 22, 59] and video compression methods [32, 18, 31]. For the video rescaling task, the evaluation datasets are the test set of Vimeo90K (denoted Vimeo90K-T), the widely-used Vid4 benchmark, and the SPMCs-30 dataset. For the video compression task, the evaluation datasets include UVG, MCL-JCV, VTL, and HEVC Class B. For the video recognition task, we train and evaluate on two large-scale datasets requiring temporal relation reasoning, i.e., Something V1&V2.
Table 1 (excerpt; Bicubic downscaling + RSDN 9-128 upscaling, 2 frames, 0.13T FLOPs, 6.19M params), PSNR/SSIM: Calendar (Y) 24.60/0.8355, City (Y) 29.20/0.8527, Foliage (Y) 26.84/0.7931, Walk (Y) 31.04/0.9210, Vid4-avg (Y) 27.92/0.8505, Vid4-avg (RGB) 26.43/0.8349.
Upscaling methods compared: Bicubic, TOFlow, FRVSR, DUF-52L, RBPN, PFNL, RSDN 9-128, IRN, SelfC-small, and SelfC-large.
4.2 Implementation Details
(1) Video rescaling: the four balancing parameters are set as 0.1, 1, 1 and 1, respectively. Each training clip consists of 7 RGB patches of size 256×256. The batch size is set as 32. We augment the training data with random horizontal flips and 90° rotations. We train our model with the Adam optimizer, setting β1 to 0.9 and β2 to 0.99. The total number of training iterations is about 240,000, and the learning rate is divided by 10 every 100,000 iterations. We implement the models with the PyTorch framework and train them on a server with 8 NVIDIA 2080Ti GPUs. We draw 5 samples from the generated distribution for each evaluation and report the averaged performance. We leverage the invertible neural network (INN) architecture to implement the CNN parts of the paired frequency analyzer and synthesizer for a fair comparison with IRN on parameter count, because the INN cuts the number of parameters by 50%. We propose the following two models: SelfC-small and SelfC-large, which consist of 2 and 8 invertible Dense2D-T blocks, respectively. The detailed architecture of this block is in the supplementary material, roughly following . Training the SelfC-large model takes about 90 hours.
(2) Video compression: the rescaling ratio of SelfC is set to 2. We use H.265 as the default codec in the experiments. The weight of the bicubic-mimicking loss is set as 100 so that the statistical distribution of the downscaled videos stays closer to that of natural images, which stabilizes the performance of the whole system. The other details follow those of the video rescaling task. The models are initialized from the SelfC-large model, but the number of invertible Dense2D-T blocks is reduced to 4. The surrogate CNN is randomly initialized and jointly optimized with the video rescaler.
(3) Action recognition: We insert the downscaler of our framework before the action recognition CNN (i.e., TSM). The data augmentation pipeline also follows TSM. The downscaling ratio is 2. At inference time, we use just 1 clip per video, each containing 8 frames. We adopt 2 plain Dense2D-T blocks with an intermediate channel number of 12 as the CNN part of the frequency analyzer. Note that the downscaler is first pretrained on the Vimeo90K dataset via the video rescaling task.
4.3 Results of Video Rescaling
As shown in Tab. 1 and Tab. 2, our method outperforms the recent state-of-the-art video super-resolution methods on Vid4, SPMCs-30 and Vimeo90K-T by a large margin in terms of both PSNR and SSIM. For example, the average PSNR (Y) of our SelfC-large on the Vid4 dataset is 31.11dB, while the corresponding result for the state-of-the-art video super-resolution approach RSDN is only 27.92dB. Furthermore, we also provide the results of the image rescaling method IRN in Tab. 1 and Tab. 2. Our method clearly outperforms IRN while also reducing the computational complexity by 2 times (SelfC-large) or 8 times (SelfC-small). This result demonstrates that exploiting the temporal relationship is necessary for the video rescaling task, a cue that existing image rescaling methods like IRN ignore. We also show a qualitative comparison with other methods on Vid4 in Fig. 4; please refer to the supplementary material for more qualitative results. Our method produces much better details and sharper images than both the video super-resolution methods and the image rescaling method, proving the superiority of the video rescaling paradigm.
4.4 Results of Video Compression
For a fair comparison, both the standard H.265 codec and the codec embedded in our framework follow the same setting in  and use FFmpeg with the veryfast preset. The evaluation metrics are PSNR and MS-SSIM.
Fig. 5 shows the experimental results on the UVG and MCL-JCV datasets. Our method clearly outperforms both the traditional methods and the learning-based method on the video compression task by a large margin. On the UVG dataset, the proposed method achieves about 0.8dB gain at the same Bpp level in comparison with H.265. Although our method is only optimized with a pixel-wise distortion loss, it demonstrates strong performance in terms of both PSNR and MS-SSIM metrics.
We also evaluate the Bjøntegaard Delta Bit-Rate (BDBR) using H.265 as the anchor method, and calculate the average bit-rate difference at the same PSNR or MS-SSIM, which quantifies the storage reduction. As shown in Tab. 3, our method saves the bit cost by over 25% on average at the same PSNR and by over 20% at the same MS-SSIM. Notably, we reduce the bit cost by about 40% on the UVG dataset. This proves that the video rescaling technique is a novel and effective way to improve video compression performance without delving into the complicated details of the codecs, especially industrial lossy codecs.
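BDBR is computed per the standard Bjøntegaard procedure: fit cubic polynomials of log bit-rate as a function of quality for both codecs, integrate over the overlapping quality range, and convert the average log-rate difference into a percentage. The sketch below follows that recipe with NumPy on hypothetical RD points:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bit-rate (%): average bit-rate difference at
    equal quality, via cubic fits of log-rate over PSNR."""
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    pa = np.polyfit(psnr_anchor, lr_a, 3)
    pt = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping PSNR range
    hi = min(max(psnr_anchor), max(psnr_test))
    ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
    avg_diff = (it - ia) / (hi - lo)             # mean log-rate difference
    return (np.exp(avg_diff) - 1) * 100

# hypothetical RD points: a test codec that needs 20% less rate at equal PSNR
r_anchor = np.array([1.0, 2.0, 4.0, 8.0])
psnr = np.array([30.0, 33.0, 36.0, 39.0])
r_test = 0.8 * r_anchor
print(round(bd_rate(r_anchor, psnr, r_test, psnr), 1))  # -> -20.0
```

A negative BDBR means the test codec needs less bit-rate than the anchor for the same quality, which is the sense in which the savings above are reported.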
We perform further analysis to verify the effectiveness of the "video rescaling + codec" paradigm and the proposed gradient estimation method. As shown in Fig. 6, using Bicubic as the downscaler and upscaler in the video compression system (i.e., H.265+Bicubic) leads to a much inferior result compared with the baseline. We also try to improve the result by using a state-of-the-art video super-resolution method, i.e., TGA. The performance is indeed improved, though still lower than the H.265 baseline. Considering that TGA has 5.87M network parameters while ours has only 2.65M, this result further demonstrates the effectiveness of our SelfC framework. Finally, we provide experimental results (i.e., Ours w/o Gradient) when directly using the biased Straight-Through Estimator for the H.265 codec. The results show that the proposed gradient estimation method in Section 3.6 brings nearly 0.3dB improvement.
Fig. 7 columns, from left to right: input HR video, LR video, ground-truth HF component, and two samples of the HF component.
4.5 Results of Efficient Video Action Recognition
We show the video action recognition results in Tab. 4. In the first group, when directly testing the action model pretrained on full-resolution videos, we observe that the performance on low-resolution videos downscaled by Bicubic and by our method both drops drastically, because the classification networks are rather sensitive to the absolute scale of the input. However, our downscaler still performs better (30.4% vs. 32.7% on the Something V1 dataset).
In the second group, we provide the experimental results when the action recognition CNN is fine-tuned on the low-resolution videos from Bicubic and from our downscaler in SelfC. The details of the fine-tuning procedure are in the supplementary material. Our method clearly outperforms bicubic downscaling in terms of Top1 accuracy on the two datasets. Notably, our downscaler is learnable; therefore, we then fine-tune the action recognition CNNs and our downscaler jointly. The results are in the third group. The end-to-end joint training further improves the performance by an obvious margin. On Something V2, the ultimate performance of our method nearly matches that of performing recognition directly on HR videos, while improving efficiency by over 3 times. The downscaler of IRN cannot improve the efficiency of this task because its computational cost is even larger than that of the HR setting. We tried to decrease the layer number of IRN, but it then no longer converges.
4.6 Ablation Studies on the Framework
In this section, we conduct experiments on the video rescaling task to verify the effectiveness of the components in our framework. We first define the following 2 baselines: (1) IRN, which is the most recent state-of-the-art image rescaling method; for a fair comparison, we retrain it on the Vimeo90K dataset using the code open-sourced by the authors. (2) Auto-Enc, a simple auto-encoder architecture obtained by removing the STP-Net from our model. The experimental results are shown in Tab. 5.
First, the Auto-Enc baseline performs worse than both IRN and our method. This proves that explicitly modeling the lost information is important. IRN is inferior to our small model although it leverages an 8 times heavier backbone. We also tried to equip IRN with temporal modeling ability by replacing its backbone from Dense2D to Dense2D-T. Surprisingly, the performance of the new model decreases by 0.47dB. The reason is that IRN relies on complex non-linear transformations to map the real distribution of the lost information to the normal distribution, while the transformation ability of the Dense2D-T backbone is weaker (0.73M fewer parameters).
For our method, we start from the simplest model, denoted SelfC-basic, whose backbone consists of only spatial convolutions and whose STP-Net outputs a simple Gaussian distribution. The performance of this model is comparable with IRN but with far fewer parameters. This proves the efficiency and superiority of the proposed self-conditioned distribution modeling scheme. Then, we introduce an improved model, denoted SelfC-basicT, whose temporal modeling ability is stronger thanks to changing the basic block from Dense2D to Dense2D-T. This leads to a 0.45dB improvement while reducing the parameters, proving the effectiveness of the Dense2D-T block for video tasks. Further, we increase the mixture number of the GMM to 5. The resulting SelfC-small model outperforms all the baselines by a large margin (30.28dB) with only 0.68M parameters. Our model also scales with a larger backbone network: enlarging it by 4 times improves the performance by 0.83dB. For more ablation studies on the depth of the backbone network, the comparison of different probabilistic modeling methods, the architecture of the STP-Net, and the loss functions, please refer to the supplementary material.
4.7 Visualization Results
While the previous quantitative results validate the superiority of the proposed self-conditioned modeling scheme on several tasks, it is interesting to inspect the intermediate outputs of our model, especially the distribution of the high-frequency (HF) component predicted by the STP-Net. Since the distribution is a mixture of Gaussians and includes multiple channels, we draw two samples from it and randomly select one channel of each for visualization. The output of the frequency analyzer is adopted as the ground-truth sample.
As shown in Fig. 7, the LR video downscaled by our method encodes extra information that eases the reconstruction of the HF components, compared to bicubic downscaling. Also, the sampled HF components accurately restore the key structures of the ground truth, e.g., the windows of the building, while retaining a certain degree of randomness. This is consistent with our learning objectives.
5 Conclusion
We have proposed a video rescaling framework to learn a pair of downscaling and upscaling operations. Extensive experiments demonstrate that our method outperforms previous methods by a large margin with much fewer parameters and lower computational cost. Moreover, the learned downscaling operator significantly facilitates the tasks of video compression and efficient action recognition.
-  (2020) Scale-space flow for end-to-end optimized video compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8503–8512. Cited by: §2.
Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv. Cited by: §3.4, §4.4.
-  (2001) Calculation of average psnr differences between rd-curves. VCEG-M33. Cited by: §4.4.
-  (2017) Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, Cited by: §2.
-  (2019) Neural inter-frame compression for video coding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6421–6429. Cited by: §2.
-  (2016) Video-based emotion recognition using cnn-rnn and c3d hybrid networks. In Proceedings of the 18th ACM international conference on multimodal interaction, pp. 445–450. Cited by: §1.
-  (2019) Slowfast networks for video recognition. In ICCV, Cited by: §2.
-  (2016) Convolutional two-stream network fusion for video action recognition. In CVPR, Cited by: §2.
-  (2019) Efficient video super-resolution through recurrent latent space propagation. In ICCVW, Cited by: Table 1.
-  (2002) Some new perspectives on the method of control variates. In Monte Carlo and Quasi-Monte Carlo Methods 2000, Cited by: §3.6.
-  (2017) The "something something" video database for learning and evaluating visual common sense. In ICCV, Cited by: §4.1.
-  (2017) Backpropagation through the void: optimizing control variates for black-box gradient estimation. arXiv. Cited by: §3.6.
-  (2016) Stochastic backpropagation through mixture density distributions. arXiv. Cited by: §3.5.
-  (2019) Video compression with rate-distortion autoencoders. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7033–7042. Cited by: §2.
-  (2017) Learning spatio-temporal features with 3d residual networks for action recognition. In ICCVW, Cited by: §2.
-  (2019) Recurrent back-projection network for video super-resolution. In CVPR, Cited by: §2, Table 1, Table 2.
-  (2016) Deep residual learning for image recognition. In CVPR, Cited by: §2.
-  (2020) Improving deep video compression by resolution-adaptive flow coding. In ECCV, Cited by: §4.1.
-  (2021) FVC: a new framework towards deep video compression in feature space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1502–1511. Cited by: §2.
-  (2014) Densenet: implementing efficient convnet descriptor pyramids. In arXiv, Cited by: §3.1.
-  (2020) Video super-resolution with recurrent structure-detail network. In ECCV, Cited by: §4.1, Table 1, Table 2.
-  (2020) Video super-resolution with temporal group attention. In CVPR, Cited by: §4.1, §4.4, Table 1.
-  (2018) Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In CVPR, Cited by: §2, Table 1, Table 2.
-  (2018) Task-aware image downscaling. In ECCV, Cited by: §1, §2, §3.2.
-  (2014) Adam: a method for stochastic optimization. In arXiv, Cited by: §4.2.
-  (2013) Auto-encoding variational bayes. arXiv. Cited by: §3.5.
-  (2018) Learning a convolutional neural network for image compact-resolution. In TIP, Cited by: §1, §2.
-  (2019) TSM: temporal shift module for efficient video understanding. In ICCV, Cited by: §2, §4.2.
-  (2020) M-LVC: multiple frames prediction for learned video compression. In CVPR, Cited by: §2.
-  (2013) On Bayesian adaptive video super resolution. In TPAMI, Cited by: §1, §2, §4.1, Table 1.
-  (2020) Content adaptive and error propagation aware deep video compression. In ECCV, Cited by: §2, §4.1.
-  (2019) DVC: an end-to-end deep video compression framework. In CVPR, Cited by: §2, Figure 5, §4.1, §4.4.
-  (2018) Deep Kalman filtering network for video compression artifact reduction. In ECCV, Cited by: §1.
-  (2020) An end-to-end learning framework for video compression. In TPAMI, Cited by: §2.
-  (2019) Deep non-local Kalman network for video compression artifact reduction. In TIP, Cited by: §1.
-  (2018) Fine-grained video classification and captioning. arXiv. Cited by: §4.1.
-  (2019) PyTorch: an imperative style, high-performance deep learning library. In NeurIPS, Cited by: §4.2.
-  (2003) Image denoising using scale mixtures of Gaussians in the wavelet domain. In TIP, Cited by: §2.
-  (2017) Learning spatio-temporal representation with pseudo-3D residual networks. In ICCV, Cited by: §2.
-  (2009) Gaussian mixture models. In EOB, Cited by: §3.2.
-  (2014) Stochastic backpropagation and approximate inference in deep generative models. In ICML, Cited by: §3.5.
-  (2018) Frame-recurrent video super-resolution. In CVPR, Cited by: Table 1, Table 2.
-  (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, Cited by: §3.1.
-  (2014) Two-stream convolutional networks for action recognition in videos. In NeurIPS, Cited by: §1, §2.
-  (2000) Image denoising using a local Gaussian scale mixture model in the wavelet domain. In Wavelet Applications in Signal and Image Processing VIII, Cited by: §2.
-  (2012) Overview of the high efficiency video coding (HEVC) standard. In TCSVT, Cited by: §1, §2, §4.1.
-  (2020) Learned image downscaling for upscaling using content adaptive resampler. In TIP, Cited by: §1.
-  (2017) Detail-revealing deep video super-resolution. In ICCV, Cited by: §1, §2, §4.1, Table 1.
-  (2020) Self-supervised motion representation via scattering local motion cues. In ECCV, Cited by: §2.
-  (2019) Video-based early ASD detection via temporal pyramid networks. In ICME, Cited by: §1.
-  (2015) Learning spatiotemporal features with 3D convolutional networks. In ICCV, Cited by: §2.
-  (2018) A closer look at spatiotemporal convolutions for action recognition. In CVPR, Cited by: §2.
-  Ultra video group test sequences. http://ultravideo.cs.tut.fi, accessed: 2019-11-06. Cited by: §4.1.
-  Video trace library. http://trace.kom.aau.dk/yuv/index.html, accessed: 2019-11-06. Cited by: §4.1.
-  (2001) Random cascades on wavelet trees and their use in analyzing and modeling natural images. In ACHA, Cited by: §2.
-  (1999) Scale mixtures of Gaussians and the statistics of natural images. In NeurIPS, Cited by: §2.
-  (2016) MCL-JCV: a JND-based H.264/AVC video quality assessment dataset. In ICIP, Cited by: §4.1.
-  (2018) Temporal segment networks for action recognition in videos. In TPAMI, Cited by: §2.
-  (2019) EDVR: video restoration with enhanced deformable convolutional networks. In CVPRW, Cited by: §1, §2, §4.1, Table 1.
-  (2003) Multiscale structural similarity for image quality assessment. In ACSSC, Cited by: §4.4.
-  (2003) Overview of the H.264/AVC video coding standard. In TCSVT, Cited by: §1, §2.
-  (2020) Invertible image rescaling. In ECCV, Cited by: §1, §2, §4.2, §4.6, Table 1, Table 2.
-  (2015) Empirical evaluation of rectified activations in convolutional network. arXiv. Cited by: Figure 2.
-  (2015) A discriminative CNN video representation for event detection. In CVPR, Cited by: §1.
-  (2019) Video enhancement with task-oriented flow. In IJCV, Cited by: §1, §2, §4.1, Table 1, Table 2.
-  (2021) DeHiB: deep hidden backdoor attack on semi-supervised learning via adversarial perturbation. In AAAI, Cited by: §1.
-  (2020) Superpixel segmentation with fully convolutional networks. In CVPR, Cited by: §2.
-  (2020) Learning for video compression with hierarchical quality and recurrent enhancement. In CVPR, Cited by: §2.
-  (2019) Progressive fusion video super-resolution network via exploiting non-local spatio-temporal correlations. In ICCV, Cited by: Table 1, Table 2.