1 Introduction
High-resolution videos are widely used in various computer vision tasks
[44, 64, 6, 35, 33, 50, 66]. However, considering the increased storage burden and the high computational cost, it is usually necessary to first downscale the high-resolution videos. Then we can either compress the resulting low-resolution videos to save storage cost or feed them to the downstream tasks to reduce the computational cost. Although this paradigm is prevalent, it has the following two disadvantages. First, it is non-trivial to restore the original high-resolution videos from the (compressed) low-resolution videos, even when using the latest super-resolution methods
[30, 65, 48, 59]. Second, it is also challenging for the downstream tasks to achieve high performance based on these low-resolution videos. This raises the question of whether the downscaling operation can facilitate the reconstruction of the high-resolution videos while also preserving the most meaningful information for the downstream tasks.

Recently, this question has been partially studied as a single image rescaling problem [24, 27, 47, 62], which learns the image downscaling and upscaling operators jointly. However, how to adapt these methods from the image to the video domain and leverage the rich temporal information within videos are still open problems. More importantly, modeling the information lost during downscaling is non-trivial. Current methods either ignore the lost information [24, 27, 47] or assume it follows an independent distribution in the latent space [62], neglecting the internal relationship between the downscaled image and the lost information. Besides, none of the works mentioned above has explored how to apply the rescaling technique to lossy image/video compression.
In this paper, we focus on building a video rescaling framework and propose a self-conditioned probabilistic learning approach to learn a pair of video downscaling and upscaling operators by exploiting the information dependency within the video itself. Specifically, we first design a learnable frequency analyzer to decompose the original high-resolution video into its downscaled version and the corresponding high-frequency component. Then, a Gaussian mixture distribution is leveraged to model the high-frequency component by conditioning on the downscaled video. For accurate estimation of the distribution parameters, we further introduce local and global temporal aggregation modules to fuse the spatial information from adjacent downscaled video frames. Finally, the original video can be restored by a frequency synthesizer from the downscaled video and the high-frequency component sampled from the distribution. We integrate the components above into a novel self-conditioned video rescaling framework termed SelfC and optimize it by minimizing the negative log-likelihood of the distribution.
Furthermore, we apply our proposed SelfC in two practical applications, i.e., lossy video compression and video action recognition. In particular, to integrate our framework with existing non-differentiable video codecs (e.g., H.264 [61] and H.265 [46]), we propose an efficient and effective one-pass optimization strategy based on the control variates method, which approximates the gradients of traditional codecs in the back-propagation procedure and thus forms an end-to-end optimization system.
Experimental results demonstrate that the proposed framework achieves state-of-the-art performance on the video rescaling task. More importantly, we further demonstrate its effectiveness in practical applications. For the lossy video compression task, compared with directly compressing the high-resolution videos, the video compression system based on our SelfC framework cuts the storage cost significantly (up to a 30% reduction). For the video action recognition task, our framework reduces the computational complexity by more than 60% with negligible performance degradation.
In summary, our main contributions are:

We propose a probabilistic learning framework dubbed SelfC for the video rescaling task, which models the information lost during downscaling as a dynamic distribution conditioned on the downscaled video.

Our approach exploits the rich temporal information in downscaled videos for an accurate estimation of the distribution parameters by introducing dedicated local and global temporal aggregation modules.

We propose a gradient estimation method for non-differentiable lossy codecs based on the control variates method and Monte Carlo sampling, extending the framework into a video compression system.
2 Related Work
Video Upscaling after Downscaling. Traditional video downscaling approaches subsample the input high-resolution (HR) videos with a handcrafted kernel, such as Bilinear or Bicubic. For restoration, video super-resolution (SR) methods are utilized. Since the SR task is inherently ill-posed, previous SR works [30, 65, 48, 59, 16, 23] mainly leverage heavy neural networks to hallucinate the lost details, but still achieve unsatisfactory results. Taking the video downscaling method into consideration may help mitigate the ill-posedness of the video upscaling procedure.
There are already a few works on the single image rescaling task in a similar spirit, which consider the downscaling and the upscaling of the image simultaneously. For example, Kim et al. [24] proposed a task-aware downscaling model based on an auto-encoder framework. Later, Li et al. [27] proposed to use a convolutional neural network (CNN) to estimate the downscaled low-resolution images for a given super-resolution method. For the stereo matching task, Yang et al. [67] proposed a superpixel-based downsampling/upsampling scheme to effectively preserve object boundaries and fine details. More recently, Xiao et al. [62] proposed to leverage an invertible neural network (INN) to model the two reciprocal steps, which relies on a very deep INN to map the complex distribution of the lost information to an independent and fixed normal distribution. However, these methods neither leverage the temporal information between adjacent frames, which is important for video-related tasks, nor consider the fact that the components of different frequencies in natural images or videos are conditionally dependent [56, 45, 38, 55].
Video Compression. Several traditional video compression algorithms, such as H.264 [61] and H.265 [46], have been proposed and widely deployed. Most of them follow the predictive coding architecture and rely on sophisticated handcrafted transformations to analyze the redundancy within the videos. Recently, fully end-to-end video codecs [14, 5, 68, 1, 29, 31, 19, 34] such as DVC [32] have been proposed by considering the rate-distortion trade-off of the whole compression system. They demonstrate promising performance and may be further improved by training on more ubiquitous videos in the wild. However, they have not been widely adopted by industry and lack hardware implementations. In contrast, our framework can be readily integrated with the best traditional video codecs and further saves the storage space of the compressed video significantly.
Video Action Recognition. Simonyan et al. [44] first proposed the two-stream framework, which Feichtenhofer et al. [8] then improved. Later, Wang et al. [58] proposed a new sparse frame sampling strategy. Recently, 3D networks [51, 4, 15, 52, 39, 7, 49] have also shown promising performance. Our work can accelerate off-the-shelf action CNNs by 3-4 times while preserving comparable performance. We mainly conduct experiments on lightweight 2D action CNNs (e.g., TSM [28]) based on 2D ResNet-50 [17] for efficiency.
3 Proposed Method
An overview of our proposed SelfC framework is shown in Fig. 1 (a). During the downscaling procedure, given a high-resolution (HR) video x, a frequency analyzer (FA) (Section 3.1) first converts it into video features F, whose first channels form the low-frequency (LF) component y_l and whose remaining channels form the high-frequency (HF) component y_h, with s denoting the downscaling ratio. Then, y_l is quantized to an LR video for storage, while y_h is discarded in this procedure.
During the upscaling procedure, given the LR video y_l, the spatial-temporal prior network (STPNet) (Section 3.3) predicts the probability density function of the HF component y_h:

p(y_h | y_l) = P(y_h; Phi(y_l)),    (1)

where Phi denotes STPNet, which outputs the parameters of the distribution. We model p(y_h | y_l) as a continuous mixture of parametric Gaussian distributions (Section 3.2). Then, a sample of the HF component related to the LR video y_l is drawn from the distribution. Finally, we reconstruct the HR video from the concatenation of the HF component and the LR video by the frequency synthesizer (FS).

3.1 Frequency Analyzer and Synthesizer
As shown in Fig. 1 (b), we first decompose the HR input video x, with spatial scale H x W and video length T, into the s-times downscaled low-frequency component B_down(x) and the residual high-frequency component PU(x - B_up(B_down(x))), where B_down and B_up represent bicubic downscaling and upscaling operations with scaling ratio s, and PU is the inverse of the pixel shuffling operation proposed in [43], also with scaling ratio s. Then, a learnable transformation maps the concatenation of these two components to the output features F = concat(y_l, y_h), where concat denotes the channel concatenation operation. Here the produced video feature F consists of the LF component y_l and the HF component y_h. The network architecture of this transformation is very flexible in our framework, and we use multiple stacked Dense2DT blocks to implement it by default. The architecture of the Dense2DT block is shown in Fig. 2, where we extend the vanilla Dense2D block [20] with temporal modeling ability.
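To make the decomposition concrete, here is a minimal numpy sketch, with average pooling and nearest-neighbor resizing standing in for the bicubic filters (our simplification for illustration, not the paper's choice). Note that the LF/HF split remains exactly invertible regardless of the filters used:

```python
import numpy as np

def pixel_unshuffle(x, s):
    # (C, H, W) -> (C*s*s, H//s, W//s); inverse of pixel shuffling [43]
    c, h, w = x.shape
    x = x.reshape(c, h // s, s, w // s, s)
    return x.transpose(0, 2, 4, 1, 3).reshape(c * s * s, h // s, w // s)

def pixel_shuffle(x, s):
    # (C*s*s, H//s, W//s) -> (C, H, W)
    c, h, w = x.shape
    x = x.reshape(c // (s * s), s, s, h, w)
    return x.transpose(0, 3, 1, 4, 2).reshape(c // (s * s), h * s, w * s)

def downscale(x, s):
    # stand-in for bicubic downscaling: s x s average pooling
    c, h, w = x.shape
    return x.reshape(c, h // s, s, w // s, s).mean(axis=(2, 4))

def upscale(x, s):
    # stand-in for bicubic upscaling: nearest-neighbor replication
    return x.repeat(s, axis=1).repeat(s, axis=2)

def decompose(x, s):
    lf = downscale(x, s)                         # low-frequency component
    hf = pixel_unshuffle(x - upscale(lf, s), s)  # packed high-frequency residual
    return lf, hf

def synthesize(lf, hf, s):
    # mirror of the analyzer: exact reconstruction by construction
    return upscale(lf, s) + pixel_shuffle(hf, s)
```

The exact-reconstruction property holds because the pixel (un)shuffling is lossless and the high-frequency residual stores everything the downscaling discards.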
The architecture of the frequency synthesizer is symmetric to that of the analyzer, as shown in Fig. 1 (c). Specifically, we use channel splitting, bicubic upscaling, and pixel shuffling operations to synthesize the final high-resolution video from the reconstructed video feature.
3.2 A Self-conditioned Probabilistic Model
Directly optimizing the likelihood in Eq. (1) through gradient descent is unstable due to the unsmooth gradient [24] of the quantization module. Thus, we optimize the distribution conditioned on the unquantized LF component during the training procedure. Specifically, we represent the high-frequency component y_h as a continuous multimodal probability distribution conditioned on the low-frequency component y_l, which is formulated as:

p(y_h | y_l) = prod_i p(y_h(i) | y_l),    (2)

where i denotes the spatial-temporal location. We use a continuous Gaussian Mixture Model (GMM) [40] to approximate p(y_h(i) | y_l) with component number K. The distributions are defined by the learnable mixture weights pi_k(i), means mu_k(i), and log variances log sigma_k^2(i). With these parameters, the distributions can be accurately determined as:

p(y_h(i) | y_l) = sum_{k=1}^{K} pi_k(i) N(y_h(i); mu_k(i), sigma_k^2(i)),    (3)

where

N(y; mu, sigma^2) = (1 / sqrt(2 pi sigma^2)) exp(-(y - mu)^2 / (2 sigma^2))    (4)

denotes the Gaussian density.
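For illustration, the per-location mixture density and the corresponding negative log-likelihood can be evaluated as in the following numpy sketch (scalar-valued locations, our simplification; not the authors' implementation):

```python
import numpy as np

def gmm_log_pdf(y, pi, mu, log_var):
    """Log density of a K-component Gaussian mixture at scalar y.

    pi, mu, log_var: arrays of shape (K,) holding the mixture weights,
    means, and log variances predicted at one spatial-temporal location.
    """
    var = np.exp(log_var)
    # per-component Gaussian log densities
    log_comp = -0.5 * ((y - mu) ** 2 / var + log_var + np.log(2.0 * np.pi))
    # numerically stable weighted log-sum-exp over the K components
    a = np.max(log_comp)
    return a + np.log(np.sum(pi * np.exp(log_comp - a)))

def nll(samples, pi, mu, log_var):
    # negative log-likelihood averaged over training samples
    return -np.mean([gmm_log_pdf(y, pi, mu, log_var) for y in samples])
```

With K = 1 this reduces to the plain Gaussian log density; larger K lets the model capture multimodal high-frequency statistics.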
3.3 Spatial-temporal Prior Network (STPNet)
As shown in Fig. 1 (d), to estimate the parameters of the distribution above, we propose the STPNet to model both local and global temporal information. We first utilize the Dense2DT block to extract short-term spatial-temporal features for each input frame. In this stage, only information from local frames, i.e., the previous or the next frames, is aggregated into the current frame, while the temporally long-range dependencies in videos are neglected. Therefore, we further introduce an attention mechanism for modeling the global temporal information. More specifically, the spatial dimension of the short-term spatial-temporal features is first reduced by a spatial aggregator, implemented as an average pooling operation followed by a fully-connected (FC) layer. The output scale of the pooling operation is 32x32. Then we use a dot-product operation to generate the attention map, which represents the similarity scores between every two frames. Finally, we refine the local spatial-temporal features based on the similarity scores. We repeat this procedure twice to extract better video features. After that, a 3-layer multilayer perceptron (MLP) is used to estimate the parameters of the GMM distribution, where the linear layers are implemented as 3D convolutions with 1x1x1 kernels.
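The global attention step can be sketched as follows, operating on one pooled descriptor per frame (the 32x32 spatial aggregation is abstracted into a single feature vector here, an illustrative simplification):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_temporal_attention(frame_feats):
    """frame_feats: (T, D), one spatially aggregated descriptor per frame.

    Returns features refined by a (T, T) frame-to-frame attention map,
    so every frame can aggregate information from all other frames."""
    t, d = frame_feats.shape
    scores = frame_feats @ frame_feats.T / np.sqrt(d)  # similarity of every two frames
    attn = softmax(scores, axis=1)                     # each row sums to 1
    return attn @ frame_feats
```

When all frames carry identical descriptors the attention is uniform and the features pass through unchanged; dissimilar frames receive correspondingly lower weights.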
3.4 Quantization and Storage Medium
We use a rounding operation as the quantization module and store the output LR videos in a lossless format, i.e., the H.265 lossless mode. The gradient of this module is computed with the Straight-Through Estimator [2]. We also discuss how to adapt the framework to more practical lossy video formats such as H.264 and H.265 in Section 3.6.
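A minimal sketch of rounding with a Straight-Through Estimator, written as an explicit forward/backward pair (autograd frameworks express the same idea as `x + (x.round() - x).detach()`; this idiom is standard practice, not specific to this paper):

```python
import numpy as np

def quantize_forward(x):
    # forward pass: true rounding (non-differentiable)
    return np.round(x)

def quantize_backward(grad_output):
    # Straight-Through Estimator [2]: treat d round(x)/dx as 1,
    # so the upstream gradient passes through unchanged
    return grad_output
```

The bias of this estimator is what Section 3.6 later addresses with the control-variates-based surrogate for lossy codecs.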
3.5 Training Strategy
Building a learned video rescaling framework is non-trivial, especially when the generated low-resolution videos are expected to benefit both the upscaling procedure and the downstream tasks. We consider the following objectives.
Self-conditioned Probability Learning. First, to make sure the STPNet obtains an accurate estimation for the HF component y_h, we directly minimize the negative log-likelihood of Eq. (2):

L_nll = -(1/N) sum_{n=1}^{N} log p(y_h^(n) | y_l^(n)),    (5)

where N is the number of training samples.
Mimicking Bicubic Downscaling. The downscaled video is preferred to be similar to the original video, making its deployment in the downstream tasks easier. Therefore, we regularize the downscaled video before quantization, i.e., y_l, to mimic the bicubic downsampled video B_down(x):

L_mim = d(y_l, B_down(x)),    (6)

where d denotes a photometric loss.
Penalizing Trivial Solutions. Without any extra constraint, Eq. (5) can be easily minimized by collapsing y_h to a constant tensor for any input video. Thus, to avoid the trivial solution, the CNN parts of the frequency analyzer and synthesizer are penalized by a photometric loss between the video directly reconstructed from the unquantized features and the original input x:

L_pen = d(FS(concat(y_l, y_h)), x).    (7)
Minimizing Reconstruction Difference. Finally, the expected difference between the reconstructed video sampled from the model and the original video should be minimized:

L_rec = E_{y_h_hat ~ p(y_h | y_l)} [ d(FS(concat(q(y_l), y_h_hat)), x) ],    (8)

where d denotes a photometric loss, concat denotes the channel-wise concatenation operation, and q denotes quantization. In each training iteration, y_h_hat is sampled from the distribution constructed from the parameters output by STPNet, conditioned on the LR video. To enable end-to-end optimization, we apply the "reparameterization trick" [26, 41, 13] to make the sampling procedure differentiable. More details are provided in the supplementary material.
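As a sketch of the sampling step: the Gaussian part uses the reparameterization mu + sigma * eps, so gradients can flow into mu and log_var. The component choice below is a plain categorical draw, our simplification; the paper's exact treatment of backpropagation through the mixture follows [26, 41, 13] and is detailed in their supplementary material:

```python
import numpy as np

def sample_gmm_reparam(pi, mu, log_var, rng):
    """Draw one sample from a K-component GMM at a single location.

    pi, mu, log_var: (K,) mixture parameters predicted by STPNet."""
    k = rng.choice(len(pi), p=pi)        # pick a mixture component
    eps = rng.standard_normal()          # noise independent of the parameters
    return mu[k] + np.exp(0.5 * log_var[k]) * eps  # reparameterized Gaussian draw
```

Because eps is sampled independently of mu and log_var, the sample is a differentiable function of the predicted parameters for a fixed component index.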
The total loss is then given by:

L_total = lambda_1 L_nll + lambda_2 L_mim + lambda_3 L_pen + lambda_4 L_rec,    (9)

where lambda_1, lambda_2, lambda_3, and lambda_4 are the balancing parameters. The loss function of our framework may seem a little complicated. However, we want to mention that the performance of our framework is not sensitive to these hyper-parameters, and directly setting all of them to 1 already achieves reasonable performance.
3.6 Application I: Video Compression
In this section, we extend the proposed SelfC framework to a lossy video compression system and aim to demonstrate the effectiveness of our approach in reducing the storage size. The whole system is shown in Fig. 3. Specifically, we first use the SelfC framework to generate the downscaled video y_l, which is compressed using an existing codec, e.g., H.265. Then, at the decoder side, the compressed video is decompressed and upscaled back to the full-resolution video.
Considering that traditional video codecs are non-differentiable, we further propose a novel optimization strategy. Specifically, we introduce a differentiable surrogate video perturbator g, implemented as a deep neural network (DNN) consisting of 6 Dense2DT blocks. During the back-propagation stage, the gradient of the codec is approximated by that of g, which is tractable. During the test stage, the surrogate DNN is removed and we directly use the H.265 codec for compression and decompression.
According to the control variates theory [10, 12], g can serve as a low-variance gradient estimator for the video codec C when (1) the differences between the outputs of the two functions are minimized and (2) the correlation coefficient of the two output distributions is maximized.
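One common way to realize such an estimator (a sketch under our own assumptions, using rounding as a stand-in for the non-differentiable codec and a toy linear map as the surrogate) is to compute the forward pass with the real codec while routing gradients through the surrogate, i.e., out = codec(y) + g(y) - stop_grad(g(y)). The forward value equals codec(y) exactly, while the backward pass sees only g's gradient:

```python
import numpy as np

def codec(y):
    # stand-in for a non-differentiable lossy codec (e.g., H.265)
    return np.round(y)

def surrogate(y, a=1.0):
    # differentiable surrogate perturbator g; a toy linear map for illustration
    return a * y

def surrogate_grad(y, a=1.0):
    return a * np.ones_like(y)

def forward(y):
    # codec(y) + g(y) - stop_grad(g(y)): the value is exactly codec(y),
    # because the two surrogate terms cancel numerically
    return codec(y) + surrogate(y) - surrogate(y)

def backward(y, grad_output):
    # gradient of the expression above w.r.t. y: the codec and the
    # stop-gradient term contribute nothing, so only g's gradient remains
    return grad_output * surrogate_grad(y)
```

The closer g tracks the codec (the two conditions above), the lower the variance of this estimator.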
Therefore, we introduce these two constraints into the optimization procedure of the proposed SelfC-based video compression system, and the loss function for the surrogate video perturbator is formulated as:

L_sur = d(g(y_l), C(y_l)) - alpha * rho,    (10)

where alpha is set to a small value, i.e., 0.001, and the correlation coefficient rho is estimated within each batch by Monte Carlo sampling:

rho = cov(g(y_l), C(y_l)) / (sigma_g * sigma_C),    (11)

where

cov(g(y_l), C(y_l)) = (1/B) sum_{b=1}^{B} (g(y_l^(b)) - mean_g)(C(y_l^(b)) - mean_C),    (12)

and B denotes the batch size. Finally, the total loss function for the SelfC-based video compression system is given by:

L_comp = L_total + L_sur.    (13)
3.7 Application II: Efficient Action Recognition
We further apply the proposed SelfC framework to the video action recognition task. Specifically, we adopt the LR videos (i.e., y_l) downscaled by our framework as the input of action recognition CNNs for efficient action recognition. Since the downscaler of our approach preserves meaningful information for the downstream tasks and its own complexity can be rather low, inserting it before off-the-shelf action CNNs substantially reduces their computational complexity with a negligible performance drop. Moreover, the light weight of the rescaling framework makes joint optimization tractable. In fact, compared with the bicubic downscaling operation, our downscaler in the SelfC framework generates more informative low-resolution videos for the action recognition task even without the joint training procedure. Please see Section 4.5 for more experimental results.
4 Experiments
4.1 Dataset
We use the Vimeo-90K dataset [65] as our training data, which is also adopted by recent video super-resolution methods [21, 22, 59] and video compression methods [32, 18, 31]. For the video rescaling task, the evaluation datasets are the test set of Vimeo-90K (denoted by Vimeo90K-T), the widely-used Vid4 benchmark [30], and the SPMCS-30 dataset [48]. For the video compression task, the evaluation datasets include UVG [53], MCL-JCV [57], VTL [54], and HEVC Class B [46]. For the video recognition task, we train and evaluate on two large-scale datasets requiring temporal relation reasoning, i.e., Something-Something V1 & V2 [11, 36].
Table 1: Quantitative comparison (PSNR/SSIM) on the Vid4 dataset.

Downscaling  Upscaling  #Frame  FLOPs  #Param.  Calendar (Y)  City (Y)  Foliage (Y)  Walk (Y)  Vid4avg(Y)  Vid4avg(RGB)
Bicubic  Bicubic  1  N/A  N/A  18.83/0.4936  23.84/0.5234  21.52/0.4438  23.01/0.7096  21.80/0.5426  20.37/0.5106 
Bicubic  SPMC [48]  3      /  /  /  /  25.52/0.76  / 
Bicubic  Liu [30]  5      21.61/  26.29/  24.99/  28.06/  25.23/  / 
Bicubic  TOFlow [65]  7  0.81T  1.41M  22.29/0.7273  26.79/0.7446  25.31/0.7118  29.02/0.8799  25.85/0.7659  24.39/0.7438 
Bicubic  FRVSR [42]  2  0.14T  5.05M  22.67/0.7844  27.70/0.8063  25.83/0.7541  29.72/0.8971  26.48/0.8104  25.01/0.7917 
Bicubic  DUF52L [23]  7  0.62T  5.82M  24.17/0.8161  28.05/0.8235  26.42/0.7758  30.91/0.9165  27.38/0.8329  25.91/0.8166 
Bicubic  RBPN [16]  7  9.30T  12.2M  24.02/0.8088  27.83/0.8045  26.21/0.7579  30.62/0.9111  27.17/0.8205  25.65/0.7997 
Bicubic  EDVRL [59]  7  0.93T  20.6M  24.05/0.8147  28.00/0.8122  26.34/0.7635  31.02/0.9152  27.35/0.8264  25.83/0.8077 
Bicubic  PFNL [69]  7  0.70T  3.00M  23.56/0.8232  28.11/0.8366  26.42/0.7761  30.55/0.9103  27.16/0.8365  25.67/0.8189 
Bicubic  RLSP [9]  3  0.09T  4.21M  24.36/0.8235  28.22/0.8362  26.66/0.7821  30.71/0.9134  27.48/0.8388  25.69/0.8153 
Bicubic  TGA [22]  7  0.23T  5.87M  24.50/0.8285  28.50/0.8442  26.59/0.7795  30.96/0.9171  27.63/0.8423  26.14/0.8258 
Bicubic  RSDN 9128 [21]  2  0.13T  6.19M  24.60/0.8355  29.20/0.8527  26.84/0.7931  31.04/0.9210  27.92/0.8505  26.43/0.8349 
IRN  IRN [62]  1  0.20T  4.36M  25.05/0.8424  29.84/0.8825  28.79/0.8604  34.24/0.9584  29.48/0.8859  27.58/0.8618 
SelfCsmall  SelfCsmall  7  0.026T  0.68M  26.02/0.8675  30.26/0.8953  29.73/0.8829  34.87/0.9747  30.28/0.9021  27.98/0.8763 
SelfClarge  SelfClarge  7  0.093T  2.65M  27.12/0.9018  31.16/0.9171  30.59/0.9123  35.56/0.9730  31.11/0.9261  29.06/0.9065 
Table 2: Quantitative comparison (PSNR/SSIM) on the SPMCs30 and Vimeo90KT datasets.

Downscaling  Bicubic  Bicubic  Bicubic  Bicubic  Bicubic  Bicubic  Bicubic  IRN  SelfCsmall  SelfClarge

Upscaling  Bicubic  TOFlow [65]  FRVSR [42]  DUF52L [23]  RBPN [16]  PFNL[69]  RSDN 9128 [21]  IRN [62]  SelfCsmall  SelfClarge 
SPMCs30(Y)  23.29/0.6385  27.86/0.8237  28.16/0.8421  29.63/0.8719  29.73/0.8663  29.74/0.8792  /  34.45/0.9358  34.51/0.9372  37.86/0.9710 
SPMCs30(RGB)  21.83/0.6133  26.38/0.8072  26.68/0.8271  28.10/0.8582  28.23/0.8561  27.24/0.8495  /  32.26/0.9185  32.33/0.9201  35.19/0.9567 
Vimeo90KT(Y)  31.30/0.8687  34.62/0.9212  35.64/0.9319  36.87/0.9447  37.20/0.9458  /  37.23/0.9471  37.47/0.9686  37.18/0.9701  38.08/0.9762 
Vimeo90KT(RGB)  29.77/0.8490  32.78/0.9040  33.96/0.9192  34.96/0.9313  35.39/0.9340  /  35.32/0.9344  35.14/0.9535  35.02/0.9542  35.55/0.9627 
4.2 Implementation Details
(1) Video rescaling: lambda_1, lambda_2, lambda_3, and lambda_4 are set as 0.1, 1, 1, and 1, respectively. Each training clip consists of 7 RGB patches of size 256x256. The batch size is set as 32. We augment the training data with random horizontal flips and 90-degree rotations. We train our model with the Adam optimizer [25], setting beta_1 as 0.9 and beta_2 as 0.99. The total number of training iterations is about 240,000, and the learning rate is divided by 10 every 100,000 iterations. We implement the models with the PyTorch framework [37] and train them on a server with 8 NVIDIA 2080Ti GPUs. We draw 5 samples from the generated distribution for each evaluation and report the averaged performance. We leverage the invertible neural network (INN) architecture to implement the CNN parts of the paired frequency analyzer and synthesizer for a fair comparison with IRN in terms of parameter number, because the INN cuts the number of parameters by 50%. We propose the following two models: SelfC-small and SelfC-large, which consist of 2 and 8 invertible Dense2DT blocks, respectively. The detailed architecture of this block, roughly following [62], is in the supplementary material. Training the SelfC-large model takes about 90 hours.

(2) Video compression: The rescaling ratio of SelfC is set to 2. We use H.265 as the default codec in the experiments. lambda_2 is set as 100 to make the statistical distribution of the downscaled videos closer to that of natural images, which stabilizes the performance of the whole system. The other details follow those of the video rescaling task. The models are initialized from the SelfC-large model, but the number of invertible Dense2DT blocks is reduced to 4. The surrogate CNN is randomly initialized and jointly optimized with the video rescaler.
(3) Action recognition: We insert the downscaler of our framework before the action recognition CNN (i.e., TSM [28]). The data augmentation pipeline also follows TSM. The downscaling ratio is 2. At inference time, we use just 1 clip per video, and each clip contains 8 frames. We adopt 2 plain Dense2DT blocks with an intermediate channel number of 12 as the CNN part of the frequency analyzer. Note that the downscaler is first pretrained on the Vimeo-90K dataset via the video rescaling task.
4.3 Results of Video Rescaling
As shown in Tab. 1 and Tab. 2, our method outperforms the recent state-of-the-art video super-resolution methods on Vid4, SPMCS-30, and Vimeo90K-T by a large margin in terms of both PSNR and SSIM. For example, the average PSNR(Y) of our SelfC-large on the Vid4 dataset is 31.11dB, while the corresponding result for the state-of-the-art video super-resolution approach RSDN is only 27.92dB. Furthermore, we also provide the results of the image rescaling method IRN in Tab. 1 and Tab. 2. Our method outperforms IRN while also reducing the computational complexity by 2 times (SelfC-large) or 8 times (SelfC-small). This result clearly demonstrates the necessity of exploiting the temporal relationship for the video rescaling task, a cue that existing image rescaling methods like IRN ignore. We also show the qualitative comparison with other methods on Vid4 in Fig. 4; please refer to the supplementary material for more qualitative results. Our method produces much better details and sharper images than both the video super-resolution methods and the image rescaling method, proving the superiority of the video rescaling paradigm.
4.4 Results of Video Compression
For a fair comparison, both the standard H.265 codec and the codec embedded in our framework follow the same setting as [32] and use FFmpeg with the veryfast preset. The evaluation metrics are PSNR and MS-SSIM [60].

Fig. 5 shows the experimental results on the UVG and MCL-JCV datasets. Our method outperforms both the traditional methods and the learning-based method on the video compression task by a large margin. On the UVG dataset, the proposed method achieves about a 0.8dB gain at the same Bpp level in comparison with H.265. Although our method is only optimized with a photometric loss, it demonstrates strong performance in terms of both the PSNR and MS-SSIM metrics.
Table 3: BD-BR results (%, bitrate reduction relative to the H.265 anchor).

Dataset  UVG  VTL  MCLJCV  Average
BDBR (PSNR)  39.86  27.45  16.75  28.02
BDBR (MS-SSIM)  29.01  14.42  26.59  23.34
We also evaluate the Bjøntegaard Delta Bit-Rate (BD-BR) [3] by using H.265 as the anchor method and calculating the average bitrate difference at the same PSNR or MS-SSIM, which quantifies the storage reduction. As shown in Tab. 3, our method saves the bit cost by over 25% on average under the same PSNR and over 20% under the same MS-SSIM. Notably, we reduce the bit cost by about 40% on the UVG dataset. This proves that video rescaling is a novel and effective way to improve video compression performance without delving into the complicated details of the codecs, especially for industrial lossy codecs.
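BD-BR can be computed by fitting log-rate as a cubic polynomial of quality and integrating over the overlapping quality range. The sketch below follows the standard Bjøntegaard recipe [3], not the authors' exact script:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bitrate change (%) of the test codec vs. the anchor
    at equal quality, following Bjontegaard's method [3]."""
    la = np.log(np.asarray(rate_anchor, dtype=float))
    lt = np.log(np.asarray(rate_test, dtype=float))
    pa = np.polyfit(psnr_anchor, la, 3)          # log-rate as cubic in PSNR
    pt = np.polyfit(psnr_test, lt, 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    avg_a = np.polyval(np.polyint(pa), [lo, hi])
    avg_t = np.polyval(np.polyint(pt), [lo, hi])
    diff = ((avg_t[1] - avg_t[0]) - (avg_a[1] - avg_a[0])) / (hi - lo)
    return (np.exp(diff) - 1.0) * 100.0          # negative = bitrate saving
```

For example, a test curve whose rate points are uniformly 20% below the anchor at the same PSNR yields a BD-BR of about -20%.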
We perform further analysis to verify the effectiveness of the "video rescaling + codec" paradigm and the proposed gradient estimation method. As shown in Fig. 6, using Bicubic as the downscaler and upscaler in the video compression system (i.e., H.265+Bicubic) leads to much worse results than the baseline. We also try to improve the result by using a state-of-the-art video super-resolution method, i.e., TGA [22]. The performance is indeed improved, though still lower than the H.265 baseline. Considering that TGA has 5.87M network parameters while ours has only 2.65M, this result further demonstrates the effectiveness of our SelfC framework. Finally, we provide experimental results (i.e., Ours W/O Gradient) when directly using the biased Straight-Through Estimator [2] for the H.265 codec. The results show that the proposed gradient estimation method in Section 3.6 brings nearly 0.3dB improvement.
[Fig. 7: Input HR video, LR video, ground-truth HF component, and two samples of the HF component drawn from the predicted distribution.]
4.5 Results of Efficient Video Action Recognition
We show the video action recognition results in Tab. 4. In the first group, when directly testing the action model pretrained on full-resolution videos, the performance on low-resolution videos downscaled by either Bicubic or our method drops drastically, because the classification networks are rather sensitive to the absolute scale of the input. However, our downscaler still performs better (32.7% vs. 30.4% on the Something-Something V1 dataset).




Table 4: Action recognition results on Something-Something V1 and V2.

Input  Model  FLOPs  #Param.  V1 Top1 (%)  V1 Top5 (%)  V2 Top1 (%)  V2 Top5 (%)
HR  TSM  33G  24.31M  45.6  74.2  58.8  85.4
Bicubic  TSM  8.6G  24.31M  30.4  57.5  40.1  68.7
Ours  TSM  10.8G  24.35M  32.7  59.5  41.8  71.5
Bicubic (FT)  TSM  8.6G  24.31M  42.1  72.1  56.0  83.9
Ours (FT)  TSM  10.8G  24.35M  43.5  72.8  57.4  84.8
Ours (E2E)  TSM  10.8G  24.35M  44.6  73.3  58.3  85.5
In the second group, we provide the experimental results when the action recognition CNN is finetuned on the low-resolution videos produced by Bicubic and by our SelfC downscaler. The details of the finetuning procedure are in the supplementary material. Our method clearly outperforms bicubic downscaling in terms of Top1 accuracy on both datasets. Notably, our downscaler is learnable; we therefore also finetune the action recognition CNN and our downscaler jointly. The results are in the third group. The end-to-end joint training further improves the performance by an obvious margin. On Something-Something V2, the ultimate performance of our method nearly matches that of performing recognition directly on HR videos, while improving the efficiency by over 3 times. The downscaler of IRN cannot improve the efficiency of this task because its computation cost is even larger than that of the HR setting. We tried to decrease the layer number of IRN, but it then no longer converges.
4.6 Ablation Studies on the Framework
In this section, we conduct experiments on the video rescaling task to verify the effectiveness of the components in our framework. We first define the following two baselines: (1) IRN [62], the most recent state-of-the-art image rescaling method; for a fair comparison, we retrain it on the Vimeo-90K dataset using the code open-sourced by the authors. (2) AutoEnc, a simple auto-encoder architecture obtained by removing the STPNet from our model. The experimental results are shown in Tab. 5.

Table 5: Ablation studies on the Vid4 dataset.

Methods  Backbone  Probability model  Param (M)  Vid4 (Y) PSNR (dB)  Vid4 (Y) SSIM
AutoEnc  16 Dense2DT  None  3.63  27.82  0.8297
IRN  16 Dense2D  Fixed normal  4.36  29.48  0.8859
IRN  16 Dense2DT  Fixed normal  3.63  29.01  0.8786
SelfC-basic  2 Dense2D  GMM (K=1)  0.73  29.22  0.8842
SelfC-basicT  2 Dense2DT  GMM (K=1)  0.64  29.67  0.8864
SelfC-small  2 Dense2DT  GMM (K=5)  0.68  30.28  0.9021
SelfC-large  8 Dense2DT  GMM (K=5)  2.65  31.11  0.9261
First, the AutoEnc baseline performs worse than both IRN and our method, which proves that explicitly modeling the lost information is important. IRN is inferior to our small model although it leverages an 8-times heavier backbone. We also tried to equip IRN with temporal modeling ability by replacing its backbone from Dense2D with Dense2DT. Surprisingly, the performance of this variant decreases by 0.47dB. The reason is that IRN relies on a complex nonlinear transformation to map the real distribution of the lost information to the normal distribution, while the transformation ability of the Dense2DT backbone is weaker (0.73M fewer parameters).

For our method, we start from the simplest model, denoted by SelfC-basic, whose backbone consists of only spatial convolutions and whose STPNet outputs a single Gaussian distribution. The performance of this model is comparable with IRN but with 6 times fewer parameters, which proves the efficiency and superiority of the proposed self-conditioned distribution modeling scheme. Then, we introduce an improved model, denoted by SelfC-basicT, whose temporal modeling ability is stronger thanks to changing the basic block from Dense2D to Dense2DT. This leads to a 0.45dB improvement while reducing the parameters, proving the effectiveness of the Dense2DT block for video tasks. Further, we increase the mixture number of the GMM model to 5. The resulting SelfC-small model outperforms all the baselines by a large margin (30.28dB) with only 0.68M parameters. Our model also scales with a larger backbone network: enlarging the backbone by 4 times improves the performance by 0.83dB. For more ablation studies on the depth of the backbone network, the comparison of different probabilistic modeling methods, the architecture of the STPNet, and the loss functions, please refer to the supplementary material.
4.7 Visualization Results
While the previous quantitative results validate the superiority of the proposed self-conditioned modeling scheme on several tasks, it is interesting to investigate the intermediate components output by our model, especially the distribution of the high-frequency (HF) component predicted by STPNet. Since the distribution is a mixture of Gaussians and comprises multiple channels, we draw two samples from it and randomly select one channel of each for visualization. The HF component output by the frequency analyzer is adopted as the ground-truth sample.

As shown in Fig. 7, the LR video downscaled by our method, compared to Bicubic, carries extra information that makes reconstructing the HF components easier. Also, the sampled HF components restore the key structures of the ground truth accurately, e.g., the windows of the building, while retaining a certain degree of randomness. This is consistent with our learning objectives.
5 Conclusion
We have proposed a video rescaling framework to learn a pair of downscaling and upscaling operations. Extensive experiments demonstrate that our method outperforms the previous methods by a large margin with much fewer parameters and lower computational cost. Moreover, the learned downscaling operator significantly facilitates the tasks of video compression and efficient action recognition.
References

[1] (2020) Scale-space flow for end-to-end optimized video compression. In CVPR, pp. 8503–8512.
[2] (2013) Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv.
[3] (2001) Calculation of average PSNR differences between RD-curves. VCEG-M33.
[4] (2017) Quo vadis, action recognition? A new model and the Kinetics dataset. In CVPR.
[5] (2019) Neural inter-frame compression for video coding. In ICCV, pp. 6421–6429.
[6] (2016) Video-based emotion recognition using CNN-RNN and C3D hybrid networks. In Proceedings of the 18th ACM International Conference on Multimodal Interaction, pp. 445–450.
[7] (2019) SlowFast networks for video recognition. In ICCV.
[8] (2016) Convolutional two-stream network fusion for video action recognition. In CVPR.
[9] (2019) Efficient video super-resolution through recurrent latent space propagation. In ICCVW.
[10] (2002) Some new perspectives on the method of control variates. In Monte Carlo and Quasi-Monte Carlo Methods 2000.
[11] (2017) The "something something" video database for learning and evaluating visual common sense. In ICCV.
[12] (2017) Backpropagation through the void: optimizing control variates for black-box gradient estimation. arXiv.
[13] (2016) Stochastic backpropagation through mixture density distributions. arXiv.
[14] (2019) Video compression with rate-distortion autoencoders. In ICCV, pp. 7033–7042.
[15] (2017) Learning spatio-temporal features with 3D residual networks for action recognition. In ICCVW.
[16] (2019) Recurrent back-projection network for video super-resolution. In CVPR.
[17] (2016) Deep residual learning for image recognition. In CVPR.
[18] (2020) Improving deep video compression by resolution-adaptive flow coding. In ECCV.
[19] (2021) FVC: a new framework towards deep video compression in feature space. In CVPR, pp. 1502–1511.
[20] (2014) DenseNet: implementing efficient ConvNet descriptor pyramids. arXiv.
[21] (2020) Video super-resolution with recurrent structure-detail network. In ECCV.
[22] (2020) Video super-resolution with temporal group attention. In CVPR.
[23] (2018) Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In CVPR.
[24] (2018) Task-aware image downscaling. In ECCV.
[25] (2014) Adam: a method for stochastic optimization. arXiv.
[26] (2013) Auto-encoding variational Bayes. arXiv.
[27] (2018) Learning a convolutional neural network for image compact-resolution. In TIP.
[28] (2019) TSM: temporal shift module for efficient video understanding. In ICCV.
[29] (2020) M-LVC: multiple frames prediction for learned video compression. In CVPR, pp. 3546–3554.
[30] (2013) On Bayesian adaptive video super resolution. In TPAMI.
[31] (2020) Content adaptive and error propagation aware deep video compression. In ECCV, pp. 456–472.
[32] (2019) DVC: an end-to-end deep video compression framework. In CVPR.
[33] (2018) Deep Kalman filtering network for video compression artifact reduction. In ECCV, pp. 568–584.
[34] (2020) An end-to-end learning framework for video compression. In TPAMI.
[35] (2019) Deep non-local Kalman network for video compression artifact reduction. In TIP 29, pp. 1725–1737.
[36] (2018) Fine-grained video classification and captioning. arXiv.
[37] (2019) PyTorch: an imperative style, high-performance deep learning library. In NeurIPS.
[38] (2003) Image denoising using scale mixtures of Gaussians in the wavelet domain. In TIP.
[39] (2017) Learning spatio-temporal representation with pseudo-3D residual networks. In ICCV.
[40] (2009) Gaussian mixture models. In EOB.
[41] (2014) Stochastic backpropagation and approximate inference in deep generative models. In ICML.
[42] (2018) Frame-recurrent video super-resolution. In CVPR.
[43] (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR.
[44] (2014) Two-stream convolutional networks for action recognition in videos. In NeurIPS.
[45] (2000) Image denoising using a local Gaussian scale mixture model in the wavelet domain. In Wavelet Applications in Signal and Image Processing VIII.
[46] (2012) Overview of the High Efficiency Video Coding (HEVC) standard. In TCSVT.
[47] (2020) Learned image downscaling for upscaling using content adaptive resampler. In TIP.
[48] (2017) Detail-revealing deep video super-resolution. In ICCV.
[49] (2020) Self-supervised motion representation via scattering local motion cues. In ECCV, pp. 71–89.
[50] (2019) Video-based early ASD detection via temporal pyramid networks. In ICME.
[51] (2015) Learning spatiotemporal features with 3D convolutional networks. In ICCV.
[52] (2018) A closer look at spatiotemporal convolutions for action recognition. In CVPR.
[53] Ultra Video Group test sequences. http://ultravideo.cs.tut.fi, accessed: 2019-11-06.
[54] Video trace library. http://trace.kom.aau.dk/yuv/index.html, accessed: 2019-11-06.
[55] (2001) Random cascades on wavelet trees and their use in analyzing and modeling natural images. In ACHA.
[56] (1999) Scale mixtures of Gaussians and the statistics of natural images. In NeurIPS.
[57] (2016) MCL-JCV: a JND-based H.264/AVC video quality assessment dataset. In ICIP.
[58] (2018) Temporal segment networks for action recognition in videos. In TPAMI.
[59] (2019) EDVR: video restoration with enhanced deformable convolutional networks. In CVPRW.
[60] (2003) Multiscale structural similarity for image quality assessment. In ACSSC.
[61] (2003) Overview of the H.264/AVC video coding standard. In TCSVT.
[62] (2020) Invertible image rescaling. In ECCV.
[63] (2015) Empirical evaluation of rectified activations in convolutional network. arXiv.
[64] (2015) A discriminative CNN video representation for event detection. In CVPR, pp. 1798–1807.
[65] (2019) Video enhancement with task-oriented flow. In IJCV.
[66] (2021) DeHiB: deep hidden backdoor attack on semi-supervised learning via adversarial perturbation. In AAAI, Vol. 35, pp. 10585–10593.
[67] (2020) Superpixel segmentation with fully convolutional networks. In CVPR, pp. 13964–13973.
[68] (2020) Learning for video compression with hierarchical quality and recurrent enhancement. In CVPR, pp. 6628–6637.
[69] (2019) Progressive fusion video super-resolution network via exploiting non-local spatio-temporal correlations. In ICCV.