
NTIRE 2020 Challenge on Video Quality Mapping: Methods and Results

This paper reviews the NTIRE 2020 challenge on video quality mapping (VQM), which addresses the issue of quality mapping from a source video domain to a target video domain. The challenge includes both a supervised track (track 1) and a weakly-supervised track (track 2) over two benchmark datasets. In particular, track 1 offers a new Internet video benchmark, requiring algorithms to learn the mapping from more compressed videos to less compressed videos in a supervised training manner. In track 2, algorithms are required to learn the quality mapping from one device to another when their quality varies substantially and only weakly-aligned video pairs are available. For track 1, in total 7 teams competed in the final test phase, demonstrating novel and effective solutions to the problem. For track 2, some existing methods are evaluated, showing promising solutions to the weakly-supervised video quality mapping problem.





1 Introduction

Human-captured and transmitted videos often suffer from various quality issues. For instance, despite the remarkable progress of current smartphone and depth cameras, their compact sensors and lenses still make DSLR-level quality unattainable. Due to bandwidth limits over the Internet, videos have to be compressed for transmission, and the compressed videos inevitably suffer from compression artifacts. Therefore, quality enhancement for such videos is in high demand.

The challenge aims at pushing competing methods toward effective and efficient solutions to the newly emerging video quality mapping (VQM) tasks. Following [21], two tracks are studied in this challenge. Track 1 is configured for fully-supervised video quality mapping from more compressed videos to less compressed videos collected from the Internet, while track 2 is designed for weakly-supervised video quality mapping from a ZED camera to a Canon 5D Mark IV camera. Competing methods are evaluated with the most prominent metrics in the field, i.e., Peak Signal-to-Noise Ratio (PSNR) and the structural similarity index (SSIM).

Since PSNR and SSIM are not always well correlated with human perception of quality, we also consider to leverage perceptual measures, such as the Learned Perceptual Image Patch Similarity (LPIPS) [44] metric as well as mean opinion scores (MOS), which aim to evaluate the quality of the outputs according to human visual perception.

This challenge is one of the NTIRE 2020 associated challenges on: deblurring [26], nonhomogeneous dehazing [4], perceptual extreme super-resolution [43], video quality mapping (this paper), real image denoising [1], real-world super-resolution [25], spectral reconstruction from RGB images [5] and demoireing [42].

2 Related Work

Quality Enhancement on Compressed Videos aims to eliminate visual artifacts of compressed videos, which are transmitted over the bandwidth-limited Internet and often suffer from compression artifacts. Several algorithms have emerged, e.g. [10, 36, 40], which generally employ the original (uncompressed or less compressed) videos for full supervision of video quality map learning. For instance, [36] proposes an Auto-Decoder to learn the non-linear mapping from the decoded video to the original one, such that artifacts can be removed and details enhanced on compressed videos. [10] suggests a post-processing algorithm for artifact reduction on compressed videos. Based on the observation that High Efficiency Video Coding (HEVC) adopts variable block size transforms, the suggested algorithm integrates variable filter sizes into convolutional networks for better reduction of the quantization error. To take advantage of the information available in neighboring frames, [40] proposes a deep network that takes both the current frame and its adjacent high-quality frames into account for better enhancement of compressed videos.

Video Super-Resolution (VSR) methods are used as well to enhance the texture quality of videos. The requirements for VSR and VQM are similar: it is important to enforce temporally consistent transitions between enhanced frames and to accumulate information over time, which is a fundamental difference to single image enhancement methods. Most deep learning based methods adopt the idea of concatenating adjacent frames with explicit motion compensation in order to leverage temporal information [20, 33, 7]. A more recent method [19] successfully explores 3D convolutions as a natural extension for video data, without explicit motion compensation. In contrast to single image enhancers, many video applications require real-time performance; therefore, efficient algorithms for video processing are in high demand. Temporal information can be aggregated very efficiently with the recurrent neural networks (RNN) developed in [31, 11]. For instance, [31] efficiently warps the previous high-resolution output towards the current frame according to optical flow. In [11], runtimes are further improved by propagating an additional hidden state, which handles implicit processing of temporal information without explicit motion compensation. Perceptual improvements over fully-supervised VSR methods are realized with generative adversarial networks (GAN) by [9] and [28].

Quality Enhancement on Device-Captured Videos aims at enhancing the perceived quality of videos taken by devices, which includes enhancements like increasing color vividness, boosting contrast, sharpening textures, etc. However, the major issue in enhancing such videos is the extreme difficulty of collecting well-aligned training data, i.e., input and target videos that are aligned in both the spatial and the temporal domain. A few approaches address this problem with reinforcement learning based techniques [14, 27, 22], which aim at creating pseudo input-retouched pairs by applying retouching operations sequentially.

Another direction is to develop Generative Adversarial Network (GAN) based methods for this task. For example, [8] proposes a method for image enhancement by learning from unpaired photographs. The method learns an enhancing map from a set of low-quality photos to a set of high-quality photographs using the GAN technique [12], which has proven to be good at learning real data distributions. Similarly, [18] leverages the GAN technique to learn the distribution of separate visual elements (i.e., color and texture) of images, such that low-quality images can be mapped more easily to the high-quality image domain, which is encoded with more vivid colors and sharper textures. More recently, [16] suggests a divide-and-conquer adversarial learning method to further decompose the photo enhancement problem into multiple sub-problems. The sub-problems are divided hierarchically: 1) a perception-based division for learning on additive and multiplicative components, 2) a frequency-based division in the GAN context for learning on the low- and high-frequency based distributions, and 3) a dimension-based division for factorization of high-dimensional distributions. To further smooth the temporal semantics during enhancement, an efficient recurrent design of the GAN model is introduced. To the best of our knowledge, apart from [16], there are very few works specifically for weakly-supervised video enhancement.

Figure 1: Track 1 (a)-(b): quality mapping from more compressed (a) to less compressed (b) videos which are well aligned. Track 2 (c)-(d): quality mapping from low-quality videos captured by a ZED camera (c) to high-quality Canon DSLR videos (d), which are roughly aligned.

3 Challenge Setup

3.1 Track 1: Supervised VQM

For this track, we introduce the IntVid dataset [21]. It consists of videos downloaded from Internet websites. The collected videos cover 12 diverse scenarios: city, coffee, fashion, food, lifestyle, music/dance, narrative, nature, sports, talk, technique and transport. The resolution of the crawled videos is mostly 1920×1080. Their duration varies from 8 to 360 seconds, with frame rates in the range of 23.98-25.00 FPS.

As most of the collected videos consist of changing scenes, the popular scene detection tool PySceneDetect is used to split the videos into three separate sets of clips for training, validation and test respectively. In particular, the clips are selected such that the majority of the original video content is employed for training. For the validation and test clips, the video length is fixed to 4 seconds (120 frames), which are saved as PNG image files.

Due to the bandwidth limits of the Internet, video compression techniques are often applied to reduce the coding bit-rate. Inspired by this, [21] applied the standard H.264 video coding system to compress the collected videos. As a result, a total of 60 paired compressed/uncompressed videos are generated for training, and 32 paired compressed/uncompressed clips are produced for validation and testing. One example for track 1 is shown in Fig. 1 (a)-(b).
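The paper does not state the exact encoder settings used to produce the compressed counterparts. As an illustration only, a pair could be generated by re-encoding each clip with ffmpeg's H.264 encoder (libx264); the CRF value and filenames below are assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch: build an ffmpeg command that re-encodes a clip with
# H.264 to create the "more compressed" half of a training pair. CRF and
# filenames are illustrative assumptions.

def h264_compress_cmd(src: str, dst: str, crf: int = 35) -> list:
    """Return an ffmpeg argument list re-encoding `src` with libx264.
    A higher CRF means stronger compression (lower quality)."""
    return [
        "ffmpeg", "-i", src,   # input clip
        "-c:v", "libx264",     # H.264 encoder
        "-crf", str(crf),      # quality/bit-rate trade-off
        "-an",                 # drop audio; only frames are evaluated
        dst,
    ]

cmd = h264_compress_cmd("clip_uncompressed.mp4", "clip_compressed.mp4")
```

Running the command (e.g. via `subprocess.run(cmd)`) would yield the compressed clip, which can then be decoded to PNG frames as described above.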

3.2 Track 2: Weakly-Supervised VQM

For this track, we employ the Vid3oC dataset [21], which records videos with a rig containing three cameras. In particular, we use the Canon 5D Mark IV DSLR camera to serve as a high-quality reference, while utilizing the ZED camera, which additionally records depth information, to provide sequences of the same scene at a significantly lower video quality level. As the track focuses on RGB-based visual quality mapping, we remove the depth information from the ZED camera. Using the two cameras, videos are recorded in the area in and around Zurich, Switzerland during the summer months. The locations and scenes are carefully chosen to ensure variety in content, appearance, and dynamics. The length of each recording is between 30 and 60 seconds. Videos are captured at 30 FPS, using the highest resolution (i.e., 1920×1080) available at that frame rate.

In [21], the recorded videos are split into a training set of 50 weakly-paired videos, together with a validation and test set of 16 videos each. For all sets, a rough temporal alignment is performed based on the visual recording of a digital clock, which is captured by both cameras at the beginning of each video. The training videos are then trimmed down to 25-50 seconds by removing the first few seconds (which include the timer) and encoded with H.264. For each video in the validation and test set, a 4-second interval is selected. Each of these clips contains 120 frames, which are stored as individual PNG image files. Fig. 1 (c)-(d) illustrates one example for track 2.

3.3 Evaluation Protocol

Validation phase: During the validation phase, the source domain videos for the validation set were provided on CodaLab. While the participants had no direct access to the validation ground truth, they could get feedback through the online server on CodaLab. Due to the storage limits on the servers, the participants could only submit a subset of frames for the online evaluation. PSNR and SSIM were reported for both tracks, even though track 2 only has weakly-aligned targets. The participants were allowed to make 10 submissions per day, and 20 submissions in total for the whole validation phase.

Test phase: In the test phase, participants submitted their final results to the CodaLab test server. Compared to the validation phase, no feedback was given in terms of PSNR/SSIM, to prevent comparisons between teams and overfitting to the test data. By the deadline, participants were required to provide the full set of frames, from which the final results were obtained.

4 Challenge Teams and Methods

In total, 7 teams submitted their solutions to track 1. One team asked to anonymize their team name and references, since they found out, after the test phase submission deadline, that they had been using inappropriate extra data for training. No submissions were made for track 2.

Figure 2: Illustration of the network design suggested by team GTQ.
Figure 3: Illustration of the hierarchical feature fusion block (HFFB) suggested by team GTQ.

4.1 GTQ team

The team proposes a modified deformable convolution network to achieve high-quality video mapping, as shown in Fig. 2. The framework first down-samples the input frames by a factor of 4 through a space-to-depth shuffling operation. Then, the extracted features pass through an alignment module which applies a cascade of deformable convolutions [47] to perform implicit motion compensation. In the alignment module, the team takes advantage of hierarchical feature fusion blocks (HFFB) [17] to predict more precise offsets and modulation scalars for the deformable convolutions. As shown in Fig. 3, HFFB introduces a spatial pyramid of dilated convolutions to effectively enlarge the receptive field at relatively low computational cost, which helps to deal with complicated and large motions between frames. After the alignment operation, the features are concatenated and fed into stacked residual-in-residual dense blocks (RRDB) [39] to reconstruct high-quality frames.
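The space-to-depth shuffling mentioned above can be illustrated in a few lines: each r×r spatial block of a frame is folded into r·r channels, reducing spatial resolution without discarding information. This is a generic sketch of the operation, not the team's actual implementation.

```python
import numpy as np

# Generic space-to-depth: fold each r x r spatial block into channels.
# A frame of shape (H, W, C) becomes (H//r, W//r, C*r*r).

def space_to_depth(x: np.ndarray, r: int) -> np.ndarray:
    h, w, c = x.shape
    assert h % r == 0 and w % r == 0, "H and W must be divisible by r"
    x = x.reshape(h // r, r, w // r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)           # group the r x r block last
    return x.reshape(h // r, w // r, c * r * r)

frame = np.zeros((128, 128, 3))
small = space_to_depth(frame, 4)             # factor-4 down-sampling
```

The inverse (depth-to-space, i.e. pixel shuffle) restores the original resolution, which is why such shuffling is popular for lossless down/up-sampling inside restoration networks.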

Figure 4: The network architecture of the proposed C2CNet by team ECNU.

4.2 ECNU team

Team ECNU proposes a Compression-to-Compression Network (C2CNet). The input to C2CNet is a more compressed video frame and the ground truth is a less compressed video frame. As shown in Fig. 4, C2CNet is composed of a head convolutional layer; a de-sub-pixel convolutional layer consisting of an inverse pixel-shuffle layer and a convolution; a non-linear feature mapping module consisting of 64 Adaptive WDSR-A-Blocks, a convolution and a short skip connection with residual scaling of 0.2; an upsampling skip connection; a sub-pixel convolutional layer consisting of a convolution and a pixel-shuffle layer; a global skip connection; and a tail convolution. The number of channels for C2CNet is 128. The Adaptive WDSR-A-Block has 64, 256 and 64 channels. It is modified from a WDSR-A-Block [41] by adding a learnable weight (initialized with 1) for body scaling and a learnable weight for residual scaling. Each convolution is followed by a weight normalization layer (omitted in Fig. 4).
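The adaptive scaling idea can be sketched abstractly: the block output mixes a residual branch and an identity branch with two scalars that are learnable in the real model. The branch body below is a placeholder (a plain ReLU), and the initial scale values are assumptions; actual convolution widths follow the paper's description only loosely.

```python
import numpy as np

# Sketch of the learnable body/residual scaling in an Adaptive
# WDSR-A-style block. The conv->activation->conv body is stubbed out;
# the scale initializations here are illustrative assumptions.

class AdaptiveResBlock:
    def __init__(self, body_scale: float = 1.0, res_scale: float = 0.2):
        self.body_scale = body_scale   # learnable in the real model
        self.res_scale = res_scale     # learnable in the real model

    def body(self, x: np.ndarray) -> np.ndarray:
        # Placeholder for the block's convolutional body.
        return np.maximum(x, 0.0)

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # Output = scaled body branch + scaled identity branch.
        return self.body_scale * self.body(x) + self.res_scale * x

out = AdaptiveResBlock()(np.array([-1.0, 2.0]))
```

Letting the network learn both scales, instead of fixing the residual weight, lets each of the 64 stacked blocks rebalance its two branches during training.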

Figure 5: Architecture of team GIL’s model. (a) Overall Network architecture. (b) MU block. (c) RRCU (t=3) at the left and Unfolded DRCL-C (t=3) at the right.

4.3 GIL team

The team employs a network with the two-stage architecture proposed in FastDVDnet [34], shown in Fig. 5 (a). It takes five consecutive frames as input and generates a restored central frame. The three MU blocks in the first stage (shown in green) share parameters. Each MU block is a modified U-Net [29], shown in Fig. 5 (b). It uses a convolutional layer with stride 2 for down-sampling and a pixel-shuffle layer [32] for up-sampling. It features a skip connection for global residual learning and contains several RRCUs (recurrent residual convolutional units) inspired by R2U-Net [3]. Each RRCU consists of two DRCL-Cs (dense recurrent convolutional layer-concatenate) and a skip connection for residual learning. RRCU and DRCL-C are shown in Fig. 5 (c). The states of the DRCL-C change over discrete time steps, and the maximum time step is limited to 3. The DRCL-C differs from a standard RCL (recurrent convolutional layer) [23] in that it reuses previous features by concatenating them [15]. A convolutional layer with 1x1 filters is used after every concatenation in the DRCL-C to keep the number of channels constant. The network has approximately 3.6 million parameters.

Figure 6: Schematic representation of team TCL’s approach.

4.4 TCL team

The team uses a pyramidal architecture with deformable convolutions and spatio-temporal attention based on the work of [38], along with a single-frame U-Net [29]. The overview of the method is illustrated in Fig. 6. By combining these two methods, the local frame structure is preserved through the U-Net, while additional information from neighboring frames, with motion compensation mostly provided by the PCD module from [38], is used to enhance output quality. Both networks are trained separately, and the final result is obtained by a weighted sum whose weight parameter is found by grid search, validated on a hold-out set from the training frames.
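The grid search for the blend weight can be sketched as follows: sweep a weight w over [0, 1] and keep the value minimizing the error of the blended output on the hold-out frames. The data below are toy stand-ins, not the team's actual outputs.

```python
# Sketch of finding a blend weight between two models' outputs by grid
# search on a hold-out set. Inputs are flat lists of pixel values here;
# real frames would simply be flattened the same way.

def best_blend_weight(out_a, out_b, target, steps: int = 101) -> float:
    """Return w in [0, 1] minimizing the MSE of w*out_a + (1-w)*out_b."""
    def mse(w):
        return sum((w * a + (1 - w) * b - t) ** 2
                   for a, b, t in zip(out_a, out_b, target)) / len(target)
    candidates = [i / (steps - 1) for i in range(steps)]
    return min(candidates, key=mse)

# Toy check: if the target equals model A's output, the search picks w = 1.
w = best_blend_weight([1.0, 2.0], [0.0, 0.0], [1.0, 2.0])
```

Because the blended MSE is quadratic in w, the optimum could also be solved in closed form; a grid search, as the team describes, is simply the more robust choice when the validation metric is not plain MSE.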

Figure 7: Network architectures used by team JOJO-MVIG.

4.5 JOJO-MVIG team

The team proposes a unified dual-path model to jointly utilize spatial and temporal information and map low-quality compressed frames to high-quality ones. As shown in Fig. 7, the model consists of a feature extraction stage, two spatio-temporal fusion paths, and a reconstruction module. The overall design of the pipeline follows EDVR [38]. In the feature extraction part, multi-level features are calculated. The fusion stage explores spatial and temporal correlation across the input frames and fuses useful information. Two fusion paths are designed: motion compensation and global pooling. The motion compensation fusion path measures and compensates the motion across frames by aligning them to the reference frame; fusion is then performed on the aligned frames/features. The team adopts the alignment and fusion part of EDVR [38] for the motion compensation path.

Compared to the motion compensation path, the global pooling fusion path requires no alignment and adopts a U-Net [30] like architecture in which global max-pooling layers are inserted into all residual blocks. Global pooling has been used in [2] to conduct permutation invariant deblurring. Here, global pooling is used to exchange information between different frames; since max-pooling is a selective process, different frames vote for the best information for restoration. Furthermore, the team adopts the CARAFE module [35] to enable pixel-specific content-aware up-sampling. More specifically, the team uses 7 frames as input, with a reconstruction module consisting of 40 residual blocks and a feature extraction module consisting of 5 residual blocks. The channel number for each residual block is set to 128.
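The key property of the global pooling fusion path, permutation invariance, is easy to demonstrate in isolation: an element-wise max over the frame axis yields the same result regardless of frame order. This is a generic sketch of that fusion step, with the surrounding residual blocks simplified away.

```python
import numpy as np

# Sketch of max-pooling fusion across frames: features from T frames are
# fused by an element-wise max over the frame axis, so each output element
# is "voted" by the strongest frame, independent of frame order.

def maxpool_fuse(frame_features: np.ndarray) -> np.ndarray:
    """frame_features: (T, H, W, C) -> fused features of shape (H, W, C)."""
    return frame_features.max(axis=0)

feats = np.random.rand(7, 8, 8, 16)      # 7 input frames, toy feature maps
fused = maxpool_fuse(feats)
shuffled = maxpool_fuse(feats[::-1])     # reversed frame order, same result
```

In the actual model this pooling is interleaved with residual blocks inside the U-Net-like path, but the invariance argument is exactly the one shown here.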

Figure 8: Illustration of the proposed framework by BossGao. The team exploits cutting-edge deep neural architectures for the video quality mapping task, i.e., the PCD align module, the TSA fusion module, residual blocks and RDN blocks. For progressive training, first, the PCD align module and the 1st restoration module are trained together. Next, the TSA fusion module is plugged in and the existing parameters are used as initialization. Then, the new framework with the TSA module is trained again. More restoration modules can be stacked to get a deeper framework, which can be trained to achieve better performance.

4.6 BossGao team

The BossGao team exploits cutting-edge deep neural architectures for the video quality mapping task. Specifically, the team develops the following frameworks:

  • Framework1: PCD+TSA+10ResBlocks+30ResBlocks

  • Framework2: PCD+RDN1

  • Framework3: PCD+TSA+RDN1

  • Framework4: PCD+TSA+RDN2

where 10ResBlocks means 10 residual blocks [24], with two convolution layers in each ResBlock. RDN1 denotes 10 RDBs [45] with 8 convolution layers in each RDB; RDN2 denotes 8 RDBs with 6 convolution layers in each RDB. PCD and TSA are proposed in [37]. The framework is illustrated in Fig. 8.

Another contribution of the team is to train the modules in these frameworks progressively. They start training a framework with fewer modules and add more modules progressively. When new modules are plugged in, the existing parameters are used as initialization, and the new and old modules are trained together. The modules are added in a carefully arranged order: a framework with a PCD module and shallower restoration modules is trained first; then a TSA module is plugged in; finally, more restoration modules can be stacked on to obtain a deeper framework. Frameworks trained with this method achieve better performance than the corresponding networks trained once-off.

In the final phase, the frameworks with the best performance are selected to produce the final test videos, i.e. Framework1, Framework3 and Framework4. Framework2 is only used for the last submission in the development phase.

4.7 DPE (baseline for track 2)

DPE [8] was originally developed for weakly-supervised photo enhancement. For track 2, we apply it to enhance videos frame by frame. In particular, DPE treats the problem with a two-way GAN whose structure is similar to CycleGAN [46]. To address the unstable training of GANs and obtain high-quality results, DPE proposes a few improvements in constructing the two-way GAN. First, it augments the U-Net [29] with global features in the design of the generator. In addition, individual batch normalization layers are proposed for the same type of generators. For better GAN training, DPE proposes an adaptive weighting Wasserstein GAN scheme.

4.8 WESPE (baseline for track 2)

Similar to DPE [8], WESPE [18] is another baseline that exploits the GAN technique for weakly-supervised per-frame enhancement. The WESPE model comprises a generator G paired with an inverse generator F. In addition, two adversarial discriminators, D_c and D_t, and a total variation (TV) term complete the model's objective definition: D_c aims at distinguishing between the high-quality image y and the enhanced image G(x) based on image colors, while D_t distinguishes between y and G(x) based on image texture. More specifically, the objective of WESPE consists of: i) a content consistency loss to ensure G preserves the content of the input x, ii) two adversarial losses ensuring generated images lie in the target domain, namely a color loss and a texture loss, and iii) a TV loss to regularize towards smoother results.

4.9 DACAL (baseline for track 2)

For track 2, we suggest the DACAL method [16] as the last baseline, which enhances videos directly. To further reduce the problem complexity, DACAL decomposes the photo enhancement process into multiple sub-problems. On the top level, a perception-based division is suggested to learn additive and multiplicative components, required to translate a low-quality image or video into its high-quality counterpart. On the intermediate level, a frequency-based division is exploited in the GAN context to learn the low- and high-frequency based distribution separately in a weakly-supervised manner. On the bottom level, a dimension-based division is suggested to factorize high-dimensional distributions into multiple one-dimensional marginal distributions for better training on the GAN model. To better deal with the temporal consistency of the enhancement, DACAL introduces an efficient recurrent design of the GAN model.

Method         PSNR    SSIM   LPIPS  Training Req.  Training Time  Test Req.  Test Time  Parameters  Extra Data
BossGao        32.419  0.905  0.177  8×V100         5-10d          1×V100     4s         n/a         No
JOJO-MVIG      32.167  0.901  0.182  2×1080Ti       4d             1×1080Ti   2.07s      22.75M      No
GTQ            32.126  0.900  0.187  2×2080Ti       5d             1×2080Ti   9.74s      19.76M      No
ECNU           31.719  0.896  0.198  2×1080Ti       2-3d           1×1080Ti   1.1s       n/a         No
TCL            31.701  0.897  0.193  2×1080Ti       3d             1×1080Ti   25s        8.92M       No
GIL            31.579  0.894  0.195  1×970Ti        6d             1×970Ti    11.37s     3.60M       No
7-th team      30.598  0.878  0.176  n/a            4d             n/a        0.5s       7.92M       Yes
No processing  30.553  0.877  0.176
Table 1: Quantitative results for Track 1. Bold: best, Underline: second and third best. Training time in days, test time in seconds per frame.

5 Challenge Result Analysis

Figure 9: Visual Comparison for Track 1.
Figure 10: Temporal Profiles for Track 1.
Figure 11: Visual Comparison for Track 2.
Figure 12: Temporal Profiles for Track 2.

5.1 Track 1: Supervised VQM

This challenge track aims at restoring the discarded information, which has been lost due to compression, with the highest fidelity to the ground truth. Because of full supervision, the ranking among the participating teams can be computed objectively.

Metrics The most popular full reference metrics to evaluate the quality of images and videos are PSNR and SSIM. PSNR can be computed directly from the mean-squared error (MSE); therefore, L2-norm based objectives are commonly used to obtain high PSNR scores. SSIM is calculated from window-based statistics in images. In this challenge, both metrics are calculated per frame and averaged over all sequences. Table 1 reports the quantitative results of the participating methods as well as the baseline, i.e., the input without any processing. With a PSNR value of 32.42dB and an SSIM score of 0.91, team BossGao achieves the highest scores overall and is the winner of challenge track 1. Teams JOJO-MVIG and GTQ follow closely, with PSNR differences of 0.25dB and 0.29dB to the winner respectively. The remaining teams also achieve respectable PSNR scores slightly below 32dB. The ranking in terms of SSIM is almost the same. In addition, as can be seen from the reported training times, capacities and test times, models with more parameters and teams with more processing power generally perform better. However, team ECNU manages to surpass more expensive methods with the fastest runtime. Team GIL targets a compact network with the fewest parameters, which can be trained on a single lower-end GPU but still produces promising enhancement results.
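The relation between MSE and PSNR noted above is direct, and a per-frame computation is a few lines. This is a generic 8-bit (peak 255) formulation, not the challenge's exact evaluation script.

```python
import math

# PSNR from MSE for a pair of frames, given here as flat sequences of
# pixel values. Peak = 255 assumes 8-bit content.

def psnr(frame_a, frame_b, peak: float = 255.0) -> float:
    n = len(frame_a)
    mse = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / n
    # Identical frames give MSE 0, i.e. infinite PSNR.
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
```

In the challenge, this per-frame score is then averaged over all frames and sequences; minimizing MSE during training therefore directly maximizes the reported PSNR.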

Visual Comparison Selected samples from the test data are provided in Fig. 9 to compare the visual quality of the enhanced video frames among all teams. The visual comparison shows that team BossGao also performs best for quality enhancement on such sampled frames. It should be noted that, due to the inherent loss of information after compression, fidelity based methods are not able to reconstruct all high frequency details and tend to over-smooth the content. In order to assess continuity between frames, temporal profiles for all teams are provided in Fig. 10: a single vertical line of pixels is recorded over all frames in the sequence and stacked horizontally.
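The temporal profile construction just described can be sketched directly: fix one pixel column, extract it from every frame, and stack the columns side by side, so temporal flicker appears as horizontal discontinuities in the profile image. The clip below is a random stand-in.

```python
import numpy as np

# Sketch of a temporal profile: one vertical line of pixels per frame,
# stacked horizontally over time.

def temporal_profile(video: np.ndarray, col: int) -> np.ndarray:
    """video: (T, H, W) grayscale frames -> profile image of shape (H, T)."""
    return video[:, :, col].T   # column `col` of frame t becomes profile column t

clip = np.random.rand(120, 64, 64)        # 120 frames, as in the test clips
profile = temporal_profile(clip, col=32)  # one profile column per frame
```

A temporally smooth enhancement yields mostly continuous horizontal structures in the profile, while frame-to-frame flicker shows up as vertical striping.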

Additionally, we computed LPIPS [44] scores to compare perceptual quality among the teams. Optimizing for perceptual quality was not required of the participants in this challenge track, but the metric still provides interesting insights into quantitative quality assessment and its limitations. The scores among all teams are roughly consistent with PSNR and SSIM, which implies that the top teams also produce visually more pleasing results than their competitors. Interestingly, the input without processing, along with team 7 (which barely alters the input), achieves the best score. We assume that the distortions due to the smoothing of L2-norm based methods cause worse scores for the top teams, despite much higher reconstruction quality. In contrast, compression algorithms are designed to optimize for perceptual quality, which could explain the strong LPIPS score for the input.

5.2 Track 2: Weakly-Supervised VQM

In this challenge track, the goal of the task is to enhance the video characteristics from a low quality device (ZED camera) to the characteristics of a high-end device (Canon 5D Mark IV) with limited supervision. Weak supervision is provided by weakly-paired videos, which share approximately the same content and are roughly aligned in the spatial and temporal domain.


Metrics Since there is no pixel-aligned ground truth available, full reference metrics are not an option for quality assessment. Usually, results for these types of problems are scored by a MOS study, conducted by humans visually comparing different methods. While there exist metrics to measure distances between probability distributions of high-level content, e.g. the Fréchet Inception Distance (FID) [13], which are widely applied to generative models, finding reliable metrics for low-level characteristics remains an open problem. Popular perceptual metrics such as the Learned Perceptual Image Patch Similarity (LPIPS) [44] metric and the Perceptual Index [6] are also used in the field. However, we found the scores for these metrics are not suitable for the problem setting in this challenge and do not always correlate with human perception. The Perceptual Index is not a relative score; it only measures general quality, whereas we are interested in measuring the mapping quality from one domain to another. LPIPS requires aligned frames, which is a problem since the frames are only roughly aligned. Nevertheless, we provide LPIPS scores for a selection of methods along with visual results; see Table 2, Fig. 11 and Fig. 12. Surprisingly, the source without processing achieves the best score by a large margin. While source and target frames are captured by a real camera, the methods alter the videos artificially. Since LPIPS relies on a feature extractor trained on real images, this could lead to worse scores for the methods due to low-level distortions.

LPIPS 0.590 0.755 0.793 0.750
Table 2: LPIPS scores for Track 2. Bold: best, Underline: second best.

Visual Comparison Since there are no submissions for this track, visual results and temporal profiles for a selection of recent image and video quality mapping methods are provided as reference in Fig. 11 and Fig. 12. WESPE [18] and DPE [8] are single image methods applied per frame, while DACAL [16] is a true video enhancer. All the competing methods are trained on the Vid3oC dataset [21]. The visual results show that DACAL preserves more details and enhances contrast better, while WESPE introduces biased colorization and DPE produces blurry textures.

6 Conclusions

This paper presents the setup and results of the NTIRE 2020 challenge on video quality mapping. The challenge addresses two real-world settings: track 1 concerns video quality mapping from more compressed videos to less compressed ones with available paired training data; track 2 focuses on video quality mapping from a lower-end device to a higher-end device, given a collected weakly-paired training set. In total, 7 teams competed in track 1. The participating methods demonstrated interesting and innovative solutions to supervised quality mapping on compressed videos. In contrast, we evaluated three existing methods for track 2, showing that their performance is promising but that much effort is still needed for better video enhancement. The evaluation with LPIPS on both challenge tracks reveals the limits of current quantitative perceptual quality metrics and shows the need for more research in that area, especially for track 2, where no pixel-aligned reference is available. We hope this challenge stimulates future research in the area of video quality mapping, in both supervised and weakly-supervised scenarios, by serving as a standard benchmark and by evaluating new baseline methods.


We thank the NTIRE 2020 sponsors: Huawei, Oppo, Voyage81, MediaTek, Disney Research|Studios, and the Computer Vision Lab (CVL), ETH Zurich.

Appendix A: Teams and affiliations

NTIRE 2020 VQM organizers

Members: Dario Fuoli, Zhiwu Huang, Martin Danelljan, Radu Timofte

Affiliations: Computer Vision Lab, ETH Zurich, Switzerland

GTQ team

Title: Modified Deformable Convolution Network for Video Quality Mapping

Members: Hua Wang, Longcun Jin, Dewei Su

Affiliations: School of Software Engineering, South China University of Technology, Guangdong, China

ECNU team

Title: Compression2Compression: Learning compression artifacts reduction without clean data

Members: Jing Liu

Affiliations: Multimedia and Computer Vision Lab, East China Normal University, Shanghai, China

GIL team

Title: Dense Recurrent Residual U-Net for Video Quality Mapping

Members: Jaehoon Lee

Affiliations: Department of Electronics and Computer Engineering, Hanyang University, Seoul, Korea

TCL team

Title: Deformable convolution based multi-frame network with single-frame U-Net

Members: Michal Kudelski, Lukasz Bala, Dmitry Hrybov, Marcin Mozejko

Affiliations: TCL Research Europe, Warsaw, Poland


Title: Video Quality Mapping with Dual Path Fusion Network

Members: Muchen Li, Siyao Li, Bo Pang, Cewu Lu

Affiliations: Machine Vision and Intelligence Group, Department of Computer Science, Shanghai Jiao Tong University, Shanghai, China

BossGao team

Title: Exploiting Deep Neural Architectures by Progressive Training for Video Quality Mapping

Members: Chao Li, Dongliang He, Fu Li, Shilei Wen

Affiliations: Department of Computer Vision Technology (VIS), Baidu Inc., Beijing, China


References
  • [1] A. Abdelhamed, M. Afifi, R. Timofte, M. Brown, et al. (2020-06) NTIRE 2020 challenge on real image denoising: dataset, methods and results. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §1.
  • [2] M. Aittala and F. Durand (2018-09) Burst image deblurring using permutation invariant convolutional neural networks. In The European Conference on Computer Vision (ECCV), Cited by: §4.5.
  • [3] M. Z. Alom, C. Yakopcic, T. M. Taha, and V. K. Asari (2018) Nuclei segmentation with recurrent residual convolutional neural networks based u-net (r2u-net). In NAECON 2018-IEEE National Aerospace and Electronics Conference, pp. 228–233. Cited by: §4.3.
  • [4] C. O. Ancuti, C. Ancuti, F. Vasluianu, R. Timofte, et al. (2020-06) NTIRE 2020 challenge on nonhomogeneous dehazing. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §1.
  • [5] B. Arad, R. Timofte, Y. Lin, G. Finlayson, O. Ben-Shahar, et al. (2020-06) NTIRE 2020 challenge on spectral reconstruction from an rgb image. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §1.
  • [6] Y. Blau, R. Mechrez, R. Timofte, T. Michaeli, and L. Zelnik-Manor (2018) The 2018 pirm challenge on perceptual image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 0–0. Cited by: §5.2.
  • [7] J. Caballero, C. Ledig, A. Aitken, A. Acosta, J. Totz, Z. Wang, and W. Shi (2017-07) Real-time video super-resolution with spatio-temporal networks and motion compensation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [8] Y. Chen, Y. Wang, M. Kao, and Y. Chuang (2018) Deep photo enhancer: unpaired learning for image enhancement from photographs with gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6306–6314. Cited by: §2, §4.7, §4.8, §5.2.
  • [9] M. Chu (2018) Learning temporal coherence via self-supervision for gan-based video generation. External Links: Link Cited by: §2.
  • [10] Y. Dai, D. Liu, and F. Wu (2017) A convolutional neural network approach for post-processing in hevc intra coding. In International Conference on Multimedia Modeling, pp. 28–39. Cited by: §2.
  • [11] D. Fuoli, S. Gu, and R. Timofte (2019) Efficient video super-resolution through recurrent latent space propagation. In ICCV Workshops, Cited by: §2.
  • [12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §2.
  • [13] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 6626–6637. Cited by: §5.2.
  • [14] Y. Hu, H. He, C. Xu, B. Wang, and S. Lin (2018) Exposure: a white-box photo post-processing framework. ACM Transactions on Graphics (TOG) 37 (2), pp. 1–17. Cited by: §2.
  • [15] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708. Cited by: §4.3.
  • [16] Z. Huang, D. P. Paudel, G. Li, J. Wu, R. Timofte, and L. Van Gool (2019) Divide-and-conquer adversarial learning for high-resolution image and video enhancement. arXiv preprint arXiv:1910.10455. Cited by: §2, §4.9, §5.2.
  • [17] Z. Hui, J. Li, X. Gao, and X. Wang (2019) Progressive perception-oriented network for single image super-resolution. arXiv preprint arXiv:1907.10399. Cited by: §4.1.
  • [18] A. Ignatov, N. Kobyshev, R. Timofte, K. Vanhoey, and L. Van Gool (2018) WESPE: weakly supervised photo enhancer for digital cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 691–700. Cited by: §2, §4.8, §5.2.
  • [19] Y. Jo, S. Wug Oh, J. Kang, and S. Joo Kim (2018-06) Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [20] A. Kappeler, S. Yoo, Q. Dai, and A. K. Katsaggelos (2016) Video super-resolution with convolutional neural networks. In IEEE Transactions on Computational Imaging, (English (US)). Cited by: §2.
  • [21] S. Kim, G. Li, D. Fuoli, M. Danelljan, Z. Huang, S. Gu, and R. Timofte (2019) The vid3oc and intvid datasets for video super resolution and quality mapping. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 3609–3616. Cited by: §1, §3.1, §3.1, §3.2, §3.2, §5.2.
  • [22] S. Kosugi and T. Yamasaki (2019) Unpaired image enhancement featuring reinforcement-learning-controlled image editing software. arXiv preprint arXiv:1912.07833. Cited by: §2.
  • [23] M. Liang and X. Hu (2015) Recurrent convolutional neural network for object recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3367–3375. Cited by: §4.3.
  • [24] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee (2017-07) Enhanced deep residual networks for single image super-resolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §4.6.
  • [25] A. Lugmayr, M. Danelljan, R. Timofte, et al. (2020-06) NTIRE 2020 challenge on real-world image super-resolution: methods and results. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §1.
  • [26] S. Nah, S. Son, R. Timofte, K. M. Lee, et al. (2020-06) NTIRE 2020 challenge on image and video deblurring. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §1.
  • [27] J. Park, J. Lee, D. Yoo, and I. So Kweon (2018) Distort-and-recover: color enhancement using deep reinforcement learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5928–5936. Cited by: §2.
  • [28] Cited by: §2.
  • [29] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §4.3, §4.4, §4.7.
  • [30] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015 - 18th International Conference Munich, Germany, October 5 - 9, 2015, Proceedings, Part III, N. Navab, J. Hornegger, W. M. W. III, and A. F. Frangi (Eds.), Lecture Notes in Computer Science, Vol. 9351, pp. 234–241. External Links: Link, Document Cited by: §4.5.
  • [31] M. S. M. Sajjadi, R. Vemulapalli, and M. Brown (2018-06) Frame-Recurrent Video Super-Resolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.
  • [32] W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1874–1883. Cited by: §4.3.
  • [33] X. Tao, H. Gao, R. Liao, J. Wang, and J. Jia (2017-10) Detail-revealing deep video super-resolution. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §2.
  • [34] M. Tassano, J. Delon, and T. Veit (2019) FastDVDnet: towards real-time video denoising without explicit motion estimation. arXiv preprint arXiv:1907.01361. Cited by: §4.3.
  • [35] J. Wang, K. Chen, R. Xu, Z. Liu, C. C. Loy, and D. Lin (2019) CARAFE: content-aware reassembly of features. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 3007–3016. External Links: Link, Document Cited by: §4.5.
  • [36] T. Wang, M. Chen, and H. Chao (2017) A novel deep learning-based method of improving coding efficiency from the decoder-end for hevc. In 2017 Data Compression Conference (DCC), pp. 410–419. Cited by: §2.
  • [37] X. Wang, K. C. K. Chan, K. Yu, C. Dong, and C. C. Loy (2019) EDVR: video restoration with enhanced deformable convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2019, Long Beach, CA, USA, June 16-20, 2019, Cited by: §4.6.
  • [38] X. Wang, K. C. Chan, K. Yu, C. Dong, and C. Change Loy (2019) EDVR: video restoration with enhanced deformable convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Cited by: §4.4, §4.5, §4.5.
  • [39] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. Change Loy (2018) ESRGAN: enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Cited by: §4.1.
  • [40] R. Yang, M. Xu, Z. Wang, and T. Li (2018) Multi-frame quality enhancement for compressed video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6664–6673. Cited by: §2.
  • [41] J. Yu, Y. Fan, J. Yang, N. Xu, Z. Wang, X. Wang, and T. Huang (2018) Wide activation for efficient and accurate image super-resolution. arXiv preprint arXiv:1808.08718. Cited by: §4.2.
  • [42] S. Yuan, R. Timofte, A. Leonardis, G. Slabaugh, et al. (2020-06) NTIRE 2020 challenge on image demoireing: methods and results. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §1.
  • [43] K. Zhang, S. Gu, R. Timofte, et al. (2020-06) NTIRE 2020 challenge on perceptual extreme super-resolution: methods and results. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §1.
  • [44] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang (2018) The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, Cited by: §1, §5.1, §5.2.
  • [45] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu (2018) Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2472–2481. Cited by: §4.6.
  • [46] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232. Cited by: §4.7.
  • [47] X. Zhu, H. Hu, S. Lin, and J. Dai (2019) Deformable convnets v2: more deformable, better results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9308–9316. Cited by: §4.1.