Video content currently occupies a significant share of network traffic and, according to the Cisco forecast, was expected to reach 71% of all traffic by 2021. The quality of encoded video is therefore becoming increasingly important, and interest in methods for assessing it is growing. New video codec standards appear and existing ones are improved, so choosing a particular encoding solution requires appropriate tools for video quality assessment. The only truly adequate evaluation method is subjective evaluation (MOS), but it is extremely costly in time and resources, so all other "objective" methods are continually improved in an attempt to approach the ground-truth (subjective) scores.
Video quality assessment methods can be divided into three categories: full-reference, reduced-reference and no-reference. Full-reference metrics are the most common, as their results are easily interpreted, usually as an assessment of the degree of distortion in the video and its visibility to the observer. The main drawback of this approach, compared to the others, is that the original video must be available for comparison with the encoded one, which is not always the case.
One widely used full-reference metric that is gaining popularity for video quality assessment is Video Multimethod Assessment Fusion (VMAF), announced by Netflix. It is an open-source learning-based solution. Its main idea is to combine multiple elementary video quality features by an SVM regressor trained on subjective data, producing the final per-frame score. The scheme of this metric is shown in Fig. 1.
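The fusion idea can be illustrated with a minimal sketch. The feature values, training targets and model settings below are toy stand-ins (scikit-learn's SVR substitutes for VMAF's actual trained SVM regression stage), intended only to show how elementary features are mapped to a per-frame score.

```python
# Sketch of VMAF-style score fusion: elementary per-frame quality
# features are combined by a regressor trained on subjective scores.
# All data and settings here are illustrative, not Netflix's model.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy "elementary features" per frame (e.g. a detail-loss measure and
# two scales of a visual-information-fidelity measure).
n_frames = 200
features = rng.uniform(0.0, 1.0, size=(n_frames, 3))

# Synthetic MOS-like subjective scores (0..100) as training targets.
mos = 100.0 * features.mean(axis=1) + rng.normal(0.0, 2.0, n_frames)

model = SVR(kernel="rbf", C=10.0)
model.fit(features, mos)

# The fused per-frame score is the regressor's prediction.
frame_scores = model.predict(features)
```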
Despite increasing attention to this metric, many video quality analysis projects, such as the annual MSU Video Codec Comparison, use SSIM, and some even use PSNR, although many users of these comparisons request VMAF-like metrics. The main obstacle to a full transition to VMAF is not the versatility of the metric but its not fully adequate results on some types of video.
The main objective of this article is to find video transformations that increase the VMAF score without reducing the SSIM metric; that is, distortions that change the visual quality of the video (which should decrease the value of any full-reference metric) yet increase the value of VMAF. The existence of such transformations is a significant obstacle to using VMAF for all types of video and shows the need to modify the original VMAF algorithm. The transformations studied here are based on colour and contrast adjustments.
2 Study Method
Two approaches to colour adjustment were tested to find the best strategy for increasing VMAF scores. In the first case, the distortions were applied to the videos before encoding; in the second, the colours were adjusted after encoding. In general, there was no significant difference between these options, because the compression step can be omitted when increasing VMAF with colour enhancement. We therefore describe only the first case, with adjustment before compression, and keep the compression step, because in our work VMAF tuning is considered in the context of video codec comparisons.
We chose four videos with FullHD resolution and high bitrate to find colour transformations that may influence VMAF scores. Three of them (Crowd run, Red kayak and Speed bag) were taken from the open video collection on media.xiph.org, and one was taken from the MSU video collection used for selecting test sets for the annual video codec comparisons. All videos differ in spatial and temporal complexity and in content. Descriptions (and sources) of the first three videos can be found on the collection site; the remaining Bay timelapse sequence contains a scene with grass and waves on the water, was filmed in flat colours and required post-processing.
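The spatial and temporal complexity mentioned above is commonly quantified by the SI/TI measures of ITU-T P.910 (SI: maximum over frames of the standard deviation of the Sobel-filtered luma; TI: maximum standard deviation of successive frame differences). A minimal sketch, with random arrays standing in for real luma planes:

```python
# Minimal SI/TI sketch in the spirit of ITU-T P.910; frames here are
# random stand-ins for real video luma planes.
import numpy as np

def sobel_magnitude(frame):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = frame.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = frame[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def si_ti(frames):
    """SI: max std of Sobel magnitude; TI: max std of frame diffs."""
    si = max(float(sobel_magnitude(f).std()) for f in frames)
    ti = max(float((b - a).std()) for a, b in zip(frames, frames[1:]))
    return si, ti

rng = np.random.default_rng(1)
frames = [rng.uniform(0, 255, (36, 64)) for _ in range(5)]
si, ti = si_ti(frames)
```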
Three versions of VMAF were tested: 0.6.1, 0.6.2 and 0.6.3, using the implementations from the MSU Video Quality Measurement Tool. The results did not differ much, so the following plots are presented for the latest (0.6.3) VMAF version.
3 Proposed Tuning Algorithm
For colour and brightness adjustment, two image processing algorithms were chosen: unsharp mask and histogram equalization. The implementations from the scikit-image library were used. In this library, unsharp mask has two parameters that influence image levels: radius (the radius of the Gaussian blur) and amount (how much contrast is added at the edges). For histogram equalization, the clipping-limit parameter was analysed. To find optimal parameter configurations, the multi-objective optimization algorithm NSGA-II was used: only the parameter limits were given to the genetic algorithm, which then searched for the best parameters for each test video.
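The two scikit-image operations named above can be applied as follows; the parameter values here are arbitrary examples for a synthetic frame, not the optimized values from the study (note that `equalize_adapthist` is scikit-image's equalization variant with a clipping limit, i.e. CLAHE):

```python
# Unsharp mask and clip-limited histogram equalization from scikit-image,
# applied to a synthetic luma plane; parameter values are arbitrary.
import numpy as np
from skimage.filters import unsharp_mask
from skimage.exposure import equalize_adapthist

rng = np.random.default_rng(2)
frame = rng.uniform(0.0, 1.0, size=(64, 64))  # stand-in frame in [0, 1]

# radius controls the Gaussian blur, amount the contrast added at edges.
sharpened = unsharp_mask(frame, radius=5.0, amount=1.5)

# Histogram equalization with a clipping limit.
equalized = equalize_adapthist(frame, clip_limit=0.02)
```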
SSIM and VMAF scores were calculated for each video processed with the considered colour enhancement algorithms under different parameters. As mentioned above, after colour correction the videos were compressed with the medium preset of the x264 encoder at 3 Mbps. The difference between the metric scores of the processed videos and the original video was then calculated to compare how the colour corrections influenced quality scores. Fig. 3 shows this difference for the SSIM metric on the Bay timelapse sequence for different parameter values of the unsharp mask algorithm; the corresponding scores for the VMAF quality metric are presented in Fig. 3 as well.
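The score difference can be sketched with a simplified, single-window SSIM (the production measurements used MSU VQMT; this formula omits the local windowing of the full algorithm, and the frames are synthetic):

```python
# Simplified global SSIM (one window over the whole frame) to illustrate
# forming the processed-minus-original score difference; a real pipeline
# would use a full SSIM/VMAF implementation.
import numpy as np

def global_ssim(x, y, data_range=1.0):
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(3)
original = rng.uniform(0, 1, (64, 64))
processed = np.clip(original + rng.normal(0, 0.05, original.shape), 0, 1)

# Score of the processed video minus score of the untouched video,
# both measured against the same reference.
delta = global_ssim(original, processed) - global_ssim(original, original)
```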
In these plots, higher values mean that, according to the metric, the objective quality of the colour-adjusted video was better. VMAF gives better scores for a high radius and a medium amount of unsharp mask, while SSIM becomes worse for a high radius and a high amount. The optimal values of the algorithm parameters can be estimated from the difference between these plots. For the other colour adjustment algorithm (histogram equalization), one parameter was optimized; the results are presented in Fig. 4 together with the results of unsharp mask.
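Estimating the optimal parameters from two competing objectives is exactly the selection problem NSGA-II solves; its core step is keeping the non-dominated configurations. A minimal sketch with made-up (VMAF gain, SSIM change) pairs:

```python
# Non-dominated (Pareto) filtering, the core selection step of NSGA-II:
# keep configurations where no other is at least as good in both
# objectives and strictly better in one.  Candidate values are made up.
import numpy as np

def pareto_front(points):
    """Indices of points not dominated by any other point.

    Both objectives are 'higher is better' here: VMAF gain, and SSIM
    change (least loss)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(pts) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Columns: VMAF gain (maximize), SSIM change (maximize, i.e. least loss).
candidates = [(6.0, -0.02), (5.0, -0.005), (2.0, -0.03), (4.0, -0.001)]
front = pareto_front(candidates)
print(front)  # → [0, 1, 3]
```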
According to these results, for some configurations of histogram equalization VMAF becomes significantly better (from 68 to 74) while SSIM does not change much (a decrease from 0.88 to 0.86). The results differ slightly for the other videos. On the Crowd run sequence, VMAF was not increased by unsharp mask (Fig. 4(a)) and was increased only a little by histogram equalization. For the Red kayak and Speed bag videos, unsharp mask significantly increased VMAF while only slightly decreasing SSIM (Fig. 4(b) and Fig. 4(c)).
The following example frames from the test videos demonstrate colour corrections that increased VMAF while hardly influencing SSIM. For Bay timelapse, unsharp mask with the parameters shown in Fig. 5(b) increased VMAF without a significant decrease of SSIM.
For the Crowd run sequence, histogram equalization also increased VMAF; the video is more contrasted, and the decrease in SSIM was more significant.
Red kayak looked better according to VMAF after applying unsharp mask.
For Speed bag, unsharp mask allowed VMAF to be increased greatly without influencing SSIM.
4 Conclusion
Full-reference video quality metrics are meant to show the difference between the original and distorted streams and are expected to give worse values when any transformation is applied to the original video. However, objective metrics can sometimes be deceived. In this article, we described a way to increase the values of the popular full-reference metric VMAF. If a video has low contrast, VMAF can be increased by colour adjustments without influencing SSIM; a contrasted video can also be tuned for VMAF, but at the cost of a small SSIM decrease.
Although VMAF has become popular and important, particularly for video codec developers and their customers, there are still a number of issues in its application. This is why SSIM is used as the main objective quality metric in many competitions, as well as in the MSU video codec comparisons.
We want to draw attention to this problem and hope to see progress in this area, which is likely given that the metric is being actively developed. Our further research will involve a subjective comparison of the proposed colour adjustments with the original videos and the development of novel approaches to metric tuning.
This work was partially supported by the Russian Foundation for Basic Research under Grant 19-01-00785a.
-  Cisco Visual Networking Index: Forecast and Methodology. 2016-2021.
-  HEVC Video Codec Comparison 2018 (Thirteenth MSU Video Codec Comparison) http://compression.ru/video/codec_comparison/hevc_2018/
-  MSU Quality Measurement Tool: Download Page http://compression.ru/video/quality_measure/vqmt_download.html
-  Perceptual Video Quality Metrics: Are they Ready for the Real World? Available online: https://www.ittiam.com/perceptual-video-quality-metrics-ready-real-world
-  VMAF: Perceptual video quality assessment based on multi-method fusion, Netflix, Inc., 2017 https://github.com/Netflix/vmaf.
-  Xiph.org Video Test Media [derf’s collection] https://media.xiph.org/video/derf/
-  C. G. Bampis, Z. Li, and A. C. Bovik, “Spatiotemporal feature integration and model fusion for full reference video quality assessment,” in IEEE Transactions on Circuits and Systems for Video Technology, 2018.
-  C. Chen, S. Inguva, A. Rankin, and A. Kokaram, “A subjective study for the design of multi-resolution ABR video streams with the VP9 codec,” in Electronic Imaging, 2016(2), pp. 1-5.
-  S. Chikkerur, V. Sundaram, M. Reisslein, and L. J. Karam, “Objective video quality assessment methods: A classification, review, and performance comparison,” in IEEE Transactions on Broadcasting, 57(2), pp. 165–182, 2011.
-  K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” in IEEE Transactions on Evolutionary Computation, 6(2), pp. 182–197, 2002.