Ambiguity of Objective Image Quality Metrics: A New Methodology for Performance Evaluation

01/19/2021 ∙ by Manri Cheon, et al. ∙ Yonsei University

Objective image quality metrics try to estimate the perceptual quality of a given image by considering the characteristics of the human visual system. However, such metrics may produce different quality scores even for two images that are perceptually indistinguishable to human viewers, an issue that has not been considered in existing studies on objective quality assessment. In this paper, we address this ambiguity of objective image quality assessment. We propose an approach to obtain an ambiguity interval of an objective metric, within which a quality score difference is not perceptually significant. In particular, we use the visual difference predictor, which can take into account viewing conditions that are important for visual quality perception. To demonstrate the usefulness of the proposed approach, we conduct experiments with 33 state-of-the-art image quality metrics, examining their accuracy and ambiguity on three image quality databases. The results show that the ambiguity intervals can serve as an additional figure of merit when conventional performance measurement cannot determine superiority between metrics. The effect of the viewing distance on the ambiguity interval is also shown.


1 Introduction

Multimedia systems operating in resource-constrained environments usually strive to meet two conflicting objectives: achieving efficiency and providing high-quality content. For instance, compression, e.g., JPEG Wallace (1991) and JPEG2000 Skodras et al. (2001) for images and H.264/AVC Wiegand et al. (2003) and HEVC Sullivan et al. (2012) for videos, is a representative way to deal with this issue; it can reduce the amount of data to enhance storage and transmission efficiency at the cost of degraded perceptual quality. Quality degradation introduced for the sake of efficiency tends to lower the quality of experience (QoE) of consumers. Therefore, it is important to carefully consider the trade-off between the two objectives in designing the target multimedia systems and services.

The first step toward this goal is to accurately measure the perceptual quality of the content as perceived by human viewers, which can be performed via subjective quality assessment or objective quality assessment Sheikh et al. (2006); Chikkerur et al. (2011); Cheon and Lee (2018). The former is the most accurate way of assessing the QoE, where human subjects are asked to rate the given content in terms of perceptual quality. However, it is time-consuming and expensive, and cannot be used in real-time applications for controlling or optimizing the quality of the delivered content. Thus, objective quality assessment, performed by objective metrics that try to automatically predict perceived quality, is widely used as a replacement for subjective quality assessment. A number of objective quality metrics have been developed and used for various applications including compression, transmission, enhancement, etc. Wang (2011).

It has been considered that the primary goal of an objective metric is to predict subjective quality scores, usually denoted as mean opinion scores (MOS), as accurately as possible. The ITU-T P.1401 standard ITU-T (2012) specifies recommended procedures to evaluate the accuracy of an objective quality metric. For instance, the Pearson linear correlation coefficient (PLCC) and Spearman rank-order correlation coefficient (SROCC) are computed to evaluate the linearity and monotonicity of metrics with respect to subjective data, respectively. In addition, the prediction error and consistency are measured using the root-mean-square error (RMSE) and outlier ratio (OR), respectively. Additional statistical measures of performance have also been proposed in Krasula et al. (2016).

In this paper, however, we argue that accuracy is not the only perspective from which objective quality metrics should be judged, and propose an additional figure of merit that provides more informative insight into the performance and behavior of the metrics: their ambiguity or, conversely, reliability. In general, the output of an objective metric for a given visual stimulus is a single value on a continuous scale. This means that whenever a metric produces quality scores for a pair of stimuli, a quality ranking between the stimuli is implied, no matter how small the score difference is. However, a nonzero quality score difference between two similar stimuli may lead to misleading conclusions when the quality difference is not perceivable by human viewers. In fact, human visual sensitivity is limited in the sense that a small pixel value difference is sometimes visually indistinguishable, depending on factors such as overall luminance and neighboring pixel values Jayant et al. (1993).

Figure 1 shows example images demonstrating the existence of ambiguity of objective metrics Cheon and Lee (2016b). Two reference images (parrots and house) from the LIVE Image Quality Assessment Database Sheikh et al. (2006) are corrupted by JPEG2000 compression at different bitrates. When Figures 1(a) and 1(b) are visually compared, their quality difference can be easily perceived. We conducted a subjective quality assessment experiment using the paired comparison scheme ITU-R (2012); Lee (2014), where most of the hired subjects (14 out of 15) chose Figure 1(b) as the one having better quality. An objective metric, peak signal-to-noise ratio (PSNR), also rates Figure 1(b) as having better quality (with a difference of 2.49 dB), which is consistent with the quality superiority perceived by humans. On the other hand, the difference between Figures 1(c) and 1(d) is hardly noticeable; nearly half of the subjects (6 out of 15) chose Figure 1(c). However, PSNR still rates Figure 1(d) as better, with a difference of 2.54 dB, which is even larger than the difference between Figures 1(a) and 1(b). Such inconsistencies between subjective and objective quality measurements are undesirable for quality-optimized multimedia systems. For instance, a system relying on PSNR may try to deliver Figure 1(d) instead of Figure 1(c) to improve QoE at the cost of an increased bitrate (20 to 35 kbytes), which brings little benefit to users. An additional observation in this example is the content dependence of the ambiguity of objective metrics: the perceptual insignificance of the PSNR difference is observed only for house.

(a) 30.46 dB (b) 32.95 dB
(c) 30.39 dB (d) 32.93 dB
Figure 1: Example images from the LIVE Image Quality Assessment Database Sheikh et al. (2006), demonstrating the ambiguity of objective quality metrics (in this case, PSNR).
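The PSNR values quoted above follow the standard definition of peak signal-to-noise ratio. As a minimal illustration (not code from the paper), the following sketch computes PSNR for 8-bit images and shows how equal-looking dB gaps arise from fixed ratios of mean squared error:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
print(psnr(ref, ref + 4.0))  # ≈ 36.09 dB (MSE = 16)
print(psnr(ref, ref + 2.0))  # ≈ 42.11 dB (MSE = 4)
```

Because PSNR is a purely pixel-wise measure, two pairs of images with nearly identical dB differences, as in Figures 1(a)/(b) and 1(c)/(d), can nevertheless differ greatly in perceptual significance.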

Even state-of-the-art objective quality metrics that accurately predict perceived quality (e.g., Narwaria et al. (2015); Sheikh and Bovik (2006); Wang et al. (2004)) suffer from the issue of indistinguishable quality ranges, because all existing metrics produce single numerical values representing the perceptual quality of given stimuli; this issue is closely related to the reliability of the metrics. In this paper, therefore, we address the issue of ambiguity of objective quality assessment and propose an approach to measure the ambiguity as an interval defining the indistinguishable quality score range, which can be applied to any quality metric to extend its usefulness in a new direction. Furthermore, we present use cases where the proposed approach can be useful, i.e., one for performance comparison of quality metrics and the other for analysis of metric reliability with respect to the viewing distance.

The main contributions of this paper can be summarized as follows:

  1. We propose an approach to measure the ambiguity of objective quality metrics.

    The ambiguity is expressed as an interval on the scale of a metric’s score, called ambiguity interval, within which the quality difference is perceptually indistinguishable. In obtaining ambiguity intervals, we incorporate the viewing conditions, in particular, viewing distance, because it is one of the most important factors that significantly influence the visual sensitivity of human viewers. Our approach employs the visual difference predictor (VDP) Daly (1992), which automatically estimates a threshold for perceptually indistinguishable pixel value difference at each pixel location. Using VDP also eliminates the necessity to conduct subjective experiments to obtain the ambiguity intervals, which maximizes the applicability of the proposed approach.

  2. We provide a practical use case, i.e., objective metric benchmarking, to demonstrate the effectiveness of the proposed approach.

We use the ambiguity characteristics of metrics for performance comparison of metrics in addition to the accuracy measure. It is shown that ambiguity can play an important role in determining the superiority among the metrics. In the research community of multimedia quality assessment, systematic evaluation of objective metrics has been considered important to analyze their advantages and disadvantages Sheikh et al. (2006); Chikkerur et al. (2011); Cheon and Lee (2018); Mohammadi et al. (2015); Lin and Kuo (2011); Lee (2012); Cheon and Lee (2016c). The Video Quality Experts Group (VQEG), an international forum for perceptual quality assessment towards standardization, also devotes significant effort to this. Thus, this use case proposes a novel framework for benchmarking objective quality metrics, which enables performance analysis of the metrics from multidimensional perspectives.

  3. As another practical use case, we evaluate state-of-the-art metrics in terms of viewing distance.

    We show that the behavior of a metric depending on the viewing distance also provides valuable information in analyzing the metric’s performance. Such information can be exploited as a part of benchmarking of objective metrics. In addition, it can be used to identify proper viewing conditions where the metrics are reliable.

The rest of this paper is organized as follows. The following section presents the proposed approach in detail. Section 3 describes the experimental setup. The two use cases, where the ambiguity intervals are exploited, are given in Sections 4 and 5, respectively. Finally, conclusions are given in Section 6.

2 Proposed Method

2.1 Approach

As mentioned in the introduction, the goal of the proposed approach is to obtain an interval for a given objective quality metric such that a score difference within the interval at that particular quality level is considered perceptually insignificant. The core idea is to change the amount of distortion (e.g., noise, compression artifacts, etc.) in an image and to check, using a perceptual model, whether the change of the distortion would be detected by human observers.

Algorithm 1 summarizes the procedure of the proposed approach to obtain the ambiguity interval (i.e., the upper and lower bounds of the interval) over the whole quality range for a source image and a type of distortion. Figure 2 illustrates the process to obtain the ambiguity interval for a particular quality level corresponding to a degraded image, which corresponds to lines 6 to 19 in Algorithm 1.

1: Source image I_src having P pixels
2: Upper bound width w_U and lower bound width w_L of the ambiguity interval
3: for i = 1 to N do                            ▷ N: number of considered quality levels
4:     I_i ← degrade_image(I_src, i)            ▷ Apply quality degradation (compression, blurring, etc.) to I_src (I_i is more degraded than I_{i−1})
5:     q_i ← measure_quality(I_i)               ▷ Measure the objective quality (assume that a higher q_i indicates higher quality)
6: end for
7: for i = 1 to N do
8:     for j = i+1 to N do
9:         PMap ← vdp(I_i, I_j)                 ▷ Obtain the perceivableness map
10:        if count(PMap > 0.5) > αP then
11:            w_L(i) ← q_i − q_{j−1}           ▷ Obtain the width of the lower bound
12:            break
13:        end if
14:    end for
15:    for j = i−1 down to 1 do
16:        PMap ← vdp(I_i, I_j)                 ▷ Obtain the perceivableness map
17:        if count(PMap > 0.5) > αP then
18:            w_U(i) ← q_{j+1} − q_i           ▷ Obtain the width of the upper bound
19:            break
20:        end if
21:    end for
22: end for
23: return w_U and w_L
Algorithm 1 Computing the ambiguity interval
Figure 2: Procedure to obtain an ambiguity interval based on a perceivableness map, which judges whether two images are perceptually distinguishable or not. The white pixels of the perceivableness map indicate pixels that VDP determines to be distinguishable.

First, quality degradation of the given distortion type is applied to the source image (I_src) with various amounts of distortion, and the objective quality levels of the resulting images are measured. Then, we determine the perceptual distinguishability between two images having different amounts of artifacts. For a given image containing a certain type of artifacts, we obtain the level of ambiguity at the corresponding objective quality score (q) as an interval around the score. We assess the perceivable difference of the given image (I) compared to images obtained from the same source image but with different amounts of artifacts (I'). We gradually increase (or decrease) the amount of artifacts in I' until an image that is perceptually distinguishable from the given image is found. Among the images that are perceptually indistinguishable from I, the one with the highest (or lowest) quality level is identified, and the difference between the corresponding quality score and the quality score of I is recorded as the width of the upper (or lower) bound of the interval, w_U (or w_L).

A visual just-noticeable difference (JND) model is used to determine whether two images having different amounts of distortion are perceptually distinguishable. The JND model compares the two images and produces a map having the same size as the input images, called the perceivableness map. Each pixel of the map represents the probability that the pixel value difference of the two images at the corresponding location is perceptually distinguishable. A probability of 0.5 (i.e., random chance) is considered the threshold of distinguishability. Therefore, if at most a certain proportion (denoted as α) of the pixels of the perceivableness map have values above 0.5, the two images are considered perceptually indistinguishable.
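Putting Algorithm 1 and the α-threshold rule together, a minimal Python sketch might look as follows; `degrade_image`, `measure_quality`, and `vdp` are placeholders (our assumptions) for the degradation process, the metric under test, and a JND model such as VDP, not the paper's implementation:

```python
import numpy as np

def ambiguity_intervals(src, degrade_image, measure_quality, vdp,
                        n_levels=100, alpha=0.01):
    """Sketch of Algorithm 1: for each quality level, scan toward more (or
    less) degraded images until the perceivableness-map check flags a
    perceivable difference, and record the bound widths.

    degrade_image(src, k): returns an image; larger k means more degradation.
    measure_quality(img):  returns a quality score (higher = better quality).
    vdp(a, b):             returns a per-pixel map of detection probabilities.
    """
    images = [degrade_image(src, k) for k in range(n_levels)]
    scores = [measure_quality(im) for im in images]
    threshold = alpha * src.size  # at most alpha*P pixels may exceed 0.5

    def distinguishable(a, b):
        pmap = vdp(a, b)
        return np.count_nonzero(pmap > 0.5) > threshold

    lower = np.zeros(n_levels)
    upper = np.zeros(n_levels)
    for i in range(n_levels):
        # Scan toward more degraded images (lower quality) for the lower bound.
        for j in range(i + 1, n_levels):
            if distinguishable(images[i], images[j]):
                lower[i] = scores[i] - scores[j - 1]
                break
        # Scan toward less degraded images (higher quality) for the upper bound.
        for j in range(i - 1, -1, -1):
            if distinguishable(images[i], images[j]):
                upper[i] = scores[j + 1] - scores[i]
                break
    return lower, upper
```

The bound width is taken at the last still-indistinguishable neighbor (index j−1 or j+1), matching the interval definition in the text.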

The JND model considered in this study is VDP, originally proposed by Daly Daly (1992). It allows specifying the viewing conditions, including the type, resolution, and parameters of the display, together with the viewing distance Mantiuk et al. (2011). In particular, we use the latest version, known as HDR-VDP 2.2 Narwaria et al. (2015) (an implementation is publicly available at http://hdrvdp.sourceforge.net/wiki/). The model quantifies the visible difference between two input images under specific viewing conditions. The images are first passed through a model of the optical and retinal pathway, including a simulation of intra-ocular light scatter, photoreceptor spectral sensitivity, luminance masking, and achromatic response. They are then compared on multiple scales using models of neural noise, neural contrast sensitivity, and contrast masking. Note that when producing a perceivableness map, VDP takes into account the contextual information for each pixel (i.e., its relationship with neighboring pixels).

(a) JPEG (b) JPEG2K
(c) GB (d) WN
Figure 3: Examples of obtained ambiguity intervals of VIF for the LIVE database. The upper and lower bounds for two different reference images are expressed in different colors. (a) JPEG (b) JPEG2000 (JPEG2K) (c) Gaussian blur (GB) (d) white Gaussian noise (WN)

Figure 3 shows examples of the ambiguity intervals obtained for the visual information fidelity (VIF) metric Sheikh and Bovik (2006). To determine the intervals, we generate N=100 images having different amounts of distortion (spanning the whole quality range) for each distortion type and each reference image in the LIVE Image Quality Assessment Database Sheikh et al. (2006), and apply Algorithm 1 to them. In the figure, a higher score means a higher quality level, i.e., fewer artifacts. Three types of dependency of the interval are observed. First, the width of the interval is not necessarily uniform over the quality range. In Figure 3(c), for instance, the width of the interval is large for the intermediate quality range and small for low quality (near zero). This implies that the perceptual scale of the metric is not perfectly linear. Second, the interval width depends on the content, in line with the observation made from Figure 1. This is related to the fact that the visibility of quality degradation depends on the image content due to perceptual mechanisms such as frequency-dependent contrast sensitivity, spatial masking, etc. Third, the type of distortion also influences the interval, because the detectability of a quality difference depends on the type of artifacts. Detailed analysis is given in Section 4. In summary, the interval depends on the visual components included in the image, which are affected differently by the quality level, the distortion type, and the content itself.
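As an illustration of how such a ladder of distorted images could be generated, the sketch below uses SciPy; the blur-sigma and noise-sigma schedules are illustrative assumptions, not the parameters used for Figure 3:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def distortion_ladder(src, distortion, n_levels=100, rng=None):
    """Generate n_levels versions of src with monotonically increasing
    distortion. The parameter schedules below are illustrative choices."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = []
    for k in range(1, n_levels + 1):
        if distortion == "gaussian_blur":
            sigma = 0.05 * k                    # blur strength grows with k
            out.append(gaussian_filter(src.astype(float), sigma))
        elif distortion == "white_noise":
            sigma = 0.5 * k                     # noise std grows with k
            noisy = src.astype(float) + rng.normal(0.0, sigma, src.shape)
            out.append(np.clip(noisy, 0, 255))
        else:
            raise ValueError(f"unsupported distortion: {distortion}")
    return out
```

JPEG and JPEG2K ladders would instead sweep the encoder's quality or bitrate parameter, which requires an actual codec and is omitted here.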

2.2 Measures for Ambiguity Intervals

(a) PSNR (b) ADM
Figure 4: Examples of obtained ambiguity intervals for GB of the LIVE database. Different colors mean different reference images. (a) PSNR (b) ADM

The ambiguity intervals of an objective metric can be used to measure the performance of the metric in terms of quality resolution. Figure 4 shows examples for two different metrics, i.e., PSNR and the additive impairment and detail loss measure (ADM) Li et al. (2011), which have different output ranges and ambiguity interval widths. Overall, the intervals of ADM are larger than those of PSNR; the intervals of PSNR are relatively small for the low quality range and get larger as the quality increases, whereas the intervals of ADM are more uniform over the whole range. To enable easy comparison between the intervals of different metrics, we compute measures that summarize the ambiguity intervals of a metric. As the first step, the ambiguity intervals of a metric are normalized by the observed output range of the metric, since different metrics may have different ranges and units. Note that in our preliminary work Cheon and Lee (2016a), nonlinear regression using subjective rating data was employed for normalization, which limited the applicability of the method to cases where subjective data are available. In addition, only the quality levels associated with subjective ratings were used, which permitted ambiguity evaluation only at a coarse level.

We propose to compute three statistics of the ambiguity intervals, namely, the mean, maximum, and standard deviation of the widths of the ambiguity intervals over the whole quality range in order to measure the performance of a metric in multiple aspects of ambiguity. They are measures of the sensitivity of a metric in an average sense, the coarsest quality resolution, and the uniformity of the quality resolution, respectively. The smaller each of these measures is, the better the performance of the metric is.
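The three summary statistics, together with the range normalization described above, can be sketched as follows (the function name and interface are our own):

```python
import numpy as np

def ambiguity_summary(widths, metric_scores):
    """Normalize ambiguity-interval widths by the metric's observed output
    range, then summarize them. Smaller is better for all three values:
    mean = average sensitivity, max = coarsest quality resolution,
    std = (non-)uniformity of resolution across the quality range.
    Assumes metric_scores spans a nonzero range."""
    widths = np.asarray(widths, dtype=float)
    score_range = np.max(metric_scores) - np.min(metric_scores)
    normalized = widths / score_range
    return normalized.mean(), normalized.max(), normalized.std()
```

For example, interval widths of 1, 2, and 3 units for a metric whose scores span 0 to 10 yield a mean of 0.2 and a maximum of 0.3 of the whole quality range.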

3 Experimental setup

We conduct experiments in order to demonstrate applications where the proposed approach can be exploited effectively, which are shown in the following sections. This section explains the employed databases and the objective metrics considered in the experiments.

3.1 Databases

Database                  | # Contents | Image resolutions | Distortion types     | Screen                        | Viewing distance | Subjective ratings
LIVE Sheikh et al. (2006) | 29         | 768×512, 512×512  | JPEG, JPEG2K, GB, WN | sRGB, CRT, 21-inch, 1024×768  | 2H               | DMOS
VDID Gu et al. (2015)     | 8          | 768×512, 512×512  | JPEG, JPEG2K, GB, WN | sRGB, LCD, 23-inch, 1920×1080 | 4H, 6H           | DMOS
CIDIQ Liu et al. (2014b)  | 23         | 800×800           | JPEG, JPEG2K, GB, PN | sRGB, LCD, 24-inch, 1920×1080 | 1.5H, 3H         | MOS
Table 1: Characteristics of the three databases used for the experiments. Viewing distances are expressed as multiples of the display height (H).

We employ three databases that are widely used in perceptual quality assessment research: the LIVE Image Quality Assessment Database (LIVE) Sheikh et al. (2006), which is one of the most popular databases for benchmarking objective metrics, the Viewing Distance-changed Image Database (VDID) Gu et al. (2015), which is the first image quality assessment database specifically established for varying viewing distances, and the Colourlab Image Database: Image Quality (CIDIQ) Liu et al. (2014b), which also contains subjective data for multiple viewing distances. The databases were produced with different experimental setups in terms of reference images, distortion types, screens, viewing distances, etc. We select them to ensure reproducibility of the distortion types and availability of information regarding the viewing environments, e.g., the screen and viewing distance. Table 1 summarizes the characteristics of the databases. Four common distortion types are selected, i.e., JPEG compression, JPEG2000 (JPEG2K) compression, Gaussian blur (GB), and white Gaussian noise (WN). For the CIDIQ database, Poisson noise (PN) is considered instead of WN. JPEG and JPEG2K are well-known compression schemes for images, and GB and WN (or PN) are distortions that can easily occur in pre- or post-processing of images. VDID and CIDIQ include subjective results for two different viewing distances each.

3.2 Objective metrics

We consider 33 state-of-the-art objective quality metrics (28 full-reference (FR) metrics, one reduced-reference (RR) metric, and four no-reference (NR) metrics) for benchmarking. The tested FR metrics are PSNR, structural similarity index (SSIM) Wang et al. (2004), multi-scale structural similarity (MS-SSIM) Wang et al. (2003), visual signal-to-noise ratio (VSNR) Chandler and Hemami (2007), VIF Sheikh and Bovik (2006), universal image quality index (UQI) Wang and Bovik (2002), information fidelity criterion (IFC) Sheikh et al. (2005), noise quality measure (NQM) Damera-Venkata et al. (2000), weighted signal to noise ratio (WSNR) Damera-Venkata et al. (2000), modified versions of PSNR (PSNR-HVS Egiazarian et al. (2006), PSNR-HVS-M Ponomarenko et al. (2007), PSNR-HMA, PSNR-HA, PSNR-HMA-C, and PSNR-HA-C Ponomarenko et al. (2011)), optimal scale selection (OSS)-PSNR and OSS-SSIM Gu et al. (2015), information content weighted SSIM (IW-SSIM) Wang and Li (2011), feature similarity index (FSIM) and chrominance extension of FSIM (FSIM-C) Zhang et al. (2011), gradient magnitude similarity deviation (GMSD) Xue et al. (2014), most apparent distortion (MAD) Larson and Chandler (2010), ADM Li et al. (2011), analysis of distortion distribution-based (ADD)-SSIM Gu et al. (2016), ADD-gradient similarity index (ADD-GSIM) Gu et al. (2016), a visual saliency-induced index (VSI) Zhang et al. (2014), image quality assessment based on gradient similarity (GSM) Liu et al. (2012), and perceptual similarity (PSIM) Gu et al. (2017a). The RR metric is reduced reference entropic differencing index (RRED) Soundararajan and Bovik (2012), and the NR metrics are spatial-spectral entropy-based quality (SSEQ) Liu et al. (2014a), oriented gradients image quality assessment (OG-IQA) Liu et al. (2016), blind image integrity notator using DCT statistics (BLIINDS2) Saad et al. (2012), and accelerated screen image quality evaluator (ASIQE) Gu et al. (2017b).

4 Use case 1 : Benchmarking of objective metrics

Objective quality metrics that can automatically predict perceived quality of visual content are a key component of quality-optimized multimedia systems. For instance, a method enhancing a given degraded image requires an objective metric as the criterion with respect to which the image is enhanced. Therefore, it is critical to identify a quality metric that mimics the human visual system as closely as possible, so that the results of optimization based on the metric are also optimal for human viewers. In this context, benchmarking studies of objective quality metrics have been conducted extensively in the literature, e.g., Cheon and Lee (2018); Lee (2012); Hanhart et al. (2013); Tian et al. (2019). In these studies, as mentioned in the introduction, the prediction accuracy of existing metrics is considered the most important performance index, typically measured in terms of PLCC, SROCC, OR, and RMSE. However, different metrics have different levels of ambiguity, which can be captured by the proposed approach. The use case presented in this section demonstrates how such information can be effectively used in benchmarking.

In this use case, we use the LIVE database. The accuracy of the 33 state-of-the-art objective metrics is measured by the PLCC between the ground truth subjective quality scores and the predicted quality scores. (Other measures such as SROCC, OR, and RMSE can also be used, but we use only PLCC for conciseness of presentation.) In particular, PLCC is computed after nonlinear regression using the monotonic logistic function

Q' = β₁ (1/2 − 1/(1 + exp(β₂ (Q − β₃)))) + β₄ Q + β₅,   (1)

to fit the objective scores output by a metric to the subjective quality scores, as described in the recommendation VQEG (2000). Here, Q and Q' denote the objective scores before and after regression, respectively. The initial values of the parameters (β₁ to β₅) are set as suggested in VQEG (2000). In addition, statistical tests are conducted following ITU-T (2012), i.e., Z-tests are performed using the Fisher z-transformation for PLCC. The ambiguity performance of the metrics is evaluated based on the proposed approach: the mean, maximum, and standard deviation of the widths of the ambiguity intervals are obtained. In addition, non-parametric Wilcoxon-Mann-Whitney tests are conducted to statistically compare the ambiguity intervals of different metrics.
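A sketch of this accuracy-evaluation pipeline in Python with SciPy is shown below; the initial parameter values are illustrative guesses rather than those recommended in VQEG (2000):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, norm

def vqeg_logistic(q, b1, b2, b3, b4, b5):
    # Five-parameter monotonic logistic function of Eq. (1).
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

def plcc_after_regression(objective, mos):
    """Fit Eq. (1) to map objective scores onto MOS, then compute PLCC.
    The p0 heuristics below are our own assumptions."""
    objective = np.asarray(objective, float)
    mos = np.asarray(mos, float)
    p0 = [np.max(mos) - np.min(mos), 1.0, np.mean(objective),
          (np.max(mos) - np.min(mos)) / (np.ptp(objective) + 1e-12),
          np.mean(mos)]
    params, _ = curve_fit(vqeg_logistic, objective, mos, p0=p0, maxfev=20000)
    fitted = vqeg_logistic(objective, *params)
    return pearsonr(fitted, mos)[0]

def fisher_z_test(r1, r2, n1, n2):
    """Two-sided Z-test on the difference of two correlation coefficients
    via the Fisher z-transformation; returns the p-value."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return 2.0 * (1.0 - norm.cdf(abs(z)))
```

Two metrics are then declared statistically equivalent in accuracy when the returned p-value exceeds the chosen significance level.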

(a) JPEG (b) JPEG2K
(c) GB (d) WN
Figure 5: Performance of the objective metrics in terms of Pearson linear correlation coefficient (PLCC) scores (blue) and the mean of the ambiguity intervals (green) for the LIVE database. (a) JPEG, (b) JPEG2K, (c) GB, and (d) WN. The metrics are listed in descending order of the PLCC scores. The metrics statistically equivalent to the best metric in terms of PLCC are marked with a gray box.

Figure 5 summarizes the PLCC values and the mean ambiguity intervals of the 33 metrics. The results for the four distortion types are shown separately, and the metrics are listed in descending order of the PLCC values. In the figure, the metrics showing performance statistically equivalent to that of the best metric in terms of PLCC are marked with the gray box. We can observe that the superiority of a metric over the others in terms of accuracy may not coincide with its superiority in terms of ambiguity, and vice versa. For instance, in Figure 5(b), the best metric in terms of accuracy is FSIM-C, but GSM, which is statistically significantly inferior to FSIM-C in accuracy, is the best in terms of ambiguity.

Many metrics predict perceived image quality with high accuracy. For instance, the best metric in terms of PLCC for JPEG in Figure 5(a), i.e., FSIM-C, which shows a PLCC of about 0.95, is not statistically different from PSNR-HA, which ranks 24th. Thus, it would be difficult to establish superiority among these metrics based on accuracy alone. At this point, we can apply the results of the ambiguity analysis. Among the top 24 metrics, ADD-GSIM has the smallest mean width of the ambiguity intervals, which the statistical test reveals to be significantly smaller than the second smallest one (VSI). For the other types of distortion, similar trends are observed, i.e., a number of metrics show similar PLCC performance that is not statistically different from that of the best metric, and we can use the ambiguity intervals to choose the best metric in these cases. With this approach, ADD-SSIM, IFC, and MAD are selected as the best metrics for JPEG2K, GB, and WN, respectively.
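The Wilcoxon-Mann-Whitney comparison of two metrics' interval widths can be sketched with SciPy as follows (the wrapper and its significance level are our own choices):

```python
from scipy.stats import mannwhitneyu

def compare_ambiguity(widths_a, widths_b, alpha=0.05):
    """Non-parametric test of whether two metrics' normalized
    ambiguity-interval widths come from the same distribution.
    Returns the p-value and a verdict at significance level alpha."""
    stat, p = mannwhitneyu(widths_a, widths_b, alternative="two-sided")
    return p, ("different" if p < alpha else "equivalent")
```

A "different" verdict with a smaller mean width then supports declaring one metric less ambiguous than the other.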

The mean ambiguity interval widths are different depending on the distortion type. The average values for all metrics are 0.0032, 0.0300, 0.1104, and 0.0003 for JPEG, JPEG2K, GB, and WN, respectively. The smallest ambiguity intervals are produced for WN, because changes of the amount of white noise can be easily detected compared to the other types of distortion. The GB distortion yields the largest ambiguity intervals because the change of the strength of GB is relatively hard to distinguish. JPEG2K also has relatively large ambiguity intervals because the introduced artifacts in the images are quite similar to those by GB.

Figure 6: Performance of the top-performing objective metrics showing statistically equivalent PLCC values for all data of the LIVE database. PLCC scores and the mean, maximum, and standard deviation values of ambiguity intervals are shown.

In addition to the mean width of the ambiguity intervals, the maximum and standard deviation of the intervals can also be considered to analyze the performance of metrics and establish superiority among them. Figure 6 shows the performance of the objective metrics, over all distortion types, that have statistically equivalent performance in terms of PLCC. When we compare two of these metrics, e.g., SSEQ and IW-SSIM, they show similar performance based on PLCC and the mean ambiguity interval. However, the maximum and standard deviation of the ambiguity intervals are smaller for IW-SSIM, which can thus be regarded as the better metric; a low standard deviation of the ambiguity intervals means a uniform quality resolution (or ambiguity) over all quality ranges, which is useful in applications where the metric needs to operate over a wide range of quality. As another example, ADD-SSIM, ADD-GSIM, FSIM, and FSIM-C show statistically equivalent performance in terms of the mean ambiguity intervals, with mean widths of only about 2.0-2.5% of the whole quality range. However, the maximum and standard deviation of the ambiguity intervals of ADD-SSIM are larger than those of the other three metrics, and thus it may be less preferable. Therefore, considering all the ambiguity measures, ADD-GSIM, FSIM, and FSIM-C can be regarded as the best metrics.
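The selection logic described in this section, i.e., restricting attention to the PLCC-equivalent group and then preferring smaller mean, maximum, and standard deviation of the interval widths, can be sketched as a lexicographic comparison (all names and numbers below are illustrative, not results from the paper):

```python
def select_best_metric(results, plcc_equivalent):
    """results: {name: (plcc, mean_w, max_w, std_w)}.
    plcc_equivalent: names whose PLCC is statistically equivalent to the best.
    Within that group, prefer smaller mean, then max, then std of the
    normalized ambiguity-interval widths (smaller = less ambiguous)."""
    group = {m: results[m] for m in plcc_equivalent}
    return min(group, key=lambda m: (group[m][1], group[m][2], group[m][3]))

metrics = {
    "A": (0.95, 0.020, 0.08, 0.015),
    "B": (0.94, 0.020, 0.05, 0.010),   # same mean, smaller max/std
    "C": (0.95, 0.030, 0.06, 0.012),
}
print(select_best_metric(metrics, ["A", "B", "C"]))  # prints "B"
```

In practice the tie-breaking order could be swapped or replaced by the pairwise statistical tests described above; the lexicographic rule is just one concrete reading of the procedure.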

5 Use case 2 : Viewing distance vs. ambiguity

The viewing distance is one of the most important factors influencing visual quality perception. As the distance from a viewer to an image increases, fewer details in the image can be distinguished, changes or artifacts in the image become less noticeable, and the viewer's quality perception becomes less reliable. The proposed approach incorporates this tendency by employing the VDP method, which considers the viewing environment including the viewing distance.

Figure 7: Examples of ambiguity intervals of two objective metrics, MS-SSIM (blue) and VSNR (orange), with respect to the viewing distance (in multiples of the display height) for GB of the VDID database. The superiority of a metric against the other varies depending on the viewing distance.

However, because different objective metrics rely on different underlying mechanisms, they may show different ambiguity patterns with respect to the viewing distance. For instance, the superiority of one metric over another in terms of the ambiguity interval may change depending on the viewing distance. Figure 7 shows the mean ambiguity intervals of two metrics, MS-SSIM and VSNR, for GB of the VDID database with respect to the viewing distance. When the viewing distance is 4 or 5 times the display height (i.e., 4H or 5H), MS-SSIM shows slightly smaller ambiguity intervals than VSNR, whereas VSNR shows smaller intervals than MS-SSIM for the other viewing distances. Thus, the viewing distance should be considered carefully when the ambiguity of a metric is evaluated. In general, it is preferable for a metric not only to have high accuracy and low ambiguity at a particular viewing distance, but also to show consistent performance over various viewing distances in terms of both accuracy and ambiguity.

In this section, we demonstrate that the ambiguity behavior of metrics with respect to the viewing distance can be used to compare the reliability of the metrics, which can be seen as an extension of the benchmarking in the previous section, and to identify viewing distances at which a metric can be used reliably. The VDID and CIDIQ databases are used.

Performance of the metrics for two viewing distances in terms of PLCC and the mean of the ambiguity intervals is shown in Figure 8. Most of the metrics show statistically equivalent PLCC scores for the two viewing distances; only one and nine metrics show significantly different accuracy scores for VDID and CIDIQ, respectively (marked with asterisks in Figure 8). However, for VDID, all metrics except OSS-SSIM (marked with a square in Figure 8(a)) show significantly different ambiguity interval widths for the two viewing distances. Furthermore, OSS-SSIM shows high accuracy, i.e., it is included in the group of top-performing metrics (showing statistically equivalent PLCC scores with the best one for the short distance), and shows the smallest mean ambiguity intervals for both viewing distances (which are statistically equivalent). Thus, we can choose OSS-SSIM as the best metric considering both accuracy and ambiguity for different viewing distances. OSS-SSIM explicitly considers the effect of the viewing distance, which seems to be the reason for the consistency of its ambiguity performance. In the case of CIDIQ, all metrics show significantly different ambiguity interval widths for the two viewing distances. MAD and IW-SSIM are the two top-performing metrics in terms of accuracy for the short distance. However, these metrics have lower performance in terms of ambiguity (i.e., larger mean interval widths) than the metrics ranked immediately below them in accuracy, e.g., OSS-SSIM and ADD-GSIM. If we accept a slight loss of accuracy, it would be a better choice to select ADD-GSIM or OSS-SSIM as the best metric considering both accuracy and ambiguity for the two viewing distances.

Figure 8: Performance of the metrics for two viewing distances in terms of PLCC scores and mean of the ambiguity intervals for the (a) VDID and (b) CIDIQ databases. The metrics are listed in descending order of PLCC for the short viewing distance (4H for VDID and 1.5H for CIDIQ). Metrics having statistically different accuracy between the two viewing distances are marked with asterisks. Metrics statistically equivalent to the best metric in terms of PLCC for the short viewing distance are marked with a gray box. The metric having statistically equivalent ambiguity interval widths for the two viewing distances is marked with a square.
Figure 9: Mean widths of the ambiguity intervals of ADD-GSIM for the VDID database: (a) JPEG, (b) JPEG2K, (c) GB, and (d) WN. The distortion type influences the slopes of the curves.

Next, we analyze the patterns of the ambiguity intervals over various viewing distances. As an example, Figure 9 shows the mean widths of the ambiguity intervals of ADD-GSIM for each of the four distortion types of VDID. As mentioned above, as the viewing distance increases, the ability of human viewers to distinguish details in images decreases. The ambiguity intervals obtained by our approach accordingly tend to increase with the viewing distance. A gradual increase of the ambiguity intervals with the viewing distance is acceptable, but a sudden increase of the slope is undesirable. For instance, in Figure 9(c), the slope for GB increases suddenly after 5H; thus, care must be taken when the metric is used for viewing distances larger than 5H.
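Such slope jumps can be detected automatically once the mean interval widths are available per viewing distance. The sketch below is an illustrative assumption rather than the paper's procedure: the `factor` threshold and the curve values are hypothetical, chosen to mimic the GB behavior described above.

```python
import numpy as np

def flag_slope_jump(distances, mean_widths, factor=2.0):
    """Flag viewing distances after which the mean ambiguity interval
    width starts growing much faster than before.

    A segment whose slope exceeds `factor` times the previous segment's
    slope is reported as a sudden increase (threshold is illustrative)."""
    slopes = np.diff(mean_widths) / np.diff(distances)
    flagged = []
    for i in range(1, len(slopes)):
        if slopes[i - 1] > 0 and slopes[i] > factor * slopes[i - 1]:
            flagged.append(distances[i])  # distance where the jump starts
    return flagged

# Hypothetical GB-like curve: nearly flat up to 5H, then a steep rise.
dist = [3, 4, 5, 6, 7]                   # in multiples of display height
width = [0.05, 0.05, 0.06, 0.12, 0.20]  # mean ambiguity interval widths
print(flag_slope_jump(dist, width))      # -> [5]
```

A system operating over a range of viewing distances could use such a flag to restrict a metric's use to distances below the detected jump.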

Figures 10 and 11 show the mean ambiguity interval widths of the metrics with respect to the viewing distance for VDID and CIDIQ, respectively. For each distortion type, the metrics are sorted by the mean ambiguity interval width for the short viewing distance, and one metric from each quarter is presented. We can observe that the overall slopes of the mean ambiguity interval over the viewing distance differ depending on the objective metric, distortion type, and database.

Figure 10: Mean widths of the ambiguity intervals of the objective metrics for the VDID database. For each distortion type, metrics in the first, second, third, and last quarters in the ascending order of the mean ambiguity interval width for 4H are shown from left to right: (a1)-(a4) JPEG (OSS-SSIM, GSM, WSNR, PSNR), (b1)-(b4) JPEG2K (VSNR, FSIM, PSNR-HA, VIF), (c1)-(c4) GB (MS-SSIM, FSIM, BLIINDS2, ADM), and (d1)-(d4) WN (OSS-SSIM, RRED, NQM, IFC).
Figure 11: Mean widths of the ambiguity intervals of the objective metrics for the CIDIQ database. For each distortion type, metrics in the first, second, third, and last quarters in the ascending order of the mean ambiguity interval width for 1.5H are shown from left to right: (a1)-(a4) JPEG (ADD-GSIM, RRED, WSNR, OSS-PSNR), (b1)-(b4) JPEG2K (VSNR, GMSD, UQI, VIF), (c1)-(c4) GB (VSNR, GSM, GMSD, ADM), and (d1)-(d4) PN (BLIINDS2, PSNR-HMA, PSNR, OG-IQA).

When we compare the metrics for the same distortion type (i.e., the four panels in each row), the worse a metric performs in terms of ambiguity, the larger the slope of its curve. This tendency is observed clearly except for GB of CIDIQ (Figure 11(c)). Therefore, choosing a metric that shows good ambiguity performance at a particular viewing distance is also useful for its reliable usage over different viewing distances.

In Figures 10 and 11, it is also observed that the shapes of the curves differ depending on the distortion type. The cases of JPEG and JPEG2K for both databases show mostly monotonically increasing patterns. Monotonic increases are also observed for the two types of noise (WN of VDID and PN of CIDIQ), except for the clipping at zero for PN of CIDIQ. The curves for GB show a different tendency: the mean widths of the intervals remain almost the same over some ranges of viewing distance (for small distances up to 5H for VDID, and for all distances considered for CIDIQ). As mentioned earlier, when the viewing distance increases, the ability to distinguish details in the image decreases. Since GB has already removed details from the image, the ambiguity intervals are less affected by changes in the viewing distance. In some cases, there are sudden increases of the ambiguity interval widths (GB of VDID and PN of CIDIQ, both after 5H), which need to be carefully considered when a system using a quality metric operates over a wide range of viewing distances.

6 Conclusion

In this paper, we have proposed a new way to measure the performance of objective image quality metrics from the viewpoint of quality resolution. A procedure to obtain the ambiguity interval for an objective quality score has been developed. We have demonstrated that the width and the uniformity of the interval over the quality range are useful performance measures, in addition to the accuracy of quality estimation, for comparing different metrics. In addition, we have emphasized the need to consider the viewing distance when the ambiguity intervals are used.

In addition to the use cases shown in this paper, the proposed method has several other potential applications. One example is constructing rate-distortion (R-D) curves with ambiguity intervals for evaluating image compression methods: the distortion is measured with an objective metric, and the ambiguity interval at each rate value is obtained using the proposed method. This is an objective counterpart of the method that obtains R-D curves with intervals based on subjective quality scores Hanhart and Ebrahimi (2014).

One possible follow-up research question is: Is there a way to combine the two (possibly conflicting) performance dimensions of objective quality metrics (i.e., accuracy and ambiguity) into a single performance measure? A general solution to this problem may be very challenging to develop. However, guidelines to determine superiority and inferiority among different metrics could be identified for a given target application, which would be desirable to explore as future work.

Acknowledgment

This research was supported by the Ministry of Science and ICT (MSIT), Korea, under the “ICT Consilience Creative Program” (IITP-2018-2017-0-01015) supervised by the Institute for Information & communications Technology Promotion (IITP) and also by the IITP grant funded by the Korea government (MSIT) (R7124-16-0004, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding).

References

  • D. M. Chandler and S. S. Hemami (2007) VSNR: A wavelet-based visual signal-to-noise ratio for natural images. IEEE Trans. Image Processing 16 (9), pp. 2284–2298. Cited by: §3.2.
  • M. Cheon and J.-S. Lee (2016a) Ambiguity-based evaluation of objective quality metrics for image compression. In Proc. Int. Conf. Quality of Multimedia Experience (QoMEX), pp. 1–6. Cited by: §2.2, Ambiguity of Objective Image Quality Metrics: A New Methodology for Performance Evaluation.
  • M. Cheon and J.-S. Lee (2016b) On ambiguity of objective image quality assessment. Electronics Letters 52 (1), pp. 34–35. Cited by: §1.
  • M. Cheon and J. Lee (2016c) Evaluation of objective quality metrics for multidimensional video scalability. Journal of Visual Communication and Image Representation 35, pp. 132–145. Cited by: item 2.
  • M. Cheon and J. Lee (2018) Subjective and objective quality assessment of compressed 4K UHD videos for immersive experience. IEEE Trans. Circuits and Systems for Video Technology 28 (7), pp. 1467–1480. Cited by: item 2, §1, §4.
  • S. Chikkerur, V. Sundaram, M. Reisslein, and L. J. Karam (2011) Objective video quality assessment methods: A classification, review, and performance comparison. IEEE Trans. Broadcasting 57 (2), pp. 165–182. Cited by: item 2, §1.
  • S. J. Daly (1992) Visible differences predictor: an algorithm for the assessment of image fidelity. In Proc. SPIE/IS&T Symposium on Electronic Imaging: Science and Technology, pp. 2–15. Cited by: item 1, §2.1.
  • N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik (2000) Image quality assessment based on a degradation model. IEEE Trans. Image Processing 9 (4), pp. 636–650. Cited by: §3.2.
  • K. Egiazarian, J. Astola, N. Ponomarenko, V. Lukin, F. Battisti, and M. Carli (2006) New full-reference quality metrics based on HVS. In Proc. Int. Workshop on Video Processing and Quality Metrics, Vol. 4. Cited by: §3.2.
  • K. Gu, L. Li, H. Lu, X. Min, and W. Lin (2017a) A fast reliable image quality predictor by fusing micro- and macro-structures. IEEE Trans. on Industrial Electronics 64 (5), pp. 3903–3912. Cited by: §3.2.
  • K. Gu, J. Zhou, J. Qiao, G. Zhai, W. Lin, and A. C. Bovik (2017b) No-reference quality assessment of screen content pictures. IEEE Trans. on Image Processing 26 (8), pp. 4005–4018. Cited by: §3.2.
  • K. Gu, M. Liu, G. Zhai, X. Yang, and W. Zhang (2015) Quality assessment considering viewing distance and image resolution. IEEE Trans. Broadcasting 61 (3), pp. 520–531. Cited by: §3.1, §3.2, Table 1.
  • K. Gu, S. Wang, G. Zhai, W. Lin, X. Yang, and W. Zhang (2016) Analysis of distortion distribution for pooling in image quality prediction. IEEE Trans. Broadcasting 62 (2), pp. 446–456. Cited by: §3.2.
  • P. Hanhart and T. Ebrahimi (2014) Calculation of average coding efficiency based on subjective quality scores. Journal of Visual Communication and Image Representation 25, pp. 555–564. Cited by: §6.
  • P. Hanhart, P. Korshunov, and T. Ebrahimi (2013) Benchmarking of quality metrics on ultra-high definition video sequences. In Proc. Int. Conf. Digital Signal Processing (DSP), pp. 1–8. Cited by: §4.
  • ITU-R (2012) Recommendation BT.500-13: Methodology for the subjective assessment of the quality of television. Technical report ITU-R. Cited by: §1.
  • ITU-T (2012) Recommendation P.1401: methods, metrics and procedures for statistical evaluation, qualification and comparison of objective quality prediction models. Technical report ITU-T. Cited by: §1, §4.
  • N. Jayant, J. Johnston, and R. Safranek (1993) Signal compression based on models of human perception. Proc. IEEE 81 (10), pp. 1385–1422. Cited by: §1.
  • L. Krasula, K. Fliegel, P. Le Callet, and M. Klíma (2016) On the accuracy of objective image and video quality models: New methodology for performance evaluation. In Proc. Int. Workshop on Quality of Multimedia Experience (QoMEX), pp. 1–6. Cited by: §1.
  • E. C. Larson and D. M. Chandler (2010) Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging 19 (1), pp. 1–21. Cited by: §3.2.
  • J.-S. Lee (2014) On designing paired comparison experiments for subjective multimedia quality assessment. IEEE Trans. Multimedia 16 (2), pp. 564–571. Cited by: §1.
  • J. Lee (2012) Comparison of objective quality metrics on the scalable extension of H.264/AVC. In Proc. IEEE Int. Conf. Image Processing (ICIP), pp. 693–696. Cited by: item 2, §4.
  • S. Li, F. Zhang, L. Ma, and K. N. Ngan (2011) Image quality assessment by separately evaluating detail losses and additive impairments. IEEE Trans. Multimedia 13 (5), pp. 935–949. Cited by: §2.2, §3.2.
  • W. Lin and C. J. Kuo (2011) Perceptual visual quality metrics: A survey. Journal of Visual Communication and Image Representation 22 (4), pp. 297–312. Cited by: item 2.
  • A. Liu, W. Lin, and M. Narwaria (2012) Image quality assessment based on gradient similarity. IEEE Trans. Image Processing 21 (4), pp. 1500–1512. Cited by: §3.2.
  • L. Liu, Y. Hua, Q. Zhao, H. Huang, and A. C. Bovik (2016) Blind image quality assessment by relative gradient statistics and adaboosting neural network. Signal Processing: Image Communication 40, pp. 1–15. Cited by: §3.2.
  • L. Liu, B. Liu, H. Huang, and A. C. Bovik (2014a) No-reference image quality assessment based on spatial and spectral entropies. Signal Processing: Image Communication 29 (8), pp. 856–863. Cited by: §3.2.
  • X. Liu, M. Pedersen, and J. Y. Hardeberg (2014b) CID:IQ–a new image quality database. In Proc. Int. Conf. Image and Signal Processing, pp. 193–202. Cited by: §3.1, Table 1.
  • R. Mantiuk, K. J. Kim, A. G. Rempel, and W. Heidrich (2011) HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions. ACM Trans. Graphics 30 (4), pp. 40:1–13. Cited by: §2.1.
  • P. Mohammadi, A. Ebrahimi-Moghadam, and S. Shirani (2015) Subjective and objective quality assessment of image: A survey. Majlesi Journal of Electrical Engineering 9 (1), pp. 55–83. Cited by: item 2.
  • M. Narwaria, R. K. Mantiuk, M. P. Da Silva, and P. Le Callet (2015) HDR-VDP-2.2: A calibrated method for objective quality prediction of high-dynamic range and standard images. Journal of Electronic Imaging 24 (1), pp. 1–3. Cited by: §1, §2.1.
  • N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, and M. Carli (2011) Modified image visual quality metrics for contrast change and mean shift accounting. In Proc. Int. Conf. The Experience of Designing and Application of CAD Systems in Microelectronics, pp. 305–311. Cited by: §3.2.
  • N. Ponomarenko, F. Silvestri, K. Egiazarian, M. Carli, J. Astola, and V. Lukin (2007) On between-coefficient contrast masking of DCT basis functions. In Proc. Int. Workshop on Video Processing and Quality Metrics, Vol. 4. Cited by: §3.2.
  • M. A. Saad, A. C. Bovik, and C. Charrier (2012) Blind image quality assessment: a natural scene statistics approach in the DCT domain. IEEE Trans. Image Processing 21 (8), pp. 3339–3352. Cited by: §3.2.
  • H. R. Sheikh and A. C. Bovik (2006) Image information and visual quality. IEEE Trans. Image Processing 15 (2), pp. 430–444. Cited by: §1, §2.1, §3.2.
  • H. R. Sheikh, M. F. Sabir, and A. C. Bovik (2006) A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Processing 15 (11), pp. 3440–3451. Cited by: Figure 1, item 2, §1, §1, §2.1, §3.1, Table 1.
  • H. R. Sheikh, A. C. Bovik, and G. De Veciana (2005) An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans. Image Processing 14 (12), pp. 2117–2128. Cited by: §3.2.
  • A. Skodras, C. Christopoulos, and T. Ebrahimi (2001) The JPEG 2000 still image compression standard. IEEE Signal Processing Magazine 18 (5), pp. 36–58. Cited by: §1.
  • R. Soundararajan and A. C. Bovik (2012) RRED indices: reduced reference entropic differencing for image quality assessment. IEEE Trans. Image Processing 21 (2), pp. 517–526. Cited by: §3.2.
  • G. J. Sullivan, J. Ohm, W. Han, and T. Wiegand (2012) Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits and Systems for Video Technology 22 (12), pp. 1649–1668. Cited by: §1.
  • S. Tian, L. Zhang, L. Morin, and O. Déforges (2019) A benchmark of DIBR synthesized view quality assessment metrics on a new database for immersive media applications. IEEE Trans. Multimedia 21 (5), pp. 1235–1247. Cited by: §4.
  • VQEG (2000) Final report from the video quality experts group on the validation of objective models of video quality assessment. Technical report VQEQ. Cited by: §4.
  • G. K. Wallace (1991) The JPEG still picture compression standard. Communications of the ACM 34 (4), pp. 30–44. Cited by: §1.
  • Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Processing 13 (4), pp. 600–612. Cited by: §1, §3.2.
  • Z. Wang and A. C. Bovik (2002) A universal image quality index. IEEE Signal Processing Letters 9 (3), pp. 81–84. Cited by: §3.2.
  • Z. Wang, E. P. Simoncelli, and A. C. Bovik (2003) Multiscale structural similarity for image quality assessment. In Proc. IEEE Asilomar Conf. Signals, Systems and Computers, Vol. 2, pp. 1398–1402. Cited by: §3.2.
  • Z. Wang and Q. Li (2011) Information content weighting for perceptual image quality assessment. IEEE Trans. Image Processing 20 (5), pp. 1185–1198. Cited by: §3.2.
  • Z. Wang (2011) Applications of objective image quality assessment methods [applications corner]. IEEE Signal Processing Magazine 28 (6), pp. 137–142. Cited by: §1.
  • T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra (2003) Overview of the H.264/AVC video coding standard. IEEE Trans. Circuits and Systems for Video Technology 13 (7), pp. 560–576. Cited by: §1.
  • W. Xue, L. Zhang, X. Mou, and A. C. Bovik (2014) Gradient magnitude similarity deviation: a highly efficient perceptual image quality index. IEEE Trans. Image Processing 23 (2), pp. 684–695. Cited by: §3.2.
  • L. Zhang, Y. Shen, and H. Li (2014) VSI: a visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Processing 23 (10), pp. 4270–4281. Cited by: §3.2.
  • L. Zhang, D. Zhang, and X. Mou (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans. Image Processing 20 (8), pp. 2378–2386. Cited by: §3.2.