
Center Emphasized Visual Saliency and Contrast-based Full Reference Image Quality Index


Abstract

Objective Image Quality Assessment (IQA) is imperative in this multimedia-intensive world to assess the visual quality of an image as closely as a human would. Many factors draw human attention to an image, and if the center part contains visually salient information, it draws attention even more. To the best of our knowledge, no previous IQA method has given extra importance to the center part. In this paper, we propose a full reference image quality assessment (FR-IQA) approach using visual saliency and contrast; moreover, we give extra attention to the center by raising the sensitivity of the similarity maps in that region. We evaluated our method on three popular benchmark databases (TID2008, CSIQ and LIVE) and compared it with 13 state-of-the-art approaches; the results reveal the stronger correlation of our method with human-evaluated values. The predicted quality scores are consistent for distortion-specific as well as distortion-independent cases. Moreover, fast processing makes the method applicable to real-time applications. The MATLAB pcode is publicly available online for testing the algorithm at http://layek.khu.ac.kr/CEQI.

Introduction

Computer-based automatic image quality assessment has been sought for decades because numerous image and video applications need it to automate their quality maintenance. To date, IQA research has advanced significantly; however, it remains an active research area aiming to bring the methods even closer to the human level. In the literature, three principal IQA approaches are found: no-reference (NR-IQA), where assessment must be done on a single image; reduced-reference (RR-IQA), where partial information about the reference image is given; and full-reference (FR-IQA), where the full reference image is given. In this paper, we deal with FR-IQA.

The early pixel-based, fast IQA methods such as MSE and PSNR consider neither the human visual system (HVS) nor any other aspect of human perception; thus, those approaches failed to achieve good correlation with human assessment [9, 29]. Two images having the same PSNR or MSE may be perceived in totally different ways by a human observer. Since humans are the ultimate receivers of images, the search for methods that correlate more closely with human judgment continues. Wang et al., in their revolutionary work on SSIM [32], argued that human visual perception is highly sensitive to structural information. The SSIM index incorporates luminance, contrast and structure comparison information, and achieved very good correlation with the mean opinion scores (MOS) of human observers. Inspired by the success of SSIM, extended versions such as multi-scale structural similarity for image quality assessment (MS-SSIM) [31] and information content weighting for perceptual image quality assessment (IW-SSIM) [30] were proposed by the same research group by modifying the pooling strategies.

Based on the shared information between the reference and distorted images, Sheikh et al. proposed the information fidelity criterion (IFC) [25] and visual information fidelity (VIF) [24], which remains a strong candidate for off-line FR-IQA. Most apparent distortion (MAD) [15] separates images based on the distortion and applies either a detection-based or an appearance-based strategy. Some methods such as NQM [6] and VSNR [3] take into account the HVS and the interaction among different visual signals, while other approaches, including the popular FSIM [35], emphasize phase congruency [12, 17, 23]. FSIM uses the image gradient as a secondary feature, and the local quality maps are further weighted by phase congruency to obtain the final score. The image gradient has been used effectively in a number of other works [4, 37]. Xue et al., in their GMSD [33], use the gradient magnitude with a different pooling strategy based on the standard deviation, whereas Alaei et al. adopted a similar approach for assessing document images [1]; both examples demonstrate the effectiveness of standard deviation pooling. Wang et al. proposed multiscale contrast similarity deviation (MCSD) [28], which can be seen as a continuation of SSIM and MS-SSIM because it also uses the RMS contrast similarity; however, they employed standard deviation pooling for the final score. In our proposed approach, we also use standard deviation pooling.

Meanwhile, inspired by vision-related psychological research, visual saliency (VS) based IQA methods [36, 18], which utilize different kinds of visual saliency [10, 8, 34], caught researchers' attention. In VSI [8], VS is used both as a quality map and as the weighting function at the pooling stage. SR-SIM [34] uses the spectral residual saliency, which makes the approach very fast while maintaining competitive correlation with MOS. Combining VS with other features has also become popular [16, 11]: Li et al. proposed an approach combining VS and FSIM [16], while, recently, Jia et al. used contrast and spectral residual saliency together with standard deviation pooling [11].

Figure 11: The image Sailing.bmp from LIVE split into nine blocks (Block01–Block09), where Block05 is the center area.

In this context, we note that the center bias of early eye movements is an established fact in psychological vision research [13, 19, 20, 27]. Bindemann [2] found that eye movements are biased not only toward the scene center but also toward the screen center. As a result, if a scene appears at the center of the screen, it will get the most attention. For example, in Figure 11 the human eye will first move to the Block05 region, and if that part contains visually important information, it will attract even more attention; consequently, people will be more sensitive to distortions in this region. To the best of our knowledge, there is no research in IQA considering this center bias for quality assessment. In the proposed method, we first obtain the contrast and VS similarity maps of the whole images; to give center emphasis, we compute the VS similarity map of the mid-region and apply element-wise multiplication in the mid-part to raise the similarity deviation there. For the contrast similarity, we simply apply an element-wise square in the center part. Contrast is a local quality map, so we do not calculate the contrast of the mid-area separately, whereas VS is a global quality map and is therefore calculated differently.

We evaluated our proposed method on three popular benchmark databases for IQA research and compared it with 13 other state-of-the-art approaches. Results show that the proposed method outperforms the competing approaches while requiring a reasonable amount of processing time.

The paper is organized as follows. Section 1 describes some underlying theories and related techniques. Section 2 explains the proposed center emphasized assessment approach, and the results with relevant discussions are presented in Section 3. Finally, the paper is concluded in Section 4.

1 Background

In this section, we briefly review the underlying theories that this paper relies on, including spectral residual visual saliency similarity, contrast similarity, standard deviation pooling, and evaluation metrics.

1.1 Spectral Residual Visual Saliency Similarity

The human visual system (HVS) pays more attention to interesting or salient regions of an image than to other parts; detection of image saliency is itself an active field in vision research. Because humans are more sensitive to these salient parts, any distortion there is perceived more intensely, which makes saliency an important feature in IQA methods. Many saliency detection techniques are available [5], and spectral residual saliency detection [10] is a very fast approach among them. We adopt the saliency map generator described in SR-SIM [34] and VSP [11]. For an image $I$, according to [10], the spectral residual saliency (SRS) is computed as follows:

$\mathcal{A}(f) = \left|\mathcal{F}[I(x)]\right|$ (1)
$\mathcal{P}(f) = \angle\,\mathcal{F}[I(x)]$ (2)
$\mathcal{L}(f) = \log\left(\mathcal{A}(f)\right)$ (3)
$\mathcal{R}(f) = \mathcal{L}(f) - h_n(f) * \mathcal{L}(f)$ (4)
$SRS(x) = g(x) * \left|\mathcal{F}^{-1}\left[\exp\left(\mathcal{R}(f) + i\,\mathcal{P}(f)\right)\right]\right|^2$ (5)

$\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier and inverse Fourier transforms respectively, $|\cdot|$ and $\angle(\cdot)$ return the magnitude and argument of a complex number, $h_n$ is an $n \times n$ mean filter, $g$ is a Gaussian filter, and $*$ denotes the convolution operation.

In this way, we calculate the SRS for both the reference and distorted images, denoted by $SRS_r$ and $SRS_d$ respectively. After that, the spectral residual visual saliency similarity ($VSS$) is calculated, which is defined as:

$VSS(x) = \dfrac{2\,SRS_r(x)\,SRS_d(x) + C_1}{SRS_r(x)^2 + SRS_d(x)^2 + C_1}$ (6)
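For concreteness, equations 1 to 6 can be sketched in MATLAB. This is a minimal illustration under stated assumptions, not our released pcode: the 3x3 mean filter, the Gaussian smoothing parameters, and the use of the Image Processing Toolbox functions fspecial and imfilter are assumptions here.

function vss = srs_vss(imgR, imgD, C1)
% Sketch of eqs. (1)-(6): spectral residual saliency maps for the
% reference and distorted images and their similarity map.
srsR = spectral_residual(double(imgR));
srsD = spectral_residual(double(imgD));
vss  = (2*srsR.*srsD + C1) ./ (srsR.^2 + srsD.^2 + C1);   % eq. (6)
end

function s = spectral_residual(I)
F = fft2(I);
A = abs(F);                               % eq. (1): amplitude spectrum
P = angle(F);                             % eq. (2): phase spectrum
L = log(A + eps);                         % eq. (3): log amplitude
R = L - imfilter(L, fspecial('average', 3), 'replicate');   % eq. (4)
s = abs(ifft2(exp(R + 1i*P))).^2;         % eq. (5): spatial saliency
s = imfilter(s, fspecial('gaussian', 10, 2.5), 'replicate'); % assumed smoothing
end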

1.2 Contrast Similarity

Contrast is a basic perceptual attribute of an image [21]; it varies greatly over the image, and the contrast map (CM) contains the spatial distribution of those varying values. Many ways have been devised for calculating CMs; in this paper, we adopt the RMS contrast from SSIM because it achieves better performance for natural images. It is given by:

$CM(x) = \left(\dfrac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \mu_x\right)^2\right)^{1/2}$ (7)

where

$\mu_x = \dfrac{1}{N}\sum_{i=1}^{N} x_i$ (8)

Again, using equation 7 we get the contrast maps $CM_r$ and $CM_d$ for the reference and distorted images respectively, and find the contrast similarity as follows:

$CS(x) = \dfrac{2\,CM_r(x)\,CM_d(x) + C_2}{CM_r(x)^2 + CM_d(x)^2 + C_2}$ (9)

$C_1$ and $C_2$ in equations 6 and 9 are constants used to increase the calculation stability.
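A corresponding MATLAB sketch for equations 7 to 9 computes the local RMS contrast from local first and second moments. The 11x11 window is an assumption (SSIM uses a comparable local window), and for simplicity the sketch uses the population form of the variance rather than the sampling-corrected form of equation 7.

function cs = contrast_similarity(imgR, imgD, C2)
% Sketch of eqs. (7)-(9): local RMS contrast maps and their similarity.
w   = fspecial('average', 11);    % assumed 11x11 local window
cmR = local_rms(double(imgR), w);
cmD = local_rms(double(imgD), w);
cs  = (2*cmR.*cmD + C2) ./ (cmR.^2 + cmD.^2 + C2);   % eq. (9)
end

function cm = local_rms(I, w)
mu = imfilter(I, w, 'replicate');                           % eq. (8): local mean
cm = sqrt(max(imfilter(I.^2, w, 'replicate') - mu.^2, 0));  % eq. (7), population form
end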

1.3 Standard Deviation Pooling

As discussed in the introduction, standard deviation pooling achieves very good performance and has thus been adopted by several successful methods. Jia et al. [11] experimented with several other pooling combinations and found that SD pooling gives the best correlation. The final quality score (QS) is calculated using the following equation:

$QS = \alpha\,\sigma_{VSS} + \beta\,\sigma_{CS}$ (10)

where $\alpha$ and $\beta$ are weighting factors setting the relative importance of $VSS$ and $CS$. The standard deviations in the above equation are defined as:

$\sigma_{VSS} = \left(\dfrac{1}{N}\sum_{x}\left(VSS(x) - \mu_{VSS}\right)^2\right)^{1/2}$ (11)
$\sigma_{CS} = \left(\dfrac{1}{N}\sum_{x}\left(CS(x) - \mu_{CS}\right)^2\right)^{1/2}$ (12)

where $\mu_{VSS}$ and $\mu_{CS}$ are the mean values of $VSS$ and $CS$ respectively and are given by:

$\mu_{VSS} = \dfrac{1}{N}\sum_{x} VSS(x)$ (13)
$\mu_{CS} = \dfrac{1}{N}\sum_{x} CS(x)$ (14)

1.4 Evaluation Metrics

The performance of an IQA method is usually measured by error measures such as the root mean square error and by several correlation coefficients. However, to apply a linear correlation, the two sets of scores being compared should be on the same scale and linearly related [7]. To ensure fairness, Sheikh et al. suggested applying a nonlinear mapping before the linear correlation measurements:

$Q = \beta_1\left(\dfrac{1}{2} - \dfrac{1}{1 + e^{\beta_2(\hat{Q} - \beta_3)}}\right) + \beta_4\hat{Q} + \beta_5$ (15)

where $\hat{Q}$ and $Q$ are the original and mapped objective scores respectively, and $\beta_1, \ldots, \beta_5$ are fitted parameters. The subjective scores are then used with these mapped scores to find the correlation coefficients.
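As an illustration, the logistic mapping of equation 15 can be fitted with base-MATLAB fminsearch by minimizing the squared error between the mapped objective scores and the subjective scores. The starting point b0 below is an assumption; q holds raw objective scores and mos the subjective scores, both as column vectors.

% Sketch of fitting eq. (15).
logistic = @(b, q) b(1)*(0.5 - 1./(1 + exp(b(2)*(q - b(3))))) ...
                   + b(4)*q + b(5);
sse = @(b) sum((mos - logistic(b, q)).^2);    % squared fitting error
b0  = [max(mos), 1, mean(q), 0, mean(mos)];   % assumed initial guess
b   = fminsearch(sse, b0);
qMapped = logistic(b, q);                     % mapped scores for eq. (16)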

One of the most widely adopted basic correlations is Pearson's linear correlation coefficient (PLCC), which is defined as follows:

$PLCC = \dfrac{\sum_{i}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sqrt{\sum_{i}\left(x_i - \bar{x}\right)^2 \sum_{i}\left(y_i - \bar{y}\right)^2}}$ (16)

where $x$ and $y$ are the vectors of objective and subjective scores respectively, and

$\bar{x} = \dfrac{1}{n}\sum_{i=1}^{n} x_i$ (17)
$\bar{y} = \dfrac{1}{n}\sum_{i=1}^{n} y_i$ (18)

In our case, the objective scores are actually the mapped scores from equation 15. To avoid the nonlinear mapping in equation 15, rank-order coefficients can be used instead. The most popular one, Spearman's rank-order correlation coefficient (SROCC), is defined as:

$SROCC = 1 - \dfrac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}$ (19)

where $d_i = rank(x_i) - rank(y_i)$, and the $rank(\cdot)$ function of a vector returns a vector whose $i$-th item contains the rank of the $i$-th item in the original vector.

Another popularly adopted rank-order coefficient is Kendall's rank-order correlation coefficient (KROCC), which is given below:

$KROCC = \dfrac{n_c - n_d}{\frac{1}{2}n(n-1)}$ (20)

where $n_c$ is the number of concordant pairs, which are consistently ordered between the objective and subjective scores, and $n_d$ is the number of discordant pairs.

The root mean square error (RMSE) is also commonly adopted and is defined as:

$RMSE = \left(\dfrac{1}{n}\sum_{i=1}^{n}\left(x_i - y_i\right)^2\right)^{1/2}$ (21)
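All four metrics are readily computed in MATLAB. A minimal sketch, assuming the Statistics and Machine Learning Toolbox for corr and score vectors x (objective, mapped) and y (subjective):

% Sketch of eqs. (16)-(21).
plcc  = corr(x(:), y(:), 'type', 'Pearson');    % eq. (16)
srocc = corr(x(:), y(:), 'type', 'Spearman');   % eq. (19)
krocc = corr(x(:), y(:), 'type', 'Kendall');    % eq. (20)
rmse  = sqrt(mean((x(:) - y(:)).^2));           % eq. (21)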

2 Proposed Center Emphasized Quality Assessment

Figure 12: Flow diagram of the proposed center emphasized approach.

The general flow diagram of our proposed method is presented in Figure 12. At first, the center parts of both the reference and distorted images are extracted. To do this, we split the image into $3 \times 3 = 9$ blocks as shown in Figure 11, and the block that resides in the middle both horizontally and vertically is taken as the center area. If the original image dimension is $M \times N$, then the corresponding dimension of the center block becomes

$M_c \times N_c = \left\lfloor\dfrac{M}{3}\right\rfloor \times \left\lfloor\dfrac{N}{3}\right\rfloor$ (22)

The center block is defined as a rectangular area identified by two corner points where,

$(x_1, y_1) = (M_c + 1,\; N_c + 1), \qquad (x_2, y_2) = (2M_c,\; 2N_c)$ (23)
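In MATLAB indexing, the extraction of the center block reads as below; the floor-based block size is our reading of equation 22 for dimensions not divisible by three, and is an assumption.

% Sketch of eqs. (22)-(23): extract the center block (Block05 in Figure 11).
[M, N] = size(img);
Mc = floor(M/3);
Nc = floor(N/3);                      % eq. (22)
center = img(Mc+1:2*Mc, Nc+1:2*Nc);   % eq. (23)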

After that, the saliency similarity maps for the full images and the mid images are found using equations 1 to 6 and are denoted as $VSS_f$ and $VSS_m$ respectively. Simultaneously, the contrast similarity map $CS_f$ for the full size is also obtained. However, as discussed before, we do not derive the CS map for the middle images.

Now, we raise the sensitivity of the center area within both maps. Let $VSS_c$ and $CS_c$ be the center areas of $VSS_f$ and $CS_f$ respectively. Then, the updated middle parts $VSS_c'$ and $CS_c'$ are determined as follows:

$VSS_c' = VSS_c \circ VSS_m$ (24)
$CS_c' = CS_c \circ CS_c$ (25)

where $\circ$ is the Hadamard (element-wise) product operation.

With the updated middle portions in place, we get the finalized maps $VSS'$ and $CS'$, and using equation 10 we calculate the score as:

$CEQI = \alpha\,\sigma_{VSS'} + \beta\,\sigma_{CS'}$ (26)
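Putting the pieces together, a minimal end-to-end sketch of the proposed center emphasis follows. It reuses the helper sketches above, so the constants C1, C2, alpha and beta and the filter choices remain illustrative assumptions rather than the released pcode.

% Sketch of eqs. (24)-(26): center-emphasized quality index.
vssF = srs_vss(imgR, imgD, C1);                % full-image VSS map
csF  = contrast_similarity(imgR, imgD, C2);    % full-image CS map
[M, N] = size(imgR);
Mc = floor(M/3);  Nc = floor(N/3);
r = Mc+1:2*Mc;  c = Nc+1:2*Nc;                 % center region, eq. (23)
vssM = srs_vss(imgR(r, c), imgD(r, c), C1);    % VSS of the mid images
vssF(r, c) = vssF(r, c) .* vssM;               % eq. (24): Hadamard product
csF(r, c)  = csF(r, c).^2;                     % eq. (25): element-wise square
CEQI = alpha*std(vssF(:), 1) + beta*std(csF(:), 1);   % eq. (26)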

3 Results and Analysis

Experiments are carried out on three popular benchmark databases for IQA research, TID2008 [22], CSIQ [14] and LIVE [26], and the proposed method is compared with 13 other state-of-the-art approaches, namely PSNR, SSIM [32], MS-SSIM [31], IW-SSIM [30], MAD [15], FSIM [35], GMSD [33], VSI [8], MCSD [28], VIF [24], NQM [6], SR-SIM [34] and VSP [11]. Basic information about the databases is given in Table 1 and the distortion information is recorded in Table 2.

Table 1: Basic information about the databases used for experiments.
Table 2: Description of the databases based on the types of distortions used
Table 3: Performance comparison of IQA methods on three databases
Table 4: Overall performance ranking of the compared IQA methods

For performance comparison, we use four commonly adopted metrics, Spearman’s rank-order correlation coefficient (SROCC), Kendall’s rank-order correlation coefficient (KROCC), Pearson’s linear correlation coefficient (PLCC), and the root mean square (RMSE) which are already discussed in section 1.4.

Table 3 shows the comparison of the four metrics among the IQA models for all three databases. The top three values for each metric are boldfaced and shaded light gray, with the top value colored blue, the second highest red, and the third black. In the case of RMSE, the coloring is reversed, i.e., the lowest value is colored blue and so on, because a lower RMSE implies a better method. For the biggest database, TID2008, our proposed method outperforms all other methods on all metrics; for the other two databases it achieves competitive performance. We calculated the weighted averages of SROCC, KROCC, PLCC and RMSE using the number of distorted images to find the overall performance, as proposed in [30]. The overall ranking based on these performances is shown in Table 4.

Table 5: Distortion-wise SROCC performance comparison of IQA methods on three databases

Table 5 shows the SROCC performance comparison for all distortion types; please refer to Table 2 for the description of the abbreviated names. We see that different methods perform better for different distortions, and this even varies from database to database. This happens because not all images are affected equally by a specific type of distortion; the effect depends on color, salient regions, and perhaps a combination of many other factors. Still, the distortion-wise comparison gives a good indication of whether an IQA method is biased toward some noise type. It can be seen that the proposed CEQI performs consistently well for all types of distortions and is not strongly biased toward any specific type, while retaining an average performance within the top three.

Figure 25: Predicted scores against MOS for different methods on the TID2008 database: (a) SSIM, (b) MS-SSIM, (c) IW-SSIM, (d) MAD, (e) FSIM, (f) GMSD, (g) VSI, (h) MCSD, (i) VIF, (j) SR-SIM, (k) VSP, (l) CEQI (proposed).

Figure 25 shows the scatter plots of the predicted scores of different IQA approaches against the MOS values on the TID2008 database. It shows that CEQI's prediction is consistent compared to the other methods while providing a better correlation. We do not include PSNR because its predictions are very inconsistent; NQM is also omitted for the same reason, although its performance is not as inconsistent as PSNR's.

Table 6: Running time comparison of IQA models

Although the prime consideration for an IQA model is the quality of its prediction, low computational cost is a desirable feature, especially for a real-time system. We evaluated the compared IQA models in MATLAB R2017b on a computer equipped with an Intel(R) Core(TM) i5-4670 CPU @ 3.40GHz and 16GB of memory. The MATLAB codes provided by the authors were used, and elapsed times were recorded using the traditional tic-toc functions. As expected, PSNR has the lowest computation time. Notably, gradient magnitude similarity deviation (GMSD) could process 263.05 images per second (IPS) with satisfactory performance (rank 4), as shown in Table 4. VIF shows very good performance and is the best performing IQA on the LIVE database, but it can process only 1.79 IPS on average, which makes it inappropriate for real-time systems or systems with low processing capability. In contrast, CEQI takes 15.25 milliseconds to process an image, giving it the capability of processing 65.51 images per second. This rate meets the needs of almost all kinds of real-time systems.

4 Conclusions

In this paper, we considered the center bias of the HVS and proposed CEQI, a full-reference image quality assessment method combining visual saliency and contrast. We give extra emphasis to the center part of the image so that any degradation within the center region has a stronger effect than degradation in other regions. The proposed approach was compared with other state-of-the-art IQA models and outperforms the competing methods in most cases. When compared on individual distortion types, the proposed method gives consistent scores, and its running time is suitable for real-time applications. The center emphasis makes the method more balanced and robust. We believe that this center emphasis can enhance the performance of other existing IQA models, including no-reference and reduced-reference approaches, and in our future work we will investigate these possibilities.

Acknowledgments

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00294, Service mobility support distributed cloud technology).

References

  • Alaei et al. [2015] Alireza Alaei, Donatello Conte, and Romain Raveaux. Document image quality assessment based on improved gradient magnitude similarity deviation. In Document Analysis and Recognition (ICDAR), 2015 13th International Conference on, pages 176–180. IEEE, 2015.
  • Bindemann [2010] Markus Bindemann. Scene and screen center bias early eye movements in scene viewing. Vision research, 50(23):2577–2587, 2010.
  • Chandler and Hemami [2007] Damon M Chandler and Sheila S Hemami. Vsnr: A wavelet-based visual signal-to-noise ratio for natural images. IEEE transactions on image processing, 16(9):2284–2298, 2007.
  • Chen et al. [2006] Guan-Hao Chen, Chun-Ling Yang, and Sheng-Li Xie. Gradient-based structural similarity for image quality assessment. In Image Processing, 2006 IEEE International Conference on, pages 2929–2932. IEEE, 2006.
  • Cong et al. [2018] Runmin Cong, Jianjun Lei, Huazhu Fu, Ming-Ming Cheng, Weisi Lin, and Qingming Huang. Review of visual saliency detection with comprehensive information. arXiv preprint arXiv:1803.03391, 2018.
  • Damera-Venkata et al. [2000] Niranjan Damera-Venkata, Thomas D Kite, Wilson S Geisler, Brian L Evans, and Alan C Bovik. Image quality assessment based on a degradation model. IEEE transactions on image processing, 9(4):636–650, 2000.
  • Ding [2018] Yong Ding. General framework of image quality assessment. In Visual Quality Assessment for Natural and Medical Image, pages 45–62. Springer, 2018.
  • Duan et al. [2011] Lijuan Duan, Chunpeng Wu, Jun Miao, Laiyun Qing, and Yu Fu. Visual saliency detection by spatially weighted dissimilarity. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 473–480. IEEE, 2011.
  • Girod [1991] B Girod. Psychovisual aspects of image processing: What’s wrong with mean squared error? In Multidimensional Signal Processing, 1991., Proceedings of the Seventh Workshop on, pages P–2. IEEE, 1991.
  • Hou and Zhang [2007] Xiaodi Hou and Liqing Zhang. Saliency detection: A spectral residual approach. In Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, pages 1–8. IEEE, 2007.
  • Jia et al. [2018] Huizhen Jia, Lu Zhang, and Tonghan Wang. Contrast and visual saliency similarity-induced index for assessing image quality. IEEE Access, 6:65885–65893, 2018.
  • Kovesi [1999] Peter Kovesi. Image features from phase congruency. Videre: Journal of computer vision research, 1(3):1–26, 1999.
  • Langford [1936] Roy C Langford. How people look at pictures, a study of the psychology of perception in art. 1936.
  • Larson and Chandler [2010a] Eric C Larson and DM Chandler. Categorical image quality (csiq) database, 2010a.
  • Larson and Chandler [2010b] Eric Cooper Larson and Damon Michael Chandler. Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 19(1):011006, 2010b.
  • Li et al. [2013] Ang Li, Xiaochun She, and Qizhi Sun. Color image quality assessment combining saliency and fsim. In Fifth International Conference on Digital Image Processing (ICDIP 2013), volume 8878, page 88780I. International Society for Optics and Photonics, 2013.
  • Liu and Laganière [2007] Zheng Liu and Robert Laganière. Phase congruence measurement for image similarity assessment. Pattern Recognition Letters, 28(1):166–172, 2007.
  • Ma and Zhang [2008] Qi Ma and Liming Zhang. Saliency-based image quality assessment criterion. In International Conference on Intelligent Computing, pages 1124–1133. Springer, 2008.
  • Mannan et al. [1997] SK Mannan, KH Ruddock, and DS Wooding. Fixation sequences made during visual examination of briefly presented 2d images. Spatial vision, 11(2):157–178, 1997.
  • Parkhurst et al. [2002] Derrick Parkhurst, Klinton Law, and Ernst Niebur. Modeling the role of salience in the allocation of overt visual attention. Vision research, 42(1):107–123, 2002.
  • Peli [1990] Eli Peli. Contrast in complex images. JOSA A, 7(10):2032–2040, 1990.
  • Ponomarenko et al. [2009] Nikolay Ponomarenko, Vladimir Lukin, Alexander Zelensky, Karen Egiazarian, Marco Carli, and Federica Battisti. Tid2008-a database for evaluation of full-reference visual quality assessment metrics. Advances of Modern Radioelectronics, 10(4):30–45, 2009.
  • Saha and Wu [2013] Ashirbani Saha and QM Jonathan Wu. Perceptual image quality assessment using phase deviation sensitive energy features. Signal Processing, 93(11):3182–3191, 2013.
  • Sheikh and Bovik [2005] Hamid R Sheikh and Alan C Bovik. A visual information fidelity approach to video quality assessment. In The First International Workshop on Video Processing and Quality Metrics for Consumer Electronics, pages 23–25, 2005.
  • Sheikh et al. [2005] Hamid R Sheikh, Alan C Bovik, and Gustavo De Veciana. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Transactions on image processing, 14(12):2117–2128, 2005.
  • Sheikh [2005] HR Sheikh. Live image quality assessment database release 2. http://live.ece.utexas.edu/research/quality, 2005.
  • Tatler [2007] Benjamin W Tatler. The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of vision, 7(14):4–4, 2007.
  • Wang et al. [2016] Tonghan Wang, Lu Zhang, Huizhen Jia, Baosheng Li, and Huazhong Shu. Multiscale contrast similarity deviation: An effective and efficient index for perceptual image quality assessment. Signal Processing: Image Communication, 45:1–9, 2016.
  • Wang and Bovik [2009] Zhou Wang and Alan C Bovik. Mean squared error: Love it or leave it? a new look at signal fidelity measures. IEEE signal processing magazine, 26(1):98–117, 2009.
  • Wang and Li [2011] Zhou Wang and Qiang Li. Information content weighting for perceptual image quality assessment. IEEE Transactions on Image Processing, 20(5):1185–1198, 2011.
  • Wang et al. [2003] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, pages 1398–1402. IEEE, 2003.
  • Wang et al. [2004] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004.
  • Xue et al. [2014] Wufeng Xue, Lei Zhang, Xuanqin Mou, and Alan C Bovik. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Transactions on Image Processing, 23(2):684–695, 2014.
  • Zhang and Li [2012] Lin Zhang and Hongyu Li. Sr-sim: A fast and high performance iqa index based on spectral residual. In Image Processing (ICIP), 2012 19th IEEE International Conference on, pages 1473–1476. IEEE, 2012.
  • Zhang et al. [2011] Lin Zhang, Lei Zhang, Xuanqin Mou, David Zhang, et al. Fsim: a feature similarity index for image quality assessment. IEEE transactions on Image Processing, 20(8):2378–2386, 2011.
  • Zhang et al. [2014] Lin Zhang, Ying Shen, and Hongyu Li. Vsi: A visual saliency-induced index for perceptual image quality assessment. IEEE Transactions on Image Processing, 23(10):4270–4281, 2014.
  • Zhu and Wang [2012] Jieying Zhu and Nengchao Wang. Image quality assessment by visual gradient similarity. IEEE Transactions on Image Processing, 21(3):919–933, 2012.