Quality Map Fusion for Adversarial Learning

10/24/2021
by Uche Osahor, et al.

Generative adversarial models that capture salient low-level features which convey visual information in correlation with the human visual system (HVS) still suffer from perceptible image degradations. The inability to convey such highly informative features can be attributed to mode collapse, convergence failure and vanishing gradients. In this paper, we improve image quality adversarially by introducing a novel quality map fusion technique that harnesses image features similar to the HVS and the perceptual properties of a deep convolutional neural network (DCNN). We extend the widely adopted l2 Wasserstein distance metric to other preferable quality norms derived from Banach spaces that capture richer image properties like structure, luminance, contrast and the naturalness of images. We also show that incorporating a perceptual attention mechanism (PAM) that extracts global feature embeddings from the network bottleneck with aggregated perceptual maps derived from standard image quality metrics translates to better image quality. We also demonstrate impressive performance compared to other methods.


1 Introduction

A representation of the human visual system (HVS) is necessary to establish a robust image quality metric, which is needed for computer vision applications [10.1117/12.135952, Mohamadi2020DeepBA]. Classical approaches considered hand-crafted strategies to mimic the properties of the HVS [Wang2004ImageQA, Xue2014GradientMS, Zhang2011FSIMAF, Wang2015AnOB] by implementing a stream of computational functions that are combined to identify the key perceptual properties of images. While these techniques played their role effectively, scaling them up has proven to be a daunting task, especially for applications with very large datasets. However, the introduction of neural networks has helped to improve the aforementioned task considerably [Isola2017ImagetoImageTW, Karras2018ProgressiveGO, Osahor2020QualityGS, Kancharla2019QualityAG, Aghdaie2021AttentionAW, Goodfellow2014GenerativeAN, Mostofa2020JointSRVDNetJS]. These deep networks consist of non-linear filters configured to extract key perceptual features from data within user-defined constraints.

In our work, we focus on Full-Reference Image Quality Assessment (FR-IQA) models [Zhang2012SRSIMAF, Liang2016ImageQA]. These models are mostly used to represent the HVS with the aim of deriving a quality measure for images by comparing the perceptual similarity between distorted images and their respective reference images. A standard FR-IQA model seeks to imitate the HVS by exploiting photographic computational algorithms that represent contrast sensitivity, visual masking, luminance, etc. A number of FR-IQA metrics have since been derived, which include the Structural Similarity Index Measure (SSIM) [Wang2004ImageQA], the Multi-Scale Structural Similarity Index Measure (MS-SSIM) [10.1007/978-3-319-13671-4_40], Visual Information Fidelity (VIF) [Han2013ANI], the Feature Similarity Index Measure (FSIM) [Zhang2011FSIMAF], the Mean Deviation Similarity Index (MDSI) [Nafchi2016MeanDS], etc.

In this paper, we present a novel approach for improving the quality of GAN-synthesised images by combining the benefits of established FR-IQA metrics and the features of a deep convolutional neural network (DCNN). We extend the performance of popular GAN-based baseline approaches by introducing a novel image quality map fusion network that computes the perceptual properties of images and fuses them with a perceptual attention mechanism, as shown in Figure 1. We also introduce novel quality loss functions derived via Banach spaces to boost image quality. Our technique shows impressive results compared to the state of the art. Our key contributions are as follows:

  • We introduce a new perceptual quality map fusion network that harnesses the perceptual qualities of computationally derived quality assessment metrics.

  • We propose a new norm implemented via the Banach Wasserstein GAN (BWGAN) instead of the popular l2 norm used with the Wasserstein metric.

  • We also propose a perceptual attention mechanism (PAM) that augments image features to boost the overall visual appeal of the synthesised images.

2 Related Work

The FR-IQA model tries to simulate the HVS characteristics with good performance measures [Pei2015ImageQA, Reisenhofer2018AHW, Ye2012UnsupervisedFL, Zhang2014VSIAV]. The success of FR-IQA can be attributed to two main factors: the deep-learning-based perceptual properties of the reference image and the hand-crafted features derived from statistical metrics that resemble the HVS. Hence, it becomes easier to build a system that minimises the difference between these two corresponding features. In order to effectively model the properties of the HVS, several related systems have been proposed. Zhang et al. [Zhang2011FSIMAF] proposed a similarity index metric which calculates phase congruency and gradient magnitude to represent the HVS, while [Xue2014GradientMS] implemented an efficient standard deviation pooling strategy which demonstrated that the gradient magnitude of an image still holds true as a technique for representing the HVS. [Nafchi2016MeanDS] adopted a novel deviation pooling technique to compute the quality score from gradient and chromaticity similarities as a measure of local structural distortions.

The Banach Wasserstein GAN (BWGAN) is a framework that makes use of arbitrary norms other than the popularly used l2 norm as the underlying metric of choice in adversarial training. Adler et al. [Adler2018BanachWG] translated the WGAN-GP model to Banach spaces, which have the capacity to utilize norms that capture desired image features like edges, texture, etc.

Figure 1: The quality encoding architecture. The structure shows the generator configuration coupled with a quality map fusion network. The domain discriminator (lower right) extracts features from which true/false predictions are made per pixel, and attribute classification is executed to ensure multi-domain adaptation; it also critiques the perceptual map to maintain consistency in an adversarial manner.

3 Our Approach

We present a single GAN model capable of implementing image-to-image synthesis. We combine the benefits of established FR-IQA metrics [Zhang2011FSIMAF, 10.1007/978-3-319-13671-4_40, Nafchi2016MeanDS] and the low-level salient features of a deep convolutional neural network (DCNN) to aid adversarial image synthesis aimed at producing perceptually appealing images. In our model, we introduce an attention schema that exploits the salient perceptual features in a channel-wise fashion together with the spatial map representation embeddings of standard FR-IQA metrics (SSIM, MDSI and FSIM). Our framework consists of five main components: a quality-aware generator network comprising an encoder and a decoder, which are coupled with a perceptual attention mechanism (PAM) for quality encoding and a perceptual quality map fusion network at the latent space, as shown in Figure 1. The discriminative network critiques images generated by the generator in an adversarial manner without compromising image quality or the perceptual consistency with the reference image. The perceptual quality map generator combines the core quality metric functions that capture the sensitive perceptual features of a given image, while the score regression network pools the images synthesised by the generator to estimate a reference quality score. Our overall objective consists of a Wasserstein gradient penalty, a Structural Similarity Index gradient penalty (SSIM-GP) and a Natural Image Quality Estimator (NIQE) term, as defined in Section 4.

3.1 Perceptual Attention Mechanism (PAM)

The attention mechanism augments perceptual features from a prior generator encoder network computed over the input images. The aim is to establish a convex combination of quality-enhanced condensed representations of the input image for real-time training. We begin by describing the channel attention in PAM, which is based on the CBAM module [Woo2018CBAMCB]. PAM involves two steps. First, per-channel summary statistics obtained from a 2-layer residual block are calculated to yield the global feature attention vector. Second, a multi-head network applies a non-linear multi-head attention transformation, which allows the model to jointly attend to information from different representation sub-spaces of the bottleneck [Vaswani2017AttentionIA]. The channel-based attention output, obtained through a softmax, is multiplied with the encoder output and processed by the residual block to produce the channel-based attention embeddings.
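The following is a minimal PyTorch sketch of the channel-attention path described above; the channel count, number of heads and layer choices are illustrative assumptions and not the authors' exact configuration.

```python
# A minimal PAM sketch: per-channel summaries from a residual block, multi-head
# attention over the bottleneck, and channel-wise re-weighting of the encoder output.
import torch
import torch.nn as nn

class PAM(nn.Module):
    def __init__(self, channels=256, heads=4):
        super().__init__()
        # 2-layer residual block used to compute per-channel summary statistics
        self.res = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.pool = nn.AdaptiveAvgPool2d(1)           # per-channel summary statistic
        self.mha = nn.MultiheadAttention(embed_dim=channels, num_heads=heads,
                                         batch_first=True)

    def forward(self, enc):                           # enc: encoder bottleneck, B x C x H x W
        feat = enc + self.res(enc)                    # residual refinement
        v = self.pool(feat).flatten(1).unsqueeze(1)   # B x 1 x C global attention vector
        att, _ = self.mha(v, v, v)                    # attend over representation sub-spaces
        gate = torch.softmax(att.squeeze(1), dim=-1)  # channel-wise attention weights
        out = enc * gate.unsqueeze(-1).unsqueeze(-1)  # re-weight the encoder output
        return self.res(out)                          # channel-based attention embeddings

x = torch.randn(2, 256, 16, 16)
print(PAM()(x).shape)  # torch.Size([2, 256, 16, 16])
```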

3.2 Perceptual Map Generator

We selected the FSIM [Zhang2011FSIMAF], MS-SSIM [10.1007/978-3-319-13671-4_40] and MDSI [Nafchi2016MeanDS] image quality metrics to generate similarity maps because the trio collectively capture key image characteristics that are similar to the HVS [Wang2004ImageQA, Nafchi2016MeanDS, 10.1007/978-3-319-13671-4_40, Zhang2011FSIMAF, Aghdaie2021DetectionOM], as shown in Figure 2. The FSIM metric captures luminance, contrast and structural information. For the MS-SSIM metric, we consider multiple scales of the synthesised image and its reference for contrast and structure, while the MDSI map is derived by extracting the gradient and chromaticity of the synthesised and reference images, respectively. We use an intensity coefficient to specify the intensity of the maps. The map fusion network is divided into three stages. First, we extract the feature similarity representations between the reference image and the generated images per iteration, where each similarity index map (MS-SSIM, FSIM and MDSI) is produced by its corresponding metric function. Second, the generated maps are concatenated and pre-processed by a two-layer MLP network to form a spatial-based perceptual map representation. At the last stage, the fused perceptual representation is computed as the expectation of the spatial features and the channel-based features, and is then summed with the output of the encoder. The resulting output, which is fed to the decoder, represents latent features that are optimized for better image quality.
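A minimal sketch of the fusion step is given below. A local SSIM map is used as a stand-in for the FSIM, MS-SSIM and MDSI maps (which would be produced by their respective metric implementations); the intensity coefficient, layer sizes and the simple averaging used to combine spatial and channel features are illustrative assumptions.

```python
# Quality-map fusion sketch: quality maps -> two-layer network -> combined with
# channel embeddings -> summed with the encoder output.
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_ssim_map(x, y, C1=0.01**2, C2=0.03**2, win=11):
    """Per-pixel SSIM map between images x and y (B x 1 x H x W in [0, 1])."""
    pad = win // 2
    mu_x, mu_y = F.avg_pool2d(x, win, 1, pad), F.avg_pool2d(y, win, 1, pad)
    sx = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    sy = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    sxy = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    return ((2 * mu_x * mu_y + C1) * (2 * sxy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (sx + sy + C2))

class MapFusion(nn.Module):
    def __init__(self, n_maps=3, channels=256, alpha=1.0):
        super().__init__()
        self.alpha = alpha                        # intensity coefficient for the maps
        self.mlp = nn.Sequential(                 # two-layer network over concatenated maps
            nn.Conv2d(n_maps, channels, 1), nn.ReLU(),
            nn.Conv2d(channels, channels, 1))

    def forward(self, maps, channel_emb, enc):
        # maps: list of B x 1 x h x w quality maps resized to the bottleneck resolution
        m = torch.cat([self.alpha * q for q in maps], dim=1)
        spatial = self.mlp(m)                     # spatial perceptual representation
        fused = 0.5 * (spatial + channel_emb)     # combine spatial and channel features
        return enc + fused                        # summed with encoder output -> decoder input

x, y = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
q = F.interpolate(local_ssim_map(x, y), size=(16, 16))
enc, ch = torch.randn(2, 256, 16, 16), torch.randn(2, 256, 16, 16)
print(MapFusion()([q, q, q], ch, enc).shape)      # torch.Size([2, 256, 16, 16])
```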

Figure 2: A spectrum of quality maps obtained at different intensities.

4 Banach Space Gradient Penalty

Quality assessment metrics for the distance between images have been limited to cost functions that take the form of l1 or l2 norms. However, issues like non-convexity and complications in gradient computations (vanishing gradients, exploding gradients, etc.) are some of the struggles experienced in formulating optimization problems. To mitigate the aforementioned computational shortcomings, the Wasserstein distance was introduced in [Arjovsky2017WassersteinG].

However, a wide variety of untapped metrics [10.1007/978-3-319-13671-4_40, Wang2004ImageQA] exist that can be used to compare and emphasize key features of interest. In this regard, we extend the Wasserstein distance beyond the popular WGAN with gradient penalty (WGAN-GP), which is constrained to the l2 norm, and instead adopt a more general space called the Banach space [Adler2018BanachWG]. Our technique, similar to [Arjovsky2017WassersteinG, Kancharla2019QualityAG, Adler2018BanachWG], shows that the characterisation of 1-Lipschitz functions via the norm of the differential can be extended from the l2 setting to an arbitrary Banach space B by treating the gradient as an element of the dual space B*. Such a loss function is given as:

\mathcal{L}_D = \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\big[D(\tilde{x})\big] - \mathbb{E}_{x \sim \mathbb{P}_r}\big[D(x)\big] + \lambda \, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}\Big[\Big(\tfrac{1}{\gamma}\,\big\|\partial D(\hat{x})\big\|_{B^*} - 1\Big)^{2}\Big]   (1)

where \gamma and \lambda are regularization parameters. These Banach space norms give room for specific image features such as texture, structure, contrast and luminance, which highlight the perceptual appeal of a human observer, as described in sections 4.1 and 4.2.
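As a concrete illustration of the Banach-space penalty in the spirit of Eq. (1), the sketch below measures the critic gradient in the dual norm of an L^p space (dual exponent q = p/(p-1)) and penalises its deviation from gamma; p, gamma, lam and the toy critic are illustrative choices, not the authors' exact setup.

```python
# Banach-space gradient penalty sketch with an L^p ground norm (dual norm L^q).
import torch

def banach_gradient_penalty(critic, real, fake, p=1.5, gamma=1.0, lam=10.0):
    q = p / (p - 1.0)                                      # dual exponent
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    dual_norm = grad.flatten(1).abs().pow(q).sum(dim=1).pow(1.0 / q)
    return lam * ((dual_norm / gamma - 1.0) ** 2).mean()

# usage with a toy critic
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
real, fake = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
print(banach_gradient_penalty(critic, real, fake))
```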

4.1 Structural Similarity (SSIM) index

The SSIM index measures the perceptual difference between two similar images. The local mean, variance and structure are computed to find a local quality score [Wang2004ImageQA]. The SSIM index captures changes in local mean, local variance and local structure between two images x and y, and the local scores are then averaged across the image to obtain the image quality score.

l_i = \frac{2\mu_{x,i}\mu_{y,i} + C_1}{\mu_{x,i}^2 + \mu_{y,i}^2 + C_1}, \quad c_i = \frac{2\sigma_{x,i}\sigma_{y,i} + C_2}{\sigma_{x,i}^2 + \sigma_{y,i}^2 + C_2}, \quad s_i = \frac{\sigma_{xy,i} + C_3}{\sigma_{x,i}\sigma_{y,i} + C_3}   (2)

\mathrm{SSIM}(x, y) = \frac{1}{N}\sum_{i=1}^{N} l_i \, c_i \, s_i   (3)

where x and y refer to the input and synthesised images, the subscript i is the pixel index, \mu and \sigma are the local mean and standard deviation, l_i, c_i and s_i are the local luminance, contrast and structure scores at pixel i, respectively, and C_1, C_2, C_3 are small stabilizing constants. Furthermore, since the SSIM is bounded, the Lipschitz constant can be imposed directly by introducing a gradient penalty regularization term given as:

\mathcal{L}_{\mathrm{SSIM\text{-}GP}} = \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}\Big[\big(\big\|\nabla_{\hat{x}} \, \mathrm{SSIM}(x, \hat{x})\big\|_2 - 1\big)^{2}\Big]   (4)

This makes the SSIM a good candidate for quality awareness which is beneficial for regularizing GANs. The complete mathematical properties are described in [Brunet2012OnTM].
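Below is a minimal, self-contained sketch of an SSIM-based gradient penalty in the spirit of Eq. (4): a differentiable global SSIM (a simplified whole-image version of Eqs. (2)-(3)) is evaluated between the reference and an interpolated sample, and the norm of its gradient is driven towards one. The constants, coefficient and global pooling are illustrative assumptions rather than the authors' exact SSIM-GP term.

```python
# SSIM gradient penalty sketch: penalise deviation of the SSIM gradient norm from 1.
import torch

def global_ssim(x, y, C1=0.01**2, C2=0.03**2):
    """Whole-image SSIM score per batch element (simplified, no local windows)."""
    mx, my = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
    vx, vy = x.var(dim=(1, 2, 3)), y.var(dim=(1, 2, 3))
    cov = ((x - mx[:, None, None, None]) * (y - my[:, None, None, None])).mean(dim=(1, 2, 3))
    return ((2 * mx * my + C1) * (2 * cov + C2)) / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def ssim_gradient_penalty(ref, syn, lam=10.0):
    eps = torch.rand(ref.size(0), 1, 1, 1, device=ref.device)
    x_hat = (eps * ref + (1 - eps) * syn).requires_grad_(True)
    grad = torch.autograd.grad(global_ssim(ref, x_hat).sum(), x_hat, create_graph=True)[0]
    return lam * ((grad.flatten(1).norm(dim=1) - 1.0) ** 2).mean()

ref, syn = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
print(ssim_gradient_penalty(ref, syn))
```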

Figure 3: A sample of synthesised images representing different datasets.

4.2 Natural Image Quality Estimator (NIQE)

The NIQE [Mittal2013MakingA] is an NR-IQA metric based on perceptually relevant spatial-domain Natural Scene Statistics (NSS) features extracted from local image patches, which capture the essential low-order statistics of natural images. The equation is given as:

\hat{I}(i) = \frac{I(i) - \mu(i)}{\sigma(i) + 1}   (5)

where i is the pixel index and \mu(i) and \sigma(i) are the local mean and standard deviation. The NIQE captures the naturalness of a pristine reference image by modelling a generalized Gaussian distribution (GGD) [Ruderman1994TheSO], and models the products of neighbouring pixel coefficients using an asymmetric GGD (AGGD). The parameters of both the GGD and AGGD are then modelled using a multivariate Gaussian (MVG) distribution [Moorthy2010StatisticsON]. The quality of the test image is measured in terms of the "distance" between its MVG parameters and those obtained from the pristine model. Finally, discriminator gradients computed for both the pristine reference and the synthesised images are used to compute the distance between the pair. The expression is given as:

D(\nu_1, \nu_2, \Sigma_1, \Sigma_2) = \sqrt{(\nu_1 - \nu_2)^{\top}\Big(\frac{\Sigma_1 + \Sigma_2}{2}\Big)^{-1}(\nu_1 - \nu_2)}   (6)

where \nu_1, \Sigma_1 and \nu_2, \Sigma_2 are the mean and covariance of the reference and synthesised images, respectively. In addition to the SSIM and NIQE metrics, we also used a 1-GP regularizer [Gulrajani2017ImprovedTO] designed to force the local statistics of the discriminator gradient to be as close as possible to those of real images. Our claim is that such a regularization strategy improves the visual quality of the generated images, especially for attributes like hair, age, skin colour, etc. We work within the WGAN-GP framework to demonstrate our method. The overall discriminator cost function includes the NIQE regularizer, the SSIM term and the 1-GP regularizer, defined as:

\mathcal{L}_{D} = \mathcal{L}_{\mathrm{adv}} + \mathcal{L}_{\mathrm{1\text{-}GP}} + \lambda_{s}\,\mathcal{L}_{\mathrm{SSIM\text{-}GP}} + \lambda_{n}\,\mathcal{L}_{\mathrm{NIQE}}   (7)

The full objective is given as:

\mathcal{L} = \mathcal{L}_{D} + \mathcal{L}_{G} + \lambda_{r}\,\mathcal{L}_{\mathrm{score}}   (8)

where the score regression term \mathcal{L}_{\mathrm{score}} penalises the difference between the score generated by the regression network and the ground-truth scores of the images. We use the \lambda coefficients to tune the objective functions and achieve better results.
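As an illustration of the MVG "distance" used by the NIQE term (Eq. (6)), the sketch below fits multivariate Gaussians to two sets of NSS-style feature vectors and evaluates the distance between them; the random features and their dimensionality are placeholders for the actual NIQE features.

```python
# NIQE-style distance between two multivariate Gaussian fits.
import numpy as np

def mvg_distance(mu1, sigma1, mu2, sigma2):
    diff = mu1 - mu2
    pooled = (sigma1 + sigma2) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

rng = np.random.default_rng(0)
f_ref = rng.normal(size=(500, 18))        # NSS feature vectors of reference patches
f_syn = rng.normal(size=(500, 18)) + 0.2  # NSS feature vectors of synthesised patches
d = mvg_distance(f_ref.mean(0), np.cov(f_ref, rowvar=False),
                 f_syn.mean(0), np.cov(f_syn, rowvar=False))
print(d)
```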

Figure 4: Statistical feature IQA metric values: (a) Entropy, (b) Contrast, (c) Homogeneity.

Figure 5: Spearman's rank correlation values at different layers of the network: (a) FairFace dataset, (b) CelebA dataset.

Figure 6: HOG similarity metrics (left) and synthesised image results for the FairFace dataset.

Figure 7: Randomly sampled images generated using a combination of our model baseline and different losses for WGAN and BWGAN (SSIM and NIQE) on the CIFAR-10 dataset. (a) Images synthesised using the baseline and just the WGAN gradient penalty loss. (b) Results improve when the SSIM loss function is added. (c) The best results are obtained when all loss functions are included.

Figure 8: Randomly sampled images generated using different models for the FairFace dataset. (a) Images synthesised using DCGAN. (b) WGAN performs better due to its gradient penalty approach. (c) Our model performs even better when the Banach losses are included.

5 Training Strategy

We trained our model using the Adam optimizer with momentum values set at β1 = 0.5 and β2 = 0.99, and we used a batch size of 8 for most experiments on CelebA [liu2015faceattributes], CelebA-HQ [Lee2019MaskGANTD] and FairFace [Krkkinen2019FairFaceFA], respectively. We used a learning rate of 0.0001 for the first 10 epochs, which then decayed linearly to 0 over the remaining epochs. We trained the entire model on three NVIDIA Titan X GPUs.
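A minimal sketch of this training configuration is shown below; the placeholder model and the total epoch count are illustrative assumptions.

```python
# Adam with beta1 = 0.5, beta2 = 0.99, lr = 1e-4 held for 10 epochs, then linear decay to 0.
import torch

model = torch.nn.Linear(10, 10)                          # placeholder for G / D
opt = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.99))

total_epochs, warm = 100, 10
def lr_lambda(epoch):                                    # constant, then linear decay to 0
    if epoch < warm:
        return 1.0
    return max(0.0, 1.0 - (epoch - warm) / float(total_epochs - warm))

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
for epoch in range(total_epochs):
    # forward/backward passes and opt.step() over batches of size 8 go here
    sched.step()
```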

5.1 Datasets

We evaluated the efficacy of our proposed technique on the following datasets. The CelebFaces Attributes (CelebA) dataset [liu2015faceattributes] contains 202,599 celebrity face images; we cropped the initial images to 178x178 and then resized them to 64x64. The CIFAR-10 dataset [Krizhevsky09learningmultiple] consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. The FairFace image dataset [Krkkinen2021FairFaceFA] contains 108,501 images, with an emphasis on balanced race composition across 7 race groups: White, African, Indian, East Asian, Southeast Asian, Middle East and Hispanic. For evaluations, we used the LIVE database [Sheikh2006ASE], which consists of 982 distorted images with 5 different distortions, and the TID2008 dataset [Reisenhofer2018AHW], which contains 25 reference images and a total of 1,700 distorted images. We also used the Edges-to-Shoes set of 50,000 training images from the UT Zappos50K dataset [Yu2014FineGrainedVC] and the Edges-to-Handbags set of 137,000 Amazon handbag images from [Zhu2016GenerativeVM], trained for 15 epochs with a batch size of 8.
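A minimal sketch of the CelebA preprocessing described above (centre-crop to 178x178, resize to 64x64, batch size 8), using torchvision; FakeData stands in for the actual CelebA image folder.

```python
# Crop-and-resize preprocessing pipeline with a batch size of 8.
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

tfm = transforms.Compose([
    transforms.CenterCrop(178),   # crop to 178x178
    transforms.Resize(64),        # resize to 64x64
    transforms.ToTensor(),
])
# FakeData is a stand-in; in practice the CelebA images would be loaded instead.
data = datasets.FakeData(size=64, image_size=(3, 218, 178), transform=tfm)
loader = DataLoader(data, batch_size=8, shuffle=True)
images, labels = next(iter(loader))
print(images.shape)               # torch.Size([8, 3, 64, 64])
```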

5.2 Evaluation

To evaluate the performance of the synthesised images, as shown in Figure 3, two key evaluation criteria were adopted in our paper: the Spearman's Rank Order Correlation Coefficient (SROCC) and the Linear Correlation Coefficient (LCC) [Forthofer1981]. SROCC is a measure of the monotonic relationship between the ground truth and the model prediction, while the LCC is a measure of the linear correlation between the ground truth and the model prediction. Tables 1 and 2 show the SROCC and LCC performance of the competing IQA methods for different distortion types, respectively. In general, our model performs competitively across most distortion types. Compared with BPSQM, our model shows an overall improvement of about 4.5% in dealing with the AGN, SCN, HFN, JPEG and MN distortions, as indicated in Table 1. For comparison with previous models, we computed three quantitative measures: the Inception Score (IS), the Fréchet Inception Distance (FID) and the Feature Similarity (FSIM) index. IS measures sample quality and diversity by finding the entropy of the predicted labels. The FID score measures the similarity between real and fake samples by fitting a multivariate Gaussian (MVG) model to an intermediate representation. The FSIM index computes quality estimates based on phase congruency as the primary feature and incorporates the gradient magnitude as the complementary feature for the real and fake samples, respectively. Table 3 shows the quantitative comparison of the GAN-metric performance for BPGAN [Wang2017BackPA], CAGAN [Yu2017TowardsRF], CGAN [CycleGAN2017], WGAN [Arjovsky2017WassersteinG], QAGAN [Kancharla2019QualityAG] and ours on the CelebA dataset. We also carried out pixel variation analysis on the synthesised images by using their second-order features, which are based on the gray level co-occurrence matrix (GLCM) [Haralick1979StatisticalAS]. We used the aforementioned technique to determine the Entropy, Homogeneity and Correlation of the synthesised images in comparison with state-of-the-art methods, as shown in Figure 4. Entropy is useful for assessing sharpness, while Homogeneity and Correlation are useful for evaluating the contrast of an image. Entropy and Correlation increase with image quality, whereas Homogeneity values decrease as image quality increases. From the Entropy plot in Figure 4(a), our model performs well, improving by over 3.5% compared to QAGAN and WGAN. The contrast level improves drastically for our approach compared to the other methods, which are closely matched within a tolerance of about 2%. We also observed that most models possess similar homogeneity values, except for our model and QAGAN, which show noticeably better values.
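The two correlation criteria can be computed directly with SciPy; the quality-score arrays below are illustrative placeholders for the ground-truth and predicted scores.

```python
# SROCC (monotonic agreement) and LCC (linear agreement) between scores.
import numpy as np
from scipy.stats import spearmanr, pearsonr

gt = np.array([0.91, 0.45, 0.78, 0.33, 0.60])     # ground-truth quality scores
pred = np.array([0.88, 0.50, 0.74, 0.30, 0.65])   # model-predicted scores

srocc, _ = spearmanr(gt, pred)
lcc, _ = pearsonr(gt, pred)
print(f"SROCC = {srocc:.3f}, LCC = {lcc:.3f}")
```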

| Method | AGN | ANMC | SCN | MN | HFN | IMN | QN | GB | DEN | JPEG | JP2K | JGTE | J2TE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GMSD [Xue2014GradientMS] | 0.911 | 0.888 | 0.914 | 0.747 | 0.919 | 0.683 | 0.857 | 0.911 | 0.966 | 0.954 | 0.983 | 0.852 | 0.873 |
| FSIMc [Zhang2011FSIMAF] | 0.910 | 0.864 | 0.890 | 0.863 | 0.921 | 0.736 | 0.865 | 0.949 | 0.964 | 0.945 | 0.977 | 0.878 | 0.884 |
| BLIINDSII [Yang2018BlindIQ] | 0.779 | 0.807 | 0.887 | 0.691 | 0.917 | 0.908 | 0.851 | 0.952 | 0.908 | 0.928 | 0.940 | 0.865 | 0.855 |
| DIIVINE [Moorthy2011BlindIQ] | 0.812 | 0.844 | 0.854 | 0.713 | 0.922 | 0.915 | 0.874 | 0.943 | 0.912 | 0.930 | 0.938 | 0.873 | 0.852 |
| BRISQUE [Mittal2012NoReferenceIQ] | 0.853 | 0.861 | 0.885 | 0.810 | 0.931 | 0.927 | 0.881 | 0.933 | 0.924 | 0.934 | 0.944 | 0.891 | 0.836 |
| NIQE [Mittal2013MakingA] | 0.786 | 0.832 | 0.903 | 0.835 | 0.931 | 0.913 | 0.893 | 0.953 | 0.917 | 0.943 | 0.956 | 0.862 | 0.827 |
| BPSQM [Pan2018BlindPS] | 0.881 | 0.801 | 0.935 | 0.786 | 0.938 | 0.933 | 0.920 | 0.937 | 0.914 | 0.943 | 0.967 | 0.829 | 0.644 |
| Ours | 0.936 | 0.878 | 0.961 | 0.939 | 0.948 | 0.892 | 0.915 | 0.898 | 0.878 | 0.955 | 0.987 | 0.836 | 0.779 |

Table 1: SROCC comparison on individual distortion types on the TID2008 databases.

| Method | JP2K | JPEG | WN | BLUR | FF | ALL |
|---|---|---|---|---|---|---|
| BRISQUE [Mittal2012NoReferenceIQ] | 0.923 | 0.973 | 0.985 | 0.951 | 0.903 | 0.942 |
| CORNIA [Ye2012UnsupervisedFL] | 0.951 | 0.965 | 0.987 | 0.968 | 0.917 | 0.935 |
| CNN [Kang2014ConvolutionalNN] | 0.953 | 0.981 | 0.984 | 0.953 | 0.933 | 0.953 |
| SOM [Zhang2015SOMSO] | 0.952 | 0.961 | 0.991 | 0.974 | 0.954 | 0.962 |
| BIECON [Kim2017FullyDB] | 0.965 | 0.987 | 0.970 | 0.945 | 0.931 | 0.962 |
| Ours | 0.975 | 0.986 | 0.994 | 0.988 | 0.960 | 0.982 |

Table 2: LCC evaluation on the LIVE database.

| Model | FID | FSIM | IS |
|---|---|---|---|
| BPGAN [Wang2017BackPA] | 86.10 | 69.13 | 0.87 |
| CGAN [CycleGAN2017] | 43.21 | 71.10 | 0.89 |
| CAGAN [Yu2017TowardsRF] | 36.16 | 71.33 | 0.90 |
| WGAN [Arjovsky2017WassersteinG] | 33.24 | 72.60 | 0.91 |
| QAGAN [Kancharla2019QualityAG] | 18.23 | 82.69 | 0.96 |
| Ours | 18.39 | 83.40 | 0.97 |

Table 3: GAN-metric performance on the CelebA dataset.

5.3 Ablation Study

Ablation studies on our loss functions were conducted to test model robustness for the CIFAR-10, FairFace and CelebA datasets, respectively. The Lagrange coefficients of the SSIM and NIQE losses were also varied empirically to check their effect on the perceptual appeal of the synthesised images. We infer that reducing the coefficients towards their lower limit weakens the discriminative power, which in turn reduces the quality of the images synthesised by the generator. We also conducted a Histogram of Oriented Gradients (HOG) similarity analysis with the Inception v3 model [Szegedy2016RethinkingTI] for the input and synthesised images on the FairFace dataset, in order to obtain layer-wise performance at specific iterations of our model baseline. Figure 6(a) shows the HOG similarity performance at different training iterations for our model compared to other quality metric techniques.

Our results show that our approach is closest to MDSI [Nafchi2016MeanDS], while RVSIM [Ye2012UnsupervisedFL], GMSD [Xue2014GradientMS], SRSIM [Zhang2012SRSIMAF] and FSIMc [Zhang2011FSIMAF] perform slightly below our model. The SROCC plot in Figure 5 depicts the rank correlation performance for both the FairFace and CelebA datasets; the values confirm that our model performs favourably compared to the other aforementioned techniques. Across iterations, we also observed decent image quality improvements at about 20k-30k iterations, as shown in Figure 6(b) for the FairFace dataset. Figures 7 and 8 show further results obtained from a combination of different loss functions and other competitive models, respectively.
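A minimal sketch of the HOG similarity check between an input image and its synthesised counterpart, using scikit-image HOG features and cosine similarity; the random images are placeholders for real data, and the cell/block sizes are illustrative.

```python
# HOG-feature cosine similarity between two images.
import numpy as np
from skimage.feature import hog

def hog_similarity(img_a, img_b):
    fa = hog(img_a, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    fb = hog(img_b, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-8))

rng = np.random.default_rng(0)
a = rng.random((64, 64))                                   # "input" image
b = np.clip(a + 0.05 * rng.standard_normal((64, 64)), 0, 1)  # "synthesised" image
print(hog_similarity(a, b))
```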

We computed the FID and IS scores of the synthesised images for the CelebA and CIFAR-10 datasets at resolutions of 64 x 64 and 32 x 32, respectively. Table 4 shows the performance of our model baseline (BL) for different combinations of the attention schemes (PAM and the map fusion module M) and the IQA losses (NIQE and SSIM). From Table 4, we see that including the map fusion module significantly boosts image quality, confirming that perceptual spatial salient maps are crucial in GAN models for better image quality [Yang2018BlindIQ, Saad2012BlindIQ, Johnson2016PerceptualLF].

Furthermore, we applied the PAM and map fusion attention modules to the StyleGAN2 [Karras2020AnalyzingAI] architecture. We also added the proposed Banach space norms (SSIM and NIQE) to compare the overall model performance with our model. In Table 5, we show the trade-offs of QAGAN [Kancharla2019QualityAG], StyleGAN2 [Karras2020AnalyzingAI] and our model baseline (BL), using different combinations of the standard IQA metrics discussed in sections 4.1 and 4.2. Our findings confirm that our approach is competitive with the state of the art. Most importantly, we see improved performance of our model at lower resolutions (32 x 32), which can be attributed to the attention schema employed. In Figure 9, we showcase the performance of QAGAN [Kancharla2019QualityAG] and our model on image synthesis for the CelebA dataset. Our results show that our model performs well overall; Table 5 gives a clearer representation of the performance levels.

| BL | PAM | M | NIQE | SSIM | FID (CIFAR-10) | IS (CIFAR-10) | FID (CelebA) | IS (CelebA) |
|---|---|---|---|---|---|---|---|---|
|  |  |  |  |  | 38.10 ± 0.12 | 8.20 ± 0.03 | 29.80 ± 0.10 | 0.86 ± 0.03 |
|  |  |  |  |  | 19.01 ± 0.10 | 8.01 ± 0.13 | 13.16 ± 0.02 | 0.88 ± 0.19 |
|  |  |  |  |  | 16.31 ± 0.21 | 8.00 ± 0.35 | 11.86 ± 0.07 | 0.89 ± 0.11 |
|  |  |  |  |  | 15.00 ± 0.20 | 7.46 ± 0.21 | 10.76 ± 0.43 | 0.90 ± 0.10 |
|  |  |  |  |  | 13.21 ± 0.10 | 7.80 ± 0.10 | 6.38 ± 0.39 | 0.96 ± 0.10 |
|  |  |  |  |  | 8.06 ± 0.22 | 7.48 ± 0.62 | 6.40 ± 0.71 | 0.97 ± 0.16 |

Table 4: Ablation study on the CelebA and CIFAR-10 datasets for our model baseline "BL" with combinations of the quality modules "PAM" and "M" (map fusion) and the losses "SSIM" and "NIQE".
Figure 9: Randomly sampled images for QAGAN [Kancharla2019QualityAG] (top row) and our model (bottom row) with different losses (SSIM and NIQE) on the CelebA dataset. (a) Images synthesised using the baseline of both models. (b) The SSIM loss function is added in both cases; we observe similar performance levels. (c) Our model performs better overall, with the best results when all loss functions are included.

| Model | FID, CIFAR-10 (32 x 32) | FID, CelebA (64 x 64) |
|---|---|---|
| QAGAN [Kancharla2019QualityAG] | 41.20 ± 0.25 | 10.03 ± 0.35 |
| QAGAN (SSIM) [Kancharla2019QualityAG] | 14.13 ± 0.32 | 6.44 ± 0.43 |
| QAGAN (NIQE) [Kancharla2019QualityAG] | 12.57 ± 0.11 | 6.40 ± 0.23 |
| QAGAN (SSIM + NIQE) [Kancharla2019QualityAG] | 10.01 ± 0.13 | 6.16 ± 0.05 |
| StyleGAN2 [Karras2020AnalyzingAI] | 37.11 ± 0.15 | 9.03 ± 0.25 |
| StyleGAN2 (SSIM) [Karras2020AnalyzingAI] | 13.14 ± 0.02 | 5.84 ± 0.13 |
| StyleGAN2 (NIQE) [Karras2020AnalyzingAI] | 11.17 ± 0.31 | 6.10 ± 0.11 |
| StyleGAN2 (SSIM + NIQE) [Karras2020AnalyzingAI] | 10.81 ± 0.13 | 6.18 ± 0.05 |
| BL | 37.80 ± 0.22 | 10.06 ± 0.43 |
| BL (SSIM) | 12.80 ± 0.12 | 6.86 ± 0.62 |
| BL (NIQE) | 10.20 ± 0.79 | 6.36 ± 0.44 |
| BL (SSIM + NIQE) | 9.76 ± 0.37 | 6.21 ± 0.36 |

Table 5: FID on the CelebA and CIFAR-10 datasets.

6 Conclusion

In this paper, we introduced a novel quality encoding protocol that harnesses image quality maps mimicking the HVS and the perceptual properties of a deep convolutional neural network (DCNN) to provide perceptually consistent features that translate to better image quality. We identified visually sensitive parameters and adapted a perceptual quality attention scheme that narrows these features down to a localised embedding, which incentivises perceptual representations over other features. The aim was to target the most relevant intrinsic features responsible for image texture, structure, contrast and luminance, which we use to guide the adversarial model towards high-quality image synthesis. We also introduced a critic model that monitors perceptual consistency for each image representation. We demonstrated state-of-the-art or comparable performance relative to other approaches.

References