Distorted Representation Space Characterization Through Backpropagated Gradients

08/27/2019 ∙ by Gukyeong Kwon, et al. ∙ Georgia Institute of Technology

In this paper, we utilize weight gradients from backpropagation to characterize the representation space learned by deep learning algorithms. We demonstrate the utility of such gradients in applications including perceptual image quality assessment and out-of-distribution classification. The applications are chosen to validate the effectiveness of gradients as features when the test image distribution is distorted relative to the train image distribution. In both applications, the proposed gradient-based features outperform activation features. In image quality assessment, the proposed approach is compared with other state-of-the-art approaches and is generally the top performing method on the TID2013 and MULTI-LIVE databases in terms of accuracy, consistency, linearity, and monotonic behavior. Finally, we analyze the effect of regularization on gradients using the CURE-TSR dataset for out-of-distribution classification.


1 Introduction

The recent ubiquitous deployment of deep learning algorithms across multiple vision applications can be attributed to the statistical commonalities present in all image data [3, 1]. Feedforward neural networks learn nonlinear transformations with the objective of obtaining useful representations of complex inputs. These representations make the subsequent applications easier. For instance, in image recognition, the authors in [8] successfully apply a linear classifier on a representation space that is obtained by multiple weight parameters spread across layers. The high accuracy is achieved because of the constancy of the train and test domain statistics, which allows the weights to transform the test images into the trained representation space. While the concept of domain transformation is not new [22], the advent of the backpropagation algorithm [18] has allowed representation spaces to be learned from large data. During training, backpropagation adjusts the weights between layers to minimize the difference between the obtained output and the desired output. The measure of adjustment is controlled by optimization procedures. Generally, regularized gradient-based optimization procedures are adopted to obtain the direction of weight adjustment. During testing, images are projected onto the learned representation space. If there is no change between the train and the test domains, the representation space is optimized to perform the task it was trained for.

Domain adaptation techniques have been proposed to bridge the distributional gap when there are differences between the train and test image statistics [14]. These techniques essentially re-train the weight parameters to adjust the representation space to include the patterned shift of the test domain. Once the re-training is done, the test images are again projected onto the re-trained weights to presumably lie on the optimized representation space. However, when there is not enough data in the test domain for the representation space to adapt, domain adaptation techniques cannot be applied. Similarly, when the domain shift is caused by various types and levels of distortions, such as noise, which do not share statistical commonalities with natural images, domain adaptation techniques may not perform as desired. Applications that satisfy these criteria include image quality assessment and out-of-distribution classification.

In this paper, we tackle the case where there is a considerable shift between the train and test domains because of distortions and where there is not enough test data for the weights to adapt to the test distribution. The contributions of this paper are threefold:

  • We introduce the concept of using directional information provided by gradients as a feature.

  • We use gradients as error projection spaces to quantify perceptual dissimilarity between pristine and distorted images and validate this quantification on image quality assessment.

  • We analyze the effect of regularization on gradients for out-of-distribution classification.

We use an autoencoder architecture to demonstrate the validity of gradients as features. In Section 2, we set up the autoencoder architecture. The proposed method is introduced in Section 3. The applications of the proposed method, along with the conducted experiments, are reported in Sections 4 and 5.

2 Background

Figure 1: Block diagram for the Variational Autoencoder (VAE) and the Sparse Autoencoder (SAE). The image is taken from the CURE-TSR dataset.

An autoencoder is an unsupervised learning network which learns a regularized representation of inputs to reconstruct them as its output [3]. The representation space of autoencoders can be constrained to have desirable properties in its latent space. An autoencoder constrained to be sparse in its latent space is called a Sparse Autoencoder (SAE) [13]. Similarly, an autoencoder which reconstructs input data while constraining the latent space to follow a known distribution, such as a Gaussian distribution, is called a Variational Autoencoder (VAE) [7]. Both SAEs and VAEs consist of the same basic architecture as shown in Fig. 1. The encoder, $f$, is parameterized to map inputs to the latent representation, $z$, and the decoder, $g$, learns to reconstruct inputs using the latent representation. That is, given input $x$,

$$z = f(x), \qquad \hat{x} = g(z), \qquad x \in \mathbb{R}^{H \times W \times C}, \tag{1}$$

where $H$, $W$, and $C$ are the height, width, and channel of the input image, respectively. Distinct notations are used to differentiate the SAE and the VAE in Fig. 1.

Training is carried out by minimizing the loss function $J$ defined as follows:

$$J = \mathcal{L}(x, \hat{x}) + \Omega, \tag{2}$$

where $\mathcal{L}(x, \hat{x})$ is a reconstruction error which measures the dissimilarity between the input and the reconstructed image, and $\Omega$ is a regularization term.

SAEs differ from VAEs in the loss function $J$. For SAEs, we enforce latent space sparsity by using the mean square error and elastic net regularization [36] for $\mathcal{L}$ and $\Omega$, respectively. The cost function for training our SAE is

$$J_{SAE} = \| x - \hat{x} \|_2^2 + \lambda_1 \left( \|W_e\|_1 + \|W_d\|_1 \right) + \lambda_2 \left( \|W_e\|_2^2 + \|W_d\|_2^2 \right), \tag{3}$$

where $W_e$ and $W_d$ are the weight parameters for the encoder and the decoder, respectively, and $\lambda_1$ and $\lambda_2$ are set as suggested by the authors in [12]. The loss function of VAEs is defined as

$$J_{VAE} = \mathcal{L}(x, \hat{x}) + \mathrm{KL}\left( q(z \mid x) \,\|\, p(z) \right), \tag{4}$$

where KL is the Kullback-Leibler divergence between two distributions and we assume $p(z) = \mathcal{N}(0, I)$. Thus, the KL divergence constrains the latent space of VAEs to be a Gaussian distribution. The first term in the loss corresponds to the reconstruction error, $\mathcal{L}$, and the second term is the latent constraint, $\Omega$, in Eq. (2). In this paper, we use the trained representation spaces of both SAEs and VAEs to validate our methodology.
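For concreteness, a minimal PyTorch sketch of the objectives in Eqs. (2)-(4) is given below; the layer sizes, the penalty weights, and all variable names are illustrative assumptions rather than the exact configuration used in our experiments.

```python
# Minimal sketch of the SAE / VAE objectives in Eqs. (2)-(4); sizes, weighting
# coefficients, and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    def __init__(self, in_dim=28 * 28 * 3, latent_dim=100, variational=False):
        super().__init__()
        self.variational = variational
        out_dim = 2 * latent_dim if variational else latent_dim  # VAE predicts (mu, log_var)
        self.encoder = nn.Linear(in_dim, out_dim)
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        h = self.encoder(x)
        if self.variational:
            mu, log_var = h.chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterization
            return self.decoder(z), (mu, log_var)
        return self.decoder(h), h

def sae_loss(model, x, l1=1e-4, l2=1e-4):
    """Eq. (3): mean-square reconstruction error + elastic-net penalty on the parameters."""
    x_hat, _ = model(x)
    rec = F.mse_loss(x_hat, x)
    params = torch.cat([p.flatten() for p in model.parameters()])
    return rec + l1 * params.abs().sum() + l2 * params.pow(2).sum()

def vae_loss(model, x):
    """Eq. (4): reconstruction error + KL divergence to a standard Gaussian prior."""
    x_hat, (mu, log_var) = model(x)
    rec = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + kl
```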

3 Gradient Interpretation and Generation

Figure 2: Geometric visualization for gradients from test images.
Figure 3: Block Diagram for gradient generation. The image is taken from the CURE-TSR dataset.

The latent representation spaces of SAEs and VAEs are low-dimensional manifolds embedded in a high-dimensional input space. The activation of the encoder is a projection of the image onto the learned manifold [1]. The generalizability of the autoencoder is ensured when test images are drawn from the same distribution as the training data and are projected in the vicinity of the learned manifold. However, the presence of distortions in test images causes their activations to move away from the representation manifold. Unless this movement is tangential to the manifold, the representations may not be accurate enough for the subsequent tasks. This limits the capability of activations from the encoder in characterizing the distortions in the input space. This is shown in Fig. 2, where two different test images have the same activation on the manifold.

To alleviate this problem, we propose to utilize the gradients of trained autoencoder to characterize distortions in the test images. The geometric interpretation of backpropagated weight gradients is explained in subsection 3.1, while the gradient generation methodology is described in subsection 3.2.

3.1 Geometric Interpretation

In Fig. 2, we visualize the geometric interpretation of gradients. For simplicity of explanation, we assume the manifold of reconstructed training images to be linear. As shown in the figure, the manifold of reconstructed training images is considered to be spanned by the weights of the encoder and the decoder. When the test images are variations of the training images, the autoencoder approximates their reconstructions by projecting them onto the learned manifold of reconstructed training images. The weight gradients can be calculated through backpropagation using the loss defined in Eq. (2). These gradients represent the changes to the reconstructed image manifold that are required to incorporate the test images. Therefore, we use these gradients to characterize the orthogonal distortions toward the test images. For instance, when two test images that share the same activation are projected onto different activations in the direction of the weight gradients, they become distinguishable in the gradient space. We note that the assumption of a linear manifold does not hold in most practical scenarios, but the geometric interpretation of gradients is still applicable to nonlinear manifolds.
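The following toy NumPy example illustrates this geometric argument under the same simplifying linear-manifold assumption; the tied, orthonormal weights and all dimensions are additional assumptions made only for this illustration, not the authors' model.

```python
# Toy illustration of the linear-manifold argument in Fig. 2 (assumed setup):
# a tied-weight linear autoencoder whose decoder columns span a 2-D manifold in R^4.
import numpy as np

rng = np.random.default_rng(0)
W = np.linalg.qr(rng.standard_normal((4, 2)))[0]   # orthonormal weights; manifold = col(W)

def reconstruct(x):
    return W @ (W.T @ x)                            # orthogonal projection onto the manifold

x_clean = W @ np.array([1.0, 2.0])                  # a point on the manifold
off = rng.standard_normal(4)
off -= W @ (W.T @ off)                              # component orthogonal to the manifold
x_distorted = x_clean + off                         # distorted image: same projection, off-manifold

# Both inputs land on the same activation / reconstruction ...
assert np.allclose(reconstruct(x_clean), reconstruct(x_distorted))

# ... but the weight gradient of the reconstruction loss differs: for
# L = ||x - W W^T x||^2, dL/dW contains the off-manifold residual (x - x_hat),
# so the gradient captures exactly the distortion that the activation misses.
def weight_gradient(x):
    z = W.T @ x
    residual = x - W @ z
    return -2.0 * (np.outer(residual, z) + np.outer(x, residual @ W))

print(np.linalg.norm(weight_gradient(x_clean)))      # ~0 for the on-manifold input
print(np.linalg.norm(weight_gradient(x_distorted)))  # non-zero: points toward the test image
```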

3.2 Gradient Generation Framework

Fig. 3 shows the block diagram used to generate the required gradients. A test image is passed through a trained network as the input. The feedforward loss from Eq. (2) is calculated. This loss is a sum of the reconstruction loss, $\mathcal{L}$, and the regularization loss, $\Omega$. In the loss function, the reconstruction error and the regularization serve different roles during optimization. Therefore, gradients backpropagated from both terms characterize different aspects of the distortions in the test image. Hence, we extract and analyze them separately. Both $\mathcal{L}$ and $\Omega$ are differentiated with respect to the weight parameters of the encoder and the decoder, $W_e$ and $W_d$. These four gradients, $\partial\mathcal{L}/\partial W_e$, $\partial\mathcal{L}/\partial W_d$, $\partial\Omega/\partial W_e$, and $\partial\Omega/\partial W_d$, are the generated outputs. Note that the gradient of the total loss can be calculated as the sum of the individual components. In Section 4, we show how these generated gradients can be used as features for distorted images.
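A hedged PyTorch sketch of this gradient-generation step is shown below, reusing the VAE objective from the sketch in Section 2; the helper name and the zero-filling of gradients for unreachable weights are illustrative choices, not part of the original framework.

```python
# Sketch of the gradient generation in Fig. 3, assuming the VAE `Autoencoder` sketch above
# (any model exposing `.encoder` / `.decoder` submodules works). Names are assumptions.
import torch
import torch.nn.functional as F

def generate_gradient_features(model, x):
    """Backpropagate L and Omega separately and return the four flattened weight gradients:
    dL/dW_e, dL/dW_d, dOmega/dW_e, dOmega/dW_d."""
    x_hat, (mu, log_var) = model(x)
    loss_rec = F.mse_loss(x_hat, x, reduction="sum")
    loss_reg = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())  # KL regularizer

    enc_params = list(model.encoder.parameters())
    dec_params = list(model.decoder.parameters())

    def flat_grad(loss, params):
        grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
        # A loss term may have no path to some weights (e.g. KL vs. the decoder); use zeros there.
        return torch.cat([torch.zeros_like(p).flatten() if g is None else g.flatten()
                          for g, p in zip(grads, params)])

    # The weights themselves are never updated; the gradients are used only as features.
    return {"rec_enc": flat_grad(loss_rec, enc_params),
            "rec_dec": flat_grad(loss_rec, dec_params),
            "reg_enc": flat_grad(loss_reg, enc_params),
            "reg_dec": flat_grad(loss_reg, dec_params)}
```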

4 Applications and Experiments

Figure 4: Block diagram for image quality assessment.

Gradients have traditionally been used for providing visual explanations for the decisions made by deep networks [21]. We are not aware of any other work that utilizes gradient information to characterize the directional change of the learned representation space at test time. Note that the obtained weight gradients are not used to adjust the network parameters. Rather, they are used to characterize the distortions present in the test domains. We validate the effectiveness of this characterization using two applications: out-of-distribution classification (OOD-C) and image quality assessment (IQA).

The weight gradients can be used in two ways. The first approach is to use gradients as error direction indicators by projecting images with complex distortions onto the gradient space. The complex distortions can either be combinations of multiple distortions, such as the distortions generated in the MULTI-LIVE dataset [5], or distortions peculiar to the human visual system (HVS), such as the ones presented in the TID2013 dataset [16]. Hence, this approach of projecting onto gradients is explored in the application of IQA. The second approach is to directly use gradients as feature vectors. The assumption here is that the directional change caused by distortions is unique and becomes a characteristic of the distortion. This approach is explored for the application of OOD-C.

We also analyze the effect of regularization on the characterization of distortions using gradients. We study the gradients backpropagated from the regularization and reconstruction errors in the last layers of the encoder and the decoder. Both these layers serve as the genesis layers for gradients during backpropagation. In particular, we conduct this analysis using VAEs in the application of OOD-C with diverse distortion types.

4.1 Image Quality Assessment

Image quality assessment is a classical field of image processing whose goal is to objectively estimate the perceptual quality of a degraded image. Traditionally, hand-crafted features based on natural scene statistics are extracted from original and distorted images. These features are then mapped to subjective quality scores [11, 19]. Recently, end-to-end deep learning techniques [6, 29] that handle both feature extraction and mapping have been proposed. The authors in [29, 17] project both original and distorted images onto the same generically trained SAE-based representation space to quantify the perceptual difference. A better performance compared to the results in [29] would therefore validate the usage of gradient projections. To facilitate an unbiased comparison, we follow the same preprocessing and training steps as described in [29]. The block diagram of the proposed method during testing is shown in Fig. 4. Both original and distorted images are projected onto the gradient space of the decoder, the nonlinear layer is inverted, and the Spearman correlation between the two projections is calculated to estimate the quality of the image.
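A minimal sketch of this test-time pipeline is given below; the patch layout, the assumed tanh nonlinearity (and its inversion), and the function names are illustrative assumptions rather than the exact implementation of [29] or of our method.

```python
# Sketch of the IQA pipeline in Fig. 4, following the UNIQUE-style comparison in [29] but
# with decoder weight gradients as the projection space. The inverse nonlinearity is assumed.
import numpy as np
from scipy.stats import spearmanr

def quality_score(ref_patches, dist_patches, grad_dec, inv_nonlinearity=np.arctanh):
    """ref_patches, dist_patches: (num_patches, patch_dim) arrays of preprocessed patches.
    grad_dec: (patch_dim, num_features) decoder weight-gradient matrix used as a basis."""
    # Project both images onto the gradient space of the decoder.
    ref_proj = ref_patches @ grad_dec
    dist_proj = dist_patches @ grad_dec

    # Invert the (assumed tanh) nonlinearity, clipping to its open domain.
    ref_proj = inv_nonlinearity(np.clip(ref_proj, -0.999, 0.999))
    dist_proj = inv_nonlinearity(np.clip(dist_proj, -0.999, 0.999))

    # Spearman correlation between the two projections estimates perceptual quality:
    # close to 1 for near-identical images, lower as distortion grows.
    rho, _ = spearmanr(ref_proj.flatten(), dist_proj.flatten())
    return rho
```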

4.2 Out-Of-Distribution Classification

Figure 5: Block diagram for out-of-distribution classification.

Out-of-distribution detection is an emerging research topic to ensure AI safety. The authors in [9] propose to use temperature scaling and pre-processing of images to detect OOD images. Zong et al. [35] utilize the activations and reconstruction error features of an autoencoder to perform OOD detection. In [2], a VAE with a Gaussian mixture model and a sample energy-based method are utilized to detect OOD samples. We use the Challenging Unreal and Real Environments for Traffic Sign Recognition (CURE-TSR) dataset [28, 25, 27], which contains traffic sign images with different challenging conditions and challenge levels, for out-of-distribution classification. The block diagram of the proposed method is shown in Fig. 5. We train the VAE using challenge-free training images and generate gradients using challenge-free training images and Gaussian blur training images. The generated gradients are used to classify challenge-free test images (in-distribution) and challenge test images (out-of-distribution) whose challenge types are Decolorization (DE), Codec Error (CE), Noise (NO), Lens Blur (LB), Dirty Lens (DL), and Rain (RA). We compare the performance of classifiers trained with gradient features and with activations from the encoder. We also analyze the gradient features from the reconstruction error and the regularization in classifying different challenge images.
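The sketch below outlines this setup on top of the gradient-generation sketch from Section 3.2; the particular combination of gradient components and the logistic-regression classifier are assumptions for illustration, not necessarily the configuration used to produce Table 2.

```python
# Sketch of the OOD classification setup in Fig. 5, building on
# `generate_gradient_features` above. Feature choice and classifier are assumptions.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

def gradient_feature_matrix(model, images):
    """images: iterable of flattened image tensors with shape (1, in_dim)."""
    rows = []
    for x in images:
        g = generate_gradient_features(model, x)
        # Example feature: concatenated reconstruction and regularization gradients.
        rows.append(torch.cat([g["rec_dec"], g["reg_enc"]]).detach().cpu().numpy())
    return np.stack(rows)

def fit_ood_classifier(model, challenge_free_images, gaussian_blur_images):
    """Label 0 = in-distribution (challenge-free), 1 = out-of-distribution (Gaussian blur)."""
    X = np.concatenate([gradient_feature_matrix(model, challenge_free_images),
                        gradient_feature_matrix(model, gaussian_blur_images)])
    y = np.concatenate([np.zeros(len(challenge_free_images)),
                        np.ones(len(gaussian_blur_images))])
    return LogisticRegression(max_iter=1000).fit(X, y)

# At test time, the same classifier separates challenge-free test images from images with
# unseen challenge types (DE, CE, NO, LB, DL, RA):
# y_pred = clf.predict(gradient_feature_matrix(model, test_images))
```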

5 Results

Methods (columns, left to right): PSNR-HA [15], PSNR-HMA [15], SSIM [30], MS-SSIM [32], CW-SSIM [20], IW-SSIM [31], SR-SIM [33], FSIM [34], FSIMc [34], BRISQUE [10], BIQI [11], BLIINDS2 [19], PerSIM [23], CSV [24], UNIQUE [29], COHERENSI [26], SUMMER [26], Proposed. Rows labeled "Significance" report the statistical significance of each of the 17 compared methods with respect to the proposed method (-1: inferior, 0: similar, 1: superior).

Outlier Ratio (OR)
MULTI 0.013 0.009 0.016 0.013 0.093 0.013 0.000 0.018 0.016 0.067 0.024 0.078 0.004 0.000 0.000 0.031 0.000 0.000
TID13 0.615 0.670 0.734 0.743 0.856 0.701 0.632 0.742 0.728 0.851 0.856 0.852 0.655 0.687 0.640 0.833 0.620 0.620

Root Mean Square Error (RMSE)
MULTI 11.320 10.785 11.024 11.275 18.862 10.049 8.686 10.866 10.794 15.058 12.744 17.419 9.898 9.895 9.258 14.806 8.212 7.943
TID13 0.652 0.697 0.762 0.702 1.207 0.688 0.619 0.710 0.687 1.100 1.108 1.092 0.643 0.647 0.615 1.049 0.630 0.596

Pearson Linear Correlation Coefficient (PLCC)
MULTI 0.801 0.821 0.813 0.803 0.380 0.847 0.888 0.818 0.821 0.605 0.739 0.389 0.852 0.852 0.872 0.622 0.901 0.908
Significance -1 -1 -1 -1 -1 -1 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 0
TID13 0.851 0.827 0.789 0.830 0.227 0.832 0.866 0.820 0.832 0.461 0.449 0.473 0.855 0.853 0.869 0.533 0.861 0.877
Significance -1 -1 -1 -1 -1 -1 0 -1 -1 -1 -1 -1 -1 -1 0 -1 -1

Spearman's Rank Correlation Coefficient (SRCC)
MULTI 0.715 0.743 0.860 0.836 0.631 0.884 0.867 0.864 0.867 0.598 0.611 0.386 0.818 0.849 0.867 0.554 0.884 0.887
Significance -1 -1 0 -1 -1 0 0 0 0 -1 -1 -1 -1 -1 0 -1 0
TID13 0.847 0.817 0.742 0.786 0.563 0.778 0.807 0.802 0.851 0.414 0.393 0.396 0.854 0.846 0.860 0.649 0.856 0.865
Significance -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 0 -1 0 -1 0

Kendall's Rank Correlation Coefficient (KRCC)
MULTI 0.532 0.559 0.669 0.644 0.457 0.702 0.678 0.673 0.677 0.420 0.440 0.268 0.624 0.655 0.679 0.399 0.698 0.702
Significance -1 -1 0 0 -1 0 0 0 0 -1 -1 -1 -1 0 0 -1 0
TID13 0.666 0.630 0.559 0.605 0.404 0.598 0.641 0.629 0.667 0.286 0.270 0.277 0.678 0.654 0.667 0.474 0.667 0.677
Significance 0 -1 -1 -1 -1 -1 -1 -1 0 -1 -1 -1 0 0 0 -1 0

Table 1: Overall performance of image quality estimators.
Accuracy (%)

Non-Blur Types   DE      CE      NO
VAE-A            56.51   62.01   84.72
VAE-R            70.24   60.64   68.59
VAE-L            56.18   63.94   83.04
Proposed         62.67   64.76   87.57

Blur Types       LB      DL      RA
VAE-A            93.49   93.70   94.10
VAE-R            89.08   90.83   92.57
VAE-L            94.51   94.08   94.74
Proposed         95.26   95.71   95.85

Table 2: Out-of-distribution classification accuracy.

In this section, we show the advantages of the gradient analysis framework on the visual tasks detailed in Section 4: perceptual image quality assessment and out-of-distribution classification.

5.1 Image Quality Assessment

The proposed method detailed in Section 4.1 is compared against other state-of-the-art IQA approaches in Table 1. The performance of the quality estimators is validated using root mean square error (accuracy), outlier ratio (consistency), Pearson correlation (linearity), and Spearman correlation (monotonic behavior). In Table 1, higher Spearman and Pearson correlations, and lower RMSE and OR are desired. Statistical significance between correlation coefficients is measured with the formulations suggested in ITU-T Rec. P.1401 [4]. In the performance comparison, we use full-reference methods based on perceptually-extended fidelity [15], structural similarity [30, 32], spectral similarity [34], perceptual similarity [23, 24], SAE based activations [29], error spectrum analysis [26] and no-reference methods based on natural scene statistics [10, 11, 19].
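For reference, the sketch below computes these evaluation metrics; the outlier-ratio threshold (two standard deviations of the subjective scores) is a common convention and is an assumption here, since the exact evaluation protocol follows ITU-T Rec. P.1401 [4].

```python
# Sketch of the evaluation metrics reported in Table 1; the OR threshold is assumed.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def iqa_metrics(predicted, subjective, subjective_std=None):
    predicted, subjective = np.asarray(predicted), np.asarray(subjective)
    rmse = np.sqrt(np.mean((predicted - subjective) ** 2))   # accuracy
    plcc, _ = pearsonr(predicted, subjective)                # linearity
    srcc, _ = spearmanr(predicted, subjective)               # monotonic behavior
    krcc, _ = kendalltau(predicted, subjective)
    out = {"RMSE": rmse, "PLCC": plcc, "SRCC": srcc, "KRCC": krcc}
    if subjective_std is not None:                           # consistency (outlier ratio)
        out["OR"] = np.mean(np.abs(predicted - subjective) > 2 * np.asarray(subjective_std))
    return out
```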

In each evaluation metric and for both databases, we highlight the results of the two best performing methods. Statistical significance results are reported by comparing the correlation values of each method against those of the proposed method. A 0 corresponds to statistically similar performance, a -1 means the compared method is statistically inferior, and a 1 indicates the compared method is statistically superior. None of the compared methods are statistically superior to the proposed method, while all of them are statistically inferior to the proposed method in at least one evaluation metric. Note that the proposed gradient-based method outperforms the activation-based SAE method in [29] in all categories and in both databases. This comparison is an indicator that gradient-based projections provide a better characterization of distorted representation spaces compared to activations.

5.2 Out-Of-Distribution Classification

The results of the proposed gradient-based method detailed in Section 4.2 for out-of-distribution classification are shown in Table 2. Classification accuracy is used as the performance metric to compare results. VAE-A is the method in which the activations of test images are used as features to train the classifier. VAE-R and VAE-L refer to the methods that utilize the regularization and reconstruction loss gradients as features, respectively. Proposed is the method that utilizes both regularization and reconstruction gradients. The out-of-distribution challenge types are divided into two categories: spatial blur vs. others. The spatial blur category includes Lens Blur, Dirty Lens, and Rain. These challenges decrease the higher frequency components in the test images and hence are categorized together. The other challenge category includes Decolorization, Codec Error, and Noise. The method with the highest classification accuracy is highlighted.

Table 2 demonstrates that gradient features from the VAE are better indicators of being outside the trained representation space than activations taken from the same VAE. Moreover, the classification accuracies based on different gradient features are similar for challenges in the blur category, while their accuracy varies significantly among challenges in the other category. This is because the directional information provided by the reconstruction loss (VAE-L) and the regularization loss (VAE-R) is equally haphazard for blurred images. This is in contrast to in-distribution (challenge-free) images, which all have similar directional gradients. Hence, it is easy to classify between in-distribution and out-of-distribution images using any of the gradients. On the other hand, in the non-blur category, there is considerable variation in accuracy across different gradient features, which merits further investigation.

6 Conclusion

In this paper, we proposed a framework to analyze the backpropagated gradients of a representation space and showed that they are effective directional measures for analyzing distorted images. We motivated the use of gradients with geometric interpretations and proposed a methodology to extract gradients. We conducted experiments that demonstrated the use of gradients both as features and as error projection spaces in two applications. We also analyzed the effect of regularization on the proposed method. Finally, in both applications, the proposed method using gradients outperformed features obtained from activations in every measure.

References

  • [1] Y. Bengio, A. Courville, and P. Vincent (2013) Representation learning: a review and new perspectives. IEEE Trans. Patt. Analy. Mach. Intel. 35 (8), pp. 1798–1828. Cited by: §1, §3.
  • [2] Y. Fan, G. Wen, D. Li, S. Qiu, and M. D. Levine (2018) Video anomaly detection and localization via gaussian mixture fully convolutional variational autoencoder. arXiv:1805.11223. Cited by: §4.2.
  • [3] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT Press. Note: www.deeplearningbook.org Cited by: §1, §2.
  • [4] ITU-T (2012) P.1401: methods, metrics and procedures for statistical evaluation, qualification and comparison of objective quality prediction models. Technical report ITU Telecom. Stand. Sector. Cited by: §5.1.
  • [5] D. Jayaraman, A. Mittal, A. K. Moorthy, and A. C. Bovik (2012) Objective quality assessment of multiply distorted images. In Asilomar Conf. Sig. Syst. Comp., pp. 1693–1697. Cited by: §4.
  • [6] L. Kang, P. Ye, Y. Li, and D. Doermann (2014) Convolutional neural networks for no-reference image quality assessment. In IEEE Conf. Comp. Vis. Patt. Recog., pp. 1733–1740. Cited by: §4.1.
  • [7] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv:1312.6114. Cited by: §2.
  • [8] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Neur. Info. Proc. Syst., pp. 1097–1105. Cited by: §1.
  • [9] S. Liang, Y. Li, and R. Srikant (2017) Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv:1706.02690. Cited by: §4.2.
  • [10] A. Mittal, A. K. Moorthy, and A. C. Bovik (2012) No-reference image quality assessment in the spatial domain. IEEE Trans. Image Proc. 21 (12), pp. 4695–4708. Cited by: §5.1, Table 1.
  • [11] A. K. Moorthy and A. C. Bovik (2010) A two-step framework for constructing blind image quality indices. IEEE Sig. Proc. Let. 17 (5), pp. 513–516. Cited by: §4.1, §5.1, Table 1.
  • [12] A. Ng, J. Ngiam, C. Y. Foo, Y. Mai, and C. S. (2012) UFLDL tutorial. Cited by: §2.
  • [13] A. Ng (2011) Sparse autoencoder. CS294A Lecture notes 72 (2011), pp. 1–19. Cited by: §2.
  • [14] V. M. Patel, R. Gopalan, R. Li, and R. Chellappa (2015) Visual domain adaptation: a survey of recent advances. IEEE Sig. Proc. Mag. 32 (3), pp. 53–69. Cited by: §1.
  • [15] N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, and M. Carli (2011) Modified image visual quality metrics for contrast change and mean shift accounting. In Proc. CADSM, pp. 305–311. Cited by: §5.1, Table 1.
  • [16] N. Ponomarenko, L. Jin, O. Ieremeiev, V. Lukin, K. Egiazarian, J. Astola, B. Vozel, K. Chehdi, M. Carli, F. Battisti, et al. (2015) Image database tid2013: peculiarities, results and perspectives. Sig. Proc.: Imag. Comm. 30, pp. 57–77. Cited by: §4.
  • [17] M. Prabhushankar, D. Temel, and G. AlRegib (2017) MS-UNIQUE: multi-model and sharpness-weighted unsupervised image quality estimation. Elect. Imag. 2017 (12), pp. 30–35. Cited by: §4.1.
  • [18] D. E. Rumelhart, G. E. Hinton, and R. J. Williams (1986) Learning representations by back-propagating errors. Nature 323 (6088), pp. 533. Cited by: §1.
  • [19] M. A. Saad, A. C. Bovik, and C. Charrier (2012) Blind image quality assessment: a natural scene statistics approach in the dct domain. IEEE Trans. on Image Proc. 21 (8), pp. 3339–3352. Cited by: §4.1, §5.1, Table 1.
  • [20] M. P. Sampat, Z. Wang, S. Gupta, A. C. Bovik, and M. K. Markey (2009) Complex wavelet structural similarity: a new image similarity index. IEEE Trans. Image Proc. 18 (11), pp. 2385–2401. Cited by: Table 1.
  • [21] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In IEEE Int. Conf. Comp. Vis., pp. 618–626. Cited by: §4.
  • [22] T. G. Stockham (1972) Image processing in the context of a visual model. Proc. IEEE 60 (7), pp. 828–842. Cited by: §1.
  • [23] D. Temel and G. AlRegib (2015) PerSIM: multi-resolution image quality assessment in the perceptually uniform color domain. In IEEE Int. Conf. Image Proc., pp. 1682–1686. Cited by: §5.1, Table 1.
  • [24] D. Temel and G. AlRegib (2016) CSV: image quality assessment based on color, structure, and visual system. Sig. Proc.: Image Comm. 48, pp. 92–103. Cited by: §5.1, Table 1.
  • [25] D. Temel and G. AlRegib (2018) Traffic signs in the wild: highlights from the IEEE Video and Image Processing Cup 2017 student competition [SP competitions]. IEEE Sig. Proc. Mag. 35 (2), pp. 154–161. Cited by: §4.2.
  • [26] D. Temel and G. AlRegib (2019) Perceptual image quality assessment through spectral analysis of error representations. Sig. Proc.: Image Comm.. Cited by: §5.1, Table 1.
  • [27] D. Temel, T. Alshawi, M-H. Chen, and G. AlRegib (2019) Challenging environments for traffic sign detection: reliability assessment under inclement conditions. arXiv:1902.06857. Cited by: §4.2.
  • [28] D. Temel, G. Kwon*, M. Prabhushankar*, and G. AlRegib (2017-12) CURE-TSR: Challenging unreal and real environments for traffic sign recognition. In Neur. Info. Proc. Syst. Work. on MLITS, Long Beach, U.S.. Cited by: §4.2.
  • [29] D. Temel, M. Prabhushankar, and G. AlRegib (2016) UNIQUE: unsupervised image quality estimation. IEEE Sig. Proc. Let. 23 (10), pp. 1414–1418. Cited by: §4.1, §5.1, §5.1, Table 1.
  • [30] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Proc. 13 (4), pp. 600–612. Cited by: §5.1, Table 1.
  • [31] Z. Wang and Q. Li (2011) Information content weighting for perceptual image quality assessment. IEEE Trans. Image Proc. 20 (5), pp. 1185–1198. Cited by: Table 1.
  • [32] Z. Wang, E. P. Simoncelli, and A. C. Bovik (2003) Multiscale structural similarity for image quality assessment. In Asilomar Conf. Sig., Syst. & Comp., Vol. 2, pp. 1398–1402. Cited by: §5.1, Table 1.
  • [33] L. Zhang and H. Li (2012) SR-SIM: a fast and high performance iqa index based on spectral residual. In IEEE Int. Conf. Image Proc., pp. 1473–1476. Cited by: Table 1.
  • [34] L. Zhang, L. Zhang, X. Mou, D. Zhang, et al. (2011) FSIM: a feature similarity index for image quality assessment. IEEE Trans. Image Proc. 20 (8), pp. 2378–2386. Cited by: §5.1, Table 1.
  • [35] B. Zong, Q. Song, M. R. Min, W. Cheng, et al. (2018) Deep autoencoding gaussian mixture model for unsupervised anomaly detection. Int. Conf. Learning Repr.. Cited by: §4.2.
  • [36] H. Zou and T. Hastie (2005) Regularization and variable selection via the elastic net. Jour. Royal Stat. Soc.: Ser. B 67 (2), pp. 301–320. Cited by: §2.