Blind Image Quality Assessment Using A Deep Bilinear Convolutional Neural Network

07/05/2019 · by Weixia Zhang et al. · University of Waterloo, NYU, and Wuhan University

We propose a deep bilinear model for blind image quality assessment (BIQA) that handles both synthetic and authentic distortions. Our model consists of two convolutional neural networks (CNN), each of which specializes in one distortion scenario. For synthetic distortions, we pre-train a CNN to classify image distortion type and level, where we enjoy large-scale training data. For authentic distortions, we adopt a pre-trained CNN for image classification. The features from the two CNNs are pooled bilinearly into a unified representation for final quality prediction. We then fine-tune the entire model on target subject-rated databases using a variant of stochastic gradient descent. Extensive experiments demonstrate that the proposed model achieves superior performance on both synthetic and authentic databases. Furthermore, we verify the generalizability of our method on the Waterloo Exploration Database using the group maximum differentiation competition.


I Introduction

Nowadays, digital images are captured via various mobile cameras, compressed by conventional and advanced techniques [1, 2], transmitted through diverse communication channels [3], and stored on different devices. Each stage in the image processing pipeline could introduce unexpected distortions, leading to perceptual quality degradation. Therefore, image quality assessment (IQA) is of great importance to monitoring the quality of images and ensuring the reliability of image processing systems. It is essential to design accurate and efficient computational models to push IQA from laboratory research to real-world applications [4, 5]. Among all computational models, we are interested in no-reference or blind IQA (BIQA) methods [6] because the reference information is often unavailable (or may not exist) in many practical applications.

Previous knowledge-driven BIQA models typically adopt low-level features, either hand-crafted [7] or learned [8], to characterize the level of deviations from statistical regularities of natural scenes. Until recently, there had been limited effort towards end-to-end optimized BIQA using deep convolutional neural networks (CNNs) [9, 10], primarily due to the lack of sufficient ground truths such as mean opinion scores (MOS) for training. A straightforward approach is to fine-tune a CNN pre-trained on ImageNet [11] for quality prediction [12]. The resulting model performs reasonably on the LIVE Challenge Database [13] (with authentic distortions), but does not stand out on the LIVE [14] and TID2013 [15] databases (with synthetic distortions). Another common strategy is patch-based training, where the patch-level ground truths are either inherited from image-level annotations [9] or approximated by full-reference IQA models [16]. This strategy is very effective at learning CNN-based models for synthetic distortions, but fails to handle authentic distortions due to the non-homogeneity of distortions and the absence of reference images. Other methods [17, 10] take advantage of synthetic degradation processes (e.g., distortion types) to find reasonable initializations for CNN-based models, but cannot be applied to authentic distortions either.

Fig. 1: Sample distorted images synthesized from a reference image in the Waterloo Exploration Database [18]. (a) Gaussian blur. (b) White Gaussian noise. (c) JPEG compression. (d) JPEG2000 compression. (e) Contrast stretching. (f) Pink noise. (g) Image color quantization with dithering. (h) Over-exposure. (i) Under-exposure.

In this work, we aim for an end-to-end solution to BIQA that handles both synthetic and authentic distortions. We first learn two feature sets for the two distortion scenarios separately. For synthetic distortions, inspired by previous studies [17, 10], we construct a large-scale pre-training set based on the Waterloo Exploration Database [18] and the PASCAL VOC Database [19], where the images are synthesized with nine distortion types and two to five distortion levels. We take advantage of known distortion type and level information in the dataset and pre-train a CNN through a multi-class classification task. For authentic distortions, it is difficult to simulate the degradation processes due to their complexities [20]. Therefore, we opt for another CNN (VGG-16 [21]) pre-trained on ImageNet [11] that contains many realistic natural images of different perceptual quality. We model synthetic and authentic distortions as two-factor variations, and pool the two feature sets bilinearly [22] into a unified representation for final quality prediction. The resulting deep bilinear CNN (DB-CNN) is fine-tuned on target subject-rated databases using a variant of the stochastic gradient descent method. Extensive experimental results on five IQA databases demonstrate the effectiveness of DB-CNN for both synthetic and authentic distortions. Furthermore, through the group MAximum Differentiation (gMAD) competition [23], we find that DB-CNN is more robust than the most recent CNN-based BIQA models [24, 10].

Fig. 2: Illustration of the five new distortion types with increasing degradation levels from left to right. (a)-(e) Contrast stretching. (f)-(j) Pink noise. (k)-(o) Image color quantization with dithering. (p)-(q) Over-exposure. (r)-(s) Under-exposure.

II Related Work

In this section, we provide a review of recent CNN-based BIQA models. For a more detailed treatment of BIQA, we refer the interested readers to [6, 25].

Tang et al. [26] pre-trained a deep belief network with a radial basis function and fine-tuned it to predict image quality. Bianco et al. [27] investigated various design choices for CNN-based BIQA. They first adopted off-the-shelf CNN features to learn a quality evaluator using support vector regression (SVR). Alternatively, they fine-tuned the features in a multi-class classification setting followed by SVR. Their proposals are not end-to-end optimized and involve heavy manual parameter adjustments [27]. Kang et al. [9] trained a CNN using a large number of spatially normalized image patches. Later, they estimated image quality and distortion type simultaneously via a multi-task CNN [17]. Patch-based training may be problematic because, due to the high non-stationarity of local image content and the intricate interactions between content and distortion [10, 12], local image quality is not always consistent with global image quality. Taking this problem into consideration, Bosse et al. [24] trained CNN models using two strategies: direct averaging of features from multiple patches, and weighted averaging of patch quality scores according to their relative importance. Kim et al. [16] pre-trained a CNN model using numerous patches with proxy quality scores provided by a full-reference IQA model [28], and summarized the patch-level features using the mean and standard deviation statistics for fine-tuning. A closely related work to ours is MEON [10], a cascaded multi-task framework for BIQA. A distortion type identification network is first trained, for which large-scale training samples are readily available. Starting from the pre-trained early layers and the outputs of the distortion type identification network, a quality prediction network is subsequently trained. Compared with MEON, the proposed DB-CNN takes a step further by considering not only distortion type but also distortion level information, which results in better quality-aware initializations. In summary, the aforementioned methods partially address the training data shortage problem in the synthetic distortion scenario, but it is difficult to extend them to the authentic distortion scenario.

III DB-CNN for BIQA

In this section, we first describe the construction of the pre-training set and the CNN architecture for synthetically distorted images. We then present the tailored VGG-16 network for authentically distorted images. Finally, we introduce the bilinear pooling module along with the fine-tuning procedure.

III-A CNN for Synthetic Distortions

To take into account the enormous content variations in real-world images, we start with the Waterloo Exploration Database [18] and the PASCAL VOC Database [19]. The former contains pristine-quality images with four synthetic distortions, i.e., JPEG compression, JPEG2000 compression, Gaussian blur, and white Gaussian noise. The latter is a large database for object recognition, which contains images of acceptable quality spanning a variety of semantic classes. We merge the two databases to obtain our source images. In addition to the four distortion types mentioned above, we add five more—contrast stretching, pink noise, image color quantization with dithering, over-exposure, and under-exposure. We ensure that the added distortions dominate the perceived quality, as some source images (especially in the PASCAL VOC Database) may not have perfect quality. Following [18], we synthesize images with five distortion levels, except for over-exposure and under-exposure, where only two levels are generated [29]. Sample distorted images with various degradation levels are shown in Fig. 1 and Fig. 2. As a result, we obtain a large-scale pre-training set of distorted images.

Due to the large scale of the pre-training set, it is impractical to carry out a full subjective experiment to obtain the MOS of each image. We instead take advantage of the distortion type and level information in the synthesis process, and pre-train a CNN to classify the distortion type and the degradation level. Compared to previous methods that exploit distortion type information only [10, 17], our pre-training strategy offers perceptually more meaningful initializations, leading to a better local optimum (shown in Section IV-B5). Specifically, we form the ground truth as a class indicator vector with one entry activated to encode the underlying distortion type at a specific distortion level. In our case, this gives 39 classes, corresponding to seven distortion types with five levels each and two distortion types with two levels each.
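To make the class encoding concrete, here is a minimal Python sketch of the 39-way labeling described above. The dictionary keys, ordering, and helper names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the 39-way class encoding (7 distortion types with 5
# levels each, plus 2 exposure distortions with 2 levels each). Names and
# ordering are illustrative assumptions.

DISTORTIONS = {
    "jpeg": 5, "jp2k": 5, "gaussian_blur": 5, "white_noise": 5,
    "contrast_stretching": 5, "pink_noise": 5, "color_quantization_dither": 5,
    "over_exposure": 2, "under_exposure": 2,
}

# Build a lookup: (distortion type, level) -> class index in [0, 38].
CLASS_INDEX = {}
for dist, num_levels in DISTORTIONS.items():
    for level in range(num_levels):
        CLASS_INDEX[(dist, level)] = len(CLASS_INDEX)

assert len(CLASS_INDEX) == 7 * 5 + 2 * 2  # 39 classes in total


def one_hot(dist: str, level: int) -> list:
    """Return the indicator (one-hot) vector used as the pre-training target."""
    vec = [0.0] * len(CLASS_INDEX)
    vec[CLASS_INDEX[(dist, level)]] = 1.0
    return vec
```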

Fig. 3: The architecture of S-CNN for synthetic distortions. We follow the style and convention in [2], and denote the parameterization of the convolution layer as “height × width | input channel × output channel | stride | padding”. For brevity, we ignore all ReLU layers here.

Inspired by the VGG-16 network architecture [21], we design our CNN for synthetic distortions (S-CNN) with a similar structure, subject to some modifications (see Fig. 3). In a nutshell, the input image is resized and then cropped before being fed to the network. All convolutions use a stride of two to reduce the spatial resolution by half in both directions. We pad the feature activations with zeros when necessary before convolution. The nonlinear activation function we adopt is the rectified linear unit (ReLU). The feature activations at the last convolution layer are globally averaged across spatial locations. We append three fully connected layers and a softmax layer at the end. Given $N$ training tuples $\{(X_n, p_n)\}_{n=1}^{N}$ in a mini-batch, where $X_n$ denotes the $n$-th input image and $p_n$ is the ground-truth indicator vector, S-CNN produces the activations of the last fully connected layer $y_n(X_n; W)$. Denoting the model parameters in S-CNN by $W$, we define the softmax function as

\hat{p}_{nk}(X_n; W) = \frac{\exp\big(y_{nk}(X_n; W)\big)}{\sum_{j=1}^{39} \exp\big(y_{nj}(X_n; W)\big)},    (1)

where $\hat{p}_n = [\hat{p}_{n1}, \ldots, \hat{p}_{n39}]^{\mathsf T}$ is a 39-dimensional probability vector of the $n$-th input, indicating the probability of each distortion type at a specific degradation level. Finally, we compute the empirical cross-entropy loss by

\ell(\{X_n\}; W) = -\sum_{n=1}^{N}\sum_{k=1}^{39} p_{nk} \log \hat{p}_{nk}(X_n; W).    (2)
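For reference, the objective in Eqs. (1) and (2) can be written in a few lines of NumPy. This is an illustrative sketch only; the authors implement their models in MatConvNet (see Section IV-A3), and the function name here is hypothetical.

```python
import numpy as np


def softmax_cross_entropy(logits: np.ndarray, targets: np.ndarray) -> float:
    """Empirical cross-entropy of Eq. (2) over a mini-batch.

    logits  -- N x 39 activations of the last fully connected layer (y_n).
    targets -- N x 39 one-hot indicator vectors (p_n).
    """
    # Numerically stable softmax of Eq. (1).
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    # Sum of -p_nk * log(p_hat_nk) over classes and images.
    return float(-(targets * np.log(probs + 1e-12)).sum())
```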

III-B CNN for Authentic Distortions

Unlike training S-CNN for synthetic distortions, it is difficult to obtain a large amount of relevant training data for authentic distortions. Meanwhile, training a CNN from scratch using a small number of samples often leads to overfitting. Here we resort to VGG-16 [21], which has been pre-trained for the image classification task on ImageNet [11], to extract relevant features for authentically distorted images. Since the distortions in ImageNet images occur as a natural consequence of photography rather than simulation, the VGG-16 feature representations are likely to adapt well to authentically distorted images, as suggested by the strong performance of fine-tuned ImageNet CNNs on such images [12].
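Purely as an illustration of reusing an ImageNet-pre-trained VGG-16 up to its last convolution, the following torchvision-based sketch extracts such features; it is not the authors' MatConvNet implementation, and the file name is a placeholder.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# VGG-16 pre-trained on ImageNet; drop the final max-pooling so the output
# corresponds to the last convolutional layer, mirroring how DB-CNN discards
# all layers after the last convolution (Section III-C).
vgg16_conv = models.vgg16(
    weights=models.VGG16_Weights.IMAGENET1K_V1
).features[:-1].eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
    features = vgg16_conv(image)  # shape: 1 x 512 x (H/16) x (W/16)
```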

III-C DB-CNN by Bilinear Pooling

We consider bilinear pooling to combine S-CNN for synthetic distortions and VGG-16 for authentic distortions into a unified model. Bilinear models have been shown to be effective in modeling two-factor variations, such as style and content of images [30], location and appearance for fine-grained recognition [22], spatial and temporal characteristics for video analysis [31], and text and visual information for question-answering [32]. We tackle the BIQA problem with a similar philosophy, where synthetic and authentic distortions are modeled as two-factor variations, resulting in a DB-CNN model.

Fig. 4: The structure of the proposed DB-CNN.

The structure of DB-CNN is presented in Fig. 4. We tailor the pre-trained S-CNN and VGG-16 by discarding all layers after the last convolution. Denote the representations from S-CNN and VGG-16 by $Y_1$ and $Y_2$, which have sizes of $h_1 \times w_1 \times d_1$ and $h_2 \times w_2 \times d_2$, respectively. The bilinear pooling of $Y_1$ and $Y_2$ requires them to have the same spatial size, i.e., $h_1 = h_2$ and $w_1 = w_2$, which holds in our case for an input image of arbitrary size because S-CNN and VGG-16 share the same padding and downsampling routines. Other CNNs such as ResNet [33] may also be adopted in our framework if the structure of S-CNN is adjusted appropriately. The bilinear pooling of $Y_1$ and $Y_2$ is formulated as

B = Y_1^{\mathsf T} Y_2,    (3)

where $Y_1$ and $Y_2$ are reshaped into matrices of sizes $hw \times d_1$ and $hw \times d_2$, so that $B$ is of dimension $d_1 \times d_2$. Bilinear representations are usually mapped from a Riemannian manifold into a Euclidean space [34] by

\bar{B} = \frac{\mathrm{sign}(B) \odot \sqrt{|B|}}{\big\|\mathrm{sign}(B) \odot \sqrt{|B|}\big\|_2},    (4)

where $\odot$ refers to element-wise multiplication. $\bar{B}$ is fed to a fully connected layer with one output for final quality prediction. We consider the $\ell_2$-norm as the empirical loss, which is widely used in previous studies [9, 24, 12], to drive the learning of the entire DB-CNN model on a target IQA database:

\ell = \bigg(\sum_{n=1}^{N}\big(q_n - \hat{q}_n\big)^2\bigg)^{1/2},    (5)

where $q_n$ is the MOS of the $n$-th image in a mini-batch and $\hat{q}_n$ is the quality score predicted by DB-CNN.

According to the chain rule, the backward propagation of the loss $\ell$ through the bilinear pooling layer to $Y_1$ and $Y_2$ can be computed by

\frac{\partial \ell}{\partial Y_1} = Y_2 \left(\frac{\partial \ell}{\partial B}\right)^{\mathsf T}    (6)

and

\frac{\partial \ell}{\partial Y_2} = Y_1 \frac{\partial \ell}{\partial B}.    (7)

Bilinear pooling summarizes the spatial information and enables DB-CNN to accept an input image of arbitrary size. As a result, we can feed whole images directly to DB-CNN, rather than patches cropped from them, during both training and testing.
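The forward computation of Eqs. (3) and (4) can be sketched with a few differentiable tensor operations; automatic differentiation then reproduces the gradients of Eqs. (6) and (7). The function below is an illustrative PyTorch sketch under the shape conventions above, not the released implementation.

```python
import torch


def bilinear_pool(y1: torch.Tensor, y2: torch.Tensor) -> torch.Tensor:
    """Bilinear pooling of two conv feature maps with matching spatial size.

    y1 -- S-CNN features,  shape (batch, d1, h, w)
    y2 -- VGG-16 features, shape (batch, d2, h, w)
    Returns a (batch, d1*d2) vector per image, following Eqs. (3)-(4).
    """
    b, d1, h, w = y1.shape
    d2 = y2.shape[1]
    y1 = y1.reshape(b, d1, h * w)
    y2 = y2.reshape(b, d2, h * w)
    # Eq. (3): sum over spatial locations of the outer products, i.e. Y1^T Y2.
    bilinear = torch.bmm(y1, y2.transpose(1, 2)).reshape(b, d1 * d2)
    # Eq. (4): signed square root followed by l2 normalization.
    bilinear = torch.sign(bilinear) * torch.sqrt(torch.abs(bilinear) + 1e-8)
    return torch.nn.functional.normalize(bilinear, dim=1)
```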

IV Experiments

In this section, we first describe the experimental setups, including the IQA databases, the evaluation protocols, the performance criteria, and the implementation details of DB-CNN. After that, we compare the performance of DB-CNN with state-of-the-art BIQA models on individual databases and across databases. We also test the robustness of DB-CNN on the Waterloo Exploration Database using the discriminability and ranking consistency criteria [18], as well as the gMAD competition methodology. Finally, we conduct a series of ablation experiments to justify the design choices of DB-CNN.

IV-A Experimental Setups

IV-A1 IQA Databases

The main experiments are conducted on three singly distorted synthetic IQA databases, i.e., LIVE [14], CSIQ [35], and TID2013 [15], a multiply distorted synthetic database, LIVE MD [36], and the authentic LIVE Challenge Database [13]. LIVE [14] contains 779 distorted images synthesized from 29 reference images with five distortion types—JPEG compression (JPEG), JPEG2000 compression (JP2K), Gaussian blur (GB), white Gaussian noise (WN), and fast fading error (FF) at seven to eight degradation levels. Difference MOS (DMOS) in the range of [0, 100] is collected for each image, with a higher value indicating lower perceptual quality. CSIQ [35] is composed of 866 distorted images generated from 30 reference images, including six distortion types, i.e., JPEG, JP2K, GB, WN, contrast change (CC), and pink noise (PN), at three to five degradation levels. DMOS in the range of [0, 1] is provided as the ground truth. TID2013 [15] consists of 3,000 distorted images from 25 reference images with 24 distortion types at five degradation levels. MOS in the range of [0, 9] is provided to indicate perceptual quality. LIVE MD [36] contains 450 images generated from 15 source images under two multiple distortion scenarios—blur followed by JPEG compression and blur followed by white Gaussian noise. DMOS is provided as the subjective opinion. LIVE Challenge [13] is an authentic IQA database, which contains 1,162 images captured from diverse real-world scenes by numerous photographers with various levels of photography skill using different camera devices. As a result, the images undergo complex, realistic distortions. MOS in the range of [0, 100] is collected from over 8,100 unique human evaluators via an online crowdsourcing platform.

IV-A2 Experimental Protocols and Performance Criteria

We conduct the experiments following the same protocol as in [12]. Specifically, we divide the distorted images in a target IQA database into two splits: 80% are used for fine-tuning DB-CNN and the remaining 20% for testing. For the synthetic databases LIVE, CSIQ, TID2013, and LIVE MD, we guarantee content independence between the fine-tuning and test sets. The splitting procedure is randomly repeated ten times for all databases and the average results are reported.
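A minimal sketch of such a content-independent split is given below, assuming a hypothetical mapping from each reference image to its distorted versions; the split ratio is exposed as a parameter.

```python
import random


def split_by_reference(ref_to_images: dict, train_ratio: float = 0.8, seed: int = 0):
    """Randomly split a synthetic IQA database so that all distorted versions
    of a reference image fall into the same (fine-tuning or test) split.

    ref_to_images -- maps each reference image id to its distorted image ids.
    """
    rng = random.Random(seed)
    refs = sorted(ref_to_images)
    rng.shuffle(refs)
    cut = int(round(train_ratio * len(refs)))
    train = [img for r in refs[:cut] for img in ref_to_images[r]]
    test = [img for r in refs[cut:] for img in ref_to_images[r]]
    return train, test
```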

We adopt two commonly used metrics to benchmark BIQA models: the Spearman rank-order correlation coefficient (SRCC) and the Pearson linear correlation coefficient (PLCC). SRCC measures prediction monotonicity, while PLCC measures prediction precision. As suggested in [37], the predicted quality scores are passed through a nonlinear logistic mapping before computing PLCC:

\hat{q} = \frac{\beta_1 - \beta_2}{1 + \exp\left(-\frac{q - \beta_3}{|\beta_4|}\right)} + \beta_2,    (8)

where $q$ and $\hat{q}$ denote the raw and mapped predictions, respectively, and $\{\beta_1, \beta_2, \beta_3, \beta_4\}$ are regression parameters to be fitted.
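In practice, SRCC and PLCC with the logistic mapping can be computed with SciPy as sketched below. The four-parameter logistic form is the assumption used for Eq. (8) here; function names and initial parameter guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr


def logistic(q, b1, b2, b3, b4):
    # Monotonic four-parameter logistic mapping (assumed form of Eq. (8)).
    return (b1 - b2) / (1.0 + np.exp(-(q - b3) / np.abs(b4))) + b2


def srcc_plcc(pred, mos):
    """SRCC on raw predictions; PLCC after the nonlinear logistic mapping."""
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    srcc = spearmanr(pred, mos).correlation
    p0 = [mos.max(), mos.min(), pred.mean(), pred.std() + 1e-6]
    params, _ = curve_fit(logistic, pred, mos, p0=p0, maxfev=10000)
    plcc = pearsonr(logistic(pred, *params), mos)[0]
    return srcc, plcc
```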

IV-A3 Implementation Details

All parameters in S-CNN are initialized by He's method [38] and trained from scratch using Adam [39] with mini-batch training and a learning rate that decays logarithmically over the training epochs. Images are first rescaled and then cropped to a fixed size as inputs to S-CNN. During fine-tuning of DB-CNN, we adopt Adam [39] again, with the learning rate chosen separately for LIVE [14] and CSIQ [35] and for TID2013 [15], LIVE MD [36], and LIVE Challenge [13]. The mini-batch size is set to eight. Batch normalization [40] is used to stabilize both pre-training and fine-tuning. We feed images of original size to DB-CNN during both the fine-tuning and test phases. We implement DB-CNN using the MatConvNet toolbox [41] and will release the code at https://github.com/zwx8981/BIQA_project.
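A hypothetical fine-tuning loop in PyTorch is sketched below to make the training setup concrete; the model, data loader, and learning rate are placeholders rather than the authors' MatConvNet settings.

```python
import torch


def finetune(db_cnn: torch.nn.Module, loader, num_epochs: int = 10, lr: float = 1e-5):
    """Fine-tune an assembled DB-CNN-style model on a target IQA database.

    `db_cnn` and `loader` are placeholders: the two-stream model and a data
    loader yielding (whole images, MOS) mini-batches. The learning rate is an
    arbitrary example value.
    """
    optimizer = torch.optim.Adam(db_cnn.parameters(), lr=lr)
    db_cnn.train()
    for _ in range(num_epochs):
        for images, mos in loader:
            optimizer.zero_grad()
            pred = db_cnn(images).squeeze(1)            # one score per image
            loss = torch.norm(pred - mos.float(), p=2)  # l2 loss of Eq. (5)
            loss.backward()
            optimizer.step()
```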

SRCC LIVE [14] CSIQ [35] TID2013 [15] LIVE MD [36] LIVE CL [13]
BRISQUE [7] 0.939 0.746 0.604 0.886 0.608
M3 [42] 0.951 0.795 0.689 0.892 0.607
FRIQUEE [20] 0.940 0.835 0.680 0.923 0.682
CORNIA [8] 0.947 0.678 0.678 0.899 0.629
HOSA [43] 0.946 0.741 0.735 0.913 0.640
Le-CNN [9] 0.956 – – – –
BIECON [16] 0.961 0.815 0.717 0.909 0.595
DIQaM [24] 0.960 – 0.835 – 0.606
WaDIQaM [24] 0.954 – 0.761 – 0.671
ResNet-ft [12] 0.950 0.876 0.712 0.909 0.819
IW-CNN [12] 0.963 0.812 0.800 0.914 0.663
DB-CNN 0.968 0.946 0.816 0.927 0.851
PLCC LIVE CSIQ TID2013 LIVE MD LIVE CL
BRISQUE [7] 0.935 0.829 0.694 0.917 0.629
M3 [42] 0.950 0.839 0.771 0.919 0.630
FRIQUEE [20] 0.944 0.874 0.753 0.934 0.705
CORNIA [8] 0.950 0.776 0.768 0.921 0.671
HOSA [43] 0.947 0.823 0.815 0.926 0.678
Le-CNN [9] 0.953 – – – –
BIECON [16] 0.962 0.823 0.762 0.933 0.613
DIQaM [24] 0.972 – 0.855 – 0.601
WaDIQaM [24] 0.963 – 0.787 – 0.680
ResNet-ft [12] 0.954 0.905 0.756 0.920 0.849
IW-CNN [12] 0.964 0.791 0.802 0.929 0.705
DB-CNN 0.971 0.959 0.865 0.934 0.869
TABLE I: Average SRCC and PLCC results across ten sessions. The top two results are highlighted in boldface. LIVE CL stands for the LIVE Challenge Database

IV-B Experimental Results

IV-B1 Performance on Individual Databases

We compare DB-CNN against several state-of-the-art BIQA models. The source codes of BRISQUE [7], M3 [42], FRIQUEE [20], CORNIA [8], HOSA [43], and dipIQ [25] are provided by the respective authors. We re-train and/or validate them using the same randomly generated training-test splits. For CNN-based counterparts, we directly copy the performance from the corresponding papers due to the unavailability of the training codes. The SRCC and PLCC results on the five databases are listed in Table I, from which we have several interesting observations. First, while all competing models achieve comparable performance on LIVE [14], their results on CSIQ [35] and TID2013 [15] are rather diverse. Compared with knowledge-driven models, CNN-based models deliver better performance on CSIQ and TID2013 because of end-to-end feature learning rather than hand-crafted feature engineering. Second, on the multiply distorted dataset LIVE MD, DB-CNN performs favorably although it does not include multiply distorted images for pre-training, indicating that DB-CNN generalizes well to slightly different distortion scenarios. Last, for the authentic database LIVE Challenge, FRIQUEE [20], which combines a set of quality-aware features extracted from multiple color spaces, outperforms other knowledge-driven BIQA models and all CNN-based models except for ResNet-ft [12] and the proposed DB-CNN. This suggests that the intrinsic characteristics of authentic distortions cannot be fully captured by low-level features learned from synthetically distorted images. The success of DB-CNN on LIVE Challenge verifies the relevance of the high-level features from VGG-16 to authentic distortions. In summary, DB-CNN achieves superior performance on both synthetic and authentic IQA databases.

SRCC JPEG JP2K WN GB FF
BRISQUE [7] 0.965 0.929 0.982 0.964 0.828
M3 [42] 0.966 0.930 0.986 0.935 0.902
FRIQUEE [20] 0.947 0.919 0.983 0.937 0.884
CORNIA [8] 0.947 0.924 0.958 0.951 0.921
HOSA [43] 0.954 0.935 0.975 0.954 0.954
dipIQ [25] 0.969 0.956 0.975 0.940 –
DB-CNN 0.972 0.955 0.980 0.935 0.930
PLCC JPEG JP2K WN GB FF
BRISQUE [7] 0.971 0.940 0.989 0.965 0.894
M3 [42] 0.977 0.945 0.992 0.947 0.920
FRIQUEE [20] 0.955 0.935 0.991 0.949 0.936
CORNIA [8] 0.962 0.944 0.974 0.961 0.943
HOSA [43] 0.967 0.949 0.983 0.967 0.967
dipIQ [25] 0.980 0.964 0.983 0.948 –
DB-CNN 0.986 0.967 0.988 0.956 0.961
TABLE II: Average SRCC and PLCC results of individual distortion types across ten sessions on LIVE [14]
SRCC JPEG JP2K WN GB PN CC
BRISQUE [7] 0.806 0.840 0.723 0.820 0.378 0.804
M3 [42] 0.740 0.911 0.741 0.868 0.663 0.770
FRIQUEE [20] 0.869 0.846 0.748 0.870 0.753 0.838
CORNIA [8] 0.513 0.831 0.664 0.836 0.493 0.462
HOSA [43] 0.733 0.818 0.604 0.841 0.500 0.716
dipIQ [25] 0.936 0.944 0.904 0.932 – –
MEON [10] 0.948 0.898 0.951 0.918 – –
DB-CNN 0.940 0.953 0.948 0.947 0.940 0.870
PLCC JPEG JP2K WN GB PN CC
BRISQUE [7] 0.828 0.887 0.742 0.891 0.496 0.835
M3 [42] 0.768 0.928 0.728 0.917 0.717 0.787
FRIQUEE [20] 0.885 0.883 0.778 0.905 0.769 0.864
CORNIA [8] 0.563 0.883 0.687 0.904 0.632 0.543
HOSA [43] 0.759 0.899 0.656 0.912 0.601 0.744
dipIQ [25] 0.975 0.959 0.927 0.958 – –
MEON [10] 0.979 0.925 0.958 0.946 – –
DB-CNN 0.982 0.971 0.956 0.969 0.950 0.895
TABLE III: Average SRCC and PLCC results of individual distortion types across ten sessions on CSIQ [35]
SRCC BRISQUE [7] M3 [42] FRIQUEE [20] CORNIA [8] HOSA [43] MEON [10] DB-CNN
Additive Gaussian noise 0.711 0.766 0.730 0.692 0.833 0.813 0.790
Additive noise in color components 0.432 0.560 0.573 0.137 0.551 0.722 0.700
Spatially correlated noise 0.746 0.782 0.866 0.741 0.842 0.926 0.826
Masked noise 0.252 0.577 0.345 0.451 0.468 0.728 0.646
High frequency noise 0.842 0.900 0.847 0.815 0.897 0.911 0.879
Impulse noise 0.765 0.738 0.730 0.616 0.809 0.901 0.708
Quantization noise 0.662 0.832 0.764 0.661 0.815 0.888 0.825
Gaussian blur 0.871 0.896 0.881 0.850 0.883 0.887 0.859
Image denoising 0.612 0.709 0.839 0.764 0.854 0.797 0.865
JPEG compression 0.764 0.844 0.813 0.797 0.891 0.850 0.894
JPEG2000 compression 0.745 0.885 0.831 0.846 0.919 0.891 0.916
JPEG transmission errors 0.301 0.375 0.498 0.694 0.730 0.746 0.772
JPEG2000 transmission errors 0.748 0.718 0.660 0.686 0.710 0.716 0.773
Non-eccentricity pattern noise 0.269 0.173 0.076 0.200 0.242 0.116 0.270
Local block-wise distortions 0.207 0.379 0.032 0.027 0.268 0.500 0.444
Mean shift 0.219 0.119 0.254 0.232 0.211 0.177 -0.009
Contrast change -0.001 0.155 0.585 0.254 0.362 0.252 0.548
Change of color saturation 0.003 -0.199 0.589 0.169 0.045 0.684 0.631
Multiplicative Gaussian noise 0.717 0.738 0.704 0.593 0.768 0.849 0.711
Comfort noise 0.196 0.353 0.318 0.617 0.622 0.406 0.752
Lossy compression of noisy images 0.609 0.692 0.641 0.712 0.838 0.772 0.860
Color quantization with dither 0.831 0.908 0.768 0.683 0.896 0.857 0.833
Chromatic aberrations 0.615 0.570 0.737 0.696 0.753 0.779 0.732
Sparse sampling and reconstruction 0.807 0.893 0.891 0.865 0.909 0.855 0.902
TABLE IV: Average SRCC results of individual distortion types across ten sessions on TID2013 [15]. We obtain similar results using PLCC, which are omitted here due to the page limit
(a)
(b)
(c)
(d)
(e)
(f)
(g)
(h)
Fig. 5: Images with different distortion types may share similar visual appearances. (a) Additive Gaussian noise. (b) Additive noise in color components. (c) High frequency noise. (d) Gaussian blur. (e) Image denoising. (f) Sparse sampling and reconstruction. (g) Image color quantization with dither. (h) Quantization noise.
Training LIVE [14] CSIQ [35]
Testing CSIQ TID2013 LIVE Challenge LIVE TID2013 LIVE Challenge
BRISQUE [7] 0.562 0.358 0.337 0.847 0.454 0.131
M3 [42] 0.621 0.344 0.226 0.797 0.328 0.183
FRIQUEE [20] 0.722 0.461 0.411 0.879 0.463 0.264
CORNIA [8] 0.649 0.360 0.443 0.853 0.312 0.393
HOSA [43] 0.594 0.361 0.463 0.773 0.329 0.291
DIQaM [24] 0.681 0.392 – – – –
WaDIQaM [24] 0.704 0.462 – – – –
DB-CNN 0.758 0.524 0.567 0.877 0.540 0.452
Training TID2013 [15] LIVE Challenge [13]
Testing LIVE CSIQ LIVE Challenge LIVE CSIQ TID2013
BRISQUE [7] 0.790 0.590 0.254 0.238 0.241 0.280
M3 [42] 0.873 0.605 0.112 0.059 0.109 0.058
FRIQUEE [20] 0.755 0.635 0.181 0.644 0.592 0.424
CORNIA [8] 0.846 0.672 0.293 0.588 0.446 0.403
HOSA [43] 0.846 0.612 0.319 0.537 0.336 0.399
DIQaM [24] 0.717 – – – – –
WaDIQaM [24] 0.733 – – – – –
DB-CNN 0.891 0.807 0.457 0.746 0.697 0.424
TABLE V: SRCC results in a cross-database setting

IV-B2 Performance on Individual Distortion Types

To take a closer look at the behavior of DB-CNN on individual distortion types along with several competing BIQA models, we test them on each distortion type separately and show the results on LIVE [14], CSIQ [35], and TID2013 [15] in Tables II, III, and IV, respectively. We find that DB-CNN is among the top two performing models in the large majority of cases. Specifically, on CSIQ, DB-CNN outperforms its counterparts by a large margin, especially for pink noise and contrast change, validating the effectiveness of pre-training in DB-CNN. Although we do not synthesize as many distortion types as in TID2013, we find that DB-CNN performs well on unseen distortion types that exhibit artifacts similar to those in our pre-training set. As shown in Fig. 5, grainy noise exists in images distorted by additive Gaussian noise, additive noise in color components, and high frequency noise; Gaussian blur, image denoising, and sparse sampling and reconstruction mainly introduce blur; image color quantization with dither and quantization noise also share similar appearances. Trained on synthesized images with additive Gaussian noise, Gaussian blur, and image color quantization with dither, DB-CNN generalizes well to unseen distortions with similar perceived artifacts. In addition, all BIQA models fail on three distortion types of TID2013, i.e., non-eccentricity pattern noise, local block-wise distortions, and mean shift, whose characteristics are difficult to model.

IV-B3 Performance across Different Databases

In this subsection, we evaluate DB-CNN in a cross-database setting against knowledge-driven and CNN-based models. We train the knowledge-driven models on one database and test them on the others. The results of CNN-based counterparts are reported when available from the original papers. We show the SRCC results in Table V, where we see that models trained on LIVE generalize to CSIQ (and vice versa) much more easily than other cross-database pairs. When trained on TID2013 and tested on the other two synthetic databases, DB-CNN significantly outperforms the other models. However, it is evident that models trained on synthetic databases do not generalize well to the authentic LIVE Challenge Database. Despite this, DB-CNN still achieves higher prediction accuracies than the competing models under this challenging experimental setup.

IV-B4 Results on the Waterloo Exploration Database [18]

Although SRCC and PLCC have been widely used as performance criteria in IQA research, they cannot be applied to arbitrarily large databases due to the absence of ground truths. Three testing criteria are therefore introduced along with the Waterloo Exploration Database in [18], i.e., the pristine/distorted image discriminability test (D-Test), the listwise ranking consistency test (L-Test), and the pairwise preference consistency test (P-Test). D-Test measures the capability of BIQA models in discriminating distorted images from pristine ones. L-Test measures the listwise ranking consistency of BIQA models when rating images with the same content and distortion type but different degradation levels. P-Test measures the pairwise concordance of BIQA models on image pairs with clearly discriminable perceptual quality. More details on the three criteria can be found in [18]. Here we use them to test the robustness of DB-CNN on the Waterloo Exploration Database. To ensure the independence of image content during training and testing, we re-train the S-CNN stream in DB-CNN using the distorted images generated from the PASCAL VOC Database only. Experimental results are tabulated in Table VI, where we observe that DB-CNN is competitive in all three tests.

Model D-Test L-Test P-Test
BRISQUE [7] 0.9204 0.9772 0.9930
M3 [42] 0.9203 0.9106 0.9748
CORNIA [8] 0.9290 0.9764 0.9947
HOSA [43] 0.9175 0.9647 0.9983
dipIQ [25] 0.9346 0.9846 0.9999
deepIQA [24] 0.9074 0.9467 0.9628
MEON [10] 0.9384 0.9669 0.9984
DB-CNN 0.9616 0.9614 0.9992
TABLE VI: Results on the Waterloo Exploration Database [18]
Fig. 6: gMAD competition results between DB-CNN and deepIQA [24]. (a) Fixed deepIQA at the low-quality level. (b) Fixed deepIQA at the high-quality level. (c) Fixed DB-CNN at the low-quality level. (d) Fixed DB-CNN at the high-quality level.
Fig. 7: gMAD competition results between DB-CNN and MEON [10]. (a) Fixed MEON at the low-quality level. (b) Fixed MEON at the high-quality level. (c) Fixed DB-CNN at the low-quality level. (d) Fixed DB-CNN at the high-quality level.

We further let CNN-based BIQA models play the gMAD competition game [23] on the Waterloo Exploration Database [18]. gMAD extends the idea of the MAD competition [44] and allows a group of IQA models to be falsified in the most efficient way by letting them compete on a large-scale database with no human annotations. A small number of extremal image pairs are generated automatically by maximizing the responses of the attacker model while fixing the defender model. In Fig. 6, DB-CNN first plays the attacker role and deepIQA [24] acts as the defender. deepIQA [24] considers the images in pairs (a) and (b) to have the same perceptual quality at the low- and high-quality level, respectively, which is in disagreement with human perception. By contrast, DB-CNN correctly predicts the better quality of the top images in pairs (a) and (b). We then switch the roles of DB-CNN and deepIQA to obtain pairs (c) and (d). deepIQA fails to falsify DB-CNN: the two images in each extremal pair indeed exhibit similar quality. Furthermore, we let DB-CNN compete against MEON [10] and show four extremal image pairs in Fig. 7. From pairs (a) and (c), we find that both DB-CNN and MEON successfully defend the attack from the other model at the low-quality level. As for the high-quality level, DB-CNN shows a slight advantage by finding a counterexample of MEON in pair (b). This reveals that MEON does not handle JPEG compression well enough, especially when the image contains few structures. MEON also finds a counterexample of DB-CNN in pair (d), where the bottom image is blurrier than the top one. Through gMAD, there is no clear winner between DB-CNN and MEON, but we identify the weaknesses of both models.

IV-B5 Ablation Experiments

To evaluate the rationality of the DB-CNN design, we conduct several ablation experiments with the setups and protocols following Section IV-A. We first work with a baseline version, where only one stream (either S-CNN or VGG-16) is included. Bilinear pooling is kept, which in this case reduces to the outer product of the activations from the last convolution layer with themselves. We then replace the bilinear pooling module with simple feature concatenation and ensure that the last fully connected layer has approximately the same number of parameters as in DB-CNN. From Table VII, we observe that S-CNN and VGG-16 alone deliver promising performance only on the synthetic and authentic databases, respectively. By contrast, DB-CNN is capable of handling both synthetic and authentic distortions. We also train two DB-CNN variants, one from scratch and the other using the distortion type information only during the pre-training of S-CNN, to validate the necessity of the pre-training stage. From the table, we observe that with perceptually more meaningful initializations, DB-CNN achieves better performance.

SRCC LIVE [14] CSIQ [35] TID2013 [15] LIVE Challenge [13]
S-CNN 0.963 0.950 0.810 0.680
VGG-16 0.943 0.824 0.758 0.848
Concatenation 0.951 0.856 0.701 0.811
DB-CNN scratch 0.875 0.541 0.488 0.625
DB-CNN distype 0.963 0.928 0.761 –
DB-CNN 0.968 0.946 0.816 0.851
TABLE VII: Average SRCC results of ablation experiments across ten sessions. “Scratch” means DB-CNN is trained from scratch with random initializations. “distype” means the S-CNN stream is pre-trained to classify distortion types only, ignoring the distortion level information

V Conclusion

We propose a CNN-based BIQA model for both synthetic and authentic distortions by conceptually modeling them as two-factor variations. DB-CNN demonstrates superior performance, which we believe arises from the two-stream architecture for distortion modeling, pre-training for better initializations, and bilinear pooling for feature combination. Through the validations across different databases, the experiments on the Waterloo Exploration Database, and the results from the gMAD competition, we have shown the scalability, generalizability, and robustness of the proposed DB-CNN model.

DB-CNN is versatile and extensible. For example, more distortion types and levels can be added to the pre-training set; more sophisticated designs of S-CNN and more powerful CNNs such as ResNet [33] can be utilized. One may also improve DB-CNN by considering other variants of bilinear pooling [45].

The current work deals with synthetic and authentic distortions separately by fine-tuning DB-CNN on either synthetic or authentic databases. How to extend DB-CNN towards a more unified BIQA model, especially in the early feature extraction stage, is an interesting direction yet to be explored.

References

  • [1] A. C. Bovik, Handbook of Image and Video Processing.   Academic Press, 2010.
  • [2] J. Ballé, V. Laparra, and E. P. Simoncelli, “End-to-end optimized image compression,” CoRR, vol. abs/1611.01704, 2016. [Online]. Available: http://arxiv.org/abs/1611.01704
  • [3] Z. Duanmu, K. Ma, and Z. Wang, “Quality-of-experience of adaptive video streaming: Exploring the space of adaptations,” in ACM Multimedia, 2017, pp. 1752–1760.
  • [4] Z. Wang and A. C. Bovik, Modern Image Quality Assessment.   Morgan & Claypool, 2006.
  • [5] A. Rehman, K. Zeng, and Z. Wang, “Display device-adapted video quality-of-experience assessment,” in Human Vision and Electronic Imaging, 2015, pp. 1–11.
  • [6] Z. Wang and A. C. Bovik, “Reduced- and no-reference image quality assessment: The natural scene statistic model approach,” IEEE Signal Processing Magazine, vol. 28, no. 6, pp. 29–40, Nov. 2011.
  • [7] A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, Dec. 2012.
  • [8] P. Ye, J. Kumar, L. Kang, and D. Doermann, “Unsupervised feature learning framework for no-reference image quality assessment,” in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 1098–1105.
  • [9] L. Kang, P. Ye, Y. Li, and D. Doermann, “Convolutional neural networks for no-reference image quality assessment,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1733–1740.
  • [10] K. Ma, W. Liu, K. Zhang, Z. Duanmu, Z. Wang, and W. Zuo, “End-to-end blind image quality assessment using deep neural networks,” IEEE Transactions on Image Processing, vol. 27, no. 3, pp. 1202–1213, Mar. 2018.
  • [11] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, “ImageNet: A large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
  • [12] J. Kim, H. Zeng, D. Ghadiyaram, S. Lee, L. Zhang, and A. C. Bovik, “Deep convolutional neural models for picture-quality prediction: Challenges and solutions to data-driven image quality assessment,” IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 130–141, Nov. 2017.
  • [13] D. Ghadiyaram and A. C. Bovik, “Massive online crowdsourced study of subjective and objective picture quality,” IEEE Transactions on Image Processing, vol. 25, no. 1, pp. 372–387, Jan. 2016.
  • [14] H. R. Sheikh, M. F. Sabir, and A. C. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3440–3451, Nov. 2006.
  • [15] N. Ponomarenko, L. Jin, O. Ieremeiev, V. Lukin, K. Egiazarian, J. Astola, B. Vozel, K. Chehdi, M. Carli, F. Battisti, and C.-C. J. Kuo, “Image database TID2013: Peculiarities, results and perspectives,” Signal Processing: Image Communication, vol. 30, pp. 57–77, Jan. 2015.
  • [16] J. Kim and S. Lee, “Fully deep blind image quality predictor,” IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 1, pp. 206–220, Feb. 2017.
  • [17] L. Kang, P. Ye, Y. Li, and D. Doermann, “Simultaneous estimation of image quality and distortion via multi-task convolutional neural networks,” in IEEE International Conference on Image Processing, 2015, pp. 2791–2795.
  • [18] K. Ma, Z. Duanmu, Q. Wu, Z. Wang, H. Yong, H. Li, and L. Zhang, “Waterloo Exploration Database: New challenges for image quality assessment models,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 1004–1016, Feb. 2017.
  • [19] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The Pascal Visual Object Classes (VOC) Challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, Jun. 2010.
  • [20] D. Ghadiyaram and A. C. Bovik, “Perceptual quality prediction on authentically distorted images using a bag of features approach,” Journal of Vision, vol. 17, no. 1, pp. 32–32, Jan. 2017.
  • [21] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations, 2015.
  • [22] T.-Y. Lin, A. RoyChowdhury, and S. Maji, “Bilinear CNN models for fine-grained visual recognition,” in IEEE International Conference on Computer Vision, 2015, pp. 1449–1457.
  • [23] K. Ma, Q. Wu, Z. Wang, Z. Duanmu, H. Yong, H. Li, and L. Zhang, “Group MAD competition – A new methodology to compare objective image quality models,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1664–1673.
  • [24] S. Bosse, D. Maniry, K.-R. Müller, T. Wiegand, and W. Samek, “Deep neural networks for no-reference and full-reference image quality assessment,” IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, Jan. 2018.
  • [25] K. Ma, W. Liu, T. Liu, Z. Wang, and D. Tao, “dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs,” IEEE Transactions on Image Processing, vol. 26, no. 8, pp. 3951–3964, Aug. 2017.
  • [26] H. Tang, N. Joshi, and A. Kapoor, “Blind image quality assessment using semi-supervised rectifier networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 2877–2884.
  • [27] S. Bianco, L. Celona, P. Napoletano, and R. Schettini, “On the use of deep learning for blind image quality assessment,” CoRR, vol. abs/1602.05531, 2016.
  • [28] L. Zhang, L. Zhang, X. Mou, and D. Zhang, “FSIM: A feature similarity index for image quality assessment,” IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, Aug. 2011.
  • [29] K. Ma, K. Zeng, and Z. Wang, “Perceptual quality assessment for multi-exposure image fusion,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3345–3356, Nov. 2015.
  • [30] J. B. Tenenbaum and W. T. Freeman, “Separating style and content,” in Advances in Neural Information Processing Systems, 1997, pp. 662–668.
  • [31] K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in Advances in Neural Information Processing Systems, 2014, pp. 568–576.
  • [32] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach, “Multimodal compact bilinear pooling for visual question answering and visual grounding,” CoRR, vol. abs/1606.01847, 2016.
  • [33] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
  • [34] X. Pennec, P. Fillard, and N. Ayache, “A Riemannian framework for tensor computing,” International Journal of Computer Vision, vol. 66, no. 1, pp. 41–66, Jan. 2006.
  • [35] E. C. Larson and D. M. Chandler, “Most apparent distortion: Full-reference image quality assessment and the role of strategy,” Journal of Electronic Imaging, vol. 19, no. 1, pp. 1–21, Jan. 2010.
  • [36] D. Jayaraman, A. Mittal, A. K. Moorthy, and A. C. Bovik, “Objective quality assessment of multiply distorted images,” in Signals, Systems and Computers, 2013, pp. 1693–1697.
  • [37] VQEG, “Final report from the video quality experts group on the validation of objective models of video quality assessment,” 2000. [Online]. Available: http://www.vqeg.org
  • [38] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in IEEE International Conference on Computer Vision, 2015, pp. 1026–1034.
  • [39] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” CoRR, vol. abs/1412.6980, 2014.
  • [40] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning, 2015, pp. 448–456.
  • [41] A. Vedaldi and K. Lenc, “MatConvNet: Convolutional neural networks for Matlab,” in ACM International Conference on Multimedia, 2015, pp. 689–692.
  • [42] W. Xue, X. Mou, L. Zhang, A. C. Bovik, and X. Feng, “Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features,” IEEE Transactions on Image Processing, vol. 23, no. 11, pp. 4850–4862, Nov. 2014.
  • [43] J. Xu, P. Ye, Q. Li, H. Du, Y. Liu, and D. Doermann, “Blind image quality assessment based on high order statistics aggregation,” IEEE Transactions on Image Processing, vol. 25, no. 9, pp. 4444–4457, Sep. 2016.
  • [44] Z. Wang and E. P. Simoncelli, “Maximum differentiation (MAD) competition: A methodology for comparing computational models of perceptual quantities,” Journal of Vision, vol. 8, no. 12, pp. 8.1–8.13, Sep. 2008.
  • [45] Y. Gao, O. Beijbom, N. Zhang, and T. Darrell, “Compact bilinear pooling,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 317–326.