GANs-NQM: A Generative Adversarial Networks based No Reference Quality Assessment Metric for RGB-D Synthesized Views

03/28/2019 ∙ by Suiyi Ling, et al. ∙ University of Nantes

In this paper, we propose a no-reference (NR) quality metric for RGB plus depth (RGB-D) synthesized images based on Generative Adversarial Networks (GANs), namely GANs-NQM. Because inpainting often fails on dis-occluded regions in the RGB-D synthesis process, capturing the non-uniformly distributed local distortions and learning their impact on perceptual quality are challenging tasks for objective quality metrics. In our study, based on the characteristics of GANs, we propose i) a novel training strategy of GANs for RGB-D synthesized images using existing large-scale computer vision datasets rather than RGB-D datasets; ii) a referenceless quality metric based on the trained discriminator, obtained by learning a `Bag of Distortion Words' (BDW) codebook and a local distortion region selector; iii) a hole filling inpainter, i.e., the generator of the trained GANs, for RGB-D dis-occluded regions as a side outcome. According to the experimental results on the IRCCyN/IVC DIBR database, the proposed model outperforms the state-of-the-art quality metrics and, in addition, is more applicable in real scenarios. The corresponding context inpainter also shows appealing results over other inpainting algorithms.




1 Introduction

Nowadays, thanks to the rapid development of RGB-D equipment and immersive multimedia technologies, scenarios like 3D-TV [1], Free-viewpoint TV (FTV) [2], Virtual Reality (VR), and Augmented Reality (AR) have attracted growing user interest and started a new revolution in viewing experience. For instance, FTV offers a ‘flying in the scene’ experience by allowing users to change viewpoints freely in applications like virtual conferences, live broadcasts of concerts/matches, remote surveillance, etc. To realize this functionality, new virtual views need to be rendered from the limited reference videos taken by neighboring reference RGB-D cameras. Since compressing and transmitting massive numbers of RGB-D format images/videos is expensive and inefficient, RGB-D view synthesis is important for these use cases to provide users with an immersive experience. Depth-image-based rendering (DIBR) [3] is one of the most widely adopted techniques for synthesizing new views, taking advantage of both the RGB images and the depth maps.

Nevertheless, DIBR techniques usually introduce challenging distortions in synthesized views, mainly due to occlusions and inaccurate depth maps. A general DIBR based view synthesis scheme can be summarized as the five-step framework shown in Fig. 1, proposed by MPEG-FTV [4]. Different types of local non-uniform distortions may be introduced in the different steps of the DIBR synthesis procedure, including local geometric distortion, object shifting/deformation, and, most of the time, inpainting related distortions [5].

It’s worth noting that during the RGB image (texture) mapping process, regions that can be seen in the virtual views but are occluded in the reference views remain as dark holes, commonly termed dis-occluded regions (‘dis-occlusions’, or ‘holes’). Apart from the big dark holes caused by occlusion, ‘small holes’ may also be introduced by round-off error [6] (if the warped pixel coordinates do not fall on integer positions at the virtual viewpoint, they are usually either interpolated or rounded to the nearest integer position). The ‘goodness’ of an RGB-D view synthesized with DIBR based techniques highly depends on the hole filling process in these dis-occluded regions, especially when large dis-occlusions exist [7]. Therefore, most recent efforts in this domain have been spent on developing hole filling algorithms [8, 7, 9, 10, 11, 12]. However, these inpainting algorithms may induce ‘blurry regions’. Particularly, when it comes to complex texture regions where inpainting algorithms fail to fill up the missing holes, incorrect rendering of texture regions, namely the ‘ghosting artifact’, may also occur. These blurry or poorly inpainted regions are highly visible, since they always lie along foreground-background transitions [13] and sometimes even degrade the structure.

Figure 1: Diagram of DIBR synthesis algorithm. (1) Pre-processing procedure: camera parameters of both the reference and synthesized views are utilized to obtain the projection matrix. (2) Depth mapping/virtual depth map generation, both the left and the right reference depth maps are warped to generate the corresponding virtual depth map by doing forward warping with the transform matrix, i.e., the 3D warping from the reference views to the virtual ones. (3) RGB image/texture mapping, the texture of the virtual view is synthesized by reverse warping, which is to map the texture from the reference views pixel-wise to the virtual ones with the virtual depth maps. (4) Blending/‘occlusion handling’ process, the left and right synthesized textures are blended together to recover the dis-occluded regions by borrowing information from the two reference views. (5) Hole filling process, inpainting methods are employed to fill up the holes that cannot be handled in the previous steps.
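As a rough illustration of the warping steps above, the following sketch (a simplified, hypothetical 1D version: integer disparities, a single reference view, no z-buffering) shows how forward warping leaves dis-occluded pixels as holes:

```python
import numpy as np

def forward_warp(texture, disparity):
    """Toy forward warping: shift each pixel of a reference row to the
    virtual view by its integer disparity; target pixels that no source
    pixel maps to are left as holes (marked -1)."""
    h, w = texture.shape
    virtual = -np.ones_like(texture)  # -1 marks unfilled (dis-occluded) pixels
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]  # column in the virtual view
            if 0 <= xv < w:
                virtual[y, xv] = texture[y, x]
    return virtual
```

Foreground pixels (larger disparity) shift further than the background, so the area they uncover receives no value; those unfilled pixels are exactly the ‘holes’ addressed by the blending and hole filling steps.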

Examples of dis-occluded regions and non-uniform inpainted distortions are shown in Fig. 2. Fig. 2(a) is a reference image; the corresponding synthesized view is shown in Fig. 2(b), where the dis-occluded regions are distributed locally and non-uniformly, as shown in the error map in Fig. 2(c). After inpainting the dis-occluded regions, locally disturbing inpainting artifacts can be observed; examples are shown in Fig. 2(d)-(h).

Figure 2: Examples of dis-occluded regions and non-uniform inpainted distortions in DIBR based synthesized views. (a) The reference view. (b) Synthesized view with dis-occluded regions. (c) Error map between the reference and synthesized images (the darker the color, the more distortion there is). The green box marks a region with non-perceivable distortion, while the red box marks a region with big black holes. (d)-(h) Examples of local non-uniform inpainted distortions.

Quality control of the entire multi-view display system is important in order to provide users with a qualified service. For instance, in an FTV system, the synthesis process can be the ‘bottleneck’ that prevents delivery of a good-quality free viewing experience. Unlike traditional uniform compression distortions, the local non-uniform, structure related distortions introduced by DIBR synthesis algorithms can be predominant in impacting perceived quality. In extreme cases, the entire viewing experience of a Free Viewpoint Video (FVV) can be ruined by only one poor-quality region in one synthesized view [14]. Therefore, a robust quality assessment metric is necessary to benchmark DIBR based synthesis algorithms, provide guidance for the optimization of the overall system performance, and improve the users’ Quality of Experience (QoE). However, these DIBR related artifacts, especially the ones introduced by dis-occluded region filling, are challenging for existing commonly used quality metrics [14, 5], as those metrics are designed for uniformly distributed artifacts. A specific model that can capture the local non-uniform inpainting distortions in the RGB-D view synthesis process is needed. To solve this issue, we turn to state-of-the-art machine learning techniques by asking ourselves:

‘How to detect the non-uniform inpainting artifacts without reference and eventually to learn the impact of them on the perceived quality?’

Generative Adversarial Networks (GANs), first proposed by Goodfellow et al. [15], might be an answer to this question. GANs have been widely used for solving problems in different domains, including 3D structure reconstruction [16], semantic inpainting (i.e., the context encoder) [17, 18, 19], face recognition [20], realistic image synthesis [21], etc. The main idea of the adversarial nets framework is to train a generator (G) and a discriminator (D) simultaneously: a generative model that captures the data distribution and a discriminative model [15] that is able to tell the ‘real’ image from the generated one. They are trained together so that the discriminator keeps pitting against the generator until the counterfeits produced by the generator can no longer be distinguished by the discriminator. By doing so, both of them are driven to improve their performance until the probability that D makes a mistake is maximized. Taking the context encoder as an example, the goal of G is to inpaint an image, given a mask map indicating the regions to be inpainted, so that it looks ‘real’ to the discriminator; the goal of D is to distinguish between an inpainted image and a real image, i.e., the ground truth image.

Considering the case of synthesizing RGB-D views with DIBR techniques, the most annoying non-uniform distortions are usually the non-continuous inpainted distortions introduced in the hole filling stage. If the context encoder is trained to inpaint dis-occluded regions like the ones shown in Fig. 2(b), then the discriminator, which is trained along with the generator, can tell whether or not an input image is inpainted. Furthermore, if the input image is divided into local patches, the local inpainted regions can be detected by the trained discriminator, and the perceptual quality of the whole image can also be learned.

Based on this assumption, in this paper, an NR quality metric is proposed for evaluating RGB-D synthesized images with the following contributions: 1) A novel training strategy for deep neural networks (DNNs) on RGB-D synthesized views is proposed, i.e., a GANs based context inpainter trained by designing special masks similar to the dis-occluded regions induced in the general DIBR process. As a by-product, the trained context inpainter (generator) can also be used within the DIBR framework for dis-occluded region inpainting; 2) A local non-uniform distortion region detection strategy is proposed based on the pre-trained discriminator; 3) A quality-aware ‘Bag of Distortion Words’ is learned from high-level features of the discriminator to obtain a novel quality-relevant representation for each synthesized image. The source code is publicly available on GitHub.

The remainder of this paper is organized as follows. Related works, including the state-of-the-art quality metrics designed for RGB-D synthesized images and details of the GANs based context inpainter, are introduced in Section 2. In Section 3, the proposed model is described in detail. The experimental results and analysis are then presented in Section 4. Finally, conclusions are given in Section 5.

2 Related Work

2.1 Conventional image quality metrics not applicable to RGB-D synthesized images

There are numerous image quality metrics [22] designed for evaluating uniformly distributed distortions, for instance, the blurriness and blockiness induced by different compression technologies. These metrics have achieved great success and show high consistency with human perception. However, most of these conventional quality metrics, including PSNR and SSIM [23], fail to predict the perceived quality of RGB-D synthesized views well, for the following reasons:

  1. Point-wise metrics like PSNR over-penalize the acceptable, globally uniform ‘object shifting’ artifact caused by mis-matched correspondences. In other words, a global shift of objects that may not even be noticed by human observers can be penalized heavily by this type of metric.

  2. Most existing quality metrics are not designed for local non-uniform artifacts, such as the inpainted distortions. When observing an image, artifacts located in a ‘region of interest’ are much more annoying than those located in inconspicuous areas [24]. Meanwhile, ‘poor quality’ regions in an image are perceived by humans with more severity than ‘good quality’ ones. Thus, images with even a small number of ‘poor quality’ regions are penalized more gravely. In the case of DIBR synthesized views, most of the geometric and inpainting distortion regions are located along the boundaries of ‘regions of interest’ [25], which makes them easier for observers to notice and leads to bad quality judgments.

  3. In practice, considering that references for virtual RGB-D views are generally not available, a well-designed no-reference (NR) quality metric capable of quantifying the impact of those local synthesis related distortions on perceptual quality is in urgent demand. In the wake of developments in machine learning, many NR quality assessment models based on advanced deep learning schemes have been proposed recently. However, most of them rely on the assumption that the perceived quality of local regions is the same as the perceptual quality of the entire image [26, 27, 28]. This assumption may work for images containing uniform distortions, but it does not hold for those containing non-uniform distortions, as shown in Fig. 2, since severely distorted local regions are more disturbing and greatly affect the perceived quality of the entire image [24].

2.2 Existing quality metrics for RGB-D synthesized images

To resolve the issues mentioned above, objective quality assessment metrics designed for RGB-D synthesized views have been developed; they can generally be classified into full reference (FR) and NR metrics. Among the FR metrics, one of the very first is VSQA [29], which improves SSIM with three visibility maps characterizing the complexity of the images. 3DswIM is proposed by Battisti et al. [13] based on statistical features of wavelet sub-bands. Stanković [30] first employed morphological wavelet decomposition for quality assessment of synthesized images, namely MW-PSNR. Later, another metric, which equips PSNR with morphological pyramid decomposition (MP-PSNR), was proposed in [31]. Targeting the problem that the global shifting artifact is normally over-penalized by point-wise metrics, CT-IQM [32] was proposed using a context tree based encoding scheme. To quantify the change of contour categories at a higher level, ST-IQM was proposed in [33] using the Sketch Token descriptor. To quantify the deformation of curves in synthesized views, EM-IQM was proposed in [34] based on an elastic metric. Li et al. [35] proposed LOGS, considering both the geometric distortions and the sharpness of the images. Nevertheless, since the references of synthesized views are generally not available, NR metrics are more desirable. Compared to the FR metrics mentioned above, only a few NR metrics have been designed for synthesized views. In [36], NIQSV was proposed based on the strong hypothesis that high-quality images consist of flat areas separated by edges. Later, NIQSV+ was introduced in [37] to improve NIQSV by taking ‘black holes’ into account. Recently, a novel NR quality metric for DIBR-synthesized images was proposed in [14] using an auto-regression (AR) based local image description. However, all of these metrics still suffer from at least one of the following issues: 1) over-penalizing uniform shifting; 2) underestimating local non-uniform inpainted distortions; 3) high computational complexity.

3 The Proposed Model

In this section, the proposed GANs based No-reference Quality Metric for synthesized views (GANs-NQM) is described in detail.

Figure 3: Diagram of the proposed model: (1) Deep GANs context encoder pre-training; (2) Distortion codebook training; (3) Quality predicting.

First and foremost, training a deep neural network (in our case, GANs) requires a large number of data samples. Even though many RGB-D databases [38] are already publicly available for computer vision tasks, the diversity of their contents is quite limited compared to general 2D image datasets such as ImageNet [39], CIFAR-10/100 [40], PASCAL VOC [41], and the Places challenge [42]. Further considering that the depth data in RGB-D datasets is generally noisy, in this paper we propose a novel training strategy for RGB-D synthesized views that uses no RGB-D database (RGB image plus depth map); instead, special masks are designed to mimic the holes located at the possible dis-occluded or distorted regions introduced in the DIBR process.

In our proposed model, a generator is trained to inpaint images with holes similar to those in synthesized views generated by DIBR based methodologies; the discriminator is then used to capture the quality information of the RGB-D synthesized views. The entire scheme is depicted in Fig. 3. Details of each procedure are given in the following subsections.

3.1 Simulating the process of inpainting dis-occluded regions using GANs

3.1.1 Semantic image inpainting based on GANs

In the field of computer vision, semantic inpainting is a relatively new application, where the goal is to infer the missing regions within an image according to its semantics. Unlike traditional inpainting or texture synthesis methodologies, semantic inpainting [43, 44, 45, 46, 18, 19] aims at filling the missing parts using statistical information from external datasets instead of relying only on the internal properties of the image to be inpainted.

Among the existing state-of-the-art semantic context inpainters, the ones proposed in [18, 19] based on Generative Adversarial Networks (GANs) provide the most appealing performance. In both works, the proposed context encoder (generator G) is designed as an auto-encoder that takes an unfilled image as input. In detail, to ensure continuity within the context, an L2 norm reconstruction loss is defined in Equation (1) to regress the missing parts to the ground truth content:

L_rec(x) = || M ⊙ (x − G((1 − M) ⊙ x)) ||_2,   (1)

where M denotes the binary mask indicating the missing regions to be inpainted, x is the ground truth image, and ⊙ is the element-wise product. To overcome the blurry preference problem caused by the L2 loss, i.e., its tendency to predict the mean of the distribution, resulting in an averaged, blurrier image, an adversarial loss L_adv is introduced so that both terms are jointly optimized, as formalized in Equation (2):

L = λ L_rec + (1 − λ) L_adv,   (2)

where λ is a hyper-parameter balancing the weights between the two losses, and L_adv is further defined in Equation (3) by customizing GANs for the context encoder task with the mask M:

L_adv = max_D E_x [ log(D(x)) + log(1 − D(G((1 − M) ⊙ x))) ].   (3)
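A minimal numerical sketch of the reconstruction and joint losses described above (function names are illustrative; the default weight lam = 0.999 mirrors the reconstruction-heavy setting commonly reported for context encoders and is an assumption here):

```python
import numpy as np

def reconstruction_loss(x, x_inpainted, mask):
    """Masked L2 reconstruction loss: only errors inside the hole mask count."""
    diff = mask * (x - x_inpainted)
    return float(np.sqrt(np.sum(diff ** 2)))

def joint_loss(l_rec, l_adv, lam=0.999):
    """Joint objective: lam trades off reconstruction vs. adversarial loss."""
    return lam * l_rec + (1.0 - lam) * l_adv
```
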
In this paper, based on a similar recipe, we design our own masks M, which mimic the ‘dis-occluded’ regions that appear during the DIBR process, and retrain a new ‘context inpainter’. Then, we explore using the pre-trained discriminator to evaluate the quality of RGB-D synthesized images.

3.1.2 Design of masks

As introduced before, dis-occluded regions are introduced during the DIBR synthesis process. There are mainly two types: 1) edge-like holes located along the boundaries of foreground objects, as shown in Fig. 4(a), and 2) small or medium sized holes distributed throughout the entire image, as shown in Fig. 4(b). The shapes of these regions are normally related to the shapes of objects. These regions can be filled with certain inpainting algorithms; however, inpainting related artifacts may then be introduced.

Generally, dis-occluded regions located along the border between the foreground and the background are challenging for existing inpainting algorithms. It is common to see foreground regions inpainted with background pixels, or vice versa; as a result, the structures of objects are disrupted. Structure related degradation around foreground objects, accompanied by inter-view inconsistency in depth, may then cause binocular rivalry, binocular suppression, or binocular superposition [47, 48], which eventually lead to visual discomfort. Concerning the issues above, and to train a new context encoder capable of inpainting the distortions mentioned above, two types of masks are designed:

Figure 4: Examples of typical dis-occluded regions introduced during DIBR based view synthesis. (a) Examples of dis-occluded regions around foreground objects’ boundaries (bounded by green boxes); (b) examples of small and medium sized dis-occluded regions (bounded by red and blue boxes, respectively) distributed throughout the image.
  • Mask I: to mimic the holes in dis-occluded regions, which generally appear around foreground objects’ boundaries. The mask is designed as the dilated object boundaries. An example is shown in Fig. 5(c).

  • Mask II: to mimic the shifted objects’ boundaries in synthesized views induced by compression of the depth map [5]. We generate the second type of mask by simply shifting the first type by a certain number of pixels, as shown in Fig. 5(d).

Generally, it is easier to inpaint smooth regions with homogeneous textures than complicated regions with non-homogeneous textures, as the context around a smoother region is more ‘copyable’ and less structure is involved. Hence, the quality of inpainted regions within homogeneous textures is generally better than within non-homogeneous ones. To train a more powerful context encoder, the selected masks should contain contents/structures that cannot be replicated from the surroundings. In addition, unlike the big connected masks with arbitrary locations introduced in [19], dis-occluded regions or missing parts in a virtual view are generally disconnected, and the shapes of these regions are always related to the foreground objects (i.e., related to the depth map). With these two concerns, a third mask is proposed:

  • Mask III: The SLIC super-pixel algorithm [49] is used to select the regions where masks should be located for later training. More specifically, an image is first segmented into a set of super-pixels, as shown in Fig. 6(a) and (d). Then, two mask sizes are considered: super-pixels that contain fewer than 100 pixels are used as small size masks, while super-pixels containing 200 to 1000 pixels are used as medium size masks. Examples are presented in Fig. 6. The black holes in Fig. 6(b) and (e) are small size masks, similar to the small holes shown in Fig. 4(b); the holes in Fig. 6(c) and (f) are of medium size. By doing so, 1) the selected masks are separately distributed over the entire image; 2) the shapes of the masks are related to objects; 3) the content within each mask region is more independent of its neighborhood.

(a) Image
(b) Labeled map
(c) Mask I
(d) Mask II
Figure 5: Examples of images in the training set with masks I and II.

(a) Super-pixel map (b) Small size mask (c) Medium size mask (d) Super-pixel map (e) Small size mask (f) Medium size mask

Figure 6: Examples of images in the training set with the designed mask III. Two mask sizes are considered.
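The size-based selection for Mask III can be sketched as follows, assuming a super-pixel label map (e.g., produced by SLIC) is already available; the function and parameter names are illustrative:

```python
import numpy as np

def build_mask_iii(labels, small_max=100, med_range=(200, 1000)):
    """From a super-pixel label map, build two binary hole masks:
    small super-pixels (< small_max pixels) and medium ones
    (med_range[0]..med_range[1] pixels), as described for Mask III."""
    small = np.zeros(labels.shape, dtype=bool)
    medium = np.zeros(labels.shape, dtype=bool)
    for lab in np.unique(labels):
        region = labels == lab
        n = int(region.sum())
        if n < small_max:
            small |= region
        elif med_range[0] <= n <= med_range[1]:
            medium |= region
    return small, medium
```

Super-pixels outside both size ranges are left untouched, so the resulting holes stay disconnected and object-shaped, as the text requires.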

3.2 Bag-of-Distortion-Words (BDW) codebook learning with pre-trained discriminator

As discussed before, the discriminator serves as an indicator of whether a patch is well inpainted or not, so the output of the discriminator is related to the quality of the patch. It is therefore reasonable to hypothesize that the intermediate output of D is strongly related to the inpainting related distortions that affect perceived quality significantly. Based on this hypothesis, we propose to use the discriminator to obtain a latent codebook with ‘codewords’ that represent different types of distortions. With this codebook, a higher-level representation can be obtained for each image. Details are given below.

To predict image level quality while accounting for local distortions, the image needs to be processed locally. Therefore, a set of multiple overlapping patches {p_1, …, p_N}, where N is the total number of patches, is used to represent the image, as done in [28]. In this study, the overlap is set to half of the patch size, and the patches are sampled over the whole image (along both the horizontal and vertical directions) to maintain as much structural information as possible. Afterwards, with the pre-trained GANs model, these patches are fed into the adversarial discriminator to extract higher-level features for later patch categorization. For each patch p_i in the dataset used for codebook training, its corresponding feature vector f_i is extracted from a layer of the discriminator as:

f_i = D_l(p_i),   (4)

where D_l denotes the output of the l-th layer of the discriminator. In this study, the feature vector is extracted from the last convolutional layer of the discriminator (details are given in Section 3.3). In this way, one feature vector is obtained for every patch of every image in the codebook training set.
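The overlapping patch sampling described above can be sketched as follows (the patch size is illustrative; the stride is half the patch size, as stated):

```python
import numpy as np

def extract_patches(img, patch=32):
    """Sample patches over the whole image, horizontally and vertically,
    with an overlap of half the patch size."""
    stride = patch // 2
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
    return np.stack(patches)
```
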


Figure 7: Selected ‘Words’ in the learned BDW Codebook.

With the set of extracted features in correspondence to their patches, we now look for a new representation of the entire image that takes the intermediate output of the discriminator into account, so that this new representation can link the local information with the quality of the entire image.

Intuitively, the idea is to categorize image patches into different clusters that can serve as representatives of perceived quality, so that the quality of a tested image can be quantified by checking how many ‘good’ or ‘poor’ patches it has. Formally, the feature vectors f_i of the patches are reshaped into one-dimensional vectors and clustered into K clusters using an advanced clustering algorithm [50], a fast nearest neighbor algorithm that is robust for matching high dimensional vectors. Selected cluster results are shown in Fig. 7. It can be observed that patches with a similar type of distortion are gathered in the same cluster as a ‘distortion word’. For example, two of the clusters consist of patches with ‘dark holes’, and the holes in one are obviously larger than in the other, indicating worse quality. Among the other clusters shown in the figure, one contains distortions that are imperceivable (guaranteeing good quality), while the others exhibit more obvious inpainting related artifacts. Naturally, each ‘codeword’ in the clustered ‘codebook’ represents a certain level of quality with respect to the type and magnitude of distortion, which is consistent with our hypothesis. Based on this observation, the trained codebook is named the ‘Bag of Distortion Words’ (BDW). With the BDW codebook, each image can then be encoded as a histogram h = (h_1, …, h_K), where each h_k is defined as

h_k = (1/N) Σ_{i=1}^{N} 1[c(p_i) = k],   (5)

where 1[·] is an indicator function that equals 1 if the specified binary clause is true, c(p_i) is the cluster to which patch p_i is assigned, and N is the number of patches within the image. An intuitive interpretation of this BDW based representation is that the histogram statistically quantifies how many ‘good quality’ and ‘poor quality’ patches a synthesized image contains. As a significant local synthesized distortion is more annoying than a global uniform one, this new representation is a higher-level quality descriptor that can indirectly predict the overall quality of an image. During clustering, K is an important parameter that impacts the final performance; the selection of K is therefore discussed further in Section 4.2.1.
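Given cluster centroids learned offline, this encoding amounts to a nearest-codeword assignment followed by a normalized count. A minimal sketch (using brute-force nearest-neighbor search rather than the fast matcher of [50]):

```python
import numpy as np

def bdw_histogram(features, centroids):
    """Encode an image as a BDW histogram: assign each patch feature to
    its nearest codeword and count relative frequencies."""
    # pairwise distances between every feature and every centroid
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)            # cluster index c(p_i) per patch
    k = centroids.shape[0]
    hist = np.bincount(assign, minlength=k).astype(float)
    return hist / features.shape[0]      # normalize by the patch count N
```
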

3.3 Local distortion regions selection

Generally, artifacts located in a region of interest are much more annoying than those located in an inconspicuous area [24]. In our case, ‘poor’ quality regions (i.e., holes and inpainting artifacts) generally lie in regions of interest (such as foreground objects); thus, they are more likely to attract observers’ attention than the ‘good’ ones. Therefore, images with even a small number of ‘poor’ regions are penalized more gravely by observers, and it is reasonable to apply the same penalization in the objective model as well.

Moreover, as discussed before, the discriminator is trained to distinguish artificially generated pictures (inpainted images in this case) from real ones. A well-trained discriminator is supposed to be able to indicate poorly inpainted regions. The output of the discriminator is a boolean value indicating whether the input patch is inpainted or not, where ‘1’ stands for real patches and ‘0’ for generated patches. It is intuitive to hypothesize that patches assigned ‘0’ by the discriminator are those with poor quality. Hence, the discriminator is further utilized as a ‘poor’ quality patch selector, and Equation (5) can be modified to:

h_k = (1/N) Σ_{i=1}^{N} 1[c(p_i) = k] · (D(p_i) ⊕ 1),   (6)

where D(p_i) is the direct boolean output of the pre-trained discriminator when taking patch p_i as input, and ⊕ is the exclusive OR operation; D(p_i) ⊕ 1 equals 1 if D(p_i) = 0.

Apart from using the final boolean output of the discriminator for selecting the possibly inpainted regions, another possibility is to use the output just before the final sigmoid layer (i.e., the output of the last convolutional layer) with normalization. To do this, the outputs of the last convolutional layer of the discriminator for all the training patches are collected and normalized into the range [0, 1]. After normalization, this output serves as a probability value indicating whether the test patch is natural (non-inpainted) or not: a smaller value represents a higher probability that the patch is inpainted and has a greater magnitude of distortion. Afterwards, patches with a certain magnitude of inpainting distortion can be selected according to a threshold T, meaning that only poorly inpainted regions below a certain quality level are taken into account in the final quality decision. By doing so, Equation (6) can be further rewritten as:

h_k = (1/N) Σ_{i=1}^{N} 1[c(p_i) = k] · 1[D_conv(p_i) < T],   (7)

where D_conv(p_i) denotes the normalized output of the last convolutional layer of the discriminator with patch p_i as input, and T is the threshold for poor-quality patch selection. The setting of T is discussed in Section 4.2.4.
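The selective counting described above can be sketched as follows: a patch contributes to the histogram only when its normalized discriminator score falls below the threshold T (names and the default threshold are illustrative):

```python
import numpy as np

def selective_bdw_histogram(assignments, d_scores, k, threshold=0.5):
    """Count a patch in bin c(p_i) only when its normalized discriminator
    score is below the threshold T, i.e., it is flagged as poorly inpainted."""
    n = len(assignments)
    hist = np.zeros(k)
    for c, s in zip(assignments, d_scores):
        if s < threshold:
            hist[c] += 1.0
    return hist / n
```
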

3.4 Final quality prediction

After extracting the histogram h, Support Vector Regression (SVR) with a linear kernel is applied to it to predict the final quality score. In the experiment, the entire database is divided into a 20% validation set for model parameter selection (e.g., codebook training) and 80% for performance evaluation. During the performance evaluation procedure, 1000-fold cross-validation is applied: for each fold, the remaining 80% of the dataset is further randomly split into 80% of the images for SVR training and 20% for testing, with no overlap between them [51] (no identical viewpoint of the same content). The median Pearson Correlation Coefficient (PCC), Spearman rank order Correlation Coefficient (SCC), and Root Mean Square Error (RMSE) between subjective and objective scores across the 1000 runs are reported for performance evaluation.
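The three evaluation criteria can be computed as below (a plain numpy sketch; the Spearman rank computation here assumes there are no tied scores):

```python
import numpy as np

def pcc(a, b):
    """Pearson correlation between objective and subjective scores."""
    return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])

def scc(a, b):
    """Spearman rank-order correlation: Pearson correlation of the ranks
    (valid when there are no ties)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pcc(rank(np.asarray(a, float)), rank(np.asarray(b, float)))

def rmse(a, b):
    """Root mean square error between the two score sets."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```
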

4 Experimental Result

The performance of the proposed GANs-NQM is evaluated on the IRCCyN/IVC DIBR image database [5]. Images from this database are obtained from three multi-view RGB-D sequences: Book Arrival, Lovebird1, and Newspaper. Seven RGB-D synthesis algorithms, labeled A1-A7 [3, 52, 6, 53, 9, 54], are used to process the three sequences and generate four new virtual views for each of them. The database is composed of 84 synthesized views and 12 original frames extracted from the corresponding sequences, along with subjective quality scores in the form of mean opinion scores (MOS).

In our study, as the objective is to evaluate the quality of RGB-D synthesized views, differential MOS (DMOS) [55] values are calculated and used. Data augmentation is conducted to provide a more robust performance evaluation by successively rotating each image in the database by 90°, 180°, and 270° counterclockwise, which results in a total of 384 images. Unlike other data augmentation methodologies (such as scaling), the rotation operation does not introduce new distortions; we thus assume that the quality of the augmented images remains unchanged. The performance evaluation procedure is conducted according to [51], as described in Section 3.4. For training the BDW codebook, 20% of the augmented data is utilized as the validation set, which provides the patches used for codebook training.
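The rotation-based augmentation can be sketched as follows (quality labels are simply replicated, since rotation adds no new distortion):

```python
import numpy as np

def augment_by_rotation(images):
    """Return each image together with its 90, 180 and 270 degree
    counterclockwise rotations (4x the data, e.g. 96 -> 384 images)."""
    out = []
    for img in images:
        for k in range(4):  # k = 0 keeps the original image
            out.append(np.rot90(img, k))
    return out
```
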

4.1 GANs-based dis-occluded region inpainter

4.1.1 Training data

To generate a new dataset with the three masks mentioned above, we collect images from the PASCAL VOC 2012 [41] and Places [42] databases, yielding 10K training images in total.

  • PASCAL VOC 2012 database: This database was originally built for a challenge on recognizing objects from a number of visual object classes in realistic scenes. It contains 3K images with twenty object classes, ranging from people and animals to vehicles and indoor scenes. One of the merits of this database is that it provides pixel-wise segmentation labels, which delineate the boundary of 'objects' against the 'background' label. An example is given in Fig. (a) and Fig. (b). In our study, we utilize these segmentation labels to generate Mask I and Mask II mentioned above, which leads to 6K training images.

  • Places database: To obtain a dataset balanced with Mask I and II, the validation set of the 'Places Challenge 2017', which contains around 2K images, is selected as part of the training set with Mask III mentioned above. This dataset covers diverse contents, varying from outdoor landscapes and city views to indoor portraits. As there are two mask sizes in Mask III, this leads to 4K training images.
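A hole mask hugging an object boundary, mimicking the black dis-occluded regions produced by DIBR warping, can be sketched as follows. The exact definitions of Mask I/II are not reproduced here; this is an assumed approximation that marks a band of background pixels to one side of each object/background transition.

```python
import numpy as np

def disocclusion_mask(segmentation, width=8):
    """Mark up to `width` background pixels to the right of each
    object/background transition as 'holes' (a rough stand-in for the
    dis-occlusions exposed when warping to a laterally shifted viewpoint)."""
    h, w = segmentation.shape
    mask = np.zeros((h, w), dtype=bool)
    obj = segmentation > 0
    for d in range(1, width + 1):
        # background pixel is masked if an object pixel sits d columns to its left
        shifted = np.zeros_like(obj)
        shifted[:, d:] = obj[:, :-d]
        mask |= shifted & ~obj
    return mask

seg = np.zeros((64, 64), dtype=np.uint8)
seg[16:48, 16:32] = 1                      # a synthetic 'object'
mask = disocclusion_mask(seg, width=8)
```

The masked pixels would then be blacked out in the RGB image before it is fed to the generator for inpainting.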

4.1.2 Training process

The framework of the 'context inpainter' is implemented based on the pipeline developed by Pathak et al. [19] with the Caffe and Torch packages. The commonly used stochastic gradient descent variant Adam [56] is used for optimization. We start with a learning rate of 0.0002, as set in DCGAN [57], but with a different bottleneck of 4000 units. In our experiment, the impact of the trade-off between G and D, i.e., different λ in Equation (2), on the performance of the proposed metric has been tested.
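For reference, a single Adam [56] update can be written in plain NumPy as below. The learning rate 0.0002 follows the paper; beta1 = 0.5 is the DCGAN [57] convention and is an assumption here, as the paper does not state its momentum values.

```python
import numpy as np

def adam_step(theta, grad, state, lr=2e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """One Adam update: exponentially averaged first/second moments with
    bias correction, then a scaled gradient step."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad        # 1st moment
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2   # 2nd moment
    m_hat = state["m"] / (1 - beta1 ** state["t"])              # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)

theta = np.array([1.0, -1.0])
state = {"t": 0, "m": np.zeros(2), "v": np.zeros(2)}
theta_new = adam_step(theta, grad=np.array([0.5, -0.5]), state=state)
```

On the first step the update reduces to roughly lr · sign(gradient), which is why Adam is robust to the raw gradient scale.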

For the architecture of the GANs network, as it was shown in [19] that finer inpainting results can be obtained by replacing pooling layers with convolutional ones, the pool-free structure is kept in this study. Furthermore, since the main focus of this section is to explore the discriminator for quality assessment of synthesized views with local non-uniform distortions, we only change the architecture of the discriminator. Details of all the discriminator architectures tested in this study are summarized in Table I. The main difference between the first architecture and the other two is the size of the images that can be fed into it; the third architecture is less complex than the other two, with the number of convolutional kernels halved in each layer. With such a design, we can check how the input size and the complexity of the discriminator influence the performance of the proposed scheme.

Layer In InSize k s OutL Act
Discriminator architecture 1
conv_1 image 4 1 64 Leaky
conv_2 conv_1 4 1 128 Leaky
conv_3 conv_2 4 1 256 Leaky
conv_4 conv_3 4 1 512 Leaky
conv_5 conv_4 4 1 1 Sig
Discriminator architecture 2
conv_1 image 4 1 32 Leaky
conv_2 conv_1 4 1 64 Leaky
conv_3 conv_2 4 1 128 Leaky
conv_4 conv_3 4 1 256 Leaky
conv_5 conv_4 4 1 512 Leaky
conv_6 conv_5 4 1 1 Sig
Discriminator architecture 3
conv_1 image 4 1 16 Leaky
conv_2 conv_1 4 1 32 Leaky
conv_3 conv_2 4 1 64 Leaky
conv_4 conv_3 4 1 128 Leaky
conv_5 conv_4 4 1 256 Leaky
conv_6 conv_5 4 1 1 Sig
Table I: Different discriminator architectures tested in this study. 'In' is the input of each layer, 'InSize' is the input size of each layer, 'k' is the kernel size, 's' is the stride, 'OutL' is the number of output channels of each layer, and 'Act' is the activation function of each layer.
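The relation between discriminator depth and accepted input size can be traced with the standard convolution output-size formula. Stride-2 downsampling is an assumption here, following the DCGAN [57] convention for pool-free discriminators; under that assumption a 5-conv stack accepts 64x64 inputs and a 6-conv stack 128x128.

```python
def conv_out(n, k=4, s=2, p=1):
    """Spatial output size of a convolution (floor division)."""
    return (n + 2 * p - k) // s + 1

def trace_discriminator(input_size, n_layers):
    """Trace the feature-map size through a DCGAN-style stack:
    (n_layers - 1) stride-2 4x4 convolutions, then a final 4x4,
    stride-1, no-padding convolution collapsing the map to 1x1."""
    sizes = [input_size]
    for _ in range(n_layers - 1):
        sizes.append(conv_out(sizes[-1], k=4, s=2, p=1))
    sizes.append(conv_out(sizes[-1], k=4, s=1, p=0))  # final real/fake output
    return sizes
```

For example, `trace_discriminator(64, 5)` yields `[64, 32, 16, 8, 4, 1]`, while `trace_discriminator(128, 6)` yields `[128, 64, 32, 16, 8, 4, 1]`.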

4.2 Performance dependency on hyper-parameters

4.2.1 Number of 'Distortion Words' in the BDW codebook

To check whether the performance of the proposed GANs-NQM is sensitive to the number of clusters, different cluster numbers for the codebook training are tested on the validation set. The results are shown in Fig. 8; the corresponding PCC/SCC curves are obtained by fixing all other related parameters. It can be observed that the performance of GANs-NQM in PCC/SCC rises gradually as the cluster number increases at the beginning; after the performance peaks at a certain number, it starts to drop gradually. The peak value is therefore used in the remainder of this study.
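The codebook step can be sketched with plain k-means; treating the discriminator's intermediate features as the patch descriptors is an assumption here, not the paper's exact pipeline.

```python
import numpy as np

def kmeans_codebook(features, n_words, n_iter=20, rng=0):
    """Learn a 'Bag of Distortion Words' codebook with plain k-means over
    one descriptor per patch."""
    rng = np.random.default_rng(rng)
    centers = features[rng.choice(len(features), n_words, replace=False)]
    for _ in range(n_iter):
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(n_words):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(0)
    return centers

def encode_histogram(features, centers):
    """Encode an image as a normalized histogram of its nearest 'words';
    this histogram is what the SVR regresses to a quality score."""
    d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

feats = np.random.default_rng(1).normal(size=(200, 16))  # 200 patch descriptors
codebook = kmeans_codebook(feats, n_words=8)
hist = encode_histogram(feats, codebook)
```

Sweeping `n_words` and re-validating, as done in Fig. 8, locates the cluster number at which PCC/SCC peak.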

Figure 8: Performance dependency of the proposed metric on the number of clusters

4.2.2 Solver hyper-parameter

As introduced in [19, 57], the solver hyper-parameter λ in Equation (2) is suggested to be set to 0.999. It is a tunable parameter balancing the reconstruction loss and the adversarial loss during training. Since the discriminator is utilized for both distortion region selection and higher-level feature extraction in this study, higher weights for the adversarial loss are tested, i.e., lower λ in Equation (2). The performances of the proposed model with different λ are reported in Table II, where the other parameters are fixed and the direct output of the discriminator is used for distortion region selection. Interestingly, it is found that the PCC increases as λ increases from 0.5 to 0.9, and drops beyond 0.9. In this study, we set λ = 0.9.
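Assuming Equation (2) follows the joint context-encoder objective of [19] on which the training is based, it takes the form:

```latex
\mathcal{L} \;=\; \lambda\,\mathcal{L}_{rec} \;+\; (1-\lambda)\,\mathcal{L}_{adv},
```

where $\mathcal{L}_{rec}$ is the masked L2 reconstruction loss of the generator, $\mathcal{L}_{adv}$ the adversarial loss, and $\lambda$ the solver hyper-parameter discussed above ([19] uses $\lambda = 0.999$).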

0.7802 0.8083 0.7821
0.7377 0.7536 0.7339
0.7266 0.7280 0.7273
Table II: Performance dependency of the proposed metric on the solver hyper-parameter and the discriminator architecture

4.2.3 Different discriminator architectures

The performances of the proposed model equipped with the different discriminator architectures described in Table I are also reported in Table II. It is found that, for any chosen λ, the proposed method always attains a better PCC with one particular architecture than with the other two; that architecture is therefore finally adopted for the discriminator in the proposed model.

4.2.4 Threshold

The influence of the threshold in Equation (7) on the performance of GANs-NQM is illustrated in Table III. Selecting a proper threshold for distortion region selection performs better than using the direct Boolean output of the discriminator. The performance climbs as the threshold increases until it reaches 0.7, then declines. In our model, the threshold is therefore set to 0.7.
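The region selection can be sketched as thresholding the discriminator's score map. Treating a low 'realness' score as a distorted region is an assumption about how Equation (7) is applied; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def select_distorted_regions(scores, threshold=0.7):
    """Select local regions judged distorted by the discriminator: keep a
    region when its score falls below the threshold (0.7, as in Table III)."""
    return scores < threshold

# Discriminator scores for a 4x4 grid of patches (1.0 = looks real)
scores = np.array([[0.9, 0.8, 0.2, 0.9],
                   [0.9, 0.1, 0.3, 0.8],
                   [0.8, 0.9, 0.9, 0.9],
                   [0.9, 0.9, 0.6, 0.9]])
mask = select_distorted_regions(scores, threshold=0.7)
```

Only the selected regions then contribute patches to the BDW histogram, concentrating the metric on the non-uniform local distortions.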

Boolean output of the discriminator 0.7463 0.7166 0.5045
Threshold = 0.3 0.7525 0.7259 0.4995
Threshold = 0.4 0.7631 0.7649 0.4593
Threshold = 0.5 0.7889 0.7600 0.4546
Threshold = 0.6 0.7963 0.7798 0.4176
Threshold = 0.7 0.8195 0.7920 0.4016
Threshold = 0.8 0.7704 0.7248 0.4723
Threshold = 1.0 0.7425 0.7062 0.5023
Table III: Performance dependency of the proposed metric on the threshold

4.3 Overall quality prediction performance

The performance of the proposed metric is compared with all the quality metrics developed for assessing synthesized views that are summarized in Section 2.2. For a fair comparison, the median performances of the compared metrics are also reported under the same 1000-fold cross-validation.

Performance results are summarized in Table IV. These metrics can be classified into FR and NR metrics. Parameters of GANs-NQM that yield the best performance are selected according to the previous discussion. It can be seen from Table IV that our proposed method attains the best performance within the group of NR metrics in terms of PCC, SCC, and RMSE. The gain of GANs-NQM over the second-best NR metric, APT, is 17% in terms of PCC. Furthermore, even compared to FR metrics, its performance is comparable to that of the best-performing metric, ST-IQM.

Full Reference Metrics (FR)
3DSwIM [13] 0.7266 0.6421 0.4304
VSQA [29] 0.5096 0.5064 0.5336
MP-PSNR [58] 0.7489 0.7011 0.4148
MP-PSNR [31] 0.7336 0.6634 0.4199
MW-PSNR [58] 0.7400 0.6836 0.4240
MW-PSNR [30] 0.7183 0.6419 0.4401
CT-IQM [32] 0.7107 0.6151 0.4481
EM-IQM [34] 0.7599 0.7012 0.4038
ST-IQM [33] 0.8462 0.7681 0.3415
No-Reference Metrics (NR)
NIQSV+ [37] 0.7010 0.5158 0.4553
APT [14] 0.7046 0.7198 0.4993
GANs-NQM(proposed) 0.8262 0.8072 0.3861

Table IV: Performance of the proposed metric and the state-of-the-art metrics

The scatter plots of all the tested quality metrics versus DMOS are provided in Fig. 9. By comparing the scatter plots of GANs-NQM with the others, we can notice that most of the objective scores predicted by the proposed metric are better distributed along the diagonal of the plot. In the scatter plots of APT and NIQSV+, images synthesized using the same DIBR algorithm are predicted with similar objective scores, which leads to a 'vertical line' as shown in Fig. (j) and Fig. (k).

Figure 9: Scatter plots of objective quality scores versus DMOS on IRCCyN/IVC DIBR database [5]. A1-A7 represent different DIBR algorithms in [5].


Figure 10: Results of using our re-trained generator to inpaint the dis-occluded regions. First column: reference patches; second column: patches with dis-occluded regions; third column: inpainted results using the algorithm proposed in [53]; fourth column: inpainted results using the algorithm proposed in [9]; fifth column: inpainted results using the algorithm proposed in [54]; sixth column: inpainted results using our re-trained generator.

In order to examine the significance of the performance differences between each pair of tested quality metrics, Student's t-test is conducted. More specifically, the 1000 PCC values obtained during the cross performance evaluation described in Section 3.4 for each tested metric are used as input for the t-test. The results are summarized in Table V with a significance level of 0.05, where '1' indicates that the metric in the row significantly outperforms the one in the column, '-1' indicates the inverse situation, and '0' indicates no significant difference. According to the table, the performance of the proposed GANs-NQM is significantly better than that of all the other NR and FR metrics except ST-IQM, with which it shows no statistically significant difference. Considering the fact that our method is NR, the proposed GANs-NQM is more applicable in real scenarios.

3DSwIM - 0 0 0 0 0 0 0 -1 0 0 -1
VSQA 0 - 0 0 0 0 0 -1 -1 0 -1 -1
MP-PSNR 0 0 - 0 0 0 0 0 0 0 0 -1
MP-PSNR 0 0 0 - 0 0 0 0 0 0 0 -1
MW-PSNR 0 0 0 0 - 0 0 0 -1 0 0 -1
MW-PSNR 0 0 0 0 0 - 0 0 -1 0 0 -1
CT-IQM 0 0 0 0 0 0 - 0 0 0 0 -1
EM-IQM 0 0 0 0 0 0 0 - 0 0 0 -1
ST-IQM 1 1 0 0 1 1 0 0 - 1 0 0
NIQSV+ 0 0 0 0 0 0 0 0 -1 - 0 -1
APT 0 0 0 0 0 0 0 0 0 0 - -1
GANs-NQM 1 1 1 1 1 1 1 1 0 1 1 -
Table V: Statistical significance results based on the 1000-fold cross performance evaluation
FR Metrics NR Metrics
Time 9.6 12.4 35 90 100 220.4 1.3k+ 4.5k+ 12.7k+ 21 157 13k+
Table VI: Normalized execution time of each metric

Another important application of an objective metric in an FTV system is to benchmark different synthesis algorithms. The ground-truth ranking of the seven synthesis algorithms (A1-A7) used in our study is obtained by averaging the DMOS. The rankings predicted by the objective metrics are reported in Table VII. According to Table VII, the ranking of the proposed GANs-NQM is the most consistent: only the rankings of A4 and A5, which generate synthesized images of similar quality, are switched. Therefore, the proposed model provides a rank order suitable for selecting proper synthesis algorithms.

Ranking of synthesis algorithm
DMOS (ground truth) A1 A5 A4 A6 A2 A3 A7
Full Reference Metric (FR)
3DSwIM A1 A4 A5 A6 A3 A2 A7
VSQA A6 A5 A4 A3 A2 A7 A1
MP-PSNR A4 A5 A6 A3 A2 A1 A7
MP-PSNR A4 A5 A6 A3 A2 A1 A7
MW-PSNR A4 A5 A6 A2 A3 A1 A7
MW-PSNR A4 A5 A6 A2 A3 A1 A7
CT-IQM A1 A4 A2 A5 A6 A3 A7
EM-IQM A1 A2 A6 A3 A4 A5 A7
ST-IQM A1 A5 A6 A4 A3 A2 A7
No-Reference Metrics (NR)

NIQSV+ A1 A6 A5 A4 A2 A3 A7
APT A1 A2 A4 A3 A5 A6 A7
GANs-NQM A1 A4 A5 A6 A2 A3 A7
Table VII: Ranking of RGB-D synthesis algorithms

Last but not least, to make quality evaluation of RGB-D synthesized views feasible in real applications, the time cost of a quality metric should be reasonable and, ideally, as low as possible. To verify the efficiency of the proposed metric and compare it with the others, the execution time normalized by the runtime of PSNR is computed [37]. This normalization makes it possible to compare the time complexities of different metrics across different machines and datasets. For a given image from a database, the normalized execution time is defined as


T_norm(i) = T_metric(i) / T_PSNR(i),

where T_metric(i) is the execution time of the objective quality metric for image i, and T_PSNR(i) is the corresponding runtime of PSNR. For completeness, in our study, the experiments are conducted on a desktop equipped with an i7 CPU (4 GHz), 8 GB RAM, and a Xeon E3-1200 v3/4th Gen graphics controller. The runtime of PSNR for one synthesized image in the IRCCyN/IVC DIBR images database is 0.05 seconds. The normalized execution time for each metric is summarized in Table VI. Although GANs-NQM is slower than NIQSV+, its time cost is still reasonable, and it is much faster than the second-best NR metric APT, as well as the best FR metric ST-IQM.

4.4 Inpainting results

The underlying assumption of this paper is that, while the generator is trained to inpaint RGB-D dis-occluded regions, the discriminator is simultaneously trained to evaluate RGB-D synthesized views. The ability of the discriminator to predict the quality of RGB-D synthesized views has been demonstrated in the previous section. As a side outcome of this paper, it is also interesting to evaluate the performance of the trained context inpainter (generator) on the same database, i.e., on the synthesized views that contain dis-occluded regions in the IRCCyN/IVC DIBR images database.

The PSNR between the reference and the inpainted image is calculated for evaluation. Three inpainting algorithms [9], [53], [54] are used for comparison. Due to space limitations, only selected results are shown in Fig. 10.
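The evaluation criterion used here is the standard PSNR, computed per patch between the reference and the inpainted result:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference patch and an
    inpainted one, in dB (higher is better)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((32, 32), 128, dtype=np.uint8)
inp = np.full((32, 32), 144, dtype=np.uint8)   # constant error of 16
```

For this toy pair, `psnr(ref, inp)` evaluates to roughly 24.05 dB; identical patches give infinite PSNR.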

Based on the results, it is observed that: 1) comparing our inpainted result in Fig. (f) to the others with respect to the reference, the shape of the girl's braid is better preserved by our model; a similar result can be observed in Fig. (l), where the corner of the poster is better preserved than by the others; 2) the dis-occluded regions in Fig. (n) are better inpainted by the proposed model, as shown in Fig. (r): obvious 'double-edge' shapes, i.e., ghosting artifacts, appear along the objects after inpainting by the other methods; 3) when holes appear in homogeneous texture regions that are also close to the borders of foreground objects, our inpainted result shows higher texture consistency than the others, as shown in Fig. (x).

In conclusion, our proposed context inpainter maintains the structure of the dis-occluded regions well, especially when these regions are large. For the challenging dis-occluded regions that lie on the border between foreground and background, as well as those in homogeneous texture regions close to the borders of foreground objects, the proposed inpainter performs better than the others.

The appealing performance of our trained context inpainter (generator) on RGB-D dis-occluded regions validates the effectiveness of the proposed training strategy, which uses specifically designed masks to mimic the typical black-hole artifacts induced by the DIBR process. The proposed strategy is also more flexible, relying on large-scale image databases from the computer vision domain rather than on RGB-D datasets, where the depth information may be noisy. It should be noted that the training set in our study contains only 10K images, which could certainly be further enlarged using existing datasets. There is therefore still room for improving the current trained model, both for quality assessment and for hole filling of RGB-D synthesized views.

5 Conclusions

In this work, we proposed a GANs-based NR quality metric, GANs-NQM, to evaluate the perceptual quality of RGB-D synthesized views. To resolve the challenge of training-data scale for DNNs, a novel strategy is proposed that exploits existing large-scale 2D computer vision datasets rather than RGB-D datasets, where depth data may be unreliable. The spirit of this strategy can easily be applied to other applications in the RGB-D domain or even other communities. Based on the assumption that if the generator of a GAN can be trained to inpaint the dis-occluded regions then its discriminator can be used to predict quality, we learned a 'Bag of Distortion Words' (BDW) codebook, derived a local distortion region selector from the discriminator, and eventually mapped the non-uniform inpainting-related artifacts to perceptual quality through SVR. According to the experimental results, the proposed GANs-NQM provides the best performance compared to the state-of-the-art FR/NR quality metrics for RGB-D synthesized views. As a side outcome, the trained inpainter also shows appealing performance in filling the challenging holes in RGB-D synthesized views.


  • [1] C. Fehn, “A 3d-tv approach using depth-image-based rendering (dibr),” in Proc. of VIIP, vol. 3, no. 3, 2003.
  • [2] M. Tanimoto, M. P. Tehrani, T. Fujii, and T. Yendo, “Free-viewpoint tv,” IEEE Signal Processing Magazine, vol. 28, no. 1, pp. 67–76, 2011.
  • [3] C. Fehn, “Depth-image-based rendering (dibr), compression, and transmission for a new approach on 3d-tv,” in Electronic Imaging 2004.   International Society for Optics and Photonics, 2004, pp. 93–104.
  • [4] M. Tanimoto, T. Fujii, and K. Suzuki, “View synthesis algorithm in view synthesis reference software 2.0 (vsrs2. 0),” ISO/IEC JTC1/SC29/WG11 M, vol. 16090, p. 2009, 2009.
  • [5] E. Bosc, R. Pepion, P. Le Callet, M. Koppel, P. Ndjiki-Nya, M. Pressigout, and L. Morin, “Towards a new quality metric for 3-d synthesized view assessment,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 7, pp. 1332–1343, 2011.
  • [6] Y. Mori, N. Fukushima, T. Yendo, T. Fujii, and M. Tanimoto, “View generation with 3d warping using depth information for ftv,” Signal Processing: Image Communication, vol. 24, no. 1, pp. 65–72, 2009.
  • [7] X. Xu, L.-M. Po, C.-H. Cheung, L. Feng, K.-H. Ng, and K.-W. Cheung, “Depth-aided exemplar-based hole filling for dibr view synthesis,” in Circuits and Systems (ISCAS), 2013 IEEE International Symposium on.   IEEE, 2013, pp. 2840–2843.
  • [8] S. S. Yoon, H. Sohn, Y. J. Jung, and Y. M. Ro, “Inter-view consistent hole filling in view extrapolation for multi-view image generation,” in Image Processing (ICIP), 2014 IEEE International Conference on.   IEEE, 2014, pp. 2883–2887.
  • [9] P. Ndjiki-Nya, M. Koppel, D. Doshkov, H. Lakshman, P. Merkle, K. Muller, and T. Wiegand, “Depth image-based rendering with advanced texture synthesis for 3-d video,” IEEE Transactions on Multimedia, vol. 13, no. 3, pp. 453–465, 2011.
  • [10] P. Buyssens, M. Daisy, D. Tschumperlé, and O. Lézoray, “Superpixel-based depth map inpainting for rgb-d view synthesis,” in Image Processing (ICIP), 2015 IEEE International Conference on.   IEEE, 2015, pp. 4332–4336.
  • [11] P. Buyssens, O. Le Meur, M. Daisy, D. Tschumperlé, and O. Lézoray, “Depth-guided disocclusion inpainting of synthesized rgb-d images,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 525–538, 2017.
  • [12] J. Gautier, O. Le Meur, and C. Guillemot, “Depth-based image completion for view synthesis,” in 3DTV Conference: The True Vision-capture, Transmission and Display of 3D Video (3DTV-CON), 2011.   IEEE, 2011, pp. 1–4.
  • [13] F. Battisti, E. Bosc, M. Carli, P. Le Callet, and S. Perugia, “Objective image quality assessment of 3d synthesized views,” Signal Processing: Image Communication, vol. 30, pp. 78–88, 2015.
  • [14] K. Gu, V. Jakhetiya, J.-F. Qiao, X. Li, W. Lin, and D. Thalmann, “Model-based referenceless quality metric of 3d synthesized images using local image description,” IEEE Transactions on Image Processing, 2017.
  • [15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
  • [16] B. Yang, S. Rosa, A. Markham, N. Trigoni, and H. Wen, “Dense 3d object reconstruction from a single depth view,” IEEE transactions on pattern analysis and machine intelligence, 2018.
  • [17] W. Wang, Q. Huang, S. You, C. Yang, and U. Neumann, “Shape inpainting using 3d generative adversarial network and recurrent convolutional networks,” arXiv preprint arXiv:1711.06375, 2017.
  • [18] R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do, “Semantic image inpainting with deep generative models,” in

    Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition

    , 2017, pp. 5485–5493.
  • [19] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2536–2544.
  • [20] J. Zhao, L. Xiong, J. Li, J. Xing, S. Yan, and J. Feng, “3d-aided dual-agent gans for unconstrained face recognition,” IEEE transactions on pattern analysis and machine intelligence, 2018.
  • [21] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas, “Stackgan++: Realistic image synthesis with stacked generative adversarial networks,” arXiv preprint arXiv:1710.10916, 2017.
  • [22] D. M. Chandler, “Seven challenges in image quality assessment: past, present, and future research,” ISRN Signal Processing, vol. 2013, 2013.
  • [23] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE transactions on image processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [24] A. Ninassi, O. Le Meur, P. Le Callet, and D. Barba, “Does where you gaze on an image affect your perception of quality? applying visual attention to image quality metric,” in Image Processing, 2007. ICIP 2007. IEEE International Conference on, vol. 2.   IEEE, 2007, pp. II–169.
  • [25] O. Le Meur, A. Ninassi, P. Le Callet, and D. Barba, “Overt visual attention for free-viewing and quality assessment tasks: Impact of the regions of interest on a video quality metric,” Signal Processing: Image Communication, vol. 25, no. 7, pp. 547–558, 2010.
  • [26]

    L. Kang, P. Ye, Y. Li, and D. Doermann, “Convolutional neural networks for no-reference image quality assessment,” in

    Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1733–1740.
  • [27] S. Bosse, D. Maniry, T. Wiegand, and W. Samek, “A deep neural network for image quality assessment,” in Image Processing (ICIP), 2016 IEEE International Conference on.   IEEE, 2016, pp. 3773–3777.
  • [28] D. Li, T. Jiang, and M. Jiang, “Exploiting high-level semantics for no-reference image quality assessment of realistic blur images,” in Proceedings of the 2017 ACM on Multimedia Conference.   ACM, 2017, pp. 378–386.
  • [29] P.-H. Conze, P. Robert, and L. Morin, “Objective view synthesis quality assessment,” in IS&T/SPIE Electronic Imaging.   International Society for Optics and Photonics, 2012, pp. 82 881M–82 881M.
  • [30] D. Sandić-Stanković, D. Kukolj, and P. Le Callet, “Dibr synthesized image quality assessment based on morphological wavelets,” in Quality of Multimedia Experience (QoMEX), 2015 Seventh International Workshop on.   IEEE, 2015, pp. 1–6.
  • [31] D. Sandic-Stankovic, D. Kukolj, and P. Le Callet, “Dibr synthesized image quality assessment based on morphological pyramids,” in 2015 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON).   IEEE, 2015, pp. 1–4.
  • [32] P. L. C. Ling, Suiyi and C. Gene, “Quality assessment for synthesized view based on variable-length context tree,” in Multimedia Signal Processing (MMSP), 2017 IEEE 19th International Workshop on.   IEEE, 2017.
  • [33] S. Ling and P. Le Callet, “Image quality assessment for free viewpoint video based on mid-level contours feature,” in Multimedia and Expo (ICME), 2017 IEEE International Conference on.   IEEE, 2017, pp. 79–84.
  • [34] ——, “Image quality assessment for dibr synthesized views using elastic metric,” in Proceedings of the 2017 ACM on Multimedia Conference.   ACM, 2017, pp. 1157–1163.
  • [35] L. Li, Y. Zhou, K. Gu, W. Lin, and S. Wang, “Quality assessment of dibr-synthesized images by measuring local geometric distortions and global sharpness,” IEEE Transactions on Multimedia, vol. 20, no. 4, pp. 914–926, 2018.
  • [36] S. Tian, L. Zhang, L. Morin, and O. Deforges, “Niqsv: A no reference image quality assessment metric for 3d synthesized views,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on.   IEEE, 2017, pp. 1248–1252.
  • [37] S. Tian, L. Zhang, L. Morin, and O. Déforges, “Niqsv+: A no-reference synthesized view quality assessment metric,” IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 1652–1664, 2018.
  • [38] M. Firman, “Rgbd datasets: Past, present and future,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2016, pp. 19–31.
  • [39] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on.   Ieee, 2009, pp. 248–255.
  • [40] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Citeseer, Tech. Rep., 2009.
  • [41] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes challenge: A retrospective,” International journal of computer vision, vol. 111, no. 1, pp. 98–136, 2015.
  • [42]

    B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba, “Places: A 10 million image database for scene recognition,”

    IEEE transactions on pattern analysis and machine intelligence, 2017.
  • [43] S. Liu, J. Pan, and M.-H. Yang, “Learning recursive filters for low-level vision via a hybrid neural network,” in European Conference on Computer Vision.   Springer, 2016, pp. 560–576.
  • [44] J. S. Ren, L. Xu, Q. Yan, and W. Sun, “Shepard convolutional neural networks,” in Advances in Neural Information Processing Systems, 2015, pp. 901–909.
  • [45] J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in neural information processing systems, 2012, pp. 341–349.
  • [46] J. Mairal, M. Elad, and G. Sapiro, “Sparse representation for color image restoration,” IEEE Transactions on image processing, vol. 17, no. 1, pp. 53–69, 2008.
  • [47] Y. J. Jung, H. Sohn, S.-i. Lee, Y. M. Ro, and H. W. Park, “Quantitative measurement of binocular color fusion limit for non-spectral colors,” Optics express, vol. 19, no. 8, pp. 7325–7338, 2011.
  • [48] J. Li, “Methods for assessment and prediction of qoe, preference and visual discomfort in multimedia application with focus on s-3dtv,” Ph.D. dissertation, Université de Nantes, 2013.
  • [49] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “Slic superpixels,” Tech. Rep., 2010.
  • [50]

    M. Muja and D. G. Lowe, “Scalable nearest neighbor algorithms for high dimensional data,”

    IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 11, pp. 2227–2240, 2014.
  • [51] P. Gastaldo, R. Zunino, and J. Redi, “Supporting visual quality assessment with machine learning,” EURASIP Journal on Image and Video Processing, vol. 2013, no. 1, p. 54, 2013.
  • [52] A. Telea, “An image inpainting technique based on the fast marching method,” Journal of graphics tools, vol. 9, no. 1, pp. 23–34, 2004.
  • [53] K. Mueller, A. Smolic, K. Dix, P. Merkle, P. Kauff, and T. Wiegand, “View synthesis for advanced 3d video systems,” EURASIP Journal on Image and Video Processing, vol. 2008, no. 1, pp. 1–11, 2009.
  • [54] M. Köppel, P. Ndjiki-Nya, D. Doshkov, H. Lakshman, P. Merkle, K. Müller, and T. Wiegand, “Temporally consistent handling of disocclusions with texture synthesis for depth-image-based rendering,” in 2010 IEEE International Conference on Image Processing.   IEEE, 2010, pp. 1809–1812.
  • [55] ITU, “Methods for the subjective assessment of video quality, audio quality and audiovisual quality of internet video and distribution quality television in any environment,” ITU-T Recommendation P.913, 2014.
  • [56] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR), 2015.
  • [57] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
  • [58] D. Sandić-Stanković, D. Kukolj, and P. Le Callet, “Dibr-synthesized image quality assessment based on morphological multi-scale approach,” EURASIP Journal on Image and Video Processing, vol. 2017, no. 1, p. 4, 2016.