Color Constancy by GANs: An Experimental Survey

12/07/2018, by Partha Das et al., University of Amsterdam

In this paper, we formulate the color constancy task as an image-to-image translation problem using GANs. By conducting a large set of experiments on different datasets, an experimental survey is provided on the use of different types of GANs to solve for color constancy, i.e. CC-GANs (Color Constancy GANs). Based on the experimental review, recommendations are given for the design of CC-GAN architectures based on different criteria, circumstances and datasets.


1 Introduction

The observed colors in a scene are determined by object properties (albedo and geometry) and the color of the light source. The human visual system has, to a large extent, the ability to perceive object colors invariant to the color of the light source. This ability is called color constancy (CC), or auto white balance (AWB), as illustrated in Figure 1. Although color constancy is trivial for humans, it is a difficult problem in computer vision because, for a given image, both the spectral reflectance (albedo) and the spectral power distribution of the light source are unknown.

Therefore, traditional color constancy algorithms impose priors to estimate the color of the light source. For example, the grey world algorithm [10] assumes that the average reflectance of the surfaces in a scene is achromatic under a (white) canonical light source. Then, using a diagonal model (i.e. the von Kries model [48]), the image can be corrected such that it appears as if it was taken under a white light source. On the other hand, motivated by the success of convolutional neural networks (CNNs) for other computer vision tasks, more recent work uses these powerful deep CNN models. For example, [8, 33] are the first to use CNNs for illuminant estimation and achieve state-of-the-art results.

In parallel, image-to-image translation tasks gained attention by the introduction of conditional generative adversarial networks (GANs) [35]. The goal is to map one representation of a scene into another. For example, converting day images to night or transferring images into different styles. Our aim is to model the color constancy task as an image-to-image translation problem where the input is an RGB image of a scene taken under an unknown light source and the output is the color corrected (white balanced) one.

Figure 1: (a) is an image taken under an unknown light source, (b) and (c) are images of the same scene under Illuminant D50 and Illuminant A respectively. The task of color constancy is to recover the white-balanced image (d) from images (a), (b) or (c).

Therefore, in this paper, we propose to formulate the color constancy task as an image-to-image translation problem using GANs. By conducting a large set of experiments on different datasets, we provide an experimental survey on the use of different types of GANs to solve the color constancy problem. The survey includes an analysis of the performance, behavior, and effectiveness of the different GANs. The image-to-image translation problem for color constancy is defined as follows: the input domain is a set of regular RGB images of a scene taken under an unknown light source, and the output domain is the set of corresponding color corrected images for a canonical (white) light source. The following state-of-the-art image-to-image translation GANs are reviewed: Pix2Pix [26], CycleGAN [51], and StarGAN [13]. The goal is to analyze the performance of the different GAN architectures applied to the illumination transfer task. We evaluate the different GAN architectures based on their design properties, scene types and their generalization capabilities across different datasets. Finally, as a result of the experimental review, recommendations are given for the design of new GAN architectures based on different criteria, circumstances and datasets.

In summary, our contributions are: (1) to the best of our knowledge, we are the first to formulate the color constancy task as an image-to-image translation problem using GANs, (2) we provide extensive experiments on 3 different state-of-the-art GAN architectures to demonstrate their (i) effectiveness on the task and (ii) generalization ability across different datasets, and (3) we provide a thorough analysis of possible error contributing factors to aid future color constancy research.

2 Related Work

Color Constancy: To mimic the human ability of color constancy for cameras, a wide range of algorithms have been proposed. These algorithms can be classified into two main categories: (1) static methods, which rely either on reflection models or on low-level image statistics, and (2) learning-based methods, which rely on training sets [22].

Static methods assume that an image taken under a canonical light source follows certain properties. For example, the grey-world algorithm [10] assumes that the average reflectance of the surfaces in a scene is achromatic. Further, the white-patch algorithm [29] assumes that the maximum pixel response in an image corresponds to white reflectance. These assumptions are extended to first- and higher-order image statistics to provide more accurate estimators, such as the grey-edge assumption [46]. In their work, [46] proves that all statistics-based methods can be unified into a single framework.

Learning-based methods require a set of labeled images to train a color constancy model. Since different statistics-based methods suit different scenes, algorithms have been proposed to learn how to select (or fuse) the color constancy algorithm that works best for a given scene. For example, [6] first classifies an image as an indoor or outdoor scene, and then determines the most appropriate algorithm for that scene. [20] uses Weibull parametrization, [47] relies on high-level semantic features, and [7] uses decision forests with various low-level features. In addition, [40] proposes to estimate the posterior of the illuminant conditioned on surface reflectance in a Bayesian framework. Instead of modelling the distribution explicitly, the exemplar-based method [27] directly searches for the nearest patches in the training set and adopts a voting scheme for the final illuminant estimation. Furthermore, gamut-based methods [3, 18, 21] assume that only a limited number of colors can be observed in a scene. First, the canonical gamut is computed from the training set as a reference. Then, the input gamut is mapped onto the canonical gamut such that it fits completely inside it. Finally, the mapping is used to compute the corrected image. Moreover, [1, 49] use regression models, whereas [11, 44] use neural networks.

More recent work uses powerful deep CNN models to approach the color constancy problem. [33] uses an end-to-end deep model to estimate the illuminant. [8] extracts patch features from a pre-trained CNN and feeds them to a regression module to estimate the scene illumination. Both works provide state-of-the-art results supported by powerful CNN models. In another work, [36] formulates the problem as an illumination classification problem using deep learning. [43] proposes a two-branch deep CNN: one branch generates multiple hypotheses of the illuminant of a given patch; the other adaptively selects the best hypothesis. Moreover, [9] estimates the local illuminant color of image patches. [25] proposes a method to weight the input patches to differentiate between semantically valuable and semantically ambiguous local regions. Finally, [37] exploits multiple frames over time within a deep learning framework. On the other hand, [45] bypasses the illuminant color estimation step and directly learns a mapping from the input image to the white balanced image in an end-to-end fashion.

Generative Adversarial Networks: Recently, generative adversarial networks (GANs) [23] have gained a lot of attention as they are successful in various computer vision tasks. The networks adversarially learn to generate images that are very close to real image samples. They have demonstrated remarkable progress in the fields of image translation [26, 28, 51], image super-resolution [31], and image synthesis [13, 26, 50].

Traditional GANs use random noise as input to generate images. However, noise inputs do not offer much control over the generation. Further improvements [13, 28, 35, 38] show how to constrain the generated outputs by conditioning on the input with specific attributes. By conditioning on the input image, [28] is able to generate objects across domains, like handbags to shoes. Pix2Pix [26] introduces a pixel-wise paired generation for an image-to-image translation task from paired images. The authors illustrate its success on several translation tasks, such as transferring from edges to objects or from facades to buildings. [32, 51] improve upon this work by learning the attributes of the input and output domains and training the network in an unsupervised manner. Specifically, [51] introduces cycle consistency through an architecture called CycleGAN, which ensures that the generated images are mapped back to the original image space. In this context, we propose to model color constancy as an image-to-image translation task where the source domain is a camera sensor image and the target domain is the white balanced version. This enables mapping the white balanced version back to the sensor domain through cycle consistency. Although these methods approach the problem with unpaired data, they are only able to achieve one domain transfer at a time. Therefore, [13] proposes StarGAN to learn multiple attributes across domains simultaneously. StarGAN is applied to faces, where the model can be steered to generate different outputs based on conditioning vectors, for example transferring from blond to black hair or making someone look younger or older. This can also be applied to the color constancy task, as different illuminants can be learned by providing a conditioning vector. In this way, various color corrected images can be generated, which is beneficial for rendering the same scene under different illuminant colors or for a consistency check to achieve the same canonical light under different illuminant colors. In the following section, we give an overview of the problem of color constancy, followed by the GAN architectures used in our survey.

Figure 2: Overview of GAN-based color constancy estimation. The input image is provided as a condition to the Generator ($G$), whose output, along with a true sample, is passed to the Discriminator ($D$).

3 Methodology

3.1 Color Constancy

For a Lambertian (diffuse) scene, the RGB values of an image $I$ taken by a digital camera are determined by the object properties (albedo $\rho(\lambda)$ and geometry $m(\vec{n}, \vec{s})$), the spectral power distribution (color) of the illuminant $e(\lambda)$ and the camera spectral sensitivity function $f(\lambda)$ as follows [41]:

$I \;=\; m(\vec{n}, \vec{s}) \int_{\omega} \rho(\lambda)\, e(\lambda)\, f(\lambda)\, d\lambda$   (1)

where $\lambda$ denotes the wavelength of the incoming light, $\omega$ is the visible spectrum, $\vec{n}$ indicates the surface normal, and $\vec{s}$ represents the light source direction.

Color constancy aims to correct an image such that it appears as if it was taken under a canonical (white) light source. First, the color of the light source is estimated. Then, the image is transformed into an image observed under a reference (white) light source. The correction process can be regarded as chromatic adaptation [15], in which the problem is formulated as a linear transformation:

$I \;=\; W \times e$   (2)

where $W$ denotes the canonical image in linear space (the white balanced image) and $e$ is the color of the illuminant. The von Kries model [48] is used as the transformation function to recover the color of the light source per pixel:

$\begin{pmatrix} R_c \\ G_c \\ B_c \end{pmatrix} \;=\; \begin{pmatrix} d_R & 0 & 0 \\ 0 & d_G & 0 \\ 0 & 0 & d_B \end{pmatrix} \begin{pmatrix} R_u \\ G_u \\ B_u \end{pmatrix}$   (3)

where the subscript $u$ denotes the unknown light source, the subscript $c$ represents the canonical (corrected) illuminant, and the diagonal entries rescale each color channel independently. The aim is to recover the illuminant $e$. The model thus describes color constancy as a problem of modifying the gains of the image channels independently. Under this model, given an image $I$ from a camera sensor captured under non-canonical lighting conditions, the problem of color constancy is defined as the estimation of $e$ given $I$, which in turn yields the image under a canonical light source.

Given that we have 3 variables ($I$, $W$ and $e$) of which only 1 is known, the problem is under-constrained. Traditional work usually estimates the illuminant color first, and then recovers the white balanced image. In this work, we model the color constancy task as an image-to-image translation problem and directly generate the white balanced image with generative adversarial networks (GANs).
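As a concrete illustration of the diagonal model above, the following Python sketch applies a von Kries-style per-channel correction given an illuminant estimate. The function name and the normalization convention (unit-norm illuminant mapped to unit-norm white) are choices made for this example, not taken from the paper.

```python
import numpy as np

def apply_von_kries(image_linear, illuminant):
    """Correct a linear-RGB image with the diagonal (von Kries) model of Eq. (3).

    image_linear: H x W x 3 array in linear RGB, values in [0, 1].
    illuminant:   length-3 estimate e = (e_R, e_G, e_B) of the light source color.
    """
    e = np.asarray(illuminant, dtype=np.float64)
    e = e / np.linalg.norm(e)                 # keep only the chromaticity direction
    gains = (1.0 / np.sqrt(3.0)) / e          # map the illuminant onto unit-norm white
    corrected = image_linear * gains[None, None, :]
    return np.clip(corrected, 0.0, 1.0)

# Example: a scene rendered under a reddish light is pulled back to a canonical white light.
scene = np.random.rand(64, 64, 3) * np.array([1.0, 0.8, 0.6])
white_balanced = apply_von_kries(scene, illuminant=[1.0, 0.8, 0.6])
```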

3.2 Generative Adversarial Networks (GANs)

A GAN consists of a generator and discriminator pair $(G, D)$. As shown in Figure 2, $G$ takes as input $x$, an image from the camera response under an unknown light source, and outputs $G(x)$, the same image under a canonical light source. We want to learn a mapping $G: X \rightarrow Y$ from a domain $X$ with an unknown light source to a domain $Y$ with a reference (white) light source, without modifying the pixel intensities or scene structure. The output is then passed to $D$, which classifies whether its input is sampled from the real color corrected domain or generated by $G$. The supervision signal provided by $D$ forces $G$ to transform inputs with unknown light to appear closer to the color corrected images. In this paper, we select representative state-of-the-art GAN architectures for the color constancy problem.

Pix2Pix [26]: The model uses a U-Net [39] architecture as the generator, which allows it to learn an image transformation conditioned on the input image. Skip connections [34] are used to transfer high-frequency details from the encoder to the decoder blocks. The discriminator is a fully convolutional PatchGAN, which evaluates the generated samples at the scale of (overlapping) image patches and enforces high-frequency consistency by modelling the image as a Markov random field.

The network is trained with an adversarial loss:

$\mathcal{L}_{GAN}(G, D) \;=\; \mathbb{E}_{x, y}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))]$   (4)

where $x$ is an image from the sensor domain under unknown light, and $y$ is the corresponding image under canonical light. The discriminator $D$ takes as input the image generated by the generator $G$ and predicts whether it was generated or comes from the real samples $y$. Additionally, an extra L1 loss is used to push the generated images to be close to the ground-truth output:

$\mathcal{L}_{L1}(G) \;=\; \mathbb{E}_{x, y}[\, \lVert y - G(x) \rVert_1 \,]$   (5)

Thus, the final combined loss becomes:

$\mathcal{L} \;=\; \mathcal{L}_{GAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G)$   (6)

The pixel-wise correspondence is particularly suited to the color constancy problem, as the white-balanced image is a pixel-wise scaling of the input color image; the overall structure remains the same. An advantageous aspect of this method for color constancy is that, unlike traditional methods, it learns a local rather than a global transformation, which is useful when the scene illumination is non-uniform. A disadvantage is that the model requires paired images for training.
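For reference, a minimal PyTorch-style sketch of the combined objective of Equations (4)-(6) is given below. The generator `G`, discriminator `D` and the weight `lambda_l1` are placeholders assumed for illustration; the paper's exact training code and hyper-parameters may differ.

```python
import torch
import torch.nn.functional as F

def pix2pix_losses(G, D, x, y, lambda_l1=100.0):
    """Adversarial + L1 objective (Equations 4-6). x: sensor image under unknown
    light, y: paired canonical (white balanced) image. Returns generator and
    discriminator losses."""
    fake = G(x)

    # Discriminator: real pairs (x, y) vs. generated pairs (x, G(x)).
    d_real = D(torch.cat([x, y], dim=1))
    d_fake = D(torch.cat([x, fake.detach()], dim=1))
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

    # Generator: fool the discriminator and stay close to the ground truth (L1 term).
    d_gen = D(torch.cat([x, fake], dim=1))
    loss_G = (F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))
              + lambda_l1 * F.l1_loss(fake, y))
    return loss_G, loss_D
```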

CycleGAN [51]: To discard the dependency on paired image data, we adopt CycleGAN, which operates on unpaired data, i.e. samples from two independent distributions. Modelling the learning process from unpaired data allows the network to learn a global transformation, which makes the model more resilient to biases. The generator uses a ResNet-based [24] architecture. Following [26], the discriminator is a PatchGAN. To train the network in an unpaired way, CycleGAN introduces a cycle consistency loss:

$\mathcal{L}_{cyc}(G, F) \;=\; \mathbb{E}_{x}[\, \lVert F(G(x)) - x \rVert_1 \,] + \mathbb{E}_{y}[\, \lVert G(F(y)) - y \rVert_1 \,]$   (7)

where $G$ and $F$ are functions parameterized by encoder-decoder networks that learn a transformation of an image from one domain to another. The loss is similar to Equation 5. However, in this case, instead of using the output of the generator $G$ directly, the output is passed through a second generator $F$ that learns to translate it back to the original input domain. This is termed the forward cycle, defined as $x \rightarrow G(x) \rightarrow F(G(x)) \approx x$. Similarly, a backward cycle is introduced for the transformation $y \rightarrow F(y) \rightarrow G(F(y)) \approx y$, to enforce a full cycle and prevent the network from learning the delta function over the distribution.

In our case, $G$ learns a translation from the sensor domain with unknown light to the white balanced domain, and $F$ learns the inverse. The model then combines the adversarial and cycle consistency losses as follows:

$\mathcal{L} \;=\; \mathcal{L}_{GAN}(G, D_Y) + \mathcal{L}_{GAN}(F, D_X) + \lambda\, \mathcal{L}_{cyc}(G, F)$   (8)

This makes the network independent of local pixel-wise correspondences; it learns a global transformation from the input image to the white balanced domain. Because the image domains are unpaired, one could simply use arbitrary data to build the sensor-space distribution and correspondingly curate professionally white balanced images to train a network. However, a problem arises when multiple target light sources are required: in that case, a separate model has to be trained for each illuminant transformation combination.
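The cycle consistency term of Equation (7) can be sketched as follows, where `G` maps sensor images to the white balanced domain and `F_back` maps them back. This is a schematic under assumed network interfaces, not the authors' implementation.

```python
import torch.nn.functional as F

def cycle_consistency_loss(G, F_back, x, y):
    """Equation (7): x -> G(x) -> F_back(G(x)) should reconstruct x (forward cycle),
    and y -> F_back(y) -> G(F_back(y)) should reconstruct y (backward cycle)."""
    forward_cycle = F.l1_loss(F_back(G(x)), x)
    backward_cycle = F.l1_loss(G(F_back(y)), y)
    return forward_cycle + backward_cycle
```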

StarGAN [13]: This architecture specifically addresses the issue of one-to-one transformation learning. Using this architecture, it is possible to learn one-to-many mappings, which is the limitation of CycleGAN. The model builds upon CycleGAN, but the generator receives an additional condition in the form of a domain vector that specifies which target domain the network output should resemble. Similar to CycleGAN, a second generator is used to enforce cycle consistency (Equation 7). The discriminator is also modified to accommodate one-to-many mappings: it not only discriminates whether an image comes from the real or the generated distribution, but also learns to classify the domain of the input image. In this way, the discriminator provides meaningful supervision signals to the generator, not only steering it towards the real distribution, but also making sure it follows a certain domain distribution. As a result, the final loss is a combination of Equation 8 and a domain classification loss.

The ability to define domains allows us to train a network that learns a global mapping for different illuminants (i.e. multiple light sources) with a single model. Similar to CycleGAN, the model does not require paired data. It is possible to train a model to learn a mapping between the various domains when a sufficient amount of data is available. Furthermore, since the model is able to learn multi-domain transformations, it can learn a common transformation that maps the camera sensor input image to multiple illuminant types. In this way, we can render the same scene under different illuminant colors, or perform a consistency check to see whether the model achieves the same canonical light under different illuminant colors.

For the experiments, StarGAN is trained to estimate both the canonical transformation and Illuminant A. This translates to 3 different domains: the canonical, Illuminant A, and the input domain. Since the model learns in an unsupervised manner, StarGAN also learns the representation of the input domain as a valid target domain.
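The following sketch shows one common way to implement such domain conditioning: a one-hot target-domain label is spatially replicated and concatenated to the input image as extra channels before it enters the generator. The domain ordering (input, canonical, Illuminant A) is assumed here purely for illustration.

```python
import torch

def condition_on_domain(image, domain_index, num_domains=3):
    """Append a spatially replicated one-hot domain label to a batch of images.

    image:        tensor of shape (B, 3, H, W).
    domain_index: target domain, e.g. 0 = input, 1 = canonical, 2 = Illuminant A (assumed order).
    Returns a tensor of shape (B, 3 + num_domains, H, W) to feed the generator.
    """
    b, _, h, w = image.shape
    label = torch.zeros(b, num_domains, h, w, device=image.device, dtype=image.dtype)
    label[:, domain_index] = 1.0
    return torch.cat([image, label], dim=1)
```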

3.3 Illuminant Color Prediction

For the experiments, instead of first predicting the color of the light source and then correcting the image, we directly estimate the white balanced image. Hence, the (single) illuminant color is estimated by inverting Equation 2:

$e \;=\; \dfrac{I}{\hat{W}}$   (9)

where $I$ is the input image and $\hat{W}$ is the white balanced estimate produced by the network, and the division is performed per color channel. Both $I$ and $\hat{W}$ are converted into linear space before computing the illuminant.
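A minimal sketch of this inversion is shown below; averaging the per-channel ratios over the image to obtain a single global illuminant is an assumption made for this example.

```python
import numpy as np

def estimate_illuminant(input_linear, wb_linear, eps=1e-6):
    """Recover the global illuminant color via Equation (9): e = I / W-hat.

    Both images are H x W x 3 arrays in linear RGB. The ratio of per-channel
    means is used as a global estimate; the result is returned with unit L2
    norm since only the chromaticity matters."""
    num = input_linear.reshape(-1, 3).mean(axis=0)
    den = wb_linear.reshape(-1, 3).mean(axis=0) + eps
    e = num / den
    return e / np.linalg.norm(e)
```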

4 Experiments

4.1 Datasets

We evaluate the performance of the models on 3 standard color constancy benchmarks. The SFU Gray Ball dataset [14] contains over 11K images captured by a video camera, with a gray ball mounted on the camera to calculate the ground-truth; we use the linear ground-truth for evaluations. Second, we use the ColorChecker dataset [42], which contains 568 linear raw images including indoor and outdoor scenes; a Macbeth color checker is placed in every scene to obtain the ground-truth. Recently, [16] pointed out that the original dataset has 3 different ground-truths and that, for each ground-truth, the performance of various color constancy algorithms differs. Instead of using [42], we therefore use the recalculated version of [16], called ColorChecker RECommended (RCC). Finally, [2] provides a dataset of 1365 outdoor images with a cube placed in every scene. Two different ground-truths are estimated from the two gray faces of the cube facing different directions, and one of them is selected as the final ground-truth by visual inspection. For the experiments, the datasets are randomly split into 80% training and 20% testing. The reference objects present in the images are masked out to prevent the networks from cheating (i.e. directly estimating the color of the light source from the reference).

4.2 Error Metric

Following common practice, we report the angular error between the ground-truth illuminant $e_{gt}$ and the estimated illuminant $\hat{e}$:

$d_{ang} \;=\; \arccos\!\left( \dfrac{e_{gt} \cdot \hat{e}}{\lVert e_{gt} \rVert \, \lVert \hat{e} \rVert} \right)$   (10)

where $\lVert \cdot \rVert$ is the L2 norm and the error is reported in degrees. For each image, the angular error is computed, and the mean, median, trimean, means of the lowest-error 25% and the highest-error 25%, and maximum angular errors are reported to evaluate the models quantitatively.
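A direct implementation of Equation (10), reporting the error in degrees, might look as follows.

```python
import numpy as np

def angular_error(e_gt, e_est):
    """Angular error (degrees) between ground-truth and estimated illuminants, Eq. (10)."""
    e_gt = np.asarray(e_gt, dtype=np.float64)
    e_est = np.asarray(e_est, dtype=np.float64)
    cos = np.dot(e_gt, e_est) / (np.linalg.norm(e_gt) * np.linalg.norm(e_est))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards against rounding error
```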

5 Evaluation

5.1 Linear RGB vs. sRGB

In this experiment, we evaluate the performance of the architectures based on the input type: linear RGB vs. sRGB. Traditional work usually models color constancy as the problem of modifying the gains of the image channels independently, and therefore generally operates in the linear color space. However, in the case of GANs, that assumption is no longer a requirement, as they can implicitly estimate a transformation matrix that encodes such information. Moreover, in the linear space, the range of pixel values is often compressed into one part of the range. In practice, 12-bit sensor images are encoded in 16 bits. This handles clipped values and saturated pixels, which may otherwise bias the results towards unit (1.0) white balance gains, but it also creates 4 trailing zero bits. In sRGB space, on the other hand, the distribution of the pixel values is more uniform, whereas the color distribution of the linear image is concentrated close to zero. Therefore, we report a performance comparison between the linear and sRGB color spaces using CycleGAN [51]. The model directly outputs a white balanced estimate of the input image; the color of the light source is then estimated by Equation (9). The results are presented in Tables 1, 2 and 3 for the different datasets.

Input Mean Med. Tri. Best 25% Worst 25% Max
Linear 9.2 5.7 6.9 1.6 21.6 39.3
sRGB 8.4 5.9 6.4 1.5 19.6 37.8
Table 1: Performance comparison for linear versus sRGB color spaces for the SFU Gray Ball dataset [14]. sRGB input yields better results.
Input Mean Med. Tri. Best 25% Worst 25% Max
Linear 5.8 5.5 5.4 2.4 10.0 15.9
sRGB 3.4 2.6 2.8 0.7 7.3 18.0
Table 2: Performance comparison for linear versus sRGB color spaces for the ColorChecker RECommended dataset [16]. sRGB input yields better results.
Input Mean Med. Tri. Best 25% Worst 25% Max
Linear 6.2 5.8 5.8 2.7 10.4 20.5
sRGB 1.5 1.2 1.3 0.5 3.0 6.0
Table 3: Performance comparison for linear versus sRGB color spaces for the Cube dataset [2]. sRGB input yields better results.

The experiments show that performance in the sRGB color space is better than in linear RGB for all datasets. As a result, all subsequent experiments are performed in the sRGB color space to achieve maximum performance.
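For completeness, the standard sRGB transfer functions (IEC 61966-2-1) used to move between the two color spaces are sketched below; the paper does not specify its exact conversion code, so this is the conventional formulation.

```python
import numpy as np

def srgb_to_linear(x):
    """Inverse sRGB transfer function; applied before computing the illuminant with Eq. (9)."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.04045, x / 12.92, np.power((x + 0.055) / 1.055, 2.4))

def linear_to_srgb(x):
    """Forward sRGB encoding of a linear image in [0, 1]."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1.0 / 2.4) - 0.055)
```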

5.2 GAN Architecture Comparisons

Figure 3: Performance of different GAN architectures on the color constancy task. All models are able to recover proper white balanced images. The images are gamma adjusted for visualization.

In this experiment, we compare the performance of 3 different state-of-the-art GAN architectures for color constancy: Pix2Pix [26], CycleGAN [51] and StarGAN [13]. All models use sRGB inputs and output white balanced estimates; the color of the light source is then estimated by Equation (9). The results are presented in Tables 4, 5 and 6 for the different datasets. Figure 3 provides a number of visual examples.

Model Mean Med. Tri. Best 25% Worst 25% Max
Pix2Pix [26] 6.6 5.3 5.4 1.4 14.2 36.0
StarGAN [13] 10.3 8.9 9.0 4.0 18.9 46.0
CycleGAN [51] 8.4 5.9 6.4 1.5 19.6 37.8
Table 4: Performance of different GAN models for SFU Gray Ball dataset [14]. Pix2Pix achieves the best performance.
Model Mean Med. Tri. Best 25% Worst 25% Max
Pix2Pix [26] 3.6 2.8 3.1 1.2 7.2 11.3
StarGAN [13] 5.3 4.2 4.6 1.5 11.0 21.8
CycleGAN [51] 3.4 2.6 2.8 0.7 7.3 18.0
Table 5: Performance of different GAN models for ColorChecker RECommended dataset [16]. CycleGAN achieves the best performance.
Model Mean Med. Tri. Best 25% Worst 25% Max
Pix2Pix [26] 1.9 1.4 1.5 0.7 4.0 8.0
StarGAN [13] 3.8 3.3 3.5 1.3 7.0 11.4
CycleGAN [51] 1.5 1.2 1.3 0.5 3.0 6.0
Table 6: Performance of different GAN models for Cube dataset [2]. CycleGAN achieves the best performance.

Tables 5 and 6 show that CycleGAN outperforms Pix2Pix and StarGAN, while Table 4 shows that Pix2Pix achieves the best performance for the SFU Gray Ball dataset. Most of the hardest cases in this dataset are scenes with multiple light sources, with strong lighting and shadowed regions. Pix2Pix learns a per-pixel mapping of the illumination and is agnostic about the global illumination, which allows the model to effectively learn different local illumination conditions. The other GANs learn a global representation of the scene illumination, an assumption that only holds for uniform, global illumination. Since the Gray Ball dataset contains many instances of scenes with multiple light sources, Pix2Pix proves more resilient to multi-illuminant instances through its local mapping capability. Figure 4 shows an example of non-uniform illumination effects present in an image. Pix2Pix is more resilient as it recovers a proper white balanced image while preserving the foreground and background (brightness) distinction. On the other hand, CycleGAN appears closer to the ground-truth. This can be explained by the fact that the gray ball used to estimate the ground-truth illuminant is placed in the shadowed foreground; hence, the ground-truth completely misses the multi-illuminant case. CycleGAN, due to its global estimator nature, estimates a uniform white balanced image which is closer to the ground-truth, but yields a red cast.

Figure 4: Resilience of Pix2Pix in a multi-illuminant case. The foreground is in shadow, while the background receives full illumination. This creates a multi-illuminant scene. Pix2Pix is able to recover the white-balanced image while keeping the foreground and background illuminant separate. CycleGAN estimates a global uniform lighting yielding a red cast. This is due to the inaccurate GT as the reference gray ball is placed in the shadowed foreground ignoring the multi-illuminant case.
Figure 5: Visualizations of the StarGAN architecture. This architecture learns mappings to multiple domains, e.g. Illuminant A and canonical, simultaneously. The images are gamma adjusted for visualization.

StarGAN has much lower performance than the other two models. This may be due to the architecture's latent representation having to compensate for learning 3 different (input, canonical, Illuminant A) domain transformations, which is not the case with either Pix2Pix or CycleGAN, where the mapping is one-to-one. We can make use of StarGAN's ability to output multiple target illuminant domains, as visualized in Figure 5. This ability can also directly be applied to relighting tasks. The accuracy of this internal estimation defines how close the output is to the target domain. Ideally, the estimated ground-truth illumination should be the same for all output domains. Hence, by checking the performance of each output in estimating the ground-truth using Equation 9, we can check the consistency of the network. This also allows us to observe which target illuminant performs better in estimating the ground-truth illuminants. Per-illuminant consistency performance is presented in Table 7. It can be observed from the table that, for the RECommended Color Checker (RCC) dataset, Illuminant A estimation yields better results than the direct canonical estimation, whereas for the Cube dataset the canonical performs better. This can be explained by the fact that the Cube dataset consists of only outdoor images; hence, the input images seldom have an illuminant close to Illuminant A. Conversely, for the RCC dataset, indoor scenes generally have incandescent lighting, which is closer to Illuminant A. This intuition is further evaluated in the scene-based experiment below, where the performance on outdoor scenes is compared against that on indoor scenes.

Dataset + Illumination Mean Med. Tri. Best 25% Worst 25% Max
RCC + Canonical 5.3 4.2 4.6 1.5 11.0 21.8
RCC + Illuminant A 2.9 2.2 2.3 0.8 6.4 13.8
Cube + Canonical 3.8 3.3 3.5 1.3 7.0 11.4
Cube + Illuminant A 4.1 3.2 3.4 1.1 8.5 20.7
SFU + Canonical 10.3 8.9 9.0 4.0 18.9 46.0
SFU + Illuminant A 12.7 11.2 11.2 3.2 24.8 61.1
Table 7: Consistency of StarGAN’s illumination estimation.

5.3 Error Sources

Figure 6: Pixel-wise angular errors for the predicted images. Regions with smooth color changes and planar surfaces with homogeneous colors produce lower errors.

Since our GAN-based approach bypasses the illumination estimation stage and directly predicts the white balanced image, we can analyze the error sources of the generation process by calculating a per-pixel error image. The error images are generated by computing the per-pixel angular error between the estimated white balanced image and the input image corrected with the ground-truth illuminant. Figure 6 provides a number of examples of the estimations of the CycleGAN model. Regions with smooth color changes and planar surfaces with homogeneous colors produce lower errors, whereas regions with complex textures and non-homogeneous colors contribute higher errors. That is expected, as the network learns to estimate white balanced images based on neighborhood relations. To overcome this, the networks could be further regularized to give more attention to edges or surface normals. Further, the reference object of the RCC dataset causes high errors. Figure 6 shows that for the outdoor images of RCC and Cube, the input images have a greenish color cast. When the model predicts the white balanced images, green areas like grass or trees, whose color is close to the color cast present in the input images, produce more errors. The reason may be that the network confuses the color of the light source with the object albedo. For the indoor scenes of SFU, most of the errors are caused by direct light coming from the ceiling.
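A per-pixel error map of the kind visualized in Figure 6 can be computed as in the sketch below, assuming both images are linear RGB arrays of the same size; the function name is chosen for illustration.

```python
import numpy as np

def angular_error_map(pred_wb, gt_wb, eps=1e-6):
    """Per-pixel angular error (degrees) between the predicted white balanced image
    and the input image corrected with the ground-truth illuminant."""
    p = pred_wb.reshape(-1, 3).astype(np.float64)
    g = gt_wb.reshape(-1, 3).astype(np.float64)
    cos = np.sum(p * g, axis=1) / (np.linalg.norm(p, axis=1) * np.linalg.norm(g, axis=1) + eps)
    err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return err.reshape(pred_wb.shape[:2])
```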

5.4 Cross Dataset Performance

Figure 7: Cross dataset evaluations. Results on the SFU Gray Ball dataset appear much darker. Conversely, the results of the SFU Gray Ball trained model on the Cube and RCC datasets show many saturation artifacts.

Most learning-based illumination estimation methods are trained and tested on the same dataset, which does not always give an intuition about the generalization capacity of the algorithm. In this experiment, we further evaluate the generalization ability of CycleGAN, which achieves the best quantitative results on the majority of the 3 datasets. Since the datasets used for the experiments cover a variety of illumination conditions, picking the best performing model allows us to make an unbiased evaluation. Hence, we train CycleGAN on one dataset and test it on the other two. The same test splits are used for all experiments. The quantitative results are presented in Table 8 and visual results are shown in Figure 7.

Datasets (Train/Test) Mean Med. Tri. Best 25% Worst 25% Max
SFU / SFU 8.4 5.9 6.4 1.5 19.6 37.8
SFU / RCC 10.9 10.5 10.7 5.6 16.6 23.5
SFU / Cube 14.9 14.5 14.6 8.9 21.9 36.8
RCC / RCC 3.4 2.6 2.8 0.7 7.3 18.0
RCC / SFU 10.1 9.3 9.0 2.5 20.2 45.5
RCC / Cube 2.6 2.2 2.3 0.7 5.0 11.7
Cube / Cube 1.5 1.2 1.3 0.5 3.0 6.0
Cube / SFU 11.6 10.4 10.5 3.1 22.2 41.4
Cube / RCC 4.0 3.8 3.7 1.2 7.5 14.1
Table 8: Generalization behavior of CycleGAN [51]. Models trained on the RECommended Color Checker (RCC) [16] and Cube [2] datasets achieve the best generalization performance.

Table 8 shows that the models trained on the RECommended Color Checker (RCC) [16] and Cube [2] datasets achieve the best generalization performance, while the SFU Gray Ball dataset [14] yields the highest errors. This can be attributed to the fact that the ground-truth for that dataset is not always accurate. The performance of the model trained on RCC is better than the one trained on the Cube dataset, because the RCC dataset contains both indoor and outdoor scenes and thus the model learns more variation. Figure 7 shows that the results on the SFU Gray Ball dataset appear darker. Conversely, the results of the SFU Gray Ball trained model on the Cube and RCC datasets have many saturation artifacts. This is most likely due to the way the datasets are provided: both the RCC and Cube datasets are available as 16-bit images, which retain saturated pixel information, whereas the SFU Gray Ball dataset consists of standard, camera-processed (clipped) 8-bit images. Further, the model produces blurred results, so the gradients differ significantly from the original images.

Figure 8: Visuals of scene based splits for the RCC dataset [16]. The images are gamma adjusted for visualization.

5.5 Scene Based Performance

In this experiment, the performance of CycleGAN is evaluated based on the type of scene, i.e. indoor vs. outdoor. The model is trained on the complete dataset, and tested on indoor and outdoor scenes separately. We partition the test split of the RECommended Color Checker (RCC) dataset [16] into indoor and outdoor scenes, and evaluate both partitions with the CycleGAN model trained on the RCC training split. Table 9 shows the quantitative results and Figure 8 presents a number of examples. The performance on outdoor scenes is significantly better than on indoor scenes. We believe this is because of the more uniform nature of the lighting conditions in outdoor scenes, whereas indoor scenes include more complex lighting conditions with multiple illuminants. This poses a problem as the network assumes a single light source.

Scene Mean Med. Tri. Best 25% Worst 25% Max
Indoor 4.4 3.8 3.8 1.5 8.5 17.2
Outdoor 2.4 1.6 1.9 0.5 5.3 8.5
Table 9: Performance of CycleGAN [51] on indoor and outdoor scenes of the ColorChecker RECommended dataset [16]. It performs better on outdoor scenes, where the light is more uniform.

5.6 Comparison to the State-of-the-Art

In this section, we compare our models with different benchmarking algorithms. We provide results for Grey-World [10], White-Patch [30], Shades-of-Grey [17], General Grey-World [4], First-order Grey-Edge [46] and Second-order Grey-Edge [46]. The parameter values are set as described in [46]. In addition, comparisons with Pixel-based Gamut [3], Intersection-based Gamut [3], Edge-based Gamut [3], Spatial Correlations [12] and Natural Image Statistics [20] are provided. Further, results for High Level Visual Information [47] (bottom-up, top-down, and their combination), Bayesian Color Constancy [19], Automatic Color Constancy Algorithm Selection [7], Automatic Algorithm Combination [7], Exemplar-Based Color Constancy [27] and Color Tiger [2] are presented. Finally, we provide comparisons with 2 convolutional approaches: Deep Color Constancy [8] and Fast Fourier Color Constancy [5]. Some of the results were taken from the related papers, resulting in missing entries for some datasets.

Tables 10, 11 and 12 provide quantitative results for the 3 different benchmarks. For the SFU Gray Ball dataset [14] (Table 10), the Pix2Pix [26] model achieves the best performance in all metrics, with a 17.5% improvement in mean angular error, 18.4% in median and 20.6% in trimean. For the ColorChecker RECommended dataset [16] (Table 11), the CycleGAN [51] framework ranks second in most metrics, behind only Fast Fourier Color Constancy [5]. For the Cube dataset [2] (Table 12), CycleGAN [51] achieves the best mean and worst-25% errors, with results comparable to all other methods in the remaining metrics.

Method Mean Med. Tri. Best 25% Worst 25% Max
Grey-World [10] 13.0 11.0 11.5 3.1 26.0 63.0
Edge-based Gamut [3] 12.8 10.9 11.4 3.6 25.0 58.3
White-Patch [30] 12.7 10.5 11.3 2.5 26.2 46.5
Spatial Correlations [12] 12.7 10.8 11.5 2.4 26.1 41.2
Pixel-based Gamut [3] 11.8 8.9 10.0 2.8 24.9 49.0
Intersection-based Gamut [3] 11.8 8.9 10.0 2.8 24.9 47.5
Shades-of-Grey [17] 11.5 9.8 10.2 3.5 22.4 57.2
General Grey-World [4] 11.5 9.8 10.2 3.5 22.4 57.2
Second-order Grey-Edge [46] 10.7 9.0 9.4 3.2 20.9 56.0
First-order Grey-Edge [46] 10.6 8.8 9.2 3.0 21.1 58.4
Top-down [47] 10.2 8.2 8.7 2.6 21.2 63.0
Bottom-up [47] 10.0 8.0 8.5 2.3 21.1 58.5
Bottom-up & Top-down [47] 9.7 7.7 8.2 2.3 20.6 60.0
Natural Image Statistics [20] 9.9 7.7 8.3 2.4 20.8 56.1
E. B. Color Constancy [27] 8.0 6.5 6.8 2.0 16.6 53.6
Pix2Pix [26] 6.6 5.3 5.4 1.4 14.2 36.0
CycleGAN [51] 8.4 5.9 6.4 1.5 19.6 37.8
StarGAN [13] 11.4 9.2 10.0 3.8 21.8 41.4
Table 10: Performance on SFU Gray Ball [14].
Method Mean Med. Tri. Best 25% Worst 25% Max
Grey-World [10] 9.7 10.0 10.0 5.0 13.7 24.8
White-Patch [30] 9.1 6.7 7.8 2.2 18.9 43.0
Shades-of-Grey [17] 7.3 6.8 6.9 2.3 12.8 22.5
AlexNet + SVR [8] 7.0 5.3 5.7 2.9 14.0 29.1
General Grey-World [4] 6.6 5.9 6.1 2.0 12.4 23.0
CART-based Selection [7] 6.1 5.1 5.3 2.0 12.1 24.7
CART-based Combination [7] 6.0 5.5 5.7 2.6 10.3 17.7
Pixel-based Gamut [3] 6.0 4.4 4.9 1.7 12.9 25.3
Intersection-based Gamut [3] 6.0 4.4 4.9 1.7 12.8 26.3
Top-down [47] 6.0 4.6 5.0 2.3 10.2 25.2
Spatial Correlations [12] 5.7 4.8 5.1 1.9 10.9 18.2
Bottom-up [47] 5.6 4.9 5.1 2.3 10.2 17.9
Bottom-up & Top-down [47] 5.6 4.5 4.8 2.6 10.5 25.2
Natural Image Statistics [20] 5.6 4.7 4.9 1.6 11.3 30.6
Edge-based Gamut [3] 5.5 3.3 3.9 0.7 13.8 29.8
Bayesian [19] 5.4 3.8 4.3 1.6 11.8 25.5
E. B. Color Constancy [27] 4.9 4.4 4.6 1.6 11.3 14.5
Deep Color Constancy [8] 4.6 3.9 4.2 2.3 7.9 14.8
Second-order Grey-Edge [46] 4.1 3.6 3.8 1.5 8.5 16.9
First-order Grey-Edge [46] 4.0 3.1 3.3 1.4 8.4 20.6
FFCC (model Q) [5] 2.0 1.1 1.4 0.3 4.6 25.0
Pix2Pix [26] 3.6 2.8 3.1 1.2 7.2 11.3
CycleGAN [51] 3.4 2.6 2.8 0.7 7.3 18.0
StarGAN [13] 5.7 4.9 5.2 1.7 10.5 19.5
Table 11: Performance on ColorChecker RECommended [16].
Method Mean Med. Tri. Best 25% Worst 25%
Grey-World [10] 3.8 2.9 3.2 0.7 8.2
White-Patch [30] 6.6 4.5 5.3 1.2 15.2
Shades-of-Grey [17] 2.6 1.8 2.9 0.4 6.2
General Grey-World [4] 2.5 1.6 1.8 0.4 6.2
Second-order Grey-Edge [46] 2.5 1.6 1.8 0.5 6.0
First-order Grey-Edge [46] 2.5 1.6 1.8 0.5 5.9
Color Tiger [2] 3.0 2.6 2.7 0.6 5.9
Restricted Color Tiger [2] 1.6 0.8 1.0 0.2 4.4
Pix2Pix [26] 1.9 1.4 1.5 0.7 4.0
CycleGAN [51] 1.5 1.2 1.3 0.5 3.0
StarGAN [13] 4.8 3.9 4.2 1.9 8.9
Table 12: Performance on the Cube dataset [2].

6 Conclusion

In this paper, the color constancy task has been modelled as an image-to-image translation problem where the input is an RGB image of a scene taken under an unknown light source and the output is the color corrected (white balanced) one.

We have provided extensive experiments on 3 different state-of-the-art GAN architectures to demonstrate their (i) effectiveness on the task and (ii) generalization ability across different datasets. Finally, a thorough analysis is given of possible error contributing factors for future color constancy research.

Acknowledgements: This project was funded by the EU Horizon 2020 program No. 688007 (TrimBot2020). We would like to thank William Thong for the fruitful discussions.

References

  • [1] V. Agarwal, A. V. Gribok, A. Koschan, and M. A. Abidi. Estimating illumination chromaticity via kernel regression. In International Conference on Image Processing, 2006.
  • [2] N. Banic and S. Loncaric. Unsupervised learning for color constancy. In International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2018.
  • [3] K. Barnard. Improvements to gamut mapping color constancy algorithms. In European Conference on Computer Vision, 2000.
  • [4] K. Barnard, L. Martin, A. Coath, and B. Funt. A comparison of computational color constancy algorithms - part ii: Experiments with image data. IEEE Transactions on Image Processing, pages 985–996, 2002.
  • [5] J. T. Barron. Fast Fourier color constancy. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [6] S. Bianco, G. Ciocca, C. Cusano, and R. Schettini. Improving color constancy using indoor-outdoor image classification. IEEE Transactions on Image Processing, pages 1057–7149, 2008.
  • [7] S. Bianco, G. Ciocca, C. Cusano, and R. Schettini. Automatic color constancy algorithm selection and combination. Pattern Recognition, pages 695–705, 2010.
  • [8] S. Bianco, C. Cusano, and R. Schettini. Color constancy using cnns. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015.
  • [9] S. Bianco, C. Cusano, and R. Schettini. Single and multiple illuminant estimation using convolutional neural networks. IEEE Transactions on Image Processing, pages 4347–4362, 2017.
  • [10] G. Buchsbaum. A spatial processor model for object colour perception. Sources of Color Science, pages 1–26, 1980.
  • [11] V. C. Cardei, B. Funt, and K. Barnard. Estimating the scene illumination chromaticity by using a neural network. Journal of the Optical Society of America, pages 2374–2386, 2002.
  • [12] A. Chakrabarti, K. Hirakawa, and T. Zickler. Color constancy with spatio-spectral statistics. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1509–1519, 2012.
  • [13] Y. Choi, M. Choi, M. Kim, J. W. Ha, S. Kim, and J. Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [14] F. Ciurea and B. Funt. A large image database for color constancy research. In Color and Imaging Conference, 2003.
  • [15] M. D. Fairchild. Color Appearance Models. Wiley, Chichester, U.K., 2013.
  • [16] G. D. Finlayson, G. Hemrit, A. Gijsenij, and P. Gehler. A curious problem with using the colour checker dataset for illuminant estimation. In Color and Imaging Conference, 2017.
  • [17] G. D. Finlayson and E. Trezzi. Shades of gray and colour constancy. In Color and Imaging Conference, 2004.
  • [18] D. A. Forsyth. A novel algorithm for color constancy. International Journal of Computer Vision, pages 5–36, 1990.
  • [19] P. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp. Bayesian color constancy revisited. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008.
  • [20] A. Gijsenij and T. Gevers. Color constancy using natural image statistics and scene semantics. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 687–698, 2010.
  • [21] A. Gijsenij, T. Gevers, and J. van De Weijer. Generalized gamut mapping using image derivative structures for color constancy. International Journal of Computer Vision, pages 127–139, 2010.
  • [22] A. Gijsenij, T. Gevers, and J. van De Weijer. Computational color constancy: Survey and experiments. IEEE Transactions on Image Processing, pages 2475–2489, 2011.
  • [23] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
  • [24] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [25] Y. Hu, B. Wang, and S. Lin. Fc4: Fully convolutional color constancy with confidence-weighted pooling. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [26] P. Isola, J. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial nets. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [27] H. R. V. Joze and M. S. Drew. Exemplar-based color constancy and multiple illumination. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 860–873, 2014.
  • [28] T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. In International Conference on Machine Learning, 2017.
  • [29] E. H. Land. The retinex theory of color vision. Scientific American, pages 108–129, 1977.
  • [30] E. H. Land and J. J. McCann. Lightness and retinex theory. Journal of the Optical Society of America, pages 1–11, 1971.
  • [31] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [32] M. Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, 2017.
  • [33] Z. Lou, T. Gevers, N. Hu, and M. P. Lucassen. Color constancy by deep learning. In British Machine Vision Conference, 2015.
  • [34] X. Mao, C. Shen, and Y. Yang. Image restoration using very deep fully convolutional encoder-decoder networks with symmetric skip connections. In Advances in Neural Information Processing Systems, 2016.
  • [35] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
  • [36] S. W. Oh and S. J. Kim. Approaching the computational color constancy as a classification problem through deep learning. Pattern Recognition, pages 405–416, 2017.
  • [37] Y. Qian, K. Chen, J. Nikkanen, J. K. Kamarainen, and J. Matas. Recurrent color constancy. In IEEE International Conference on Computer Vision, 2017.
  • [38] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning, 2016.
  • [39] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference On Medical Image Computing & Computer Assisted Intervention, 2015.
  • [40] C. Rosenberg, T. Minka, and A. Ladsariya. Bayesian color constancy with non-gaussian models. In Advances in Neural Information Processing Systems, 2004.
  • [41] S. Shafer. Using color to separate reflection components. Color research and applications, pages 210–218, 1985.
  • [42] L. Shi and B. Funt. Re-processed version of the Gehler color constancy dataset of 568 images.
  • [43] W. Shi, C. C. Loy, and X. Tang. Deep specialized network for illuminant estimation. In European Conference on Computer Vision, 2016.
  • [44] R. Stanikunas, H. Vaitkevicius, and J. J. Kulikowski. Investigation of color constancy with a neural network. Neural Networks, pages 327–337, 2004.
  • [45] S. Tan, F. Peng, H. Tan, S. Lai, and M. Zhang. Multiple illumination estimation with end-to-end network. In IEEE International Conference on Image, Vision and Computing, 2018.
  • [46] J. van De Weijer, T. Gevers, and A. Gijsenij. Edge-based color constancy. IEEE Transactions on Image Processing, pages 2207–2214, 2007.
  • [47] J. van de Weijer, C. Schmid, and J. Verbeek. Using high-level visual information for color constancy. In IEEE International Conference on Computer Vision, 2007.
  • [48] J. von Kries. Influence of adaptation on the effects produced by luminous stimuli. Sources of Color Science, pages 109–119, 1970.
  • [49] W. Xiong and B. Funt. Estimating illumination chromaticity via support vector regression. Journal of Imaging Science and Technology, pages 341–348, 2006.
  • [50] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In IEEE International Conference on Computer Vision, 2017.
  • [51] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision, 2017.