1 Introduction
The objective of this paper is to show that natural lights must follow a straight-line locus in a special 2D chromaticity feature space, generated using a geometric-mean denominator to remove the effect of magnitude from colour, and that this locus can be derived from a camera calibration. Transformed back into non-log coordinates, the straight line in log colour space means that, in terms of ordinary L1-norm based chromaticity, lights follow a particular curve. The locus determined is camera-dependent. Deriving the parameters of this locus via a camera calibration means that one can then use the path to help identify the illuminant in the scene, and also to transform from one illuminant to another.
In this paper, the main use we make of the above observation regarding the path followed by illuminants is to apply it as an additional constraint that colour constancy algorithms can bring to bear. We show that the specular locus thus found does help in discovering the lighting in a scene. Moreover, since we know the path that illuminants take as the colour temperature changes, we can relight a scene simply by changing the temperature and thus moving along the locus. Using measured data for changing lights over static scenes, we show below that this shift in lighting is indeed accurate.
The history of using specularities to discover the illuminant is lengthy, and here we simply highlight some key contributions used in this paper. Shafer [1] introduced the widely used and quite effective dichromatic model of reflectance for dielectric materials, wherein surface reflectance consists of (i) a diffuse ('body') component that depends on subsurface material properties of a reflecting surface and (ii) a specular ('surface') component that depends on the air-surface interface layer and not on the body-reflectance properties. The diffuse component is responsible for generating the colour and shading of an object, and the specular component is responsible for highlights. For a dielectric (e.g., a plastic), the neutral-interface model [2] states that the colour of the specular contribution is approximately the same as the colour of the illuminant itself. However, simply taking the specular colour as identical to the light colour is insufficient: typically, specular reflection looks white to the viewer (for dielectric materials), but a careful inspection of specular pixels shows that the body colour is still present to some degree.
Klinker et al. [3] showed that when the diffuse colour is constant over a surface, the colour histogram of its image forms a T-shaped distribution, with the diffuse and specular pixels forming linear clusters. They used this information to estimate a single diffuse colour. In order to use this principle, however, their approach needed to segment an image into several regions of homogeneous diffuse colour. Moreover, Lee [4] proposed a method which uses specularity to compute illumination, based on the fact that in the CIE chromaticity diagram [5] the coordinates of the colours from different points on the same surface fall on a straight line connected to the specular point. This is the case when the light reflected from a uniform surface is an additive mixture of the specular component and the diffuse component. This seminal work initiated a substantial body of work on identifying specular pixels and using these to attempt to discover the illuminant [6, 7]. Another approach extending these algorithms is to define a constraint on the possible colours of illumination, making estimation more robust [8, 9].
Finlayson and Drew [10] used 4-dimensional images (more colours than R, G, B) formed by a special 4-sensor camera. They first formed colour ratios to reduce the dimensionality to 3 and to eliminate light intensity and shading; then, projecting log values onto the plane orthogonal to the direction in the 3D space corresponding to a lighting-change direction, they arrived at generalized colour 2-vectors independent of lighting. They noted that in the 2-space, specularities are approximately linear streaks pointing to a single specular point. Therefore they could remove specularities by the simple expedient of replacing each 2D colour by the maximum 2-vector position at its particular direction from the specular point. Note, however, that in [10] the authors were constrained to using a four-sensor camera. Here we relax that necessity by adding a more complete camera calibration phase.
In [11], Lu and Drew carried out an analysis again based on the formulation in [10], but in 3D rather than 4D, and using an additional image generated by imaging a with-flash exposure in addition to an image with no flash. The addition of the extra image means that, by subtracting the images, an estimate of illuminant colour temperature can be established based on closeness to a predetermined set of clusters for different lights in a log-chromaticity space, using the mean over the image in that space compared to the clusters.
In this paper we present a new camera calibration method aimed at finding a specular-point locus in the log-chromaticity colour feature space, for daylight illuminants. We prove that, in a simplifying model for image formation under non-fluorescent illumination, any candidate illuminants for an image generated by a specific camera must lie on a line in log-chromaticity space if we use a geometric mean to normalize colour. This has the consequence that ordinary chromaticities, formed by dividing by the sum of the channels, must lie on a specific curve. To support these theoretical considerations, we demonstrate the applicability of the line in log chromaticity space for several different datasets and, as applications, we use the resulting curve for illumination recovery and for relighting with a different illumination.
In essence, we are proposing a new type of colour constancy algorithm, one that uses a camera calibration. Many colour constancy algorithms have been proposed (see [12, 13] for an overview). The foundational colour constancy method, the so-called White-Patch or Max-RGB method, estimates the light source colour from the maximum response of the different colour channels [14]. Another well-known colour constancy method is based on the Grey-World hypothesis [15], which assumes that the average reflectance in the scene is achromatic. Grey-Edge is a more recent variant of the Grey-World hypothesis, stating that the average of the reflectance differences in a scene is achromatic [16]. Finlayson and Trezzi [17] formalize grey-based methods by subsuming them into a single formula using the Minkowski p-norm. The Gamut Mapping algorithm, a more complex and more accurate algorithm, was introduced by Forsyth [18]. It is based on the assumption that in real-world images, for a given illuminant, one observes only a limited number of colours. Several extensions have been proposed [19, 20, 21, 22, 23].
The paper is organized as follows. To begin, in Section 2 we discuss the underlying assumptions that allow us to create a simplified model of colour image formation. Then in Section 3 we examine how the simplified model, plus an offline calibration of the camera, can be used to analyze specular highlights. We propose a specular-point locus in chromaticity space in Section 4, based on the calibration for each camera. In Sections 5 and 6 we use the proposed illuminant locus to demonstrate its applicability in two application areas: illuminant identification and image relighting. In Section 7 we introduce a method to generate a matte image using our estimated illuminant, giving a specular-free image. Finally, we conclude the paper in Section 8.
2 Image Formation
To generate a simplified image formation model we apply the following set of simplifying assumptions (cf. [24]): (1) illumination is Planckian, or is sufficiently near the Planckian locus that a blackbody radiator forms a reasonable approximation for this use [25]; (2) surfaces are dichromatic [1]; and (3) RGB camera sensors are narrowband, or can be made sufficiently narrowband by a spectral-sharpening colour-space transform [26].
Thus we begin by considering a narrowband camera, with three sensors. Note again that in [10] the authors were constrained to using a four-sensor camera. Here we relax that necessity by adding a more complete camera calibration phase for a camera with only three sensors.
Real camera sensor curves are not in fact narrowband. Below, we investigate how the assumption of Planckian lighting impacts models of image formation by making use of a 3-sensor delta-function-sensitivity camera. It is evident that real sensors are far from idealized delta functions: each is typically sensitive to a wavelength interval over 100 nm in extent. Nevertheless, as we shall see, they behave sufficiently like narrowband sensors for our theory to work, and moreover this behaviour can be promoted by carrying out calculations in an intermediate, spectrally sharpened colour space [26].
Now let us briefly examine image formation in general for a dichromatic reflectance function comprising Lambertian and specular parts. For the Lambertian component, suppose there are several lights, each with the same SPD (e.g., an area source), given by Wien's approximation of a Planckian source [5]:
E(\lambda, T) \;=\; I\, k_1\, \lambda^{-5}\, e^{-k_2 / (\lambda T)} \qquad (1)
with distant lighting from the lights in normalized directions, each with its own intensity I (the constant k_1 determines the units). If the surface projecting to a given retinal point has a given spectral surface reflectance and surface normal, then, for a delta-function narrowband-sensor camera with spike sensor sensitivities q_k(\lambda) = q_k\,\delta(\lambda - \lambda_k), k = 1, 2, 3, the 3-vector RGB response is
(2) 
The above is the matte model employed. For the specular part, let us assume a specular model dependent on the half-way vector between the illuminant direction and the viewer direction:
(3) 
where each light contributes its own specular colour. E.g., in the Phong specular model [27],
(4) 
where a high power makes a more focussed highlight.
Now in a neutral interface model [4], the colour of the specular term is approximated as:
(5) 
Hence for Lambertian plus Specular reflectance, we arrive at a simple model:
(6) 
For each pixel at a retinal position, the second term in the brackets is a constant that depends only on geometry and not on the light colour. Therefore we have
(7) 
with possibly several specular highlights on any surface.
If we define
(8) 
then our expression simplifies to:
(9) 
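To make the simplified model concrete, the following minimal Python sketch (ours, not the paper's code) evaluates a Wien-approximation SPD at three nominal narrowband wavelengths and assembles a Lambertian-plus-specular response in the spirit of eqs. (1)–(9); the wavelengths, constants, and function names are illustrative assumptions.

```python
import numpy as np

# Standard Wien/Planck constants (SI units); lambda in metres, T in Kelvin.
K1 = 3.7418e-16   # ~ 2*pi*h*c^2
K2 = 1.4388e-2    # ~ h*c/k_B

def wien_spd(lam, T, intensity=1.0):
    """Wien's approximation to a Planckian SPD, as in eq. (1)."""
    return intensity * K1 * lam**-5 * np.exp(-K2 / (lam * T))

def rgb_response(T, surface_refl, shading=1.0, specular=0.0,
                 lam=np.array([610e-9, 550e-9, 465e-9]),
                 q=np.ones(3)):
    """Simplified Lambertian-plus-specular response of a narrowband
    (delta-function) 3-sensor camera: a matte term shading*E*S*q plus a
    specular term proportional to E*q (neutral interface)."""
    E = wien_spd(lam, T)
    return shading * E * surface_refl * q + specular * E * q

# Example: a reddish surface under a warm (3000 K) vs. cool (8000 K) light.
s = np.array([0.6, 0.3, 0.2])
print(rgb_response(3000.0, s))
print(rgb_response(8000.0, s, specular=0.2))
```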
3 Specular-Point Line in Log Chromaticity Space
We note that dividing by a colour channel (green, say) removes the initial factor in eq. (9). We can divide instead by the geometric mean (cf. [10]) so as not to be forced to choose a particular normalizing channel. Define the mean by
R_M \;\equiv\; \left( R_1\, R_2\, R_3 \right)^{1/3} \qquad (10)
Then we can remove light intensity and shading by forming a chromaticity 3-vector via
c_k \;=\; R_k / R_M\,, \quad k = 1, 2, 3 \qquad (11)
Thus from eq. (9) we have
(12) 
where we simplify the expressions by defining some shorthand notations as follows:
(13) 
and we define an effective geometric-mean-respecting value by setting
In the case of broadband sensors we replace some of the definitions in eq. (12) above by values that are equivalent for delta-function cameras but are appropriate for real sensors (extending definitions in [10]):
(14) 
The meaning of eq. (12) is that the log of the chromaticity is given by: (i) a term consisting of the matte-surface term combined with a scalar at each pixel that gives the specular contribution; (ii) a constant 3-vector offset term, which is a characteristic of the particular camera; and (iii) a term equal to the product of a "lighting-change" 3-vector, also characterizing the camera, times the inverse of the correlated colour temperature encapsulating the colour of the light.
Thus as the light colour (i.e., the temperature) changes, say into a shadow or because of interreflection, the log-chromaticity at a pixel simply follows a straight line in 3-space, along the light-change direction, even including the specular term. For a fixed temperature, if the specular contribution changes on a patch with a given reflectance, then the plot of the log-chromaticity will be a curved line.
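The straight-line claim of eq. (12) is easy to verify numerically. The sketch below (ours, assuming nominal narrowband wavelengths and standard Wien constants) forms log geometric-mean chromaticities of one matte patch under Wien lights of varying temperature and checks that the mean-subtracted points are collinear:

```python
import numpy as np

K1, K2 = 3.7418e-16, 1.4388e-2            # Wien/Planck constants (SI)
LAM = np.array([610e-9, 550e-9, 465e-9])  # nominal narrowband wavelengths

def log_geomean_chroma(rgb):
    """Log geometric-mean chromaticity (eqs. (10)-(11)); components sum to 0."""
    rgb = np.asarray(rgb, dtype=float)
    return np.log(rgb / rgb.prod(axis=-1, keepdims=True) ** (1.0 / 3.0))

# One matte surface (hypothetical reflectances) under Wien lights of varying T.
s = np.array([0.6, 0.5, 0.3])
temps = np.array([2500.0, 4000.0, 6500.0, 10000.0])
rgb = np.array([K1 * LAM**-5 * np.exp(-K2 / (LAM * T)) * s for T in temps])

chi = log_geomean_chroma(rgb)
d = chi - chi.mean(axis=0)
print(np.linalg.svd(d, compute_uv=False))  # one dominant singular value:
                                           # the points are collinear, cf. eq. (12)
```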
In this paper, we mean to calibrate the camera so as to recover (a projection of) both this light-change vector as well as the constant additive term. The difference from previous work [10] is as follows.
In the method of [10], going over to a chromaticity space meant that 4 dimensions were reduced to 3. Then, in that 3-space, the light-change vector was obtained as the first eigenvector of mean-subtracted colour-patch values. To then go over to a 2-space, log-chromaticity values were projected onto the subspace orthogonal to the 3D light-change vector. This meant that all lighting colour and strength were projected away. In that plane the illuminant, and consequently the specular point as well, were always located in precisely the same spot. It was argued that, at a highly specular point in an input image, the pixel values would essentially consist of the specular point, and thus one could derive that point from training images. Then forming radii from that specular spot out to the least-specular pixel position effectively removed specularities.
Here, in contrast, we start with 3D colour values, rather than 4D ones, and so chromaticity vectors are effectively 2D. Now calibration of the camera is used to provide both a value of the offset term in eq. (12) as well as of the lighting-colour-change vector.
For specular pixels there is no surface term in eq. (12), so the value of this log geometric-mean chromaticity at a purely specular pixel takes the simpler form
(15) 
Thus as the temperature changes we have a line, in a 2D colour space, on which any specular point must lie. To determine just where it does lie, we form an objective function that is minimized provided we choose the correct value of the temperature: an example of such a measure is given below in Section 5.1. Hence we recover the temperature, and therefore the light colour. Moreover, since we now have an illuminant locus, we can go on to relight images by moving the illuminant along the locus obtained during the camera calibration phase. Such relit images are shown below in Section 6, where images are shown as they would appear under a different colour temperature.
Note that although we work with 3-vectors, the step of division by the geometric mean followed by taking logarithms creates vectors that lie on a plane: they are all orthogonal to the vector (1, 1, 1); in fact, each of the three terms in eq. (12) lies in this plane. Thus the components are not independent.
4 Recovery of Specular-Point Locus
To find the light-change vector, we image matte Lambertian colour patches. Here we use the 18 non-grey patches of the Macbeth ColourChecker [28]. We form log-chromaticity values over a range of Planckian temperatures (in Kelvin).
According to eq. (12) (with no specular contribution), for each surface we should see a set of points in 3-space that falls on a straight line along the light-change direction. Thus, for each surface, if we then subtract the mean in each channel, we see a set of nearly coincident lines through the origin.
Therefore, as pointed out in [25] (in a 2D setting similar to eq. (12)), we can find the light-change vector by forming the covariance matrix of mean-subtracted values and calculating its eigenvectors. The first eigenvector is the desired approximation of the light-change direction.
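A sketch of this eigenvector step, assuming the log-chromaticities of each patch are stacked with one row per temperature (function and variable names are ours):

```python
import numpy as np

def light_change_direction(chi_by_patch):
    """Estimate the lighting-change direction: mean-subtract the
    log-chromaticities of each matte patch, pool all patches, and take the
    first eigenvector of the resulting covariance matrix.
    chi_by_patch: list of (num_temperatures x 3) arrays."""
    centred = [chi - chi.mean(axis=0) for chi in chi_by_patch]
    X = np.vstack(centred)
    cov = X.T @ X / max(len(X) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    e = eigvecs[:, np.argmax(eigvals)]     # eigenvector of largest eigenvalue
    return e / np.linalg.norm(e)           # normalized; overall sign arbitrary

# Usage (the chi arrays would come from imaging the 18 non-grey Macbeth patches):
# e_hat = light_change_direction([chi_patch1, chi_patch2, ...])
```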
To derive the offset term, we utilize the recovered normalized version of the light-change vector and image two lights (below) to determine the scaling along the inverse-temperature line.
Since we know that our colour features lie on the plane perpendicular to the unit vector (1, 1, 1)/√3, to simplify the geometry we first rotate all our log-chromaticity vector coordinates into that plane. The projector onto the plane is formed, and 2D coordinates are obtained by multiplication by the rotation matrix from the eigenvector decomposition of that projector:
(16) 
We denote vectors in this 2D space as 2-vectors. Explicitly, we form the 2-vectors in the plane by
(17) 
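Since the rotation in eqs. (16)–(17) is only fixed up to an in-plane rotation, the following sketch constructs one valid 2D coordinate system from the projector onto the plane orthogonal to (1, 1, 1)/√3; the particular basis U here is an assumption standing in for the paper's choice:

```python
import numpy as np

u = np.ones(3) / np.sqrt(3.0)            # log-chromaticities are orthogonal to u
P = np.eye(3) - np.outer(u, u)           # projector onto that plane

# An orthonormal basis U (2 x 3) for the plane: eigenvectors of P with
# eigenvalue 1 (any in-plane rotation of this basis is equally valid).
eigvals, eigvecs = np.linalg.eigh(P)
U = eigvecs[:, np.isclose(eigvals, 1.0)].T

def to_2d(chi):
    """Map a 3-vector (or N x 3 array) of log-chromaticities to 2D, cf. eq. (17)."""
    return np.asarray(chi) @ U.T

print(to_2d(np.array([0.2, -0.5, 0.3])))   # a 2-vector in the plane
```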
Now suppose that, in the 2D coordinates, two lights produce two 2-vectors: for each light we form the chromaticity (11), take logs, and then project via (17). Consider the recovered normalized light-change direction vector, projected into this plane and then normalized again to a unit 2-vector. Note that we recover only a normalized version of the light-change vector from our SVD analysis of imaged colour patches, with the norm unknown. That is, we work in the plane by rotating into it and further normalizing the projected 2-vector, giving a known, normalized 2-vector from our calibration.
Also, we need the projected offset vector: this is what we aim to recover.
Then the 2-vector coordinates for the two lights are
(18) 
where the scale is unknown and the colour temperatures of the two lights are known. Note that since we are imaging lights, not surfaces, the surface term in eq. (12) is not present.
Forming the difference of the two 2-vectors, we obtain a result involving only the normalized direction. So we can determine the norm if we know the two temperatures. To see this, consider the difference 2-vector
(19) 
Even from these two data points we can easily determine the normalized vector, since it is simply given by the direction of the difference. Since we know the two temperatures, the norm thus falls out of eq. (19).
Finally, subtracting the light-change term from each of the two vectors and taking the mean, we recover the offset term.
Let us denote the scaled light-change vector by the product of this norm with the normalized direction; then, using this vector and the offset, we arrive at a line (for this particular camera, calibrated as above), parametrized by temperature, that must necessarily be traversed by any candidate specular point:
(20) 
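Given the calibrated offset and scaled light-change vector, evaluating the locus of eq. (20) at a candidate temperature is then straightforward, assuming the line takes the offset-plus-direction-over-T form described above; the numerical values below are placeholders, not measured calibration results:

```python
import numpy as np

def specular_locus_point(T, offset_2d, m_2d):
    """Candidate specular point in the 2D log-chromaticity plane for colour
    temperature T (Kelvin), following the calibrated line: the camera offset
    plus the scaled light-change direction times 1/T (cf. eq. (20))."""
    return np.asarray(offset_2d) + np.asarray(m_2d) / float(T)

# Hypothetical calibration outputs (placeholder values only):
offset = np.array([-0.12, 0.05])
m = np.array([430.0, -260.0])
for T in (2500.0, 4000.0, 6500.0, 10000.0):
    print(T, specular_locus_point(T, offset, m))
```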
In summary, the calibration algorithm proposed is expressed in algorithm 1.
Algorithm 1: Camera calibration for the specular-point locus.

Colour target:
1. Record RGB responses reflected from the colour target for several lights; each patch then follows a parallel straight line. Calculate the geometric mean at each pixel from eq. (10).
2. Derive the geometric-mean-based chromaticity 3-vector from eq. (11), and take logarithms.
3. Find the light-change 3-vector as the first eigenvector of the mean-subtracted values for each colour patch.

For illuminants characterized by their known temperatures (in a lightbox, for example):
4. Derive values as above, for light reflected from a grey patch.
5. Project onto the plane orthogonal to (1, 1, 1)/√3 via eq. (17), forming 2D coordinates.
6. Subtracting pairs of values for known temperatures, find the 2D projected light-change vector via eq. (19).
7. Using this vector, find the mean value of the camera offset vector over the lights used, eq. (20).
As set out in algorithm 1, a more accurate way to recover the offset term and the light-change vector is to utilize several different known illuminants and capture them using the camera to be calibrated: the lights should approximately lie on a straight line in the 2D space. Then the line parameters, as well as outliers, can be recovered using a robust regression method such as Least Median of Squares (LMS) [29]. We shall find in the following sections that the offset and the light-change vector are all the calibration information that we need for different applications such as illuminant identification and relighting.
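As an illustration of this robust fitting step, the following Python sketch implements a simple exhaustive-pair variant of Least Median of Squares line fitting; the published procedure may differ in sampling strategy and thresholds, and the function name and the cutoff used here are our assumptions:

```python
import numpy as np
from itertools import combinations

def lms_line(points):
    """Least Median of Squares line fit in 2D (cf. [29]): try the line through
    every pair of points and keep the one minimizing the median squared
    perpendicular residual. Returns (point_on_line, unit_direction, inlier_mask)."""
    pts = np.asarray(points, dtype=float)
    best = None
    for i, j in combinations(range(len(pts)), 2):
        d = pts[j] - pts[i]
        n = np.linalg.norm(d)
        if n == 0:
            continue
        d = d / n
        proj = ((pts - pts[i]) @ d)[:, None] * d
        r2 = np.sum((pts - pts[i] - proj) ** 2, axis=1)
        med = np.median(r2)
        if best is None or med < best[0]:
            best = (med, pts[i], d, r2)
    med, p0, d, r2 = best
    inliers = r2 <= max(9.0 * med, 1e-12)   # simple cutoff for outlier rejection
    return p0, d, inliers

# Usage: p0, d, inliers = lms_line(projected_illuminant_2vectors)
```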
4.1 Real Images
The image formation theory used is based on three idealized assumptions: (1) Planckian illumination; (2) dichromatic surfaces; and (3) narrowband camera sensors. To determine whether real images stand up under these constraints and generate the needed straight line in 2D colour space, we make use of datasets of measured images [30, 31]. Fig. 1 displays measured illuminant points in the 2D chromaticity space for 86 scenes captured by a high-quality Canon DSLR camera under 86 different lighting conditions [30]. Notwithstanding the fact that the camera sensors are not narrowband and the illuminants are not perfectly Planckian, we can see that these illuminants do indeed approximately form a straight line, thus justifying the suitability of the theoretical formulation.
Since we assume that lights can be characterized as Planckian, we expect severely non-Planckian lights to form outliers to the straight-line path determined. Figs. 2(a,b) demonstrate that this is indeed the case. Here we show illuminant points transformed to the 2D space for 98 images, consisting of measured images of 9 objects specifically selected to include substantial specular content, under different illumination conditions [31]. In this dataset, the illuminants for 26 of the images are fluorescent (Sylvania Warm White Fluorescent (WWF), Sylvania Cool White Fluorescent (CWF), and Philips Ultralume Fluorescent (PUF)). These show up in Fig. 2(a) as outlier points. Fig. 2(b) shows that the robust LMS method correctly identifies these points as outliers and thus does not include them in calculating the line parameters.
5 Illuminant Identification
Our camera calibration process has generated a locus in chromaticity space that candidate natural daylight illuminations will follow. In this section we show how for a new image we can identify a point on this locus as an estimate of the illuminant.
Recently, Drew et al. [32, 33] presented an illuminant estimation method based on a planar constraint. This states that for near-specular pixels, Log-Relative-Chromaticity (LRC) values are orthogonal to the light chromaticity: they showed that if one divides image chromaticity by illuminant chromaticity, then in a log space the resulting set of 3-vectors is approximately planar, for near-specular pixels, and orthogonal to the lighting, but only for the correct choice of the illuminant. Hence they proposed an objective function based on this planar constraint which is minimized for the correct illuminant.
Here, we utilize this planar constraint for daylight illuminants while further constraining the light to lie on the daylight locus derived above. The locus provides an additional constraint on the illuminant and hence improves the estimate.
To begin, we briefly recapitulate below the derivation of this planar constraint.
5.1 Plane Constraint
Suppose we rewrite eq. (9) for the 3-vector RGB response, here relinquishing the requirements that the lighting be Planckian and the sensors be narrowband, but instead applying the different simplifying assumption that matte pixel RGB triples are the componentwise product of a light 3-vector and a surface triple [34]. Here, the surface triple is the reflectance at a pixel under equi-energy white light.
Adding a Neutral Interface Model term [4] for specular content, as in eq. (7), we have approximately
(21) 
where the scalar factor is the shading; e.g., for Lambertian reflectance, matte shading equals the lighting direction dotted into the surface normal. Here, a further triple again gives the overall camera sensor strength [35], and a scalar represents the amount of specular component at that pixel. The value of this specular amount for a pixel will depend upon the lighting direction, the surface normal, and the viewing geometry [1]. Let us lump these multiplicative factors into a single quantity for convenience. Now we have
(22) 
Instead of the geometric-mean based chromaticity in eq. (11), let us make use of the standard L1-norm based chromaticity [5]
r_k \;=\; R_k \Big/ \sum_{j=1}^{3} R_j\,, \quad k = 1, 2, 3 \qquad (23)
Thus here we have
(24) 
Let us define the Log-Relative-Chromaticity (LRC) as the logarithm of the above chromaticity divided by the chromaticity for the lighting itself. The planar constraint [32] says that for near-specular pixels, LRC values are orthogonal to the light chromaticity, provided we have chosen the correct illuminant to divide by.
To see how this constraint arises, we form the LRC:
(25) 
For convenience, quantities are normalized using the L1 norm.
Near a specular point the specular contribution dominates the matte one, and we can take the limit in that regime; then, in the limit, the LRC goes to
(26) 
The above is the Maclaurin series, accurate to first order in the small matte-to-specular ratio. By inspection, the LRC vector is orthogonal to the illuminant vector, and hence also orthogonal to the illuminant chromaticity.
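Because the intermediate equations (25)–(26) are not reproduced here, the following short derivation, in our own notation (α for the lumped matte factor, β for the specular amount, e for the illuminant RGB, s for the surface reflectance, t = α/β), sketches why the first-order Maclaurin expansion makes the LRC orthogonal to the illuminant:

```latex
% Sketch, in our own notation (not the paper's eqs. (25)-(26)):
% R_k = \alpha e_k s_k + \beta e_k, chromaticity r_k = R_k / \sum_j R_j,
% illuminant chromaticity r^e_k = e_k / \sum_j e_j, and t = \alpha/\beta.
\begin{align*}
\psi_k &= \log\frac{r_k}{r^e_k}
        = \log\frac{(\alpha s_k + \beta)\sum_j e_j}{\sum_j e_j(\alpha s_j + \beta)}
        = \log(1 + t\,s_k) - \log(1 + t\,\bar{s}),
        \qquad \bar{s} = \frac{\sum_j e_j s_j}{\sum_j e_j},\\[2pt]
       &\simeq t\,(s_k - \bar{s})
        \quad\text{(first-order Maclaurin expansion, near-specular: } t \to 0),\\[2pt]
\psi\cdot e &\simeq t\sum_k e_k (s_k - \bar{s}) \;=\; 0 .
\end{align*}
```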
The planar constraint therefore suggests finding which illuminant amongst several candidates is the correct choice, for a particular image, by minimizing the dot product over illuminants, for pixels that are likely near-specular [32]. Define the objective via the dot product between the LRC and the chromaticity for a candidate illuminant, with the LRC formed by dividing by this same illuminant chromaticity:
(27) 
Then we seek to solve an optimization as expressed in algorithm 2
subject to  (28) 
where the constraint set consists of pixel dot-product values with the candidate illuminant chromaticity that are likely to be near-specular, e.g., those in the lowest 10th percentile.
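The exact published form of eq. (28) is not reproduced above, so the following Python sketch shows one plausible reading of the search: restrict candidate illuminants to the calibrated locus, map each 2D locus point back to an L1 chromaticity, and keep the temperature minimizing the mean squared LRC dot product over the presumed near-specular pixels. The percentile, the temperature grid, and the helper names (locus_point_fn, and U from the earlier sketches) are assumptions:

```python
import numpy as np

def planar_objective(pixel_chroma, illum_chroma, frac=0.10):
    """One plausible reading of eq. (28): form the LRC by dividing pixel L1
    chromaticities by the candidate illuminant chromaticity, dot each LRC with
    that chromaticity, and average the squared dot products over the fraction
    of pixels with the smallest magnitudes (taken as near-specular)."""
    psi = np.log(pixel_chroma / illum_chroma)      # LRC for this candidate
    dots = psi @ illum_chroma
    k = max(1, int(frac * len(dots)))
    nearest = np.sort(np.abs(dots))[:k]
    return np.mean(nearest ** 2)

def estimate_illuminant(pixel_chroma, locus_point_fn, U,
                        temps=np.arange(2500.0, 10001.0, 100.0)):
    """Search only along the calibrated daylight locus: for each temperature,
    map the 2D locus point back to an L1 chromaticity and score it."""
    best_T, best_val = None, np.inf
    for T in temps:
        chi3 = U.T @ locus_point_fn(T)             # back to the 3D log plane
        c = np.exp(chi3)                           # geometric-mean chromaticity
        rho_e = c / c.sum()                        # L1 chromaticity of candidate
        val = planar_objective(pixel_chroma, rho_e)
        if val < best_val:
            best_T, best_val = T, val
    return best_T
```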
5.2 Experimental Results
We apply our proposed method to two different real-image datasets [31, 36] and compare our results to other colour constancy algorithms. The motivation here is to investigate whether the derived daylight locus correctly helps identify illuminants that are indeed daylights. We show that this is the case.
5.2.1 Laboratory Images
Our first experiment uses the Barnard dataset [31], denoted here as the SFU Laboratory dataset (introduced above in Section 4.1). This contains 321 measured images under 11 different measured illuminants. The scenes are divided into two sets as follows: minimal specularities (22 scenes, 223 images, i.e., 19 missing images); and non-negligible dielectric specularities (9 scenes, 98 images, with 1 illuminant missing for 1 scene). In this dataset the illuminants for 86 of the images are fluorescent. To compare to other colour constancy methods, we consider the following algorithms: White-Patch, Grey-World, and Grey-Edge as implemented by [16]. For Grey-Edge we use optimal settings, which differ per dataset [37] (one parameter setting for the SFU Laboratory dataset and another for the GreyBall dataset below). We also show the results provided by Gijsenij et al. [22] for pixel-based gamut mapping, using the best gamut mapping settings for each dataset.
The daylight-locus information is used as an additional constraint on the optimization (28), whereby candidate illuminants are restricted to the daylight locus determined by our calibration for the camera used in taking the images.
Table 1 lists the accuracy of the proposed method for the SFU Laboratory dataset [31], in terms of the mean and median of angular errors, compared to other colour constancy algorithms applied to this dataset. Since the daylight locus is designed for natural lights (Planckian illuminants) and not fluorescents, we expect performance to be better for non-fluorescents than for the 86 images taken under fluorescent lighting, and this is indeed the case. As well, we break out results for all methods for non-fluorescent illuminants (235 images). The results show that using the daylight locus in fact outperforms all other methods in terms of median error, notwithstanding the fact that it is a much less complex method than the gamut-mapping algorithms and does not require any tuning parameters.
The main conclusion to be drawn from this experiment is that the daylight locus does aid a planar-constraint-driven illuminant identifier when illuminants are indeed natural lights. This justifies the suitability of our daylight-locus formulation as a useful physics-based constraint on natural lighting.
Table 1. Median and mean angular errors on the SFU Laboratory dataset [31], over all images and over the non-fluorescent subset, comparing White-Patch, Grey-World, Grey-Edge, pixel-based Gamut Mapping, the Planar Constraint Search, and the proposed Daylight Locus using Planar Constraint.
5.2.2 Real-World Images
For a more real-world (out of the laboratory) experiment we used the GreyBall dataset provided by Ciurea and Funt [36]: this dataset contains 11,346 images extracted from video recorded under a wide variety of imaging conditions. The images are divided into 15 different clips taken at different locations. The ground truth was acquired by attaching a grey sphere to the camera, visible in the bottom-right corner of each image. This grey sphere must be masked out during experiments.
Fig. 3(a) shows the illuminants for this image set, mapped into the 2D colour space of eq. (17). We see that these illuminants do approximately follow a straight-line path in the 2-space; the LMS-based robust regression method finds the straight regression line shown red-dashed. Transformed back into the standard L1-norm based chromaticity space (23), the path is curved, as in Fig. 3(b).
Table 2 shows results for this dataset. We find that the Daylight Locus using the Planar Constraint does better than all the other methods save one: it is only bested by the far more complex Natural Image Statistics method [38]. This is a machine learning technique that selects and combines a set of colour constancy methods based on natural image statistics and scene semantics. Again, we find that adding the Daylight Locus information improves the Planar Constraint approach, since here the lights used are natural daylights.
6 Relighting Images
We have shown that by means of calibrating the camera we can recover the specular point for a new image not in the calibration set. That is, the method recovers an estimate of the temperature for the actual natural illuminant in a test image. Moreover, we have a curve that illuminants must traverse as the lighting colour changes. Consequently it should be possible to relight an image by changing the position of the specular point along the curve, thus generating new images with a different illuminant.
If we again adopt the assumption that camera sensors are narrowband, we can use a diagonal colour-space transform [39] to move the image into new lighting conditions, via the following equation:
(29) 
where the diagonal matrix has entries given by the componentwise ratio between the new specular point and the current specular point, and the transform maps the original RGB vector at each image pixel to its transformed value.
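As a sketch of this diagonal transform, under our reading of eq. (29) (the diagonal entries are the componentwise ratio of the new specular point's RGB to the current one, the usual von Kries construction):

```python
import numpy as np

def relight(image, specular_rgb_old, specular_rgb_new):
    """Diagonal (von Kries-style) relighting, cf. eq. (29): scale each channel
    by the ratio of the new specular-point RGB to the current one."""
    d = (np.asarray(specular_rgb_new, dtype=float)
         / np.asarray(specular_rgb_old, dtype=float))
    return np.asarray(image, dtype=float) * d   # broadcasting applies diag(d)

# Usage: new_img = relight(img, rgb_of_estimated_light, rgb_of_target_light)
```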
Fig. 4 shows the same image for different Planckian illuminants from 1500K to 10000K, using the proposed relighting method. The method arguably produces reasonable output images corresponding to the colour of the lights involved.
In another experiment, we compare the error of using the daylight locus for relighting via eq. (29) against using the actual measured illuminants. Fig. 5 shows the same image transferred to other measured images, using their estimated illuminants on the daylight locus. In all, we generated relit images for a fixed object under 8 different illuminants (56 relightings). In terms of PSNR for generated images compared to measured ones, the median, minimum, and maximum PSNR values demonstrate acceptable faithfulness of rendition for images under new lighting. As another comparison, instead of using illuminants on the locus we used the actual measured illuminants in transforming the image via eq. (29). The resulting min/median/max PSNR values are almost identical with those found using the illuminant approximation derived from the locus. This demonstrates that using the locus is nearly as good as using the actual illuminant for this relighting task, with negligible difference in results.
7 Matte Image from Angle Image
We would like to generate a matte output image, which will then act as an invariant image free of shading and specularity (and which could then be used as input to a segmentation scheme, for example). Our specular-invariant quantity is the angle, in feature space, from the recovered specular point to each image pixel; this angle encapsulates hue information. The main point is that the angle from the specular point to the feature point of a pixel is approximately independent of the presence or absence of specular content at that pixel. Hence, even if there is structure in the image feature space arising from specular content, in this 2D chromaticity space radii from the specular point will be in the same direction for pixels of the same body colour, with or without specular content.
Based on the chromaticity-space model [4], a pixel value is a linear combination of the light colour and the matte colour, as measured by the camera, resulting in a line in chromaticity space starting from the matte point for any particular colour and leading towards the illuminant colour. Since we already know the light, assumed to be the colour of the specular point, we have this line direction for each pixel, leading from the specular point to that pixel. Moreover, these lines correspond to the angular values that we already assigned to each pixel. We can therefore consider the pixels with the same angular value as belonging to the same matte object, although in real images it is possible that two matte values fall on the same line toward the specular point. Here we initially simply take any such cases as belonging to the same matte value; however, below, by considering spatial information we can in fact separate these two matte values from each other.
To make the calculation simpler we translate the chromaticity of the specular point to the origin and use polar coordinates. We discretize the angle values into 360 bins. Therefore, for each chromaticity point, we consider its polar coordinates relative to the specular point.
The final step to generate a specularity-free colour image is to find a matte value for each pixel. We take the farthest pixel from the specular point (i.e., maximum radius) in each angle bin as the matte colour (after removing outliers). So the matte colour for each pixel at that angle is identified with the farthest pixel. We call this process "angular projection to matte colour". In other words we are projecting chromaticity points to the border of chromaticity values for each angle, considering the specular point as the centre of projection:
(30) 
Fig. 6 illustrates angular projection to matte colour applied to the chromaticity points of a real image.
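A minimal Python sketch of this angular projection follows (the outlier removal mentioned above is omitted, and the bin count and names are ours):

```python
import numpy as np

def angular_projection(chroma_2d, specular_point, n_bins=360):
    """'Angular projection to matte colour' (cf. eq. (30)): translate the
    specular point to the origin, bin pixels by polar angle, and assign every
    pixel in a bin the chromaticity of that bin's farthest (maximum-radius)
    pixel. Outlier removal is omitted in this sketch."""
    pts = np.asarray(chroma_2d, dtype=float)
    v = pts - np.asarray(specular_point, dtype=float)
    radius = np.hypot(v[:, 0], v[:, 1])
    theta = np.arctan2(v[:, 1], v[:, 0])                      # angle in (-pi, pi]
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    matte = pts.copy()
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        matte[idx] = pts[idx[np.argmax(radius[idx])]]         # farthest pixel wins
    return matte
```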
The angular projection is more sensitive to noise the closer image feature points are to the specular point. Generally, because of noise, angular projection to matte colour may completely fail for highlights. Hence we deal differently with the 10% of pixels that are closest to the candidate specular point: we iteratively inpaint these pixels using matte-colour data from neighbouring pixels that correspond to the same angular value (1D inpainting). That is, we use voting based on the matte colours of the pixel's neighbours: the new matte colour for that pixel will be the majority of its neighbours' matte colours if that majority garners at least half of the votes.
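The following simplified sketch shows the voting idea as a spatial-window majority vote over reliable neighbours; the paper's procedure additionally restricts candidates to the same angular value (1D inpainting), which we omit here for brevity:

```python
import numpy as np

def vote_fix_highlights(matte_labels, near_specular_mask, window=5):
    """For pixels flagged as too close to the specular point, replace the matte
    label (e.g., the angular bin) by the majority label of nearby reliable
    pixels, accepted only if that majority holds at least half of the votes."""
    lab = np.array(matte_labels)                 # 2D integer label image
    h, w = lab.shape
    r = window // 2
    fixed = lab.copy()
    for y, x in zip(*np.where(near_specular_mask)):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        neigh = lab[y0:y1, x0:x1][~near_specular_mask[y0:y1, x0:x1]]
        if neigh.size == 0:
            continue
        vals, counts = np.unique(neigh, return_counts=True)
        if counts.max() * 2 >= neigh.size:       # majority >= half the votes
            fixed[y, x] = vals[np.argmax(counts)]
    return fixed
```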
A complete synthetic example consists of accurately modeled matte plus specular components. Here, let us consider a test image consisting of three shaded spheres (as in [10]), with surface colours equal to patches 1, 4, and 9 of the Macbeth ColourChecker [28] (dark skin, moderate olive green, moderate red), under standard illuminant D65 (standard daylight with correlated colour temperature 6500 K [5]), using the sensor curves for a Kodak DCS420 digital colour camera. If we adopt a Lambertian model then the matte image is as in Fig. 7(a). We now add a specular reflectance lobe for each surface reflectance function. We use the Phong illumination model [27], together with underlying matte Lambertian shading. Here, we use a Phong factor of 1 for the magnitude relative to matte. For the Phong power, we use a power of 20, whose inverse is essentially the roughness, 0.05. The matte image goes over to one with highlights as in Fig. 7(b). For our synthetic example, the resulting chromaticity image is shown in Fig. 7(d). Comparing to the input chromaticity image in Fig. 7(c), we see that the algorithm performs very well for generating the underlying matte image: specularities in the centre of each sphere are essentially gone. In comparison, Fig. 7(a) shows the theoretical, correct matte image, which is indeed very close to the algorithm output in Fig. 7(d).
Fig. 8 shows results, including finding the specular point and generating a matte colour image, for 4 input images: whereas the original images' chromaticities clearly show highlight effects and some shading, the output of the proposed method effectively eliminates these effects.




8 Conclusion
In this paper we present a new camera calibration method aimed at recovering parameters for the locus followed by illuminants in a special 2D chromaticity space. The objective is to discover the colour temperature of the illuminant in the scene, for a new image not in the training set but captured using the calibrated camera.
As a testing method to verify the validity of the proposed locus idea, we compare illuminant recovery making use of the suggested locus as opposed to not using it. We determined that adding the locus constraint does indeed help identify the scene illuminant. While the effect is not large, nonetheless the experiments do provide a justification of the locus approach, a new insight in physics-based vision.
As an additional capability, we can subsequently generate a new version of the input image, shown as it would appear relit under new lighting conditions, by considering different illuminant values as the illuminant moves along the specular-point locus.
In future work we will investigate how to make the method more robust to illuminants that differ more substantially from Planckians.
References
 [1] S. Shafer, “Using color to separate reflection components,” Color Research and Application 10, 210–218 (1985).
 [2] H.C. Lee, E. Breneman, and C. Schulte, “Modeling light reflection for computer color vision,” IEEE Trans. Pattern Analysis and Machine Intelligence 12, 402–409 (1990).

 [3] G. Klinker, S. Shafer, and T. Kanade, “The measurement of highlights in color images,” International Journal of Computer Vision 2, 7–32 (1988).
 [4] H.C. Lee, “Method for computing the scene-illuminant chromaticity from specular highlights,” The Journal of the Optical Society of America A 3, 1694–1699 (1986).
 [5] G. Wyszecki and W. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulas (Wiley, New York, 1982), 2nd ed.
 [6] T. Lehmann and C. Palm, “Color line search for illuminant estimation in realworld scenes,” The Journal of the Optical Society of America A 18, 2679–2691 (2001).
 [7] R. Tan and K. Ikeuchi, “Separating reflection components of textured surfaces using a single image,” IEEE Transactions on Pattern Analysis and Machine Intelligence pp. 178–193 (2005).

 [8] G. Finlayson and G. Schaefer, “Convex and non-convex illumination constraints for dichromatic color constancy,” in “Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,” (2001), pp. 598–605.
 [9] G. Finlayson and G. Schaefer, “Solving for colour constancy using a constrained dichromatic reflection model,” Int. J. Comput. Vision 42, 127–144 (2001).
 [10] G. Finlayson and M. Drew, “4-sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities,” in “ICCV’01: International Conference on Computer Vision,” (IEEE, 2001), pp. II: 473–480.
 [11] C. Lu and M. Drew, “Practical scene illuminant estimation via flash/no-flash pairs,” in “Color Imaging Conference,” (2006).
 [12] S. Hordley, “Scene illuminant estimation: past, present, and future,” Color Research and Application 31, 303–314 (2006).
 [13] A. Gijsenij, T. Gevers, and J. van de Weijer, “Computational color constancy: Survey and experiments,” IEEE Transactions on Image Processing 20, 2475–2489 (2011).
 [14] E. Land, “The retinex theory of color vision,” Scientific American 237, 108–128 (1977).
 [15] G. Buchsbaum, “A spatial processor model for object colour perception,” J. Franklin Inst. 310, 1–26 (1980).
 [16] J. van de Weijer and T. Gevers, “Color constancy based on the greyedge hypothesis,” in “Int. Conf. on Image Proc.”, (2005), pp. II:722–725.
 [17] G. Finlayson and E. Trezzi, “Shades of gray and colour constancy,” in “Twelfth Color Imaging Conference: Color, Science, Systems and Applications,” (2004), pp. 37–41.
 [18] D. Forsyth, “A novel approach to color constancy,” in “Proceedings of the Int. Conf. on Computer Vision,” (1988), pp. 9–18.
 [19] K. Barnard, “Improvements to gamut mapping colour constancy algorithms,” in “European conference on computer vision,” (2000), pp. 390–403.
 [20] G. Finlayson and S. Hordley, “Improving gamut mapping color constancy,” IEEE Transactions on Image Processing 9, 1774–1783 (2000).
 [21] G. Finlayson, “Color in perspective,” IEEE Transactions on Pattern Analysis and Machine Intelligence 18, 1034–1038 (1996).
 [22] A. Gijsenij, T. Gevers, and J. van de Weijer, “Generalized gamut mapping using image derivative structures for color constancy,” International Journal of Computer Vision 86, 127–139 (2008).
 [23] H. Vaezi Joze and M. Drew, “White patch gamut mapping colour constancy,” in “Proceedings of IEEE Int. Conf. on Image Proc.”, (IEEE, 2012).
 [24] G. Finlayson, S. Hordley, C. Lu, and M. Drew, “On the removal of shadows from images,” IEEE Trans. Patt. Anal. Mach. Intell. 28, 59–68 (2006).
 [25] G. Finlayson and S. Hordley, “Colour constancy at a pixel,” The Journal of the Optical Society of America A 18, 253–264 (2001).
 [26] G. Finlayson, M. Drew, and B. Funt, “Spectral sharpening: sensor transformations for improved color constancy,” The Journal of the Optical Society of America A 11, 1553–1563 (1994).
 [27] J. Foley, A. van Dam, S. Feiner, and J. Hughes, Computer Graphics: Principles and Practice (AddisonWesley, 1990), 2nd ed.
 [28] C. McCamy, H. Marcus, and J. Davidson, “A colorrendition chart,” J. App. Photog. Eng. 2, 95–99 (1976).

 [29] P. Rousseeuw and A. Leroy, Robust Regression and Outlier Detection (Wiley, 1987).
 [30] P. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp, “Bayesian color constancy revisited,” in “CVPR’08: Computer Vision and Pattern Recognition,” (2008).
 [31] K. Barnard, L. Martin, B. Funt, and A. Coath, “A data set for colour research,” Color Research and Application 27, 147–151 (2002).
 [32] M. Drew, H. V. Joze, and G. Finlayson, “Specularity, the zetaimage, and informationtheoretic illuminant estimation,” in “CPCV2012: European Conference on Computer Vision Workshop on Color and Photometry in Computer Vision,” (2012).
 [33] M. S. Drew, H. R. V. Joze, and G. D. Finlayson, “The zetaimage, illuminant estimation, and specularity manipulation,” Computer Vision and Image Understanding 127, 1–13 (2014).
 [34] C. Borges, “Trichromatic approximation method for surface illumination,” The Journal of the Optical Society of America A 8, 1319–1323 (1991).
 [35] M. Drew and G. Finlayson, “Multispectral processing without spectra,” The Journal of the Optical Society of America A 20, 1181–1193 (2003).
 [36] F. Ciurea and B. Funt, “A large image database for color constancy research,” in “IS&T/SID Color Imaging Conference,” (2003), pp. 160–164.
 [37] A. Gijsenij, “Color constancy : Research website on illuminant estimation,” http://staff.science.uva.nl/~gijsenij/colorconstancy/index.html.
 [38] A. Gijsenij and T. Gevers, “Color constancy using natural image statistics and scene semantics,” IEEE Transactions on Pattern Analysis and Machine Intelligence 33, 687–698 (2011).
 [39] H. Chong, S. Gortler, and T. Zickler, “The von kries hypothesis and a basis for color constancy,” in “IEEE International Conference on Computer Vision,” (2007), pp. 1–8.