I Introduction
Digital cameras are designed in analogy to the trichromatic human visual system, which has three cone sensors. If a camera is to capture colors like a human observer, arguably the camera sensors should equal the cone fundamentals [1]. Practically, however, engineering cameras with spectral sensitivities similar to the cone fundamentals is only required if we wished to construct a biologically plausible model of how we see [2]. For most practical applications, e.g. photography and video, it is more important that we can transform the recorded device RGBs to drive a display so that the image captured by a camera either looks the same to a human observer or records triplets of numbers, e.g. CIE XYZ coordinates, that are referenced to the human visual system [3]. We say that a digital camera is colorimetric if it meets the so-called Luther condition [4, 5], i.e. its spectral sensitivity functions are linearly related to the CIE XYZ color matching functions (CMFs).
The Luther condition places a very strong constraint on the shape of camera spectral sensitivities. A strong constraint is required because the Luther condition effectively assumes that any and all spectral stimuli are possible. However, many studies have shown that the actual spectra (measured in the real world) are far from being arbitrary. Indeed, reflectance spectra tend to be quite smooth [6, 7, 8] and as a consequence can be fit with a low-dimensional linear basis [9, 10]. Indeed, spectral bases with dimensions from six to eight, for different applications, are often proposed as adequate models of spectral reflectance. Illuminants, by contrast, are much less describable by small parameter models. Indeed, for artificial lights such as fluorescent and LED lights, the light spectrum can be very spiky and the number and position of the spikes can vary considerably. And yet, illuminants are also far from being arbitrary. They are designed to have colors near the Planckian locus [3], a requirement to score highly on color rendering indices [11].
Possibly, a more practically useful variant of the Luther condition would be one that is data-driven. That is, where camera RGBs can be mapped to XYZs for the spectral data that are likely to be encountered in the real world. Equally, in principle, we might consider whether a nonlinear mapping could, or indeed should, be used.
It is a classical result [12] that if reflectances were exactly modelled by a 3-dimensional linear model then for a given light spectrum there would be a specific transform matrix taking camera RGBs to XYZs. While reflectance spectra are not adequately described by a 3-dimensional model, RGBs can be approximately mapped to corresponding XYZs using a matrix. Indeed, this regression approach is adopted in almost all cameras with good results (we are mostly happy with the colors a camera records). But, as we shall see later, the 'fit' is not sufficient from a color measurement point of view.
Of course, rather than using a linear matrix to map RGBs to XYZs, we could use a nonlinear transform instead. Possible nonlinear methods include polynomial and root-polynomial regressions and look-up tables [13, 14, 15]. However, the linear transform method of using a matrix, even though it is not optimal in terms of fitting error, has two advantages compared to most nonlinear methods. First, the transform scales linearly with exposure. If the scene is made twice as bright (e.g. by doubling the quantity of incoming light), the same matrix correctly maps the camera measurements to XYZs (because the magnitudes of camera RGBs and XYZs both double). Typically, nonlinear methods do not have this exposure-invariant property (one exception is [16]).
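To make the exposure-invariance argument concrete, the following sketch (with randomly generated stand-in values; `M`, `rgb` and `poly` are illustrative, not from any camera pipeline) checks that a 3x3 matrix correction commutes with a doubling of exposure while a simple polynomial expansion does not:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((3, 3))      # a hypothetical 3x3 colour-correction matrix
rgb = rng.random(3)         # a camera RGB triplet

# Linear correction commutes with exposure: doubling the light doubles
# both the RGBs and the corrected XYZs, so the same M stays valid.
linear_scales = np.allclose(M @ (2.0 * rgb), 2.0 * (M @ rgb))

# A polynomial expansion (here, appending squared terms) does not:
# the squared terms scale by a factor of 4 rather than 2.
def poly(v):
    return np.concatenate([v, v ** 2])

poly_scales = np.allclose(poly(2.0 * rgb), 2.0 * poly(rgb))
```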
The second advantage is that a linear transform is, well, linear. The human eye measures color stimuli linearly: at the cone quantal catch level, the response to the sum of two spectral stimuli is equal to the sum of the responses to the two stimuli viewed individually [2]. This can be an important physical consideration. As an example, when we view a surface that has highlights, the recorded color is a convex sum of the so-called body color (the color name we would assign to the object) and the color of the highlight [17], sometimes called the interface color. If we are viewing a red shiny plastic surface the body color is red, and if the viewing light is white then the interface color is also white (i.e. the same as the color of the light). As we move from pure body to pure highlight color, the measured XYZs lie on a 2D plane in the color space. Equally the camera, which at the sensor level has a linear response, will also make measurements that lie on a 2D plane. But a nonlinear correction will distort the plane and the result will be an image that is not physically accurate or even physically plausible. This problem is discussed in detail in [18].
Practically, the closer the spectral sensitivities of a camera are to being linearly related to the XYZ color matching functions, the better it will perform as a tool for color measurement, i.e. the more colorimetric it will become. Interestingly, when we linearly regress RGBs to XYZ tristimuli, we can interpret this as linearly transforming the sensors themselves. That is, a new camera whose sensitivities are modified by the linear regression transform approximately measures the desired XYZs. It follows that one strategy for improving the color measurement capability of a camera would be to change the camera sensitivities (to ones that are more linearly related to the XYZ color matching functions). However, there are constraints on the sensitivities achievable in physically realisable cameras, which means we can never get 100% colorimetric accuracy.
In this paper, we make an easy modification to the camera spectral sensitivities. We propose simply to place a specially designed filter in front of a camera, with the goal of making the filtered RGB measurements it records as linearly related to the target XYZs as possible.
How we design the best filter to make a camera colorimetric is the central concern of this paper. We develop optimisation methods to solve for the optimal filter that either makes the camera best satisfy the Luther condition or, in a data-driven approach, can best predict measured XYZs for a range of real reflectances and lights. In the rest of this paper we respectively discuss Luther-condition and Data-driven filters.
The methods we develop are not closed-form but adopt the alternating least-squares paradigm [19, 20, 21]. For the Luther-condition filter optimisation problem we aim to find a filter so that the filtered camera response functions are a linear transform from the XYZ CMFs [22]. In the first step, we find the filter that best maps the spectral sensitivities of the camera to the XYZ CMFs directly. Then we find the best linear combination of the filtered camera sensitivities that approximates the XYZ sensitivities. Holding this mapping fixed, we can solve for a new best filter. Then we hold the new filter fixed to solve for the best linear transform. We iterate in this way until the procedure converges [23]. Each individual step in the optimisation can be solved, in closed form, using simple linear least-squares. In the Data-driven approach we analogously find the filter based on actual measured RGBs and XYZs following the alternating least-squares technique [24]. For the Luther and Data-driven techniques, the constraint that the recovered filters must be positive is considered.
Clearly, the filter shown in Fig. 1e is not desirable. In the short wavelengths there is a sharp change in transmittance and as a whole the filter is not smooth. For most of the wavelength range the filter transmits little light (less than 20%). Thus, we extend our optimisation framework to incorporate minimum and maximum bounds on transmittance and also the requirement that the filters are smooth [25].
Experiments demonstrate that we can find Luther-condition and Data-driven filters that dramatically increase colorimetric accuracy across a large set of commercial cameras.
The rest of the paper is structured as follows. In Section II we review color matching and color image formation; both ideas underpin our filter design method. The mathematical optimisations for the optimal color filter for a given camera are presented in Section III. The experimental results are reported in Section IV. The paper concludes in Section V.
II Background
II-A Color Matching Functions
Color matching functions provide a quantitative link between physical light stimuli and the colors perceived by the human visual system. Figure 2 shows a typical setup for the color matching experiment. The observer views a bipartite field where one side is lit by a test light while the other side is lit by a mixture of three primary lights (i.e. monochromatic red, green and blue lights). The intensities of the three primary lights are adjusted by the observer to make a visual match, i.e. the two stimuli on each side of the bipartite field match if they look visually indistinguishable to the observer. Sometimes, no match is possible. In this case one of the primary lights should be added to the test light field. Mathematically, we can model this as if we were subtracting some of the primary light. See [26] for more discussion.
By successively setting the test light to be a unit-power monochromatic light at sample points across the visible spectrum, we can measure the Color Matching Functions [27]. That is, we find the RGB mixture that affords a match on a per-wavelength basis. Because color matching is linear (the sum of two spectral test lights is matched by the sum of their individual matches), given the color matching functions we can compute the match for any arbitrary test light [28]. We simply integrate any test light spectrum with the Color Matching Functions. It can also be shown that the color matching functions are necessarily linearly related to the cone sensitivities [29, 30].
Assuming monochromatic primary lights at 650nm, 530nm and 460nm, the resulting CIE RGB CMFs are shown to the right of Fig. 2. The XYZ CMFs are a linear combination of the RGB CMFs, as shown in Fig. 1b. The XYZ CMFs are used (as opposed to the RGB CMFs), in part, because they have, by design, no negative sensitivities. Standardised in 1931, the lack of negatives made pencil-and-paper calculations with the matching curves easy.
The X, Y and Z scalar values we compute when we integrate a test light spectrum with the XYZ CMFs are called XYZ tristimuli. In this paper we are interested in using a camera to measure XYZ tristimuli.
The color matching experiment is of direct practical importance. Indeed, suppose a display device can output three colors equal to the R, G and B primaries used in the color matching experiment. It follows that a camera whose sensitivities equal the RGB Color Matching Functions would record the correct RGBs to drive the display so as to produce a perceptual match to the test light. Equally, it would suffice for the camera's sensitivities to be linearly related to the CMFs, since we could linearly transform the camera measurements to the correct RGBs to drive the display. Of course, we note that, unlike the RGB color matching functions, we can only drive the putative display with positive numbers. Consequently, there are real-world colors that cannot be reproduced on displays.
II-B Continuous Color Image Formation
The color of a pixel recorded by a digital camera depends on three physical factors: the spectral power distribution of the illuminant, the spectral reflectance of the object, and the sensor's spectral sensitivities. The color formation at a pixel, under the Lambertian surface model, can be modeled as

$$\rho_k = \int_\omega E(\lambda)\, S(\lambda)\, Q_k(\lambda)\, d\lambda \qquad (1)$$

where the subscript $k \in \{R, G, B\}$ indicates the color channel and $\lambda$ denotes the wavelength variable defined on the visible spectrum $\omega$ (approximately 400nm to 700nm). The functions $E(\lambda)$ and $S(\lambda)$ respectively denote the spectral power distribution of the illuminant and the spectral reflectance. The function $Q_k(\lambda)$ is the spectral sensitivity of the $k$th camera color channel.

Let us define the color signal $C(\lambda)$ as the product of the illuminant and the reflectance:

$$C(\lambda) = E(\lambda)\, S(\lambda) \qquad (2)$$

so that Equation (1) can be rewritten as

$$\rho_k = \int_\omega C(\lambda)\, Q_k(\lambda)\, d\lambda \qquad (3)$$
II-C Discrete Color Image Formation
In the optimisations that will be presented in Section III, it is useful to recast the continuous integrated responses using discrete representations where spectra are represented as sampled vectors of measurements. Typically, and justifiably from a color accuracy point of view [31], a spectrum can be represented by 31 measurements made between 400nm and 700nm at a 10nm sampling interval. Note, the methods we set forth, since they are designed for the discrete domain, are agnostic about the sampling interval. If the data is given at a 5nm sampling distance then each spectrum would be represented as a 61-component vector. Henceforth, we will talk about spectra being 31-vectors (and this corresponds to the format of most available measured spectral data). Equation (3) is now, equivalently, written as:

$$\rho_k = \underline{c} \cdot \underline{q}_k \qquad (4)$$

Respectively, $\underline{c}$ and $\underline{q}_k$ denote the sampled versions of the color signal spectrum and the $k$th spectral sensitivity function. We assume the sampling distance is incorporated in the spectral sensitivity vectors. Here '$\cdot$' denotes the dot product of two vectors.
One advantage of transcribing image formation into vector notation is that we can usefully deploy the tools of linear algebra. Let the matrices $C$ and $Q$ denote $31 \times N$ and $31 \times 3$ matrices whose columns respectively contain $N$ color signal spectra and the 3 device spectral sensitivities. The set of RGB responses $P$ can be written as a single concise expression:

$$P = Q^T C \qquad (5)$$

where $^T$ denotes the matrix transpose.

Denoting $X$ as a $31 \times 3$ matrix whose columns contain the discrete XYZ color matching functions, the XYZ tristimulus responses $T$ can be written as:

$$T = X^T C \qquad (6)$$
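As a quick illustration of Equations (4) to (6), the snippet below builds random stand-in matrices (the spectra and sensitivities are synthetic, not measured data) and verifies that the matrix products reproduce the per-pixel dot products:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.random((31, 10))   # columns: 10 colour signal spectra (31 samples)
Q = rng.random((31, 3))    # columns: 3 camera spectral sensitivities
X = rng.random((31, 3))    # columns: 3 XYZ colour matching functions

P = Q.T @ C                # 3 x 10 camera responses, as in Eq. (5)
T = X.T @ C                # 3 x 10 XYZ tristimuli, as in Eq. (6)

# Each column of P holds the dot products of one colour signal with the
# three sensitivities, i.e. the per-pixel form of Eq. (4).
first_pixel_ok = np.allclose(P[:, 0], [C[:, 0] @ Q[:, k] for k in range(3)])
```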
II-D The Luther Condition for Camera Spectral Sensitivities
Let us denote individual camera and XYZ responses as $\underline{\rho} = Q^T \underline{c}$ and $\underline{t} = X^T \underline{c}$. We can use a camera to exactly measure colors if and only if there exists a function $g()$ such that $g(\underline{\rho}) = \underline{t}$ for all spectra. It follows that if a pair of spectra integrates to the same RGB then this pair must also integrate to the same XYZ:

$$Q^T \underline{c}_1 = Q^T \underline{c}_2 \;\Rightarrow\; X^T \underline{c}_1 = X^T \underline{c}_2 \qquad (7)$$

This implies $\underline{c}_1 - \underline{c}_2$ is simultaneously in the null space of $Q^T$ and $X^T$. Since any spectrum in the null space of $Q^T$ is a physically plausible spectral difference, this implies that the null spaces of $Q^T$ and $X^T$ are the same, and this in turn implies the Luther condition

$$X = Q M \qquad (8)$$

where $M$ is a $3 \times 3$ matrix. See [32] for the original proof of the Luther condition (which we precis above).
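The Luther condition can be checked numerically. In this sketch we construct a synthetic camera whose sensitivities are, by design, a linear transform of stand-in XYZ matching functions, and confirm that its responses map exactly to the tristimuli; all matrices are random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((31, 3))        # stand-in XYZ colour matching functions
M = rng.random((3, 3))         # a full-rank 3x3 matrix (invertible here)
Q = X @ np.linalg.inv(M)       # a camera obeying the Luther condition X = QM

C = rng.random((31, 5))        # five arbitrary colour signals
# The Luther camera's responses map exactly to the XYZ tristimuli,
# since X^T C = M^T Q^T C.
exact = np.allclose(M.T @ (Q.T @ C), X.T @ C)
```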
II-E The Data-driven Luther Condition
The Luther condition for spectral sensitivities presupposes that any and all physical spectra are plausible. However, we know that reflectance spectra are smooth [8]. Lights, while more arbitrary, are designed to fall near or close to the Planckian locus [3] and to score well on measures such as the Color Rendering Index [11]. Pragmatically, a camera is colorimetric if its responses can be linearly transformed to the XYZ tristimulus values:

$$X^T C \approx M\, Q^T C \qquad (9)$$

where $M$ is a $3 \times 3$ matrix.
II-F Filter Design
There are many papers in the literature that design the spectral sensitivities a camera should, ideally, have [33, 34, 35, 36, 37]. For example, we can solve for the camera spectral sensitivities for which RGBs recorded under many illuminant spectra can best be mapped to corresponding XYZs under a single target illuminant. The procedure where we measure colors under a changing light, e.g. in the real world, but then reference (that is, map) these colors back to a fixed illuminant viewing condition is a standard methodology in color science. Curiously, the best sensors that solve this problem are not linearly related to the XYZ CMFs [38].
Perhaps the closest work to our study is [39]. Here two images are captured: the first with the native spectral sensitivities and the second through a colored filter. The emphasis of that work was to increase the spectral dimensionality of the capture device. Since two captures are made there are 6 measurements per pixel; effectively a 6-dimensional sensor set. Target XYZs are matched by applying a correction matrix. In [39], the best filter was chosen from a set of commercially available filters.
There are many other works which propose recovering spectral information by capturing multiple exposures of the same scene through multiple filters, e.g. [40, 41, 42, 43, 44]. A disadvantage of all these methods is that the capture process is longer and more laborious. Filters need to be changed between exposures, and the multiple exposures then need to be registered. Image registration remains a far from solved problem. Moreover, scene elements may move between exposures (making registration impossible).
The method we propose here is much simpler. We simply place a specially designed filter in front of a camera and then take conventional single exposure images.
III Designing a Filter to Make a Camera more Colorimetric
III-A Luther-condition Filters
The Luther condition states that a camera system is colorimetric if its sensitivities are a linear transform from the XYZ color matching functions. We propose a modified Luther condition where a camera is said to be colorimetric if there exists a physically realisable filter which, when placed in front of the camera, generates effective sensitivities which are a linear transform from the XYZ CMFs.
Physically, the role of a filter, which absorbs light on a per-wavelength basis, is multiplicative. If $F(\lambda)$ is a transmittance filter and $Q_k(\lambda)$ denotes the camera sensitivities then $F(\lambda)\,Q_k(\lambda)$ is a physically accurate model of the effect of placing a filter in front of the camera sensor.
In Equation (10) we write an optimisation statement for the filter-based Luther condition:

$$\min_{\underline{f},\, M}\; \left\| \mathrm{diag}(\underline{f})\, Q\, M - X \right\|_F^2 \quad \text{s.t. } \underline{f} > 0 \qquad (10)$$

Here $Q$ and $X$ are $31 \times 3$ matrices encoding respectively the spectral sensitivities of a digital camera and the CIE standard XYZ color matching curves. The 31-vector $\underline{f}$ is the sampled equivalent of the continuous filter function $F(\lambda)$. The operation $\mathrm{diag}()$ converts a vector into a diagonal matrix (the vector components appear along the diagonal). The meaning of $\mathrm{diag}(\underline{f})\,Q$ is the same as $F(\lambda)\,Q_k(\lambda)$, i.e. the diagonal matrix allows us to express component-wise multiplication. $M$ denotes a $3 \times 3$ matrix. We minimise the square of the Frobenius norm (we minimise the sum of squares error). Notice the constraint that the filter values are larger than 0 (physically, we cannot have a filter that has negative transmittance).
We do not have to constrain the maximum transmittance because we can only solve for $\underline{f}$ and $M$ up to an unknown scaling factor. Indeed, suppose a filter is returned where the max transmittance is larger than 1. The fitting error in Equation (10) is unchanged if we divide $\underline{f}$ by its maximum value (resulting in a max transmittance of 100%) so long as we multiply the corresponding correction matrix $M$ by the same value.
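This scale ambiguity is easy to verify numerically. Below, with synthetic stand-in data, dividing a filter by its maximum and multiplying the correction matrix by the same factor leaves the Equation (10) fitting error unchanged:

```python
import numpy as np

rng = np.random.default_rng(3)
Q = rng.random((31, 3))           # camera sensitivities (columns)
X = rng.random((31, 3))           # XYZ matching functions (columns)
M = rng.random((3, 3))            # a candidate correction matrix
f = 0.1 + 1.5 * rng.random(31)    # a filter whose peak exceeds 100%

err = np.linalg.norm(np.diag(f) @ Q @ M - X) ** 2

# Divide the filter by its maximum (peak transmittance becomes 100%)
# and absorb the scale factor into M: the fitting error is unchanged.
s = f.max()
err_rescaled = np.linalg.norm(np.diag(f / s) @ Q @ (M * s) - X) ** 2
same_error = np.isclose(err, err_rescaled)
```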
We minimise Equation (10) using an alternating least-squares (ALS) procedure, given in Algorithm 1, where $\circ$ denotes component-wise (Hadamard) multiplication. Steps 4 and 5, where we find the filter and then the linear transform, are solved using simple, closed-form least-squares estimation. For completeness we provide details of how these calculations are made in the Appendix.

At each iteration, the filter and linear transform, $\underline{f}^i$ and $M^i$, are calculated relative to the previous filters and matrices. It follows in step 8 that the final solution is the multiplication of all the per-iteration solutions: $\underline{f} = \underline{f}^1 \circ \underline{f}^2 \circ \cdots \circ \underline{f}^i$ (component-wise vector multiplication) and $M = M^1 M^2 \cdots M^i$ (matrix multiplication).
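A minimal sketch of this alternating procedure is given below, assuming synthetic stand-in sensitivities; the positivity constraint is not enforced (see the discussion that follows), the per-iteration solutions are folded into a single running filter, and the order of the two closed-form updates within an iteration is an implementation choice. By construction the objective is non-increasing, so the filtered fit is at least as good as the native least-squares fit:

```python
import numpy as np

def luther_filter_als(Q, X, n_iter=100):
    """Minimal ALS sketch for Eq. (10): alternately solve for the best
    3x3 transform M and the best per-wavelength filter f.
    Q, X are 31x3 (columns = channels). Positivity is not enforced."""
    f = np.ones(Q.shape[0])                     # start fully transmissive
    for _ in range(n_iter):
        # Best linear transform for the current filtered sensitivities.
        M, *_ = np.linalg.lstsq(np.diag(f) @ Q, X, rcond=None)
        # Best filter given M: one closed-form scalar per wavelength,
        # since row i of diag(f) Q M equals f[i] * (Q M)[i, :].
        A = Q @ M
        f = np.sum(A * X, axis=1) / np.sum(A * A, axis=1)
    return f, M

rng = np.random.default_rng(4)
Q, X = rng.random((31, 3)), rng.random((31, 3))
f, M = luther_filter_als(Q, X)

# Compare the fit with and without the optimised filter.
M0, *_ = np.linalg.lstsq(Q, X, rcond=None)
err_native = np.linalg.norm(Q @ M0 - X)
err_filtered = np.linalg.norm(np.diag(f) @ Q @ M - X)
```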
Notice that nowhere in the above procedure do we constrain the filter transmittance to be larger than 0 (even though this constraint is in the optimisation statement). Empirically, we found that the optimised filter is always positive for all the cameras we tested (see the experimental section). Moreover, Theorem III.1, presented below, proves that when there exists a filter which makes the camera sensors perfectly colorimetric, the filter has to be everywhere positive.
The theorem is presented for continuous spectral sensitivity functions. As such, we write the XYZ CMFs and the camera sensitivities as vector functions: $\underline{x}(\lambda)$ and $\underline{q}(\lambda)$. In this representation we have, effectively, taken transposes of the matrices $Q$ and $X$. So, here, we write $M$ for the $3 \times 3$ matrix. Of course, the matrix $M$ in the proof and in the algorithm presented above signifies the same linear transform.
Theorem III.1
Assuming there exists an exact solution such that $M\,\underline{q}(\lambda)\,F(\lambda) = \underline{x}(\lambda)$ for $\lambda \in \omega$, where the variable $\lambda$ is defined over the domain $\omega$, the sensitivities $\underline{q}(\lambda)$ are continuous and full rank (no one spectral sensitivity can be written as a linear combination of the other two) and $\underline{x}(\lambda)$ are also continuous, then $F(\lambda) > 0$.
Proof: First we remark on the continuity of the camera and XYZ functions. Both are the result of physical processes which are continuous in nature. To our knowledge it is not possible to make a physical sensor system that captures light which has discontinuous sensitivities. And, in terms of physiological systems, biological sensor response functions are always continuous.
Next, if $F(\lambda) < 0$ across all wavelengths and $M\,\underline{q}(\lambda)\,F(\lambda) = \underline{x}(\lambda)$, then $(-M)\,\underline{q}(\lambda)\,(-F(\lambda)) = \underline{x}(\lambda)$. In this case $-F(\lambda)$ must be all positive and so an all-positive filter can be found. The interesting case to consider is when the filter has both negative and positive values.
Clearly the matrix $M$ must be full rank, otherwise the mapped camera sensitivities would be rank deficient and therefore could not model the CMFs. Equally, multiplying by a filter does not change the rank of the sensor set. Because, by assumption, $\underline{q}(\lambda)$ and $\underline{x}(\lambda)$ are continuous, it follows that $F(\lambda)$ must also be a continuous function, since otherwise $M\,\underline{q}(\lambda)\,F(\lambda)$ would be discontinuous (multiplying a continuous and a discontinuous function together, save for the case where one of the functions is everywhere 0, results in a discontinuous function).
As $F(\lambda)$ is continuous, if the function has both negative and positive values there must be at least one wavelength $\lambda_0$ where $F(\lambda_0) = 0$, and so $M\,\underline{q}(\lambda_0)\,F(\lambda_0) = \underline{x}(\lambda_0) = \underline{0}$. But this cannot be the case since the XYZ color matching functions are not all zero at any wavelength within the defined domain.
QED
III-B Data-driven Filters
Simple Case: in the simple Data-driven approach, we look for a color filter and the color correction matrix that, in a least-squares sense, best map camera measurements for a training color signal data set to the corresponding ground-truth XYZ tristimulus values. Denoting a collection of color signals in the color signal matrix $C$, the Data-driven optimisation is written as:

$$\min_{\underline{f},\, M}\; \left\| M\, Q^T \mathrm{diag}(\underline{f})\, C - X^T C \right\|_F^2 \quad \text{s.t. } \underline{f} > 0 \qquad (11)$$
Solving Equation (11) depends on the structure of the color signal matrix. If we choose $C = I$ (the identity matrix) then we can solve this optimisation using Algorithm 1 (in this case, Equations (11) and (10) are the same). This assumption is related to the Maximum Ignorance assumption [45] where all possible spectra are considered equally likely.
General Case: Let us develop a more general optimisation statement: one where we have $n$ color signal matrices, denoted $C_i$ ($i = 1, 2, \ldots, n$), and corresponding color correction matrices $M_i$. Each color signal matrix typically corresponds to a training set of surface reflectances illuminated by a single spectrum of light; thus the $i$th color signal matrix is $C_i = \mathrm{diag}(\underline{e}_i)\, S$, where $S$ is a matrix of reflectances, one reflectance spectrum per column, and $\underline{e}_i$ is the sampled illuminant spectrum. But, the different-light assumption is not a necessary assumption. As an example, we might mix color signals for the Maximum Ignorance assumption with measured data (i.e. two color signal matrices) where both measurement sets contain multiple lights.
The general Data-driven filter optimisation problem is written as:

$$\min_{\underline{f},\, M_1, \ldots, M_n}\; \sum_{i=1}^{n} \left\| M_i\, Q^T \mathrm{diag}(\underline{f})\, C_i - X^T C_i \right\|_F^2 \quad \text{s.t. } \underline{f} > 0 \qquad (12)$$

and is solved using Algorithm 2.
Finally, the target color signal matrix could denote some other privileged standard reference viewing condition (where the reference viewing illuminant is not in the set of training lights). For example, in color measurement we are often interested in the XYZ tristimuli under the daylight illuminant D65, which has a prescribed but not easily physically replicable spectrum.
We are going to solve Equation (12) for the filter using an alternating least-squares procedure. Notice that the input to the optimisation is an initial filter guess, denoted $\underline{f}^0$. Let us consider 3 candidate minimisations corresponding to 3 common scenarios for color measurement, each of which can be solved using Algorithm 2.
1) Multiple Lights: Here we assume that $i$ indexes over illuminants (per illuminant, we make the target XYZs using the same color signals). We find a single filter which, given per-illuminant optimal least-squares correction matrices, best fits the camera data to the corresponding multi-illuminant XYZs.
2) Multiple measurement lights, single target light: Again $i$ indexes over illuminants. But the target is a single illuminant, for example CIE D65 [3].
3) Single Light: This case is, in effect, the simple restriction of the general case to $n = 1$. We have one measurement light and one target light. Like the Luther-condition optimisation, we solve for a single correction matrix.
In Algorithm 2, it is straightforward to solve for the $i$th color correction matrix at iteration $j$, $M_i^j$, in step 4 using the Moore-Penrose inverse. Step 5, where we find the filter, can also be solved directly using simple least-squares, although the basic equations need to be rearranged. Details of the least-squares computation are given in the Appendix. Here, to ensure that the filter is all positive, we can also solve for the filter subject to positivity constraints; in this case we solve a quadratic programming problem [46] (unlike the Luther-condition case, there is no a priori physical reason why the best filter should be all positive). Quadratic programming allows linear least-squares problems subject to linear constraints to be solved rapidly and, crucially, a global optimum is found.
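The following sketch implements an unconstrained version of Algorithm 2 on synthetic data (the quadratic-programming positivity and smoothness constraints are omitted, and the per-iteration filters are folded into one running filter). Step 4 uses the pseudo-inverse per color signal set; step 5 exploits the fact that the objective is linear in the filter:

```python
import numpy as np

def datadriven_filter_als(Q, X, Cs, n_iter=30):
    """Minimal ALS sketch for Eq. (12): one shared filter f and one 3x3
    matrix M_i per colour-signal matrix C_i (31 x N, columns = signals).
    Positivity/smoothness constraints (quadratic programming) omitted."""
    nw = Q.shape[0]
    f = np.ones(nw)                              # default seed f0 = 1
    for _ in range(n_iter):
        # Step 4: per-set correction matrices via the pseudo-inverse,
        # i.e. M_i minimises ||M_i Q^T diag(f) C_i - X^T C_i||_F.
        Ms = [np.linalg.lstsq((Q.T @ np.diag(f) @ C).T,
                              (X.T @ C).T, rcond=None)[0].T for C in Cs]
        # Step 5: the objective is linear in f, so stack one least-squares
        # row per (set, channel, signal) entry and re-solve for f.
        rows, targets = [], []
        for M, C in zip(Ms, Cs):
            G = M @ Q.T                          # 3 x 31
            rows.append(np.einsum('kw,wj->kjw', G, C).reshape(-1, nw))
            targets.append((X.T @ C).reshape(-1))
        f, *_ = np.linalg.lstsq(np.vstack(rows),
                                np.concatenate(targets), rcond=None)
    return f, Ms

rng = np.random.default_rng(5)
Q, X = rng.random((31, 3)), rng.random((31, 3))
Cs = [rng.random((31, 20)) for _ in range(3)]    # e.g. three illuminants

f, Ms = datadriven_filter_als(Q, X, Cs)
err = sum(np.linalg.norm(M @ Q.T @ np.diag(f) @ C - X.T @ C) ** 2
          for M, C in zip(Ms, Cs))

# Baseline: no filter (f = 1), per-set optimal matrices only.
Ms0 = [np.linalg.lstsq((Q.T @ C).T, (X.T @ C).T, rcond=None)[0].T
       for C in Cs]
err0 = sum(np.linalg.norm(M @ Q.T @ C - X.T @ C) ** 2
           for M, C in zip(Ms0, Cs))
```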
III-C Adding Filter Constraints
By default, the filter found using Algorithm 2 can be arbitrarily non-smooth and might also be very non-transmissive. Non-smoothness limits manufacturability (at least with dye-based filters) and a filter that absorbs most of the incoming light would, perforce, have limited practical utility. Both these problems can be solved by placing constraints on the filter optimisation.
Let us now constrain the target filter according to:

$$\underline{f} = B\,\underline{c}\,, \qquad f_{min} \le B\,\underline{c} \le f_{max} \qquad (13)$$

where $B$ denotes a $31 \times m$ basis matrix with each column representing a basis function and the vector $\underline{c}$ denotes an $m$-component coefficient vector. The scalars $f_{min}$ and $f_{max}$ denote lower and upper thresholds on the desired transmittance of the filter; specifically, $f_{max}$ is set to 1 by default (fully transmissive) and $f_{min}$ is a positive value between 0 and 1. Equation (13) forces the optimised filter to be in the span of the column vectors of $B$ and to meet the min and max transmittance constraints. By judicious choice of the basis we can effectively bound the smoothness of the filter. For example, we could choose the first $m$ terms of the discrete cosine transform basis expansion [47].
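To illustrate the basis constraint of Equation (13), the sketch below builds the first m = 8 discrete-cosine-transform basis vectors and projects a jagged filter onto their span; the transmittance bounds themselves would be enforced via quadratic programming, which is omitted here:

```python
import numpy as np

n, m = 31, 8                       # 31 wavelengths, first 8 basis terms
w = np.arange(n)
# DCT-II basis vectors as the columns of a 31 x m matrix B.
B = np.cos(np.pi * (w[:, None] + 0.5) * np.arange(m)[None, :] / n)

# Any admissible filter is f = B c: projecting an arbitrary (jagged)
# filter onto span(B) illustrates the implied smoothing.
rng = np.random.default_rng(6)
f_raw = rng.random(n)
c, *_ = np.linalg.lstsq(B, f_raw, rcond=None)
f_smooth = B @ c

# With m << 31 the representation is genuinely low-dimensional.
rank_B = np.linalg.matrix_rank(B)
```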
With respect to the new filter representation, we rewrite the overall filter design optimisation of Equation (12) as

$$\min_{\underline{c},\, M_1, \ldots, M_n}\; \sum_{i=1}^{n} \left\| M_i\, Q^T \mathrm{diag}(B\,\underline{c})\, C_i - X^T C_i \right\|_F^2 \quad \text{s.t. } f_{min} \le B\,\underline{c} \le f_{max} \qquad (14)$$
The current minimisation can be solved using the same alternating least-squares paradigm of Algorithm 2. But, in step 5, we need to substitute the constraint with

$$\underline{f}^{i} \circ \underline{f}^{i-1} \circ \cdots \circ \underline{f}^{1} = B\,\underline{c}\,, \qquad f_{min} \le B\,\underline{c} \le f_{max} \qquad (15)$$

That is, the filter we find at the $i$th iteration, when multiplied by all the filters from the previous iterations, is constrained to be in the basis $B$.

Looking at Equation (15) we see that

$$\underline{f}^{i} = \left[\mathrm{diag}(\underline{f}^{i-1} \circ \cdots \circ \underline{f}^{1})\right]^{-1} B\,\underline{c} = B^{i}\,\underline{c} \qquad (16)$$

That is, effectively the basis $B^{i}$ for the $i$th filter changes at each iteration. Again we can solve step 5 subject to the constraints of Equation (16) using quadratic programming.
Finally, we note that we could, of course, rewrite Algorithm 2 so that at each iteration we solve directly for a filter defined by its coefficients (we could solve for an $m$-term coefficient vector rather than a 31-component filter). The two formalisms are equivalent. Here, we chose to solve for the per-iteration filter for notational convenience: we can use the general Data-driven algorithm and simply change how we calculate the per-iteration filter optimisation.
III-D Initialising the Data-driven Optimisation
Alternating least-squares is guaranteed to converge but it will not necessarily return the globally optimal result. But, it is deterministic: given the state of the correction matrices and filter at the $j$th iteration, we will ultimately arrive at the same solution. Equally, if we change the initialisation condition, $\underline{f}^0$, in Algorithm 2, we may end up solving for a different filter. Empirically, we observed that the filter returned by the algorithm depends strongly on the initial filter that seeds the optimisation.
Let us consider 3 different ways to seed the Data-driven optimisation:
1) Default: $\underline{f}^0 = \underline{1}$. This uniform unit vector denotes a fully transmissive filter over the spectrum.
2) Luther filter: $\underline{f}^0 = \underline{f}_{luther}$. That is, we seed the Data-driven optimisation with the optimal Luther-condition filter found using Algorithm 1.
3) By sampling: Here we find a set of sample filters (which meet our smoothness and transmittance boundedness constraints) and for each filter sample, $\underline{f}^0_p$, we run Algorithm 2.
Algorithm 3 generates $P$ initial filters (subject to bounded smoothness constraints) by uniformly and randomly sampling the filter coefficient space. Before sampling, the algorithm first finds the min and max values of the coefficients (which are calculated in each of the $m$ dimensions individually). Explicitly, for the $j$th component of the vector $\underline{c}$, we denote its minimum and maximum values as $c_j^{min}$ and $c_j^{max}$.

In Algorithm 3, for the minimum value of the $j$th coefficient, we write: $c_j^{min} = \min_{\underline{c}}\, c_j$ s.t. $f_{min} \le B\,\underline{c} \le f_{max}$. That is, over all possible vectors $\underline{c}$ which satisfy the transmittance constraints, we take note of the minimum value of the $j$th component; we find the minimum value that $c_j$ can take over the set of all possible solutions. The maximum of the $j$th coefficient term is written similarly (see step 3, Algorithm 3). The minimum and maximum values can be solved for using linear programming [46]. All min and max components, taken together, make the two vectors $\underline{c}^{min}$ and $\underline{c}^{max}$. These vectors together define the extremal values in each dimension of an $m$-dimensional hypercube. A vector that lies outside the hypercube is guaranteed not to satisfy the boundedness and transmittance constraints we have placed on our filters. This hypercube usefully delimits our search space (of the sample set of solutions).

To generate a set of filters for initialising the optimisation (solved in Algorithm 2) we sample this hypercube uniformly and randomly. We use the notation $U(a, b)$ to denote sampling a number in the interval $[a, b]$ uniformly and randomly. A filter constructed from a sampled coefficient vector ($\underline{f} = B\,\underline{c}$) will be added to the initial filter set only if it lies within the transmittance bounds and is sufficiently far from those filters already in the set, see step 7. In Algorithm 3, sufficiently far means at least $\theta$ degrees from the other set members. The function $\#()$ returns the number of members in a set.
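The sampling procedure can be sketched as follows, using `scipy.optimize.linprog` for the per-coefficient extremes; the DCT basis, the transmittance bounds, the attempt budget and the 5-degree spacing are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.optimize import linprog

n, m = 31, 6
w = np.arange(n)
B = np.cos(np.pi * (w[:, None] + 0.5) * np.arange(m)[None, :] / n)  # DCT basis
f_lo, f_hi = 0.2, 1.0                       # illustrative transmittance bounds

# Encode f_lo <= B c <= f_hi as A_ub c <= b_ub.
A_ub = np.vstack([B, -B])
b_ub = np.concatenate([np.full(n, f_hi), np.full(n, -f_lo)])

# Per-coefficient extremes via linear programming (step 3 of Algorithm 3).
c_min, c_max = np.zeros(m), np.zeros(m)
for j in range(m):
    e = np.zeros(m)
    e[j] = 1.0
    c_min[j] = linprog(e, A_ub=A_ub, b_ub=b_ub, bounds=(None, None)).fun
    c_max[j] = -linprog(-e, A_ub=A_ub, b_ub=b_ub, bounds=(None, None)).fun

# Sample the hypercube; keep filters that satisfy the bounds and sit at
# least 5 degrees (illustrative spacing) from every filter already kept.
rng = np.random.default_rng(7)
filters, min_angle = [], np.deg2rad(5.0)
for _ in range(20000):
    if len(filters) == 10:
        break
    f = B @ rng.uniform(c_min, c_max)
    if f.min() < f_lo - 1e-9 or f.max() > f_hi + 1e-9:
        continue                             # outside transmittance bounds
    cos = [f @ g / (np.linalg.norm(f) * np.linalg.norm(g)) for g in filters]
    if all(np.arccos(np.clip(t, -1, 1)) >= min_angle for t in cos):
        filters.append(f)
```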
IV Results
IV-A Experiments for Luther-condition Optimised Filters
Let us return to the example shown in Fig. 1. The optimal Luther-condition filter solved using Algorithm 1 is shown in Fig. 1e. We multiply the camera sensors by this filter and then find the linear least-squares transform mapping the new effective sensitivities to the XYZ matching functions. The fitted filtered camera sensitivities (to the XYZ color matching functions) are shown in Fig. 1f. In contrast, Fig. 1d shows the native camera spectral sensitivities fitted to the XYZs. Visually, the addition of our derived filter makes the camera much more colorimetric.
The reader will notice that the filter, Fig. 1e, absorbs more than 80% of the light except at the shortest wavelengths where it is maximally transmissive. This need not be a problem for color measurement as we can increase the exposure time, for example. Though, it does mean that, for the same recording conditions, the camera with the filter would capture significantly less light than without. And, this could result in an increase in the noise in the final image output by a camera reproduction pipeline. Indeed, if we deploy this filter and keep the capture conditions the same, we would need to 'scale up' the recorded values and this operation also scales up the noise. Effectively, we capture an image at a higher ISO number.
IV-B Vora-Values
To quantitatively measure the spectral match between the filtered and linearly transformed camera sensors and the XYZ color matching functions, we calculate the Vora-Value [48]. The Vora-Value measures the closeness between the space spanned by a set of camera sensitivities F and that spanned by the color matching functions X. It is defined as
ν(F, X) = tr(X X⁺ F F⁺) / 3   (17)
where tr() returns the sum of the diagonal of a matrix and ⁺ denotes the Moore-Penrose inverse (see Appendix). The Vora-Value is a number between 0 and 1, where 1 indicates the two sensitivity sets span the same space, i.e. the Luther condition is fully satisfied. While there is no definitive guide to what different Vora-Values mean, empirically we have found that when the Vora-Value is larger than 0.95 and 0.99 we have, respectively, acceptable and very good color measurement performance. An explanation of why the Vora-Value is useful for quantifying the color measurement potential of a set of color filters, together with its derivation, can be found in [48].
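A minimal numpy sketch of the Vora-Value computation, assuming the sensitivities and color matching functions are stored as 31×3 matrices (the function name is ours):

```python
import numpy as np

def vora_value(F, X):
    """Vora-Value [48]: closeness of the spaces spanned by the camera
    sensitivities F (31x3) and the color matching functions X (31x3).
    Computed as the trace of the product of the two orthogonal
    projectors, divided by the number of channels (3)."""
    P_F = F @ np.linalg.pinv(F)   # projector onto span(F)
    P_X = X @ np.linalg.pinv(X)   # projector onto span(X)
    return np.trace(P_X @ P_F) / 3.0
```

Because the measure depends only on the spanned spaces, any invertible 3×3 linear transform of the sensitivities leaves the Vora-Value unchanged, which is exactly the invariance the Luther condition requires.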
The Vora-Value performance for a set of 28 camera spectral sensitivities [49], with and without their optimised filters, is shown in Fig. 3. The Vora-Values for the unfiltered, NATive sensitivities are shown in blue and for the LUTHer-condition optimised sensitivities in red. On average, the native Vora-Value is 0.918 but when the optimised filter is added it increases to 0.961. This digital camera data set [49] comprises diverse camera types, including professional DSLRs, point-and-shoot, industrial and mobile cameras.
Note, we make a distinction between using a camera for color measurement and for making attractive looking images. Here, we are interested in using a camera to measure XYZ tristimuli (or measures like CIE Lab values that are derived from tristimuli [3]). From a measurement perspective, we need tighter tolerances and a higher Vora-Value. Clearly, from the point of view of making attractive images, cameras that have Vora-Values less than 0.95 can work very well. Indeed, many of the 28 cameras with Vora-Values less than 0.95 can still take images that look appealing from a photographic perspective. But, without a filter, these commercial cameras are not suitable vehicles for accurate color measurement. Quantitative color measurement results are presented in the next section.
IV-C Color Measurement Experiments
Now let us evaluate the derived Luther-condition and Data-driven filters in terms of a perceptually relevant color measurement/image reproduction metric. The CIELAB ΔE error [3], the Euclidean distance calculated between two tristimuli in CIELAB coordinates, is a single number that roughly correlates with the perceptual distance between two colors. A ΔE of one corresponds approximately to a 'Just Noticeable Difference' to a human observer [27]. When ΔE is less than 1 we say that the difference is imperceptible to the visual system.
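The ΔE metric is straightforward to compute from XYZ tristimuli via the standard CIE 1976 L*a*b* formulas; a self-contained numpy sketch (function names are ours, the conversion formulas are the standard CIELAB definitions):

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """CIE 1976 L*a*b* from XYZ tristimuli, given the reference white."""
    eps = (6 / 29) ** 3
    t = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)
    # piecewise cube-root function of the CIELAB definition
    f = np.where(t > eps, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e(xyz1, xyz2, white):
    """Euclidean distance between two stimuli in CIELAB coordinates."""
    return np.linalg.norm(xyz_to_lab(xyz1, white) - xyz_to_lab(xyz2, white),
                          axis=-1)
```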
IV-C1 Single Light Case
Let us use the Canon 5D Mark II camera spectral sensitivities as a putative measurement device and quantify how well it can measure colors, with and without a filter. In this experiment the measurement light is either a CIE D65 (bluish) or a CIE A (yellowish) illuminant. For reflectances we use the SFU set of 1995 spectra [50] (itself a composite of many widely used sets). The 1995 XYZs for these reflectances and lights are the ground truth with respect to which we measure color measurement error.
Using the Canon camera sensitivities, the reflectance spectra and either the CIE D65 or A light, we numerically calculate two sets of 1995 RGBs. Now, we linearly regress the RGBs for each light to their corresponding ground truths (we map the native RGBs for CIE D65 and A to, respectively, the XYZs under the same lights). We call these color corrected RGBs the NATive camera predictions (and we adopt this notation in the results shown in Table I). Rows 1 and 4 of Table I report the mean, median, 90th, 95th and 99th percentile and the maximum CIELAB ΔE error for the CIE D65 and CIE A lights.
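The regression and error-statistics steps can be sketched in a few lines of numpy (a sketch only; variable and function names are our own):

```python
import numpy as np

def color_correct(rgb, xyz):
    """Least-squares 3x3 matrix mapping camera RGBs (n x 3) to
    ground-truth XYZs (n x 3); returns the corrected predictions."""
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return rgb @ M

def error_stats(de):
    """Mean, median, 90th/95th/99th percentile and max of a Delta E vector,
    i.e. the six statistics reported per row of Table I."""
    return {"mean": np.mean(de), "median": np.median(de),
            "p90": np.percentile(de, 90), "p95": np.percentile(de, 95),
            "p99": np.percentile(de, 99), "max": np.max(de)}
```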
Now let us place a filter in front of the camera. Again, we calculate two sets of RGBs (one for each light) for the camera spectral sensitivities multiplied by the filter found using the Luther-condition optimisation (Algorithm 1). The recorded filtered RGBs are best mapped to the corresponding XYZs using linear regression. The LUTHer error statistics are shown in rows 2 and 5. It is clear that placing a Luther-condition filter in front of the camera substantially increases its ability to measure colors accurately. Across all metrics the errors reported are about a third of those found when a filter is not used.
We repeat the experiment for a Data-driven color filter (found using Algorithm 2, where the seed filter for the optimisation is the Luther-condition optimised filter). Again, the two sets of filtered RGBs are linearly mapped to the corresponding XYZs to minimise a least-squares error. Results for the corrected DATA-driven filtered RGBs for the two lights are reported in rows 3 and 6 of Table I. Clearly, the camera plus filter can measure colors more accurately compared to the case where a filter is not used. Across all metrics the errors reported are about a quarter of those found when a filter is not used.
Significantly, the best Data-driven filter also delivers improved performance compared with the results reported for the Luther-condition optimised filter: the errors are further reduced by about a quarter. Incorporating knowledge of typical lights and surfaces into the optimisation leads to improved color measurement performance.
IV-C2 Multiple Lights Case
We now repeat the experiment for a set of 102 measured lights [50]. The results of this second experiment are shown in the last 3 rows of Table I. Here, the reported error statistics are averages. For each illuminant, as described in the single light case above, we calculate the mean, median, 90th percentile, 95th percentile, 99th percentile and maximum ΔE. That is, we calculate 6 error measures for each of the 102 lights. We then take the mean of each error statistic over all the lights. The aggregate illuminant set performance is reported in rows 7, 8 and 9 of Table I for, respectively, unfiltered RGBs and RGBs measured with respect to the Luther-condition and Data-driven optimised filters.
Comparing the errors of the raw RGBs to those of the RGBs filtered by the Luther-condition and Data-driven filters, we see the same trend for the multiple lights case as we saw previously for single lights. A Luther-condition derived filter reduces the measurement error by 2/3 and the Data-driven filter reduces it by 3/4, on average.
IV-C3 Multiple Cameras
Now, we calculate the mean error (over the 102 lights and 1995 reflectances) for each of the 28 cameras [49]. For each camera we calculate the optimal Luther-condition and Data-driven filters (where, as before, the Luther-condition filter seeds the Data-driven optimisation). Figure 4 summarises the per-camera mean and 95th percentile performance.
Grey bars in Figs. 4a and 4b show, respectively, the mean and 95th percentile error performance of native (unfiltered) color corrected RGBs for the 28 cameras. The dashed green and solid red lines record, respectively, the performance of the best Luther-condition and Data-driven filters.
It is evident that the optimised filters support improved color measurement for all 28 cameras and on average the performance increment is significant. For many cameras the Datadriven optimised filter delivers significantly better color measurement performance compared with using the Luthercondition optimised filter.
                 Mean   Median   90%    95%    99%    Max
CIE D65
  NAT            1.65   1.03     3.55   4.94   11.23  19.29
  LUTH           0.46   0.25     1.09   1.45   3.49   5.90
  DATA           0.38   0.20     0.93   1.25   2.45   4.62
CIE A
  NAT            2.30   1.44     4.65   6.17   16.96  26.41
  LUTH           0.64   0.40     1.33   1.84   4.75   8.19
  DATA           0.44   0.26     1.02   1.41   2.81   4.31
102 illuminants
  NAT            1.72   1.02     3.68   5.12   12.94  28.39
  LUTH           0.53   0.30     1.15   1.65   4.11   6.83
  DATA           0.41   0.21     0.96   1.32   2.78   6.78
IV-D Smooth and Bounded Transmittance Filters
Both the Luther-condition and Data-driven filters absorb much of the available light (low transmittance values) and are far from smooth; see, e.g., the derived Luther-condition filter in Fig. 1e. Here, across much of the visible spectrum, the filter transmits little (below 20%) of the available light. When a strongly light-absorptive filter is placed in front of a camera we need to either increase the exposure time (or widen the aperture) or apply a higher ISO (which can increase the conspicuity of noise) to obtain an image with the same range of brightnesses as when a filter is not used. Further, because the filter in Fig. 1e is not smooth, it may be difficult to manufacture.
In this subsection we constrain our optimisation so that the calculated filters are smooth and transmit, per wavelength, a minimum amount of the incident light (here, 20%). We enforce smoothness indirectly by assuming that our filters lie within the span of either a 6- or 8-dimensional Cosine basis.
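A short numpy sketch of this smoothness device, assuming a DCT-style cosine basis sampled at 31 wavelengths (the exact basis used in the paper may differ; see [47]):

```python
import numpy as np

def cosine_basis(n_wavelengths=31, n_terms=6):
    """First n_terms vectors of a DCT-style cosine basis, sampled at
    n_wavelengths points; returned as the columns of a matrix."""
    i = np.arange(n_wavelengths)
    return np.stack([np.cos(np.pi * k * (i + 0.5) / n_wavelengths)
                     for k in range(n_terms)], axis=1)

def project_to_basis(f, B):
    """Least-squares projection of a filter f onto span(B): the smooth
    filter closest to f that the basis can represent."""
    c, *_ = np.linalg.lstsq(B, f, rcond=None)
    return B @ c
```

Restricting a filter to the span of the first few cosine terms removes high-frequency wiggles by construction, which is why a low-dimensional basis acts as an indirect smoothness constraint.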
Let us visualise the 20% bounded transmittance constraint using the Canon 5D Mark II camera sensitivities and our Data-driven optimisation. First, we initialise the optimisation (Algorithm 2) using the uniform vector (100% transmittance across the spectrum) as the seed filter. The optimisation returns the filter denoted DATA_1s, plotted as the blue dotted line in Fig. 5a. The Data-driven optimisation seeded with the Luther-condition filter, DATA_Luther, is also shown in Fig. 5a (dotted purple line).
We repeat this experiment where we both require the derived filters to transmit at least 20% of the light and also that they belong to the span of the first 6 or first 8 terms in a Cosine basis expansion. When the recovered filter is constrained to lie in the span of a 6-dimensional Cosine basis, the recovered DATA_1s and DATA_Luther filters (for the two initialisation conditions) are shown in Fig. 5b (respectively, blue and purple dotted lines). See Fig. 5c for the filters calculated using an 8-dimensional Cosine basis.
The red lines shown in Figs. 5b and 5c are the filters found by our sampling algorithm (using Algorithm 3 to find a set of filters to seed Algorithm 2 and then choosing the filter with the best overall performance). The experiment for deriving these filters is described in the next subsection.
By examining Figs. 5a, 5b and 5c it is evident that, whether no basis, a 6- or an 8-dimensional Cosine basis is used, changing the initialisation condition results in a different filter being found.
IV-E Sampled Optimisation
Using Algorithm 3, let us run a sampled optimisation. That is, for a given Cosine basis we calculate a set of candidate solutions, the sample set. Here we populate the sample set with 20,000 uniformly and randomly generated filters, where the angular threshold between any two filters in the set is 1 degree (see step 7 in Algorithm 3). Each filter in the set transmits at least 20% of the light (and is described as a linear combination of a 6- or 8-dimensional Cosine basis).
Each filter in the sample set is used to initialise the Data-driven algorithm; that is, we find 20,000 optimised filters. The color measurement performance of each filter in this solution set can be calculated. Then we simply choose the filter that delivers the best overall measurement performance. In Figs. 5b and 5c we show the best sample-optimised filters (red lines), which respectively lie in the span of a 6- and 8-dimensional Cosine basis (and transmit at least 20% of the light). Here 'best' is defined to be the filter that results in the smallest mean ΔE.
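The sample-then-refine selection reduces to a short loop; in this sketch, `refine` and `mean_delta_e` are stand-ins (our assumptions) for Algorithm 2 and for the ΔE evaluation over the light/reflectance data set:

```python
import numpy as np

def best_filter(seed_filters, refine, mean_delta_e):
    """Run the refinement from every seed filter and keep the refined
    filter with the smallest mean Delta E."""
    candidates = [refine(f0) for f0 in seed_filters]
    scores = [mean_delta_e(f) for f in candidates]
    return candidates[int(np.argmin(scores))]
```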
Table II reports the color error performance for the 1995 reflectances and 102 lights [50] for the Canon 5D Mark II sensitivities. The row NAT reports the baseline color correction results when a per-illuminant linear correction matrix is applied and no filter is used (note that row 7 in Table I and row 1 in Table II are the same).
In Table II we report the correction performance in 3 tranches. Rows 2 and 3 correspond to the two filters derived without a Cosine basis, as shown in Fig. 5a; here we find the best filters with only the 20% minimum transmittance bound. Rows 4, 5 and 6 report the performance when the 3 filters shown in Fig. 5b are used, where the filter is additionally constrained to be in the span of the 6-dimensional Cosine basis. Finally, when the filter is constrained to belong to an 8-dimensional Cosine basis, the 3 derived filters lead to the error statistics shown in rows 7 through 9.
Table I reported the color measurement performance of the filters found using an unconstrained optimisation. Table II reports the results when filters are constrained to have a bounded transmittance (here, at least 20% of the light) and to be smooth. Let us consider the bounded transmittance first. Comparing row 9 of Table I to row 3 of Table II, we see that adding a lower transmittance bound returns a filter that delivers poorer measurement performance (but still much better than the native camera response). Additionally, requiring that our filters be smooth, on top of the minimum transmittance, yields relatively poorer performance still compared to the unconstrained filter optimisation.
However, with either the 6- or 8-dimensional Cosine basis constraint we can find the best filter by seeding Algorithm 2 with many possible filter initialisations (and then choosing the best filter overall). Here, we find that comparable performance is possible: compare rows 6 and 9 of Table II to row 9 of Table I. It is remarkable how well a constrained filter can work: the performance is only very slightly worse than the unconstrained optimisation, but the filter is much smoother and more likely to be manufacturable.
                 Mean   Median   90%    95%    99%    Max
NAT              1.72   1.03     3.68   5.12   12.94  28.39
minimum transmittance of 20%
  DATA_1s        0.69   0.42     1.47   2.11   4.69   19.48
  DATA_Luther    0.58   0.38     1.36   1.80   2.77   5.75
6 cosine basis with 20% minimum transmittance
  DATA_1s        0.81   0.49     1.80   2.54   5.21   18.85
  DATA_Luther    0.94   0.54     2.03   2.84   7.00   21.14
  DATA_Sampling  0.59   0.35     1.30   1.83   3.77   14.19
8 cosine basis with 20% minimum transmittance
  DATA_1s        0.71   0.38     1.60   2.38   5.42   19.25
  DATA_Luther    0.62   0.38     1.41   2.01   3.47   9.52
  DATA_Sampling  0.45   0.25     1.02   1.41   3.10   10.63
IV-F Sampling vs Optimisation
It is worth reflecting on our sample-based optimisation. That sampling makes such a difference to the performance our optimisation can deliver (for filtered color measurement) teaches us that the minimisation at hand has many local minima. By sampling we effectively allow our minimiser (Algorithm 2) to find many solutions, and then we have the latitude to choose the one closest to the global minimum. Given that we seed our optimisation with 20,000 filters, we might wonder whether we need to carry out the Data-driven optimisation at all.
In answering this question, first we remark that it is well known that as the dimension of a space increases, sampled points become more sparse. On the Cartesian plane, if we have more than 360 vectors (anchored at the origin) then at least one vector must lie within 1 degree of its nearest neighbour. In 3 dimensions we can have thousands of vectors where every vector is more than 1 degree from its nearest neighbour.
For our 20,000-member sample set we calculated the average angular distance from each element to its nearest neighbour in the set. When the set is generated subject to the 6-dimensional Cosine basis constraint, the average nearest-neighbour distance is 2.6 degrees (with a maximum of 7) and for the 8-dimensional case it is 4.6 degrees (with a maximum of 10). Running the optimisation, Algorithm 2, on each element in the sample set, we effectively refine the initial guess. And, the refinement (the difference between the starting and end-point filter) is on the same order as the average nearest-neighbour distance.
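The nearest-neighbour angular distances quoted above can be computed with a short numpy sketch (assuming the sample set is stored as the rows of a matrix; the function name is ours):

```python
import numpy as np

def nn_angles_deg(F):
    """Angle in degrees from each filter (row of F) to its nearest
    neighbour in the set."""
    U = F / np.linalg.norm(F, axis=1, keepdims=True)
    G = np.clip(U @ U.T, -1.0, 1.0)   # pairwise cosines
    np.fill_diagonal(G, -1.0)          # exclude self-matches
    return np.degrees(np.arccos(G.max(axis=1)))
```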
Significantly, running the optimisation (carrying out the refinement) results in a significant performance increment compared to using the sample filters alone. That is, we cannot use the sampling strategy alone to find the best optimised filter. The importance of the refinement step will increase as a greater number of basis functions are used in the optimisation.
V Conclusion
In this paper, we developed two algorithms that design transmittance filters which, when placed in front of a camera, make the camera more colorimetric. Our first algorithm is driven by the camera sensitivities themselves. It is well known that a camera whose sensitivities are a linear transform from the XYZ color matching functions (i.e. the camera meets the Luther condition) can be used to measure color without error. Our first algorithm finds the filter that best satisfies the Luther condition. A second algorithm, which we call Data-driven filter optimisation, tackles color correction for a given set of measured lights and surfaces. Both Luther-condition and Data-driven filters provide a step change in how well a camera can measure color.
Our default optimisations, though compellingly simple to formulate, deliver filters which are not smooth (difficult to make) and may also transmit very little light. Our optimisations were therefore reformulated to incorporate both smoothness and a lower bound on how much light must be transmitted. Initially, when these constraints are adopted, the solved-for filters work less well. However, experiments demonstrated that our optimisations were highly dependent on the initialisation parameters, specifically the seed filter (initial guess) that drives the filter evolution. A simple sampling strategy, i.e. running the optimisation several times for a set of judiciously chosen seed filters, allows us to mitigate this problem. Significantly, a smooth filter that transmits more than 20% of the light across the visible spectrum delivers almost as good performance as a highly absorptive and non-smooth filter (found via the unconstrained optimisations).
Appendix: Implementation
For both Algorithms 1 and 2 presented in Section III, the filter and the color correction matrices can be found using simple least-squares regression. To remind the reader: given A and B, n×m and n×k matrices of rank m where n ≥ m, the least-squares regression matrix M (an m×k matrix mapping A to B, AM ≈ B) can be found in closed form using the Moore-Penrose inverse [23]: M = A⁺B = (AᵀA)⁻¹AᵀB.
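This closed form is easy to check numerically; a minimal numpy sketch (solving the normal equations rather than forming the pseudo-inverse explicitly):

```python
import numpy as np

def lstsq_matrix(A, B):
    """Closed-form least-squares M minimising ||A M - B||_F, via the
    Moore-Penrose inverse: M = (A^T A)^{-1} A^T B.
    Requires A to have full column rank."""
    return np.linalg.solve(A.T @ A, A.T @ B)
```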
A Algorithm 1: ALS for the Luther-condition Optimisation
In step 4 of the algorithm, the optimal filter is found by solving for the scalars that map each row of the corrected camera sensitivity matrix to the corresponding row of X. The best scalar f_i mapping the vector a to the vector b (for the i-th row of the data matrices) can be written in closed form using the Moore-Penrose inverse: f_i = a⁺b = (a·b)/(a·a).
Similarly, in step 5, the Moore-Penrose inverse can be used for finding the correction matrix M. Denoting the filtered camera sensitivities by F = diag(f)Q, then M = F⁺X.
B Algorithm 2: ALS for the Data-driven Optimisation
In step 4, each per-illuminant correction matrix can be solved for directly using the Moore-Penrose inverse. Denoting the filtered camera responses under the ℓ-th light by C_ℓ and the corresponding target tristimuli by X_ℓ, then M_ℓ = C_ℓ⁺X_ℓ.
In step 5 of the Data-driven optimisation, the filter is embedded within the equation and so we cannot solve for it directly as we could for the Luther-condition case.
To solve for the filter it is useful to vectorise the minimisation. We recapitulate the minimisation statement of step 5:
min_f Σ_{ℓ=1..cnt} ‖ L_ℓ diag(f) Q M_ℓ − L_ℓ X ‖_F²   (18)

(Here L_ℓ denotes the matrix of color signals for the ℓ-th light, Q the camera spectral sensitivities, M_ℓ the per-light correction matrix, X the color matching functions and diag(f) the diagonal matrix made from the filter f.)
The meaning of ‖·‖_F² is the Frobenius norm squared, i.e. the sum of all the argument's terms squared. The Frobenius norm is generally applied to matrices (as here) but can equally be applied to vectors, where the vec() operator stacks the columns of a matrix on top of each other:
min_f Σ_{ℓ=1..cnt} ‖ vec(L_ℓ diag(f) Q M_ℓ) − vec(L_ℓ X) ‖²   (19)
Now let us rewrite the diagonal filter matrix as a summation of each value f_j in the diagonal multiplied with a single-entry matrix: diag(f) = Σ_{j=1..31} f_j E_j. Here, E_j is a matrix whose single nonzero entry, at position (j, j), is 1. By substituting this new filter representation into the first term of the minimisation in Equation (19), we obtain
vec(L_ℓ diag(f) Q M_ℓ) = Σ_{j=1..31} f_j vec(L_ℓ E_j Q M_ℓ)   (20)
Now let us denote by V_ℓ the matrix whose j-th column is vec(L_ℓ E_j Q M_ℓ). With Equation (20), the minimisation can be expressed more compactly as
min_f Σ_{ℓ=1..cnt} ‖ V_ℓ f − x_ℓ ‖²   (21)
where x_ℓ = vec(L_ℓ X). Note that if L_ℓ is an n×31 matrix then L_ℓ E_j Q M_ℓ is an n×c matrix (where c = 3 denotes the number of color channels) and thus vec(L_ℓ E_j Q M_ℓ) is a 3n×1 vector, which makes the matrix V_ℓ have size 3n×31.
Now we stack all cnt matrices V_ℓ (one per lighting condition) on top of each other, making a (3n·cnt)×31 matrix V̄. Similarly we stack all the targeting vectors x_ℓ on top of each other, denoted x̄, which has size (3n·cnt)×1. We remind the reader that cnt might equal 1. Or, the set of lights might comprise a single privileged illuminant such as CIE D65.
Now the minimisation in Equation (19) can be equivalently rewritten as:
min_f ‖ V̄ f − x̄ ‖²   (22)
The best filter f can be found in closed form using the Moore-Penrose inverse:
f = V̄⁺ x̄ = (V̄ᵀV̄)⁻¹ V̄ᵀ x̄   (23)
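The vectorisation trick can be checked numerically. The sketch below (our own code, with L, Q and M as stand-ins for a color-signal matrix, the camera sensitivities and a correction matrix) builds the matrix whose j-th column is vec(L E_j Q M) and verifies that multiplying it by a filter f reproduces vec(L diag(f) Q M):

```python
import numpy as np

def build_V(L, Q, M):
    """Matrix V whose j-th column is vec(L E_j Q M), so that
    vec(L diag(f) Q M) = V f for any filter f."""
    n, m = L.shape                     # n samples, m wavelengths
    QM = Q @ M
    # L E_j Q M keeps only wavelength j: the outer product of L's j-th
    # column with the j-th row of Q M
    cols = [np.outer(L[:, j], QM[j, :]).flatten(order="F")  # vec() stacks columns
            for j in range(m)]
    return np.stack(cols, axis=1)
```

Given V (stacked over lights as V̄ in the text), the unconstrained filter is just a least-squares solve against the stacked targets.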
C Filter Constraints
Equation (23) solves for the 31-component filter f in one step. Suppose we write f = Bc, i.e. we constrain the filter to be describable by a linear basis B (B is 31×k where k < 31). Additionally, the filter is restrained by minimum and maximum bounds on the transmittance, f_min and f_max. Then to solve for the filter we find the coefficient vector c that minimises:
min_c ‖ V̄ B c − x̄ ‖²  subject to  f_min ≤ Bc ≤ f_max   (24)
Equation (24), where there is a quadratic objective function and linear inequality constraints, can be solved using quadratic programming [46].
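As one possible sketch (not the authors' implementation), the constrained problem can be handed to a general-purpose solver; here we use scipy's SLSQP method, with the basis B, the bound vectors and the stacked matrices as illustrative stand-ins:

```python
import numpy as np
from scipy.optimize import minimize

def constrained_filter(Vbar, xbar, B, f_min, f_max, c0=None):
    """Solve min_c ||Vbar B c - xbar||^2 subject to f_min <= B c <= f_max,
    a QP with linear inequality constraints, via SLSQP."""
    A = Vbar @ B
    c0 = np.zeros(B.shape[1]) if c0 is None else c0
    cons = [{"type": "ineq", "fun": lambda c: B @ c - f_min},   # B c >= f_min
            {"type": "ineq", "fun": lambda c: f_max - B @ c}]   # B c <= f_max
    res = minimize(lambda c: np.sum((A @ c - xbar) ** 2), c0,
                   jac=lambda c: 2 * A.T @ (A @ c - xbar),
                   constraints=cons, method="SLSQP")
    return B @ res.x   # the filter itself
```

A dedicated QP solver would exploit the problem's convexity more directly; SLSQP is used here only because it ships with scipy.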
Acknowledgment
This work was supported by EPSRC under Grant EP/S028730. The authors would also like to thank Dr. Javier Vazquez-Corral for his insightful comments.
References
 [1] R. W. G. Hunt and M. R. Pointer, Measuring colour, 4th ed. John Wiley & Sons, 2011.
 [2] B. A. Wandell, Foundations of vision. Sinauer Associates, 1995.
 [3] N. Ohta and A. Robertson, Colorimetry: fundamentals and applications. John Wiley & Sons, 2006.
 [4] H. E. Ives, “The transformation of color-mixture equations from one system to another,” Journal of the Franklin Institute, vol. 180, no. 6, pp. 673–701, 1915.
 [5] R. Luther, “Aus dem gebiet der farbreizmetrik,” Zeitschrift Technische Physik, vol. 8, pp. 540–558, 1927.
 [6] J. P. S. Parkkinen, J. Hallikainen, and T. Jaaskelainen, “Characteristic spectra of munsell colors,” Journal of the Optical Society of America A, vol. 6, no. 2, pp. 318–322, 1989.
 [7] M. J. Vrhel, R. Gershon, and L. S. Iwan, “Measurement and analysis of object reflectance spectra,” Color Research & Application, vol. 19, no. 1, pp. 4–9, 1994.
 [8] C.C. Chiao, T. W. Cronin, and D. Osorio, “Color signals in natural scenes: characteristics of reflectance spectra and effects of natural illuminants,” Journal of the Optical Society of America A, vol. 17, no. 2, pp. 218–224, 2000.
 [9] L. T. Maloney, “Evaluation of linear models of surface spectral reflectance with small numbers of parameters,” Journal of the Optical Society of America A, vol. 3, no. 10, pp. 1673–1683, 1986.
 [10] D. H. Marimont and B. A. Wandell, “Linear models of surface and illuminant spectra,” Journal of the Optical Society of America A, vol. 9, no. 11, pp. 1905–1913, 1992.
 [11] P. R. Boyce, Human factors in lighting, 2nd ed. Taylor & Francis, 2003.
 [12] M. S. Drew and B. V. Funt, “Natural metamers,” CVGIP: Image Understanding, vol. 56, no. 2, pp. 139–151, 1992.
 [13] G. Hong, M. R. Luo, and P. A. Rhodes, “A study of digital camera colorimetric characterization based on polynomial modeling,” Color Research & Application, vol. 26, no. 1, pp. 76–84, 2001.
 [14] G. D. Finlayson, M. Mackiewicz, and A. Hurlbert, “Color correction using root-polynomial regression,” IEEE Transactions on Image Processing, vol. 24, no. 5, pp. 1460–1470, 2015.
 [15] P.-C. Hung, “Colorimetric calibration in electronic imaging devices using a look-up-table model and interpolations,” Journal of Electronic Imaging, vol. 2, no. 1, pp. 53–62, 1993.
 [16] C. F. Andersen and D. Connah, “Weighted constrained hue-plane preserving camera characterization,” IEEE Transactions on Image Processing, vol. 25, no. 9, pp. 4329–4339, 2016.
 [17] G. J. Klinker, S. A. Shafer, and T. Kanade, “A physical approach to color image understanding,” International Journal of Computer Vision, vol. 4, no. 1, pp. 7–38, 1990.
 [18] M. Mackiewicz, C. F. Andersen, and G. D. Finlayson, “Method for hue plane preserving color correction,” Journal of the Optical Society of America A, vol. 33, no. 11, pp. 2166–2177, 2016.
 [19] T. Zhang and G. H. Golub, “Rank-one approximation to high order tensors,” SIAM Journal on Matrix Analysis and Applications, vol. 23, no. 2, pp. 534–550, 2001.
 [20] G. D. Finlayson, M. M. Darrodi, and M. Mackiewicz, “The alternating least squares technique for nonuniform intensity color correction,” Color Research & Application, vol. 40, no. 3, pp. 232–242, 2015.
 [21] G. D. Finlayson, H. G. Gong, and R. B. Fisher, “Color homography: Theory and applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, pp. 20–33, 2019.
 [22] G. D. Finlayson, Y. Zhu, and H. Gong, “Using a simple colour prefilter to make cameras more colorimetric,” in Color and Imaging Conference, vol. 2018, no. 1. Society for Imaging Science and Technology, 2018, pp. 182–186.
 [23] G. H. Golub and C. F. Van Loan, Matrix computations, 3rd ed. Johns Hopkins University Press, 1996.
 [24] G. D. Finlayson and Y. Zhu, “Finding a colour filter to make a camera colorimetric by optimisation,” in International Workshop on Computational Color Imaging. Springer, 2019, pp. 53–62.
 [25] Y. Zhu and G. Finlayson, “An improved optimization method for finding a color filter to make a camera more colorimetric,” in Electronic Imaging 2020. Society for Imaging Science and Technology, 2020.
 [26] J. Guild, “The colorimetric properties of the spectrum,” Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, vol. 230, no. 681–693, pp. 149–187, 1931.
 [27] G. Wyszecki and W. S. Stiles, Color science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed. Wiley New York, 1982.
 [28] D. H. Krantz, “Color measurement and color theory: I. representation theorem for grassmann structures,” Journal of Mathematical Psychology, vol. 12, no. 3, pp. 283–303, 1975.
 [29] P. DeMarco, J. Pokorny, and V. C. Smith, “Full-spectrum cone sensitivity functions for X-chromosome-linked anomalous trichromats,” Journal of the Optical Society of America A, vol. 9, no. 9, pp. 1465–1476, 1992.
 [30] A. Stockman and L. T. Sharpe, “The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype,” Vision Research, vol. 40, no. 13, pp. 1711–1737, 2000.
 [31] B. Smith, C. Spiekermann, and R. Sember, “Numerical methods for colorimetric calculations: sampling density requirements,” Color Research & Application, vol. 17, no. 6, pp. 394–401, 1992.
 [32] B. K. P. Horn, “Exact reproduction of colored images,” Computer Vision, Graphics, and Image Processing, vol. 26, no. 2, pp. 135 – 167, 1984.
 [33] P. L. Vora and H. J. Trussell, “Mathematical methods for the design of color scanning filters,” IEEE Transactions on Image Processing, vol. 6, no. 2, pp. 312–320, 1997.
 [34] M. J. Vrhel and H. J. Trussell, “Filter considerations in color correction,” IEEE Transactions on Image Processing, vol. 3, no. 2, pp. 147–161, 1994.
 [35] M. J. Vrhel and H. J. Trussell, “Optimal color filters in the presence of noise,” IEEE Transactions on Image Processing, vol. 4, no. 6, pp. 814–823, 1995.
 [36] M. Wolski, C. Bouman, J. P. Allebach, and E. Walowit, “Optimization of sensor response functions for colorimetry of reflective and emissive objects,” IEEE Transactions on Image Processing, vol. 5, no. 3, pp. 507–517, 1996.
 [37] P. L. Vora and H. J. Trussell, “Mathematical methods for the analysis of color scanning filters,” IEEE Transactions on Image Processing, vol. 6, no. 2, pp. 321–327, 1997.
 [38] G. Sharma and H. J. Trussell, “Digital color imaging,” IEEE transactions on Image Processing, vol. 6, no. 7, pp. 901–932, 1997.
 [39] J. E. Farrell and B. A. Wandell, “Method and apparatus for identifying the color of an image,” Dec. 26, 1995, U.S. Patent 5479524.
 [40] J. Y. Hardeberg, Acquisition and reproduction of color images: colorimetric and multispectral approaches. UniversalPublishers, 2001.
 [41] D. Connah, S. Westland, and M. Thomson, “Recovering spectral information using digital camera systems,” Coloration Technology, vol. 117, no. 6, pp. 309–312, 2001.
 [42] J. L. Nieves, E. M. Valero, S. M. Nascimento, J. HernándezAndrés, and J. Romero, “Multispectral synthesis of daylight using a commercial digital CCD camera,” Applied Optics, vol. 44, no. 27, pp. 5696–5703, 2005.
 [43] D.Y. Ng and J. P. Allebach, “A subspace matching color filter design methodology for a multispectral imaging system,” IEEE Transactions on Image Processing, vol. 15, no. 9, pp. 2631–2643, 2006.
 [44] E. M. Valero, J. L. Nieves, S. M. Nascimento, K. Amano, and D. H. Foster, “Recovering spectral data from natural scenes with an RGB digital camera and colored filters,” Color Research & Application, vol. 32, no. 5, pp. 352–360, 2007.
 [45] G. Sharma and R. Bala, Digital color imaging handbook. CRC press, 2002.
 [46] D. G. Luenberger and Y. Ye, Linear and nonlinear programming, 4th ed. Springer, 2015.
 [47] G. Strang, “The discrete cosine transform,” SIAM review, vol. 41, no. 1, pp. 135–147, 1999.
 [48] P. L. Vora and H. J. Trussell, “Measure of goodness of a set of color-scanning filters,” Journal of the Optical Society of America A, vol. 10, no. 7, pp. 1499–1508, 1993.
 [49] J. Jiang, D. Liu, J. Gu, and S. Süsstrunk, “What is the space of spectral sensitivity functions for digital color cameras?” in 2013 IEEE Workshop on Applications of Computer Vision (WACV). IEEE, 2013, pp. 168–179.
 [50] K. Barnard, L. Martin, B. Funt, and A. Coath, “A data set for color research,” Color Research & Application, vol. 27, no. 3, pp. 147–151, 2002.