Structure-preserving color transformations using Laplacian commutativity

11/01/2013 ∙ by Davide Eynard et al.

Mappings between color spaces are ubiquitous in image processing problems such as gamut mapping, decolorization, and image optimization for color-blind people. Simple color transformations often result in information loss and ambiguities (for example, when mapping from RGB to grayscale), and one wishes to find an image-specific transformation that preserves as much as possible the structure of the original image in the target color space. In this paper, we propose Laplacian colormaps, a generic framework for structure-preserving color transformations between images. We use the image Laplacian to capture the structural information, and show that if the color transformation between two images preserves the structure, the respective Laplacians have similar eigenvectors, or in other words, are approximately jointly diagonalizable. Employing the relation between joint diagonalizability and commutativity of matrices, we use Laplacian commutativity as a criterion of color mapping quality and minimize it w.r.t. the parameters of a color transformation to achieve optimal structure preservation. We show numerous applications of our approach, including color-to-gray conversion, gamut mapping, multispectral image fusion, and image optimization for color-deficient viewers.


1 Introduction

A wide class of image processing problems relies on transformations between color spaces. Some notable examples include gamut mapping, image optimization for color-deficient viewers, and multispectral image fusion. Often these transformations imply a reduction in the dimensionality of the original color space, resulting in information loss and ambiguities.

Figure 1: Decolorization experiment results. From left: original RGB image, grayscale conversion results using the Luma channel, the method of [LHM11], and our Laplacian colormap, with their respective RWMS error images and mean RWMS values (0.98, 1.23, 0.50). Luma conversion results in loss of image structure due to metamerism (the green island disappears). The proposed Laplacian colormap better preserves the original image structure.

Decolorization, or color-to-gray conversion, is a classical example one frequently encounters when printing a color image on a black-and-white printer. The ambiguity of such a conversion (called metamerism: many different RGB colors are mapped to the same gray level) may result in a loss of important structure in the image (see Figure 1). Preserving salient characteristics of the original image is thus crucial for a quality color transformation. These characteristics can be represented in different ways, e.g. as contrasts between color pixels in terms of their luminance and chrominance [GOTG05], color distances [GD07], image gradients [ZF12], and Laplacians [BD13].

Color-to-gray maps can be classified into global (using the same map for each pixel) and local (or spatial, allowing different pixels with the same color to be mapped to different gray values, with the advantage of better rendition of color contrasts). Members of the first group include the pixel-based approaches of Gooch et al. [GOTG05] and Grundland et al. [GD07], and the color-based ones of Rasche et al. [RGW05b], Kuhn et al. [KOF08b], Kim et al. [KJDL09], and Lu et al. [LXJ12]. Among the local methods [NvN07, KAC10, ZF12], several try to preserve information in the gradient domain. Smith et al. [SLTM08] present a hybrid (local+global) approach that relies on both an image-independent global mapping and a multiscale local contrast enhancement. Lau et al. [LHM11] propose an approach defined as 'semi-local', as it clusters pixels based on both their spatial and chromatic similarities; the color mapping problem is solved with an optimization that finds cluster colors preserving the contrast between clusters.

Gamut mapping is the process of adjusting the colors of an input image to the constrained color gamut of a given device. Gamut mapping algorithms can be broadly divided into clipping and compression approaches [Mor08]: the former change only the source colors that fall outside the destination gamut (e.g., HPMINDE [CIE04, BSBB06]); the latter also modify the in-gamut colors. Similarly to color-to-gray conversion, gamut mapping methods can also be categorized as global and local. To address metamerism in gamut mapping, local approaches [BdQEW00, NHU99, KSES05] allow two spatially-distant pixels of equal color to be mapped to different in-gamut colors. Global approaches, conversely, always apply the same map to two pixels of the same color, regardless of their location. Many gamut mapping algorithms optimize some image difference criterion [NHU99, KSES05, AF09, LHM11].

Color-blind viewers cannot perceive differences between certain colors, due to the lack of one or more types of cone cells in their eyes [dal, MG88]. Image perception by a color-deficient observer is typically simulated by first applying a linear transformation from a standard color space such as RGB [KJY12, BVM07, VBM99], XYZ [MG88, RGW05a], or CIE L*a*b* [KOF08a, HTWW07] to a special LMS space, which specifies colors in terms of the relative excitations of the cones. Then, the color domain is reduced in accordance with the color deficiency (typically, by means of a linear transformation in the LMS space [VBM99, KJY12, HTWW07]). Finally, the reduced LMS space is mapped back to RGB.

When adapting an image for a color-blind viewer, one has to ensure that the structure of the original image is not lost to color ambiguities. Kuhn et al. [KOF08a] focus on obtaining natural images by preserving, as much as possible, the original image colors. Rasche et al. [RGW05a], instead, try to maintain distance ratios during the reduction process. The approach of Lau et al. [LHM11] aims at preserving both the contrast between color clusters and the reduced image colors.

Multispectral image fusion aims to combine a collection of images captured at different wavelengths into a single one containing details from several spectra. Zhang et al. [ZSM] and Lau et al. [LHM11] present methods that adaptively adjust the contrast of photographs using contrast and texture information from a near-infrared (NIR) image. Kim et al. [JKDB11] show how to use different bands of the invisible spectrum to improve the visual quality of old documents. Süsstrunk and Fredembach [SF10] provide a good introduction to the topic and present, as examples of image enhancement, haze removal and realistic skin smoothing.

General approaches. We should stress that despite a significant corpus of research on color transformations, most methods are targeted at specific applications and lack the generality of a framework that could be applied to different classes of problems. At the same time, there is an obvious common denominator among the aforementioned problems: for example, both color-blind transformations [RGW05b] and color-to-gray conversions [CHRW10, ZT10, ZF12] can be regarded as mappings to gamuts of lower dimension [GOTG05]. To the best of our knowledge, only the recent work of [LHM11] introduces a comprehensive approach that works with generic color transformations and easily adapts to different applications.

Main contribution. In this paper, we present Laplacian colormaps, a new generic framework for computing structure-preserving color transformations that can be applied to different problems. Our main motivation comes from recent works on Laplacians as structure descriptors [BD13] and joint diagonalization of Laplacians [EGBB12, KBB13, GB13, BGL13] (to the best of our knowledge, our paper is the first application of these methods in the domain of image analysis).

Using Laplacians as image structure descriptors, we observe that an ideal color transformation should preserve the Laplacian eigenstructure, implying that the Laplacians of the original and color-converted images should be jointly diagonalizable. Employing the relation between joint diagonalizability and commutativity of matrices [GB13, BGL13], we use Laplacian commutativity as a criterion of image structure preservation: we seek a colormap producing a converted image whose Laplacian commutes as much as possible with the Laplacian of the original image. Since Laplacians can be defined in any colorspace, our approach is generic and applicable to any kind of color conversion (in particular, color-to-gray, gamut mapping, color-blind optimization, etc.). Furthermore, we can work with both global and local colormaps.

The rest of the paper is organized as follows: in Section 2, we review the main results related to joint diagonalization and commutativity of matrices. In Section 3 we formulate our optimization problem and discuss its numerical solution. Section 4 shows examples of applications of our framework to different problems involving color transformations. Finally, Section 5 concludes the paper. Technical derivations are given in the Appendix.

Figure 2: Image structure similarity is conveyed by the eigenstructure of the image Laplacians. Top: original RGB image; middle: grayscale conversion by our method; bottom: Luma-only conversion. Left to right: image, first four eigenvectors of the corresponding Laplacian, result of spectral clustering.

2 Background

Notation and definitions. We denote by $\mathbf{A}$ a matrix, by $\mathbf{a}$ a (column) vector, and by $a$ a scalar. We denote by

$$\|\mathbf{A}\|_{\mathrm{F}} = \Big(\sum_{i,j} a_{ij}^2\Big)^{1/2}, \qquad \|\mathbf{a}\|_2 = \Big(\sum_{i} a_i^2\Big)^{1/2} \tag{1}$$

the Frobenius norm of the matrix $\mathbf{A}$ and the Euclidean norm of the vector $\mathbf{a}$, respectively. $\mathrm{diag}(a_1,\dots,a_n)$ is a diagonal matrix with diagonal elements $a_1,\dots,a_n$; $\mathrm{diag}(\mathbf{A})$ are the diagonal elements of $\mathbf{A}$ arranged as a column vector; $\mathrm{Diag}(\mathbf{A})$ is a diagonal matrix obtained by setting to zero the off-diagonal elements of $\mathbf{A}$; and $\mathrm{vec}(\mathbf{A})$ is a column vector obtained by column-stacking $\mathbf{A}$.

Let us be given an image with $n$ pixels and $m$ color channels, column-stacked into an $n\times m$ matrix $\mathbf{X}$. The problem of color conversion consists of creating a new image $\mathbf{Y}$ with $m'$ color channels by means of a colormap $T:\mathbb{R}^m\to\mathbb{R}^{m'}$. In particular, we are interested in parametric colormaps $T_{\boldsymbol\alpha}$, parametrized by a $p$-dimensional vector of parameters $\boldsymbol\alpha$. In the simplest case, $T_{\boldsymbol\alpha}$ is a global color transformation applied pixel-wise, i.e., each pixel of the original image is mapped by means of the same map, $\mathbf{y}_i = T_{\boldsymbol\alpha}(\mathbf{x}_i)$ (a simple example is linear RGB-to-gray mapping, where $m = 3$, $m' = 1$, and $T_{\boldsymbol\alpha}(\mathbf{x}) = \boldsymbol\alpha^\top\mathbf{x}$, where in addition we require $\boldsymbol\alpha \geq \mathbf{0}$ and $\boldsymbol\alpha^\top\mathbf{1} = 1$).

Let $V$ denote a subset of the image pixel indices (this subset can be the whole set of pixels, a regularly subsampled image, 'representative' pixels obtained by clustering the image, etc.). Considering these pixels as vertices of a graph, we define edge weights (adjacencies) as a combination of spatial and 'radiometric' distances,

$$w_{ij} = \exp\left(-\frac{d_{\mathrm{s}}^2(i,j)}{2\sigma_{\mathrm{s}}^2} - \frac{\|\mathbf{x}_i-\mathbf{x}_j\|_2^2}{2\sigma_{\mathrm{r}}^2}\right), \tag{2}$$

where $d_{\mathrm{s}}(i,j)$ is the spatial distance between pixels $i$ and $j$, and $\sigma_{\mathrm{s}}, \sigma_{\mathrm{r}}$ are parameters (more generally, the 'radiometric' part of the adjacency does not have to work on pixel-wise colors, and one can consider some local features, the simplest of which are patches [WK12]). For practical computations, it is usually assumed that $w_{ij} = 0$ between spatially-distant pixels, so they are disconnected. We define the (unnormalized) Laplacian¹ of this graph as a symmetric positive semi-definite matrix $\mathbf{L} = \mathbf{D} - \mathbf{W}$, where $\mathbf{W}$ is the adjacency matrix with elements as in (2), and $\mathbf{D} = \mathrm{diag}\big(\sum_j w_{ij}\big)$. In the following, we refer to $\mathbf{L}_{\mathbf{X}}$ as the Laplacian of image $\mathbf{X}$. (¹ There exist numerous ways of defining Laplacian matrices; we consider the unnormalized one merely for the sake of simplicity. The ability to work with practically any operator capturing the image structure is one of the strengths of our method.)
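For concreteness, the following minimal sketch (our illustration, not the authors' code) builds the weights (2) and the unnormalized Laplacian for a 4-connected pixel grid; the function name image_laplacian and the default values of sigma_s and sigma_r are our assumptions:

```python
# Minimal sketch (not the authors' code): unnormalized Laplacian of the
# 4-connected pixel graph with the combined weights of Eq. (2).
import numpy as np
import scipy.sparse as sp

def image_laplacian(img, sigma_s=1.0, sigma_r=0.1):
    """img: (H, W, C) float array in [0, 1]; returns sparse (H*W, H*W) L = D - W."""
    H, W, C = img.shape
    X = img.reshape(-1, C)
    idx = np.arange(H * W).reshape(H, W)
    # horizontal and vertical neighbor pairs; the graph is symmetrized below
    i = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    j = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    d2_color = np.sum((X[i] - X[j]) ** 2, axis=1)  # squared radiometric distance
    d2_space = 1.0                                 # unit spacing on the grid
    w = np.exp(-d2_space / (2 * sigma_s**2) - d2_color / (2 * sigma_r**2))
    Wm = sp.coo_matrix((w, (i, j)), shape=(H * W, H * W))
    Wm = (Wm + Wm.T).tocsr()                       # undirected edges
    D = sp.diags(np.asarray(Wm.sum(axis=1)).ravel())
    return (D - Wm).tocsr()
```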

Since $\mathbf{L}$ is symmetric, it admits an orthonormal eigendecomposition $\mathbf{L} = \mathbf{\Phi}\mathbf{\Lambda}\mathbf{\Phi}^\top$ with $\mathbf{\Phi}^\top\mathbf{\Phi} = \mathbf{I}$, where the columns of $\mathbf{\Phi}$ are orthonormal eigenvectors and $\mathbf{\Lambda} = \mathrm{diag}(\lambda_1,\dots,\lambda_n)$ are the corresponding eigenvalues, sorted in ascending order $\lambda_1 \leq \dots \leq \lambda_n$. For simplicity, we assume that there are no repeated eigenvalues, and thus the eigenvectors are defined up to sign. We say that two Laplacians $\mathbf{L}_1$ and $\mathbf{L}_2$ are jointly diagonalizable if they have the same eigenvectors $\mathbf{\Phi}$, i.e., $\mathbf{L}_1 = \mathbf{\Phi}\mathbf{\Lambda}_1\mathbf{\Phi}^\top$ and $\mathbf{L}_2 = \mathbf{\Phi}\mathbf{\Lambda}_2\mathbf{\Phi}^\top$. $\mathbf{L}_1$ and $\mathbf{L}_2$ are said to commute if their commutator $[\mathbf{L}_1,\mathbf{L}_2] = \mathbf{L}_1\mathbf{L}_2 - \mathbf{L}_2\mathbf{L}_1$ is zero.
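A small numerical illustration of this fact (our addition, not from the paper): two matrices assembled from a shared eigenbasis commute up to roundoff, while matrices with unrelated eigenbases do not.

```python
# Our addition: jointly diagonalizable matrices commute; unrelated ones do not.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))  # shared orthonormal basis
L1 = Q @ np.diag(rng.random(50)) @ Q.T              # same eigenvectors,
L2 = Q @ np.diag(rng.random(50)) @ Q.T              # different eigenvalues

def comm_norm(A, B):
    return np.linalg.norm(A @ B - B @ A, 'fro')

print(comm_norm(L1, L2))   # ~1e-13: jointly diagonalizable => commuting
P, _ = np.linalg.qr(rng.standard_normal((50, 50)))  # unrelated basis
L3 = P @ np.diag(rng.random(50)) @ P.T
print(comm_norm(L1, L3))   # order 1: not commuting
```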

Image Laplacians as structure descriptors. Laplacians have been successfully used in image processing to guide anisotropic diffusion [SKM98]. Shi and Malik [SM00] showed that a spectral relaxation of the normalized-cut criterion for image segmentation boils down to finding the first few eigenvectors of an image Laplacian and performing segmentation in the low-dimensional eigensubspace. This approach inspired the popular spectral clustering algorithm [NJW02]. More recently, Bansal and Daniilidis [BD13] used the eigenvectors of image Laplacians to match images taken under different illumination conditions, arguing that the Laplacian acts as a self-similarity descriptor [SI07] of the image.

Applying this idea to color transformations, we can use the similarity of Laplacian eigenspaces as a criterion of structural similarity of two images. Figure 2 shows three images (an original RGB image and two decolorized versions thereof, a 'bad' and a 'good' one) and the first eigenvectors of the corresponding Laplacians. One can see that a good colormap preserves the image structure, which manifests itself in the two Laplacians having similar eigenvectors (first and third rows). In particular, if one applies spectral clustering to such images, the resulting segmentations will be similar. Thus, an ideal color transformation should make the corresponding Laplacians $\mathbf{L}_{\mathbf{X}}$ and $\mathbf{L}_{\mathbf{Y}}$ jointly diagonalizable.
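This comparison can be sketched numerically, e.g., through principal angles between the leading Laplacian eigenspaces; this is an illustrative proxy of ours, not the paper's evaluation protocol, and reuses image_laplacian() from above.

```python
# Our illustrative proxy: small principal angles between leading eigenspaces
# of two image Laplacians indicate similar structure (not the paper's metric).
from scipy.sparse.linalg import eigsh
from scipy.linalg import subspace_angles
import numpy as np

def leading_eigvecs(L, k=4):
    # smallest eigenvalues capture the coarse structure; 'SM' is slow but simple
    _, U = eigsh(L, k=k, which='SM')
    return U

# U1 = leading_eigvecs(image_laplacian(rgb_img))              # original image
# U2 = leading_eigvecs(image_laplacian(gray_img[..., None]))  # converted image
# print(np.rad2deg(subspace_angles(U1, U2)))  # near 0 for a good colormap
```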

Joint approximate diagonalization (JAD) is a way to enforce two matrices to have the same eigenstructure. Given two matrices $\mathbf{L}_1$ and $\mathbf{L}_2$, one seeks a joint approximate eigenbasis $\hat{\mathbf{\Phi}}$ such that $\hat{\mathbf{\Phi}}^\top\mathbf{L}_1\hat{\mathbf{\Phi}}$ and $\hat{\mathbf{\Phi}}^\top\mathbf{L}_2\hat{\mathbf{\Phi}}$ are approximately diagonal,

$$\min_{\hat{\mathbf{\Phi}}^\top\hat{\mathbf{\Phi}} = \mathbf{I}}\ \mathrm{off}\big(\hat{\mathbf{\Phi}}^\top\mathbf{L}_1\hat{\mathbf{\Phi}}\big) + \mathrm{off}\big(\hat{\mathbf{\Phi}}^\top\mathbf{L}_2\hat{\mathbf{\Phi}}\big), \tag{3}$$

where $\mathrm{off}(\mathbf{A}) = \sum_{i\neq j} a_{ij}^2$. JAD has recently been applied to jointly diagonalize Laplacian matrices in order to find compatible Fourier bases on graphs [EGBB12] and surfaces [KBB13]. The drawback of this formulation is that both matrices are assumed to be given, while in our problem only one matrix (the original image Laplacian, $\mathbf{L}_{\mathbf{X}}$) is given, and the other (the transformed image Laplacian, $\mathbf{L}_{\mathbf{Y}}$) has to be found.

Closest commuting operators (CCO). Joint diagonalizability is intimately related to matrix commutativity. It is well known that $\mathbf{L}_1$ and $\mathbf{L}_2$ are jointly diagonalizable iff they commute, i.e., $[\mathbf{L}_1,\mathbf{L}_2] = \mathbf{0}$ [HJ90]. It appears that this relation also holds for almost-commuting matrices, in the following sense:

Theorem 2.1 (Glashoff-Bronstein [GB13])

Let $\mathbf{A}, \mathbf{B}$ be two symmetric $n\times n$ matrices normalized such that $\|\mathbf{A}\|_{\mathrm{F}} = \|\mathbf{B}\|_{\mathrm{F}} = 1$. Then,

$$f_1\big(\|[\mathbf{A},\mathbf{B}]\|_{\mathrm{F}}\big) \;\leq\; \min_{\mathbf{\Phi}^\top\mathbf{\Phi}=\mathbf{I}}\ \mathrm{off}\big(\mathbf{\Phi}^\top\mathbf{A}\mathbf{\Phi}\big) + \mathrm{off}\big(\mathbf{\Phi}^\top\mathbf{B}\mathbf{\Phi}\big) \;\leq\; f_2\big(\|[\mathbf{A},\mathbf{B}]\|_{\mathrm{F}}\big),$$

where $f_1, f_2$ are functions satisfying $f_1(0) = f_2(0) = 0$; or in other words, almost-commuting matrices are almost jointly diagonalizable.

Bronstein et al. [BGL13] studied the alternative problem of finding the closest commuting matrices $\tilde{\mathbf{A}}, \tilde{\mathbf{B}}$ to the given $\mathbf{A}, \mathbf{B}$,

$$\min_{\tilde{\mathbf{A}},\tilde{\mathbf{B}}}\ \|\mathbf{A}-\tilde{\mathbf{A}}\|_{\mathrm{F}}^2 + \|\mathbf{B}-\tilde{\mathbf{B}}\|_{\mathrm{F}}^2 \quad \text{s.t.}\ [\tilde{\mathbf{A}},\tilde{\mathbf{B}}] = \mathbf{0}. \tag{4}$$

Since $\tilde{\mathbf{A}}, \tilde{\mathbf{B}}$ commute, they are jointly diagonalizable. Furthermore, if $\mathbf{A}, \mathbf{B}$ approximately commute, the distance to the closest commuting pair is guaranteed to be small, i.e., almost-commuting matrices are close to commuting ones [Lin97].

Somewhat surprisingly, it turns out that the JAD and CCO problems are equivalent, in the following sense:

Theorem 2.2 (Bronstein et al. [BGL13])

Let $\mathbf{A}, \mathbf{B}$ be symmetric matrices, $\hat{\mathbf{\Phi}}$ be the minimizer of the JAD problem (3), and $\tilde{\mathbf{A}}, \tilde{\mathbf{B}}$ be the minimizers of the CCO problem (4), jointly diagonalized by $\tilde{\mathbf{\Phi}}$. Then:

  1. $\tilde{\mathbf{A}} = \hat{\mathbf{\Phi}}\,\mathrm{Diag}\big(\hat{\mathbf{\Phi}}^\top\mathbf{A}\hat{\mathbf{\Phi}}\big)\hat{\mathbf{\Phi}}^\top$ and $\tilde{\mathbf{B}} = \hat{\mathbf{\Phi}}\,\mathrm{Diag}\big(\hat{\mathbf{\Phi}}^\top\mathbf{B}\hat{\mathbf{\Phi}}\big)\hat{\mathbf{\Phi}}^\top$;

  2. $\|\mathbf{A}-\tilde{\mathbf{A}}\|_{\mathrm{F}}^2 + \|\mathbf{B}-\tilde{\mathbf{B}}\|_{\mathrm{F}}^2 = \mathrm{off}\big(\hat{\mathbf{\Phi}}^\top\mathbf{A}\hat{\mathbf{\Phi}}\big) + \mathrm{off}\big(\hat{\mathbf{\Phi}}^\top\mathbf{B}\hat{\mathbf{\Phi}}\big)$;

  3. $\tilde{\mathbf{\Phi}}$ is a minimizer of the JAD problem (3), and $\hat{\mathbf{\Phi}}$ jointly diagonalizes $\tilde{\mathbf{A}}$ and $\tilde{\mathbf{B}}$.

Despite being equivalent, the JAD and CCO problems have a key difference: in the former, optimization is performed w.r.t. the joint eigenbasis, while in the latter, it is performed w.r.t. the closest commuting matrices. In our problem, the CCO formulation allows us to optimize w.r.t. the Laplacians, which, in turn, can be parametrized through the colormap.

Let us summarize the main results of this section, which motivate the approach described in the following. First, Laplacians can be used as structural descriptors of images. Second, two images having similar structure translates into the corresponding Laplacians being jointly diagonalizable. Third, joint diagonalizability is equivalent to commutativity.

The key idea of this paper is to find a colormap $T_{\boldsymbol\alpha}$ such that the Laplacian $\mathbf{L}_{\mathbf{X}}$ of the input image and the Laplacian $\mathbf{L}_{T_{\boldsymbol\alpha}(\mathbf{X})}$ of the output image commute as much as possible. Due to the relation between approximate commutativity and joint diagonalizability, this implies that $\mathbf{L}_{\mathbf{X}}$ and $\mathbf{L}_{T_{\boldsymbol\alpha}(\mathbf{X})}$ have similar eigenvectors, and thus the underlying images are structurally similar.

3 Laplacian colormaps

Problem formulation. Let $\mathbf{X}$ be a given original image and $\mathbf{Y} = T_{\boldsymbol\alpha}(\mathbf{X})$ be the desired color-converted image. Our goal is to find a set of parameters $\boldsymbol\alpha$ such that the structures of the images $\mathbf{X}$ and $\mathbf{Y}$ are as similar as possible, where the similarity is judged by the commutativity of the corresponding Laplacians. This leads us to a class of optimization problems of the form

$$\min_{\boldsymbol\alpha}\ \big\|[\mathbf{L}_{\mathbf{X}},\mathbf{L}_{T_{\boldsymbol\alpha}(\mathbf{X})}]\big\|^2_{\mathrm{F}} + \mu_1\big\|\mathbf{L}_{\mathbf{X}}-\mathbf{L}_{T_{\boldsymbol\alpha}(\mathbf{X})}\big\|^2_{\mathrm{F}} + \mu_2\|\boldsymbol\alpha-\boldsymbol\alpha_0\|^2_2 + \mu_3\|T_{\boldsymbol\alpha}(\mathbf{X}_0)-\mathbf{Y}_0\|^2_{\mathrm{F}}. \tag{5}$$

One can easily recognize in problem (5) a parametric version of the CCO problem (4) with one of the Laplacians fixed. Note that the Laplacian $\mathbf{L}_{T_{\boldsymbol\alpha}(\mathbf{X})}$ is parametrized by a small number of degrees of freedom $\boldsymbol\alpha$, and thus it is usually impossible to make it commute exactly with the given $\mathbf{L}_{\mathbf{X}}$; hence, unlike the CCO problem, the commutator norm appears as a penalty rather than a constraint.

Additional regularization (the third and fourth terms in (5)) is used if we have some 'nominal' parameters $\boldsymbol\alpha_0$ representing a standard color transformation, or if some colors $\mathbf{X}_0$ should be mapped into colors $\mathbf{Y}_0$ known in advance (for example, in some cases it is important to preserve black and white). Finally, depending on the type of the colormap $T_{\boldsymbol\alpha}$, one may impose constraints on the parameters (e.g., in linear RGB-to-gray conversion, $\boldsymbol\alpha \geq \mathbf{0}$ and $\boldsymbol\alpha^\top\mathbf{1} = 1$).
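The following sketch instantiates problem (5) for the linear RGB-to-gray map under stated assumptions: the term weights mu1, mu2 are placeholders, the color-prior term is omitted, and a generic SLSQP solver stands in for the paper's MATLAB interior-point implementation. It reuses image_laplacian() from Section 2 and densifies the Laplacians, so it is meant for toy-sized images only.

```python
# Sketch of problem (5) for a linear RGB-to-gray map (assumed weights mu1, mu2;
# color-prior term omitted). Not the authors' implementation.
import numpy as np
from scipy.optimize import minimize

def colormap_cost(alpha, rgb_img, Lx, alpha0, mu1=1.0, mu2=0.1):
    gray = rgb_img @ alpha                          # global pixel-wise map
    Ly = image_laplacian(gray[..., None]).toarray()
    C = Lx @ Ly - Ly @ Lx                           # commutator [L_X, L_Y]
    return (np.linalg.norm(C, 'fro') ** 2
            + mu1 * np.linalg.norm(Lx - Ly, 'fro') ** 2
            + mu2 * np.sum((alpha - alpha0) ** 2))

def fit_gray_map(rgb_img, alpha0=np.array([0.299, 0.587, 0.114])):
    # alpha0: standard Luma weights as the 'nominal' parameters
    Lx = image_laplacian(rgb_img).toarray()
    cons = {'type': 'eq', 'fun': lambda a: a.sum() - 1.0}  # alpha^T 1 = 1
    res = minimize(colormap_cost, alpha0, args=(rgb_img, Lx, alpha0),
                   bounds=[(0.0, 1.0)] * 3, constraints=cons, method='SLSQP')
    return res.x
```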

Local maps. Our approach imposes no limitations on the complexity of the colormap $T_{\boldsymbol\alpha}$; in particular, this map does not have to be global. Let us assume that the source image is partitioned into $r$ (soft) regions, represented by weight vectors $\mathbf{u}_1,\dots,\mathbf{u}_r$ of size $n\times 1$, such that $u_{ki}\geq 0$ and $\sum_{k=1}^{r} u_{ki} = 1$ for every pixel $i$. In each region $k$, we allow for a different colormap $T_{\boldsymbol\alpha_k}$. The overall colormap is then given as $\mathbf{y}_i = \sum_{k=1}^{r} u_{ki}\,T_{\boldsymbol\alpha_k}(\mathbf{x}_i)$, parametrized by $\boldsymbol\alpha = (\boldsymbol\alpha_1,\dots,\boldsymbol\alpha_r)$. Optimization w.r.t. the parameters of the local colormap is performed in exactly the same manner as described above.
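A minimal sketch of such a blended local map (our illustration; the soft region weights U are assumed to be given, e.g., from clustering):

```python
# Our illustration of a local (soft-region) colormap: per-region linear maps
# blended by soft membership weights.
import numpy as np

def local_colormap(X, U, alphas):
    """X: (n, m) pixels; U: (r, n) soft weights with columns summing to 1;
    alphas: (r, m) per-region linear gray maps. Returns (n,) blended output."""
    per_region = alphas @ X.T   # (r, n): T_{alpha_k}(x_i) for every region
    return np.sum(U * per_region, axis=0)
```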

Multiple Laplacians. In some applications like multispectral image fusion, one may wish to impose structural similarity between the output image and multiple input images $\mathbf{X}_1,\dots,\mathbf{X}_q$, with colorspaces of dimensionality $m_1,\dots,m_q$. The input image $\mathbf{X}$ of the colormap may be one of the $\mathbf{X}_l$ or a merged image with an $(m_1+\dots+m_q)$-dimensional colorspace. In this case, our optimization problem (5) assumes the form

$$\min_{\boldsymbol\alpha}\ \sum_{l=1}^{q} c_l\,\big\|[\mathbf{L}_{\mathbf{X}_l},\mathbf{L}_{T_{\boldsymbol\alpha}(\mathbf{X})}]\big\|^2_{\mathrm{F}} + \mu_2\|\boldsymbol\alpha-\boldsymbol\alpha_0\|^2_2 + \mu_3\|T_{\boldsymbol\alpha}(\mathbf{X}_0)-\mathbf{Y}_0\|^2_{\mathrm{F}}, \tag{6}$$

where $c_1,\dots,c_q$ are constants determining the tradeoff between the different penalties.
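A sketch of the structure terms of (6) for a linear fusion map (our illustration; the tradeoff weights c_l are assumed given, and the regularization terms are omitted), again reusing image_laplacian():

```python
# Our illustration of the structure terms in (6): a sum of squared commutator
# norms against the Laplacians of several input images (regularization omitted).
import numpy as np

def fusion_cost(alpha_flat, inputs, Lxs, c, out_channels=3):
    """inputs: (H, W, M) stacked input channels; Lxs: list of dense Laplacians
    of the individual input images; c: list of tradeoff weights c_l."""
    M = inputs.shape[2]
    A = alpha_flat.reshape(out_channels, M)  # linear fusion map
    Y = inputs @ A.T                         # (H, W, out_channels) fused image
    Ly = image_laplacian(Y).toarray()
    return sum(cl * np.linalg.norm(Lx @ Ly - Ly @ Lx, 'fro') ** 2
               for cl, Lx in zip(c, Lxs))
```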

4 Results and Applications

In this section, we show several applications of our approach to decolorization, image optimization for color-blind viewers, gamut mapping, and multispectral image fusion, providing extensive comparisons to previous works. As a quantitative criterion of colormap quality, we use the root weighted mean square (RWMS) error proposed by [KOF08b], measuring the distortion of relative color distances between two images,

$$\mathrm{rwms}(i) = \sqrt{\frac{1}{n}\sum_{j=1}^{n}\frac{1}{d^2_{\mathbf{X}}(i,j)}\Big(d_{\mathbf{X}}(i,j)-d_{\mathbf{Y}}(i,j)\Big)^2}, \tag{7}$$

where $n$ is the image size and $d_{\mathbf{X}}(i,j)$, $d_{\mathbf{Y}}(i,j)$ denote the distances between the $i$th and $j$th pixels of the input and output images, respectively, normalized by the color range of the respective image. Plotting the pixel-wise RWMS error as an image shows which pixels are most affected by the color transformation. The average RWMS is used as a single number representing the quality of the colormap.
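A sketch of the per-pixel RWMS following the reconstruction of (7) above; the epsilon guard and the per-channel normalization standing in for the color-range normalization are our assumptions:

```python
# Sketch of the per-pixel RWMS (7); eps and the sqrt(channels) normalization
# (standing in for the color-range normalization) are our assumptions.
import numpy as np

def rwms(X, Y, eps=1e-8):
    """X: (n, m) input pixels, Y: (n, m') output pixels, values in [0, 1].
    Returns length-n per-pixel RWMS. O(n^2) memory: subsample large images."""
    dX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) / np.sqrt(X.shape[1])
    dY = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2) / np.sqrt(Y.shape[1])
    w = 1.0 / (dX ** 2 + eps)   # inverse-distance weighting
    return np.sqrt(np.mean(w * (dX - dY) ** 2, axis=1))
```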

All experiments share a common setup. First, RGB values are normalized by 255. We then calculate the weighted adjacency matrix according to (2), using all pixels if the images are small enough, and resizing the image so that its longer side is 300 pixels otherwise. We used fixed 4-neighbor connectivity, fixed values of the parameters $\sigma_{\mathrm{s}}$ and $\sigma_{\mathrm{r}}$, and default weights for the cost-function and regularization terms throughout. Parameters $\boldsymbol\alpha$ are initialized randomly and normalized to satisfy the constraint $\boldsymbol\alpha^\top\mathbf{1} = 1$. As a last step, since the mapping might produce color values out of range, the output channels are normalized. Optimization was implemented in MATLAB, using the interior-point method from the Optimization Toolbox.

Decolorization. For RGB-to-gray mapping, we used a global colormap, applying to each pixel the transformation $y_i = \sum_{k=1}^{3}\alpha_k x_{ik}$, where $x_{ik}$ is the $k$th RGB channel of the $i$th pixel, $y_i$ is the grayscale output, and $\boldsymbol\alpha = (\alpha_1,\alpha_2,\alpha_3)$ are the colormap parameters w.r.t. which the optimization is performed.
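A hypothetical end-to-end usage of the fit_gray_map() sketch from Section 3 for this kind of experiment (imageio-based I/O and the file names are our assumptions, not part of the paper):

```python
# Hypothetical usage of the fit_gray_map() sketch; file names are placeholders.
import imageio.v3 as iio
import numpy as np

rgb = iio.imread('input.png').astype(np.float64) / 255.0  # normalize by 255
alpha = fit_gray_map(rgb)
gray = np.clip(rgb @ alpha, 0.0, 1.0)   # renormalize out-of-range values
iio.imwrite('gray.png', (gray * 255).astype(np.uint8))
```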

Images used for this experiment were taken from [Čad08]. Figure 8 shows the results of our transformations, compared to previous works [GOTG05, RGW05b, GD07, NvN07, SLTM08]. Results were evaluated using two different metrics: quantitative (RWMS) and qualitative perceptual evaluation following [Čad08]. In the perceptual evaluation, conducted through a Web survey, 107 volunteers were shown the original RGB image together with a pair of its gray conversions, and were asked which of the two results better preserved the original image. We then used Thurstone's law of comparative judgments to convert the 1857 pairwise evaluations into interval z-score scales [Thu27, TG11]. Table 1 provides average RWMS values and z-scores calculated on an 8-image subset of Čadík's dataset. Our approach performs best w.r.t. both criteria.

          CIE Y   [GOTG05]  [RGW05b]  [GD07]  [NvN07]  [SLTM08]  Laplacian
RWMS       2.86    2.31      2.49      2.21    4.91     2.94      1.42
z-score   -0.14   -0.24     -0.55      0.63   -0.45    -0.06      0.81
Table 1: Comparison of color-to-gray conversions in terms of mean RWMS value (smaller is better) and z-score (larger is better), averaged over all images.
Figure 3: Global vs. local map results. Top row, left to right: original image, Luma, result of Lau et al. [LHM11]. Bottom row, left to right: Laplacian colormaps using a global map, a local map, and the spatial cluster weights used by the latter.

Color-blind viewers. We model the color distortion of an RGB image $\mathbf{X}$ as perceived by a color-blind person by means of a map $D$. Since $D$ is given and beyond our control, we try to 'pre-transform' the original image by means of $T_{\boldsymbol\alpha}$ in such a way that the image $D(T_{\boldsymbol\alpha}(\mathbf{X}))$ that appears to the color-blind person has the structure of the original image $\mathbf{X}$. We extend our problem formulation so that the transformed image maintains its structure both when seen by a color-blind observer and when seen by a regular observer. In our optimization problem, this translates into requiring the two pairs of Laplacians $(\mathbf{L}_{\mathbf{X}}, \mathbf{L}_{D(T_{\boldsymbol\alpha}(\mathbf{X}))})$ and $(\mathbf{L}_{\mathbf{X}}, \mathbf{L}_{T_{\boldsymbol\alpha}(\mathbf{X})})$ to commute. The cost function is similar to the multiple-Laplacians setting (6):

$$\min_{\boldsymbol\alpha}\ \big\|[\mathbf{L}_{\mathbf{X}},\mathbf{L}_{D(T_{\boldsymbol\alpha}(\mathbf{X}))}]\big\|^2_{\mathrm{F}} + c\,\big\|[\mathbf{L}_{\mathbf{X}},\mathbf{L}_{T_{\boldsymbol\alpha}(\mathbf{X})}]\big\|^2_{\mathrm{F}} + \mu_2\|\boldsymbol\alpha-\boldsymbol\alpha_0\|^2_2 + \mu_3\|T_{\boldsymbol\alpha}(\mathbf{X}_0)-\mathbf{Y}_0\|^2_{\mathrm{F}}. \tag{8}$$
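For concreteness, a sketch of the two commutator terms of (8), assuming a linear pre-transform T_alpha and a fixed linear color-blindness simulation matrix D (both assumptions of this illustration); the regularization terms are omitted:

```python
# Sketch of the two commutator terms in (8); the 3x3 simulation matrix D and
# the linear pre-transform are assumptions of this illustration.
import numpy as np

def colorblind_cost(alpha_flat, rgb_img, Lx, D, c=1.0):
    A = alpha_flat.reshape(3, 3)   # linear pre-transform T_alpha
    Y = rgb_img @ A.T              # image shown on screen
    Yd = Y @ D.T                   # image as seen by the color-blind viewer
    Ly = image_laplacian(Y).toarray()
    Lyd = image_laplacian(Yd).toarray()
    comm2 = lambda P, Q: np.linalg.norm(P @ Q - Q @ P, 'fro') ** 2
    return comm2(Lx, Lyd) + c * comm2(Lx, Ly)  # Lx: Laplacian of the original
```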

Figure 4 shows Laplacian colormap results for two different types of color blindness (protanopia and tritanopia). Qualitatively, our result appears much closer to the original image than that of [LHM11] (this is especially apparent in the tritanopia case): a 'normal' viewer sees less distorted colors, while a color-deficient viewer can clearly see structures in the image (the digit 6 and the different candies) which would otherwise disappear. Quantitatively, we obtain a smaller RWMS error, suggesting that our mapping better preserves the original structure of the image, even in those areas that are critical for other approaches.

Figure 4: Color mapping for color-blind observers (top: protanope, bottom: tritanope). From left: original image, simulated color-blind view, result of [LHM11], and our result, with their respective RWMS error images and mean RWMS values (protanope: 0.98, 1.23, 0.50; tritanope: 1.27, 1.69, 0.53).

Gamut mapping has a setting similar to that of the previous experiment. We are given a transformation mapping colors from RGB to the xy chromaticity space and a color gamut (a convex polytope; in this particular experiment, a triangle). Our goal is to find the colormap parameters minimizing the cost (8) subject to all mapped colors lying inside the gamut, which is imposed as a set of linear constraints. Figure 5 compares our results with the outputs of HPMINDE [CIE04] and of the method of Lau et al. [LHM11]. Qualitatively, the output of Laplacian colormaps preserves more details of the original picture (see e.g. the plumage on the red parrot's head). Quantitatively, our algorithm outperforms the other methods in terms of the percentage of out-of-gamut pixels.

Figure 5: Gamut mapping results. Odd rows, left to right: original image, gamut mapping with the method of Lau et al. [LHM11], our approach, and HPMINDE [CIE04]. Even rows: gamut alerts for the images above (green marks the out-of-gamut pixels).

Multispectral image fusion can be seen as an extension of the color-to-gray problem, where the number of input channels is $m > 3$ and the output image has $m' = 3$ channels. We use the cost function (6) with multiple structure terms and the color-prior regularization; the latter does not only act as regularization, but also provides a way to automatically order the three output channels.

Figure 6 shows multispectral-to-RGB transformations where the input space is the concatenation of RGB and NIR ($m = 4$). In this specific example, the NIR channel is used to enhance the RGB image with an additional source of information. Comparing our result with the method of Lau et al. [LHM11], we can see that the Laplacian colormap provides an enhanced version of the RGB image while preserving the correct colors (e.g., the trees on the mountains have more detail than in RGB, but do not exhibit the blue-green halo that appears in [LHM11]). Finally, in Figure 7 we show a fusion of four photos of a city under different lighting conditions into a single, visually plausible image.

Figure 6: Multispectral (RGB+NIR) fusion results. Left to right: NIR, RGB, [LHM11], Laplacian.
Figure 7: Fusion of images of four different illuminations (morning, day, evening, night) into a single RGB image (rightmost).
Image  CIE Y         [GOTG05]      [RGW05b]      [GD07]        [NvN07]        [SLTM08]      Laplacian
1      2.13 / -1.16  1.42 /  1.03  2.34 / -0.44  1.49 /  0.57   4.62 / -1.55  2.07 /  0.31  0.94 / 1.24
2      2.14 / -0.68  1.93 /  0.40  1.43 / -1.24  1.33 /  1.34   2.30 / -1.35  2.09 /  0.11  1.16 / 1.42
3      5.06 / -0.06  3.47 / -0.73  3.71 / -0.08  3.46 /  1.73   5.44 / -2.29  5.07 /  0.37  1.36 / 1.06
4      9.33 / -0.15  7.02 / -0.04  7.32 /  0.36  7.32 /  1.98  10.03 / -2.17  9.19 / -0.92  4.37 / 0.94
5      1.26 /  0.50  1.37 / -0.64  1.12 /  0.17  1.24 /  0.21   2.72 / -1.97  1.12 /  0.73  0.98 / 1.00
6      0.98 /  0.29  1.24 / -0.84  0.99 /  0.33  1.02 /  0.92   1.65 / -1.99  1.07 /  0.38  0.87 / 0.92
7      1.09 /  0.60  1.12 /  0.56  1.57 / -1.22  1.15 /  0.35   2.09 / -1.78  1.17 /  0.49  1.03 / 1.01
8      0.81 /  1.13  0.88 /  0.05  1.41 / -1.24  0.68 /  1.32  11.02 / -2.14  1.82 / -0.15  0.59 / 1.04
Figure 8: Decolorization experiment results on the eight test images: mean RWMS (smaller is better) / z-score (larger is better) for each method, alongside the original RGB images, grayscale conversion results, and RWMS error images. Our Laplacian colormap method performs best in most cases.

5 Conclusions

Laplacian colormaps address the problem of structure-preserving color transformations by relying on Laplacians as image structure descriptors and using Laplacian commutativity as a criterion for structure preservation. Given a parametric colormap, we optimize for the parameters that produce an image whose Laplacian commutes as much as possible with that of the original image, thus preserving its structure. Since Laplacians can be defined in any colorspace, our approach can be applied to different kinds of colormaps (global or local, with any number of input and output channels, and where part of the mapping is provided a priori). Moreover, Laplacians can be computed using similarity of local feature descriptors rather than individual pixel colors. Computationally, the dense pixel-wise relationships are the main bottleneck of our approach: a benchmark run on all 25 pictures of Čadík's dataset on a MacBook Pro with 8GB RAM showed an average computation time of 117 seconds for color-to-grayscale conversion. This cost can be alleviated by computing Laplacians on resized images, at the price of potentially lower accuracy.

Overall, we believe that our results show the promise of Laplacian commutators for measuring structural similarity, and appear to be the first application of rather theoretical results on joint diagonalization of matrices to very practical problems in image processing. In future work, we intend to explore additional applications such as image correspondence and alignment. We believe our approach will be especially useful when handling visually different but structurally similar scenes.

Acknowledgements

This research was supported by the ERC Starting Grant No. 307047 (COMET).

References

  • [AF09] Alsam A., Farup I.: Colour gamut mapping as a constrained variational problem. In Image Analysis. 2009, pp. 109–118.
  • [BD13] Bansal M., Daniilidis K.: Joint spectral correspondence for disparate image matching. In Proc. CVPR (2013).
  • [BdQEW00] Balasubramanian R., de Queiroz R. L., Eschbach R., Wu W.: Gamut mapping to preserve spatial luminance variations. In Proc. Conf. Color Imaging (2000).
  • [BGL13] Bronstein M. M., Glashoff K., Loring T. A.: Making Laplacians commute. arXiv:1307.6549 (2013).
  • [BSBB06] Bonnier N., Schmitt F., Brettel H., Berche S.: Evaluation of spatial gamut mapping algorithms. In Proc. Conf. Color Imaging (2006).
  • [BVM07] Brettel H., Viénot F., Mollon J. D.: Computerized simulation of color appearance for dichromats. JOSA 14 (2007), 2647–2655.
  • [Čad08] Čadík M.: Perceptual evaluation of color-to-grayscale image conversions. CGF 27, 7 (2008), 1745–1754.
  • [CHRW10] Cui M., Hu J., Razdan A., Wonka P.: Color-to-gray conversion using isomap. Visual Computer 26, 11 (2010), 1349–1360.
  • [CIE04] Guidelines for the evaluation of color gamut mapping algorithms. Tech. Rep. CIE 156:2004, 2004.
  • [dal] Daltonize. http://www.daltonize.org/.
  • [EGBB12] Eynard D., Glashoff K., Bronstein M. M., Bronstein A. M.: Multimodal diffusion geometry by joint diagonalization of Laplacians. arXiv:1209.2295 (2012).
  • [GB13] Glashoff K., Bronstein M. M.: Matrix commutators: their asymptotic metric properties and relation to approximate joint diagonalization. LAA 438, 8 (2013), 2503–2513.
  • [GD07] Grundland M., Dodgson N. A.: Decolorize: Fast, contrast enhancing, color to grayscale conversion. Pattern Recognition 40, 11 (2007), 2891–2896.
  • [GOTG05] Gooch A. A., Olsen S. C., Tumblin J., Gooch B.: Color2gray: salience-preserving color removal. TOG 24, 3 (2005), 634–639.
  • [HJ90] Horn R. A., Johnson C. R.: Matrix Analysis. Cambridge University Press, 1990.
  • [HTWW07] Huang J.-B., Tseng Y.-C., Wu S.-I., Wang S.-J.: Information preserving color transformation for protanopia and deuteranopia. Signal Processing Letters 14, 10 (2007), 711–714.
  • [JKDB11] Joo Kim S., Deng F., Brown M. S.: Visual enhancement of old documents with hyperspectral imaging. Pattern Recognition 44, 7 (2011), 1461–1469.
  • [KAC10] Kuk J., Ahn J., Cho N.: A color to grayscale conversion considering local and global contrast. In Proc. ACCV (2010).
  • [KBB13] Kovnatsky A., Bronstein M. M., Bronstein A. M., Glashoff K., Kimmel R.: Coupled quasi-harmonic bases. CGF 32 (2013), 439–448.
  • [KJDL09] Kim Y., Jang C., Demouth J., Lee S.: Robust color-to-gray via nonlinear global mapping. TOG 28, 5 (2009), 161:1–161:4.
  • [KJY12] Kim H.-J., Jeong J.-Y., Yoon Y.-J., Kim Y.-H., Ko S.-J.: Color modification for color-blind viewers using the dynamic color transformation. In Proc. ICCE (2012).
  • [KOF08a] Kuhn G. R., Oliveira M. M., Fernandes L. A. F.: An efficient naturalness-preserving image-recoloring method for dichromats. Trans. VCG 14, 6 (2008), 1747–1754.
  • [KOF08b] Kuhn G. R., Oliveira M. M., Fernandes L. A. F.: An improved contrast enhancing approach for color-to-grayscale mappings. Vis. Comput. 24, 7 (2008), 505–514.
  • [KSES05] Kimmel R., Shaked D., Elad M., Sobel I.: Space-dependent color gamut mapping: a variational approach. Trans. Image Processing 14, 6 (2005), 796–803.
  • [LHM11] Lau C., Heidrich W., Mantiuk R.: Cluster-based color space optimizations. In ICCV (2011).
  • [Lin97] Lin H.: Almost commuting selfadjoint matrices and applications. Fields Inst. Commun. 13 (1997), 193–233.
  • [LXJ12] Lu C., Xu L., Jia J.: Contrast preserving decolorization. In Proc. ICCP (2012).
  • [MG88] Meyer G. W., Greenberg D. P.: Color-defective vision and computer graphics displays. IEEE Comput. Graph. Appl. 8, 5 (1988), 28–40.
  • [Mor08] Morovič J.: Color Gamut Mapping. Wiley, 2008.
  • [NHU99] Nakauchi S., Hatanaka S., Usui S.: Color gamut mapping based on a perceptual image difference measure. Color Research & Application 24, 4 (1999), 280–291.
  • [NJW02] Ng A., Jordan M. I., Weiss Y.: On spectral clustering: Analysis and an algorithm. In Proc. NIPS (2002).
  • [NvN07] Neumann L., Čadík M., Nemcsics A.: An efficient perception-based adaptive color to gray transformation. In Computational Aesthetics (2007).
  • [RGW05a] Rasche K., Geist R., Westall J.: Detail preserving reproduction of color images for monochromats and dichromats. IEEE Comput. Graph. Appl. 25, 3 (2005), 22–30.
  • [RGW05b] Rasche K., Geist R., Westall J.: Re-coloring images for gamuts of lower dimension. CGF 24, 3 (2005), 423–432.
  • [SF10] Süsstrunk S., Fredembach C.: Enhancing the visible with the invisible. In Proc. SID (2010).
  • [SI07] Shechtman E., Irani M.: Matching local self-similarities across images and videos. In Proc. CVPR (2007).
  • [SKM98] Sochen N., Kimmel R., Malladi R.: A general framework for low level vision. Trans. Image Processing 7, 3 (1998), 310–318.
  • [SLTM08] Smith K., Landes P.-E., Thollot J., Myszkowski K.: Apparent greyscale: A simple and fast conversion to perceptually accurate images and video. CGF 27, 2 (2008), 193–200.
  • [SM00] Shi J., Malik J.: Normalized cuts and image segmentation. Trans. PAMI 22, 8 (2000), 888–905.
  • [TG11] Tsukida K., Gupta M. R.: How to Analyze Paired Comparison Data. Tech. rep., DTIC Document, 2011.
  • [Thu27] Thurstone L. L.: A law of comparative judgment. Psychological Review 34, 4 (1927), 273–286.
  • [VBM99] Viénot F., Brettel H., Mollon J. D.: Digital video colourmaps for checking the legibility of displays by dichromats. Color Research & Application 24, 4 (1999), 243–252.
  • [WK12] Wetzler A., Kimmel R.: Efficient Beltrami flow in patch-space. In Proc. SSVM. 2012.
  • [ZF12] Zhou B., Feng J.: Gradient domain salience-preserving color-to-gray conversion. In SIGGRAPH Asia (2012).
  • [ZSM] Zhang X., Sim T., Miao X.: Enhancing photographs with near infrared images. In Proc. CVPR (2008).
  • [ZT10] Zhao Y., Tamimi Z.: Spectral image decolorization. In Proc. ISVC (2010).

Appendix A: Gradients of the cost function

Let $\mathbf{L}(\boldsymbol\alpha)$ denote the Laplacian of the converted image $T_{\boldsymbol\alpha}(\mathbf{X})$, with adjacency weights defined as in (2). We denote by $e$ the number of non-zero elements of the adjacency matrix $\mathbf{W}$ and by $p$ the number of parameters of the colormap, respectively. $T^{(k)}_{\boldsymbol\alpha}$ denotes the $k$th channel of the colormap, and $\nabla_{\boldsymbol\alpha}T^{(k)}_{\boldsymbol\alpha}(\mathbf{x})$ its gradient w.r.t. $\boldsymbol\alpha$.

We now derive the gradients of the cost function (5). The gradient of the $\boldsymbol\alpha$-term is trivial,

$$\nabla_{\boldsymbol\alpha}\,\|\boldsymbol\alpha-\boldsymbol\alpha_0\|^2_2 = 2(\boldsymbol\alpha-\boldsymbol\alpha_0).$$

Denote by $\mathbf{G}_i$ the matrix of size $m'\times p$ whose $k$th row is the gradient of the $k$th channel at the $i$th pixel, $\big(\nabla_{\boldsymbol\alpha}T^{(k)}_{\boldsymbol\alpha}(\mathbf{x}_i)\big)^\top$. Differentiating the color-prior term w.r.t. $\boldsymbol\alpha$ gives

$$\nabla_{\boldsymbol\alpha}\,\|T_{\boldsymbol\alpha}(\mathbf{X}_0)-\mathbf{Y}_0\|^2_{\mathrm{F}} = 2\sum_i \mathbf{G}_i^\top\big(T_{\boldsymbol\alpha}(\mathbf{x}_i)-\mathbf{y}_i\big).$$

The gradients of the first two terms of (5) are obtained by applying the chain rule. First, we differentiate the terms w.r.t. the vector $\mathbf{w}$ of non-zero adjacency weights, obtaining a gradient of size $e\times 1$; next, we differentiate $\mathbf{w}$ w.r.t. $\boldsymbol\alpha$. From (2), the gradient of the adjacency elements w.r.t. $\boldsymbol\alpha$ is

$$\nabla_{\boldsymbol\alpha}\,w_{ij} = -\frac{w_{ij}}{\sigma_{\mathrm{r}}^2}\,(\mathbf{G}_i-\mathbf{G}_j)^\top\big(T_{\boldsymbol\alpha}(\mathbf{x}_i)-T_{\boldsymbol\alpha}(\mathbf{x}_j)\big).$$

For the commutator term, writing $\mathbf{C} = [\mathbf{L}_{\mathbf{X}},\mathbf{L}]$, we have

$$\nabla_{\mathbf{L}}\,\big\|[\mathbf{L}_{\mathbf{X}},\mathbf{L}]\big\|^2_{\mathrm{F}} = 2\,[\mathbf{L}_{\mathbf{X}},\mathbf{C}],$$

and for the closeness term,

$$\nabla_{\mathbf{L}}\,\|\mathbf{L}_{\mathbf{X}}-\mathbf{L}\|^2_{\mathrm{F}} = 2(\mathbf{L}-\mathbf{L}_{\mathbf{X}}).$$

Both are mapped to gradients w.r.t. $\mathbf{w}$ through the structure $\mathbf{L} = \mathbf{D}-\mathbf{W}$: a change in $w_{ij}$ affects the off-diagonal entries $(i,j)$ and $(j,i)$ of $-\mathbf{W}$ and the diagonal entries $(i,i)$ and $(j,j)$ of $\mathbf{D}$.

Finally, the gradient of the colormap appearing in the expressions above depends on the choice of the colormap. For all the experiments using the colormap defined in Section 4, the derivation of the gradient is straightforward. In the experiments simulating color blindness, the colormap is $D(T_{\boldsymbol\alpha}(\mathbf{x}))$, whose gradient is given as $\mathbf{J}_D\nabla_{\boldsymbol\alpha}T_{\boldsymbol\alpha}(\mathbf{x})$, where $\mathbf{J}_D$ is the Jacobian of $D$. In our experiments, the transformation simulating the deficient observer is linear, $D(\mathbf{y}) = \mathbf{D}\mathbf{y}$, and thus $\mathbf{J}_D = \mathbf{D}$.
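As a sanity check (our addition, not part of the paper), the closed-form gradient of the commutator term derived above can be compared against central finite differences:

```python
# Our addition: finite-difference check of grad_L ||[A, L]||_F^2 = 2 [A, [A, L]].
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)); A = (A + A.T) / 2   # symmetric, like L_X
L = rng.standard_normal((6, 6)); L = (L + L.T) / 2
comm = lambda P, Q: P @ Q - Q @ P
E = lambda M: np.linalg.norm(comm(A, M), 'fro') ** 2

G_analytic = 2 * comm(A, comm(A, L))
G_numeric = np.zeros_like(L)
h = 1e-6
for a in range(6):
    for b in range(6):
        Lp, Lm = L.copy(), L.copy()
        Lp[a, b] += h; Lm[a, b] -= h
        G_numeric[a, b] = (E(Lp) - E(Lm)) / (2 * h)
print(np.max(np.abs(G_analytic - G_numeric)))   # agrees up to roundoff
```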