Stated generally, a “painting” in the physical world is a two-dimensional arrangement of material. This material may be oil or watercolor paint, or it may be ink from a pen or marker, or charcoal or pastel. These pigments achieve a colorful appearance by virtue of how they absorb and reflect light, and of their thickness. Kubelka and Munk [1, 2] described a model for the layering of physical materials, and Duncan [3] extended it to include homogeneous mixing. In this model, the appearance of a material (its reflectance and transmission of light) is defined by how much it scatters and absorbs each wavelength of light, and by its overall thickness. These models are widely used to model the appearance of paint, plastic, paper, and textiles; they have been used previously in the computer graphics literature [4, 5, 6, 7].
When painting, artists choose or create a relatively small set of pigments to be used throughout the painting. We call this set the primary pigment palette. We assume that all observed colors in the painting are created by mixing or layering pigments from the palette.
When we view a painting, either directly with our eyes or indirectly after digitizing it into a three-channel RGB image, we observe only the overall reflectance and not the underlying material parameters. In RGB-space, the underlying pigments which combine to form the appearance of a pixel are not accessible for editing. One color in the palette cannot be easily changed or replaced. Translucent objects, common in paintings due to the mixing of wet paint, cannot be easily extracted or inserted.
We propose an approach to decompose a painting into its constituent pigments in two stages. First, we compute a small set of pigments in terms of their Kubelka-Munk (KM) scattering and absorption parameters. Second, we compute per-pixel mixing proportions for the pigments that reconstruct the original painting. We show that this decomposition has many desirable properties. Particularly for images of paintings, it is able to achieve lower error reconstructions with smaller palettes than previous work. Furthermore, the decomposition enables image editing applications to be posed in pigment space rather than RGB space, which can make them more effective or more expressive. We demonstrate tonal adjustments by editing pigment properties; recoloring; selection masking; copy-paste; palette summarization; and edge enhancement.
Our approach is similar in spirit to Lillicon and Project Naptha, which both present ways to interpret structure in unstructured documents to enable high-level edits based on the interpreted structure. In Lillicon’s case, the structure is an alternate vector representation of the artwork, while in Project Naptha, the structure is styled text within the image. Our contribution is to apply this strategy to flat, unstructured RGB images of paintings, which are created via a complex structure (physical pigments and brush strokes). Our analysis allows us to interpret the complex structure of the painting from the RGB image, which enables editing operations based on that structure.
2 Related Work
Our work is inspired by the recent efforts of Tan et al., Lin et al., Aksoy et al., and Zhang et al. to decompose an arbitrary image into a small set of partially transparent layers suitable for RGB compositing. Tan et al. use RGB-space convex hull geometry to extract a palette, and then solve an optimization problem to extract translucent layers for the Porter-Duff “over” compositing operator (alpha compositing), which is the standard color compositing model. Lin et al. extract translucent layers from images and videos based on an additive color mixing model. They use locally linear embedding, which assumes that each pixel is a linear combination of its neighbors. Aksoy et al. extract translucent layers from images, also based on an additive color mixing model. However, unlike Tan et al. and Lin et al., each layer’s color varies spatially. Zhang et al. use a clustering-based method to extract palette colors and then decompose the entire image into a linear combination of them. This representation is similar to the additive mixing layers of Lin et al. and Aksoy et al. All of these decompositions allow users to edit the image in a more intuitive manner, effectively segmenting the image by color and spatial coherence. Similarly, Chang et al.
extract a small palette of colors from an image and implicitly model each pixel as a mixture of those palette colors to enable image recoloring using radial basis functions. We extend these results specifically for physical paintings by using a physically-inspired model of pigment mixing (Kubelka-Munk) and estimating multispectral (greater than RGB) pigment properties.
Our work is contemporaneous with Aharoni-Mack et al., who decompose watercolor paintings into linear mixtures of a small set of primary pigments, also using the Kubelka-Munk mixture model. The primary differences between our approaches are that they target (translucent) watercolor paintings and use 3-wavelength (RGB) parameters with varying thickness, while we evaluate our approach with (opaque) acrylic and oil paintings and compute an 8-wavelength constant-thickness decomposition. They similarly use a convex hull in color space to identify palette colors. Both methods regularize the problem at least in part with spatial smoothness. Both methods leverage existing datasets of measured Kubelka-Munk scattering and absorption parameters (3-wavelength watercolor pigment parameters from Curtis et al. versus 33-wavelength acrylic parameters from Okumura).
Algorithmically, our work is most similar to that of Kauvar et al., which optimizes a set of multispectral illuminants and linear mixing weights to reproduce an image. This is suitable for their scenario (choosing projector illuminants) but not for mimicking physical paintings. The nonlinear nature of the Kubelka-Munk equations makes our problem much harder.
While the Kubelka-Munk (KM) equations can be used to reproduce the color of a layer of pigment, the pigment coefficients are difficult to acquire, so researchers have pursued simplified models. Curtis et al. use a three-wavelength model they compute from samples of paint over white and black backgrounds. In our multispectral scenario, given RGB data and a fixed background, direct extraction is ill-posed. The IMPaSTo system uses a low-dimensional approximation of measured pigments to enable realtime rendering. In contrast, we focus on the problem of decomposing existing artwork. Xu et al.
use a neural network to learn to predict RGB colors from a large number of synthetic examples. RealPigment estimates composited RGB colors from exemplar photos of artist color mixing charts. In our scenario, we are given the RGB colors and estimate the multispectral scattering and absorption parameters.
There is extensive work on multispectral acquisition systems using custom hardware. Berns et al. use a standard digital camera with a filter array. Parmar et al. use a bank of LEDs to capture the scene under different illumination spectra. Park et al. optimize a set of exposures and LED illuminants to achieve video rates. Multispectral images have many useful applications. Ibrahim et al. demonstrate intrinsic image reconstruction and material identification. Berns et al. compare a multispectral imager to a point spectrophotometer for measurements of paintings. Multispectral imaging provides a non-invasive way to preserve paintings and analyze their construction. Liang et al. used a combination of optical coherence tomography (OCT) imaging with multispectral imaging to identify pigments’ reflectance, absorption, and scattering parameters. Berns et al. estimate the full reflectance spectrum of a painting using a reduced-dimension parameterization made from spectra of known KM pigments. Zhao et al. achieve better reconstructions by fitting mixtures of known pigments to estimated multispectral reflectances. Pelagotti et al. and Cosentino both use multispectral images as feature maps to identify single layers of known pigments. Most similar to our work, Zhao et al. use multispectral measurements of Van Gogh’s “Starry Night” to estimate one-parameter masstone KM mixing weights for known pigments to reconstruct the painting. Delaney et al. use fiber optic reflectance spectroscopy and X-ray fluorescence to help identify and map pigments in illuminated manuscripts. Abed et al. described an approach to identify pigment absorption and scattering parameters and extract pigment concentration maps from a multispectral image via a simplified, one-parameter Kubelka-Munk model. All of these works require exotic acquisition hardware, whereas we focus on generating plausible results using standard, easy-to-obtain RGB images.
There are plenty of high-quality RGB images of paintings freely available via the Google Art Project.
The intuition behind this work comes from how pigments mix in real versus digital media. Digital RGB color mixing is a linear operation: all mixtures of two RGB colors lie on the straight line between them in RGB-space. Mixing two physical pigments with the same two apparent RGB colors, however, produces a curve in RGB-space (Fig. 2). The shape of this curve is a function of the multispectral Kubelka-Munk coefficients of the pigments being interpolated. Our intuition is that those multispectral coefficients can be deduced by the observed shape of a mixing or thinning curve in RGB.
3.1 Kubelka-Munk Equations
The Kubelka-Munk equations (KM) are a physically-inspired model for computing the per-wavelength reflectance value of a layer of homogeneous pigment atop a substrate:

R = (1 - Rg(a - b coth(bSd))) / (a - Rg + b coth(bSd)), where a = 1 + K/S and b = sqrt(a^2 - 1) (1)

where d is the thickness of the pigment layer, K and S are the pigment’s absorption and scattering per unit thickness, Rg is the substrate reflectance, and R is the final reflectance of the pigment layer. K, S, Rg, and R are all per-wavelength, while the thickness d is constant for all wavelengths. For convenience, we use x = (K, S) to represent both KM coefficients with a single vector variable across all L wavelengths. We denote the vectorized Equation 1 as R = KM(x, d).
Mixtures of pigments are modeled as the weighted average of KM coefficients:

x_mix = Σ_m w_m x_m (2)

where x_m is the parameter vector of pigment m and w_m is its mixing weight.
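As a concrete sketch of Equations 1 and 2, a minimal NumPy implementation (assuming K > 0 so the coth term is finite; variable names are ours) is:

```python
import numpy as np

def km_reflectance(K, S, d=1.0, Rg=1.0):
    """Per-wavelength KM reflectance of a homogeneous pigment layer of
    thickness d over a substrate with reflectance Rg (Equation 1)."""
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    coth = 1.0 / np.tanh(b * S * d)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

def km_mix(weights, pigments_K, pigments_S):
    """Mix pigments by taking the weighted average of their K and S
    coefficients (Equation 2). pigments_K / pigments_S are (M, L) arrays."""
    w = np.asarray(weights)[:, None]
    return (w * pigments_K).sum(axis=0), (w * pigments_S).sum(axis=0)
```

Note that more absorbing pigments (larger K at fixed S) reflect less light, as expected.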
To render a KM pigment to RGB requires knowing the pigment’s KM coefficients, the substrate reflectance, the layer thickness, the illuminant spectrum, and the color matching functions which map from a reflectance spectrum to a tristimulus value, which can then be converted to linear RGB and gamma corrected to sRGB. Figure 3 shows the pipeline for a single pigment. We use the D65 standard illuminant and the CIE color matching functions.
Every pixel i has a parameter vector x_i. For a pixel in the image with mixed KM coefficients x_i, KM(x_i, d) yields a reflectance spectrum defined at each of the L wavelengths. We denote the spectrum rendering pipeline in Figure 3 as a function C, so C(x_i) is the sRGB color for pixel i. Thus to render an image we have:

I = C(X) (3)

where X is the N × 2L matrix of all pixels’ pigment parameters.
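The rendering function C can be sketched as follows; the Gaussian color-matching curves and flat illuminant below are crude placeholders (a real implementation would sample the CIE 1931 observer and D65 tables at the L wavelengths), while the XYZ-to-linear-sRGB matrix and the sRGB gamma encoding are standard:

```python
import numpy as np

LAMBDAS = np.linspace(400.0, 700.0, 8)   # nm, the 8 sampled wavelengths

def bump(mu, sigma):
    # Gaussian placeholder for a color-matching curve (NOT real CIE data).
    return np.exp(-0.5 * ((LAMBDAS - mu) / sigma) ** 2)

CMF = np.stack([bump(600, 60), bump(550, 50), bump(450, 40)])  # x, y, z stand-ins
ILLUM = np.ones(8)                        # flat stand-in for the D65 spectrum

# Standard XYZ -> linear sRGB matrix.
XYZ2RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                    [-0.9689,  1.8758,  0.0415],
                    [ 0.0557, -0.2040,  1.0570]])

def spectrum_to_srgb(R):
    """Map an L-sample reflectance spectrum to a gamma-encoded sRGB color."""
    xyz = CMF @ (R * ILLUM)
    xyz = xyz / (CMF[1] @ ILLUM)          # normalize: perfect white -> Y = 1
    rgb = np.clip(XYZ2RGB @ xyz, 0.0, 1.0)
    # Standard sRGB transfer function.
    return np.where(rgb <= 0.0031308, 12.92 * rgb,
                    1.055 * rgb ** (1.0 / 2.4) - 0.055)
```

Composing `km_reflectance` from the previous sketch with `spectrum_to_srgb` gives a toy version of the full pipeline C.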
In contrast to RGB color compositing, this model is highly non-linear and results in much more of an “organic” feel of traditional media paints as compared to digital paintings (Fig. 2).
It is important to consider the required number of wavelengths to simulate. Too many wavelengths will be difficult to optimize, whereas too few may not be able to accurately reconstruct the image appearance. We experimented with mixtures of cyan, magenta, and yellow pigments from 33 wavelengths down to 3. We found that below 8 wavelengths, the color reproduction loses fidelity (Fig. 4). We can also see that the size of the RGB gamut that can be reconstructed is artificially restricted at 3 wavelengths versus 8. This is in agreement with prior work such as RealPigment and IMPaSTo. (Aharoni-Mack et al. is based on a gamut of 3-wavelength KM pigment parameters, which is potentially more restrictive than our gamut of 8-wavelength KM pigment parameters.)
3.2 Problem Formation
A painter creates a palette from a set of paints (e.g., tubes of paint), which we call the primary pigments. Every color in the painting is a mixture of these primary pigments. Therefore, mixtures of the primary pigments’ KM coefficients are sufficient to reproduce the RGB color of every pixel in the painting. Our method estimates the coefficients of a small set of primary pigments to minimize the RGB reconstruction error.
For L wavelengths, each primary pigment is a vector of 2L coefficients. We represent the set of M primary pigments as an M × 2L matrix H. Every pixel in the painting can be represented as a convex combination of these primary pigments, x_i = w_i H, where w_i is the vector of M mixing weights (w_i ≥ 0, Σ_m w_i,m = 1). We can express all pixels in the image as the matrix product X = WH, where the w_i form the rows of the N × M matrix W. Eq. 3 becomes:

I = C(WH) (4)

where I is the N × 3 matrix of our painting’s per-pixel RGB colors.
To simplify the problem, we assume the canvas is pure white (Rg = 1). We further assume that the entire canvas is covered with a single layer of constant-thickness paint, where each pixel’s paint is a weighted mixture of pigments. Thus, our equation becomes:

I = C(WH), with d = 1 and Rg = 1 (5)
We use Eq. 5 to pose an optimization problem:

min_{H,W} ||C(WH) − I||^2 + ||W·1_M − 1_N||^2 (6)

where 1_M and 1_N are column vectors of ones, and subject to the constraints W ≥ 0 and H ≥ 0. The term ||W·1_M − 1_N||^2 forces our per-pixel weights to sum to one, since each pixel’s coefficients are a convex combination of the primary pigments. As an alternative to this penalty term, one could substitute row-normalized weights W/(W·1_M) into Eq. 6. This would allow unconstrained variation of W while maintaining that the weights (rows) of W sum to one. However, in our experiments we found that the penalty term has better convergence properties.
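The structure of this optimization can be sketched as follows, with a toy smooth nonlinearity standing in for the full render C, and illustrative dimensions and penalty weight (this is not our full pipeline):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
N, M, P = 40, 3, 6                     # pixels, pigments, coefficients (2L)
A = rng.random((P, 3))                 # fixed map inside the toy render

def render(X):
    # Smooth nonlinear stand-in for C(.); NOT the real KM pipeline.
    return 1.0 - np.exp(-X @ A)

# Synthesize an "observed" image from random ground-truth W and H.
I = render(rng.dirichlet(np.ones(M), size=N) @ rng.random((M, P)))
lam = 10.0                             # illustrative sum-to-one penalty weight

def E(w_flat, h_flat):
    W, H = w_flat.reshape(N, M), h_flat.reshape(M, P)
    r = render(W @ H) - I
    return (r * r).sum() + lam * ((W.sum(axis=1) - 1.0) ** 2).sum()

W = np.full(N * M, 1.0 / M)            # uniform initial weights
H = rng.random(M * P)
e_start = E(W, H)
for _ in range(3):                     # alternate between the two unknowns
    W = minimize(lambda w: E(w, H), W, method="L-BFGS-B",
                 bounds=[(0, None)] * (N * M), options={"maxiter": 40}).x
    H = minimize(lambda h: E(W, h), H, method="L-BFGS-B",
                 bounds=[(0, None)] * (M * P), options={"maxiter": 40}).x
e_end = E(W, H)
```

Each half-step can only lower the shared energy, so the alternation converges to a (local) minimum.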
We make the assumption that thickness d = 1, because we are primarily focused on acrylic and oil paints, which are quite thick, especially as compared to watercolor. Our assumption means that we cannot capture impasto effects or thin watercolor effects accurately. Note, however, that the choice of which constant thickness value to use is arbitrary. Thickness appears in the KM equations as a scale factor for S, but neither K nor S appears elsewhere except as the ratio K/S. Therefore, changing the constant thickness to another value is equivalent to uniformly scaling all K and S.
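This thickness/scale equivalence is easy to verify numerically with the standard single-layer KM solution (white substrate assumed):

```python
import numpy as np

def km_reflectance(K, S, d):
    # Standard single-layer KM solution over a white substrate (Rg = 1).
    a = 1.0 + K / S
    b = np.sqrt(a * a - 1.0)
    coth = 1.0 / np.tanh(b * S * d)
    return (1.0 - (a - b * coth)) / (a - 1.0 + b * coth)

K = np.array([0.3, 0.7, 1.2])
S = np.array([0.8, 1.1, 0.5])
# Doubling the thickness is the same as doubling both K and S:
double_d = km_reflectance(K, S, d=2.0)
scaled_ks = km_reflectance(2 * K, 2 * S, d=1.0)
```

The two results agree exactly, because a and b depend only on K/S, and d enters only through the product Sd.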
Allowing the thickness to vary introduces an additional degree-of-freedom per pixel. Figure 5 shows an experiment in which we solve for two pigments’ multispectral K and S parameters and per-pixel mixing weights; we optionally allow thickness to vary per-pixel. When thickness varies, the problem is under-constrained. To make the problem tractable, we add a smoothness regularization term. However, this leads to incorrect thickness estimation and less accurate multispectral reflectance (and slower optimization performance). While varying thickness may be particularly useful for watercolor or translucent paint, we did not pursue it in our thick-paint scenario beyond these initial experiments.
3.3 Solution Space
In Eq. 6, H and W are both unknown, so we have 2LM + NM unknown variables and only 3N known equations, which makes our problem under-constrained for M ≥ 3. We can use regularization to make the problem over-constrained. While this results in a solution, there are infinitely many solutions to the problem as originally stated for any particular image. This is for two reasons.
First, C projects from L-dimensional reflectance spectra to 3-dimensional tristimulus values. For any given tristimulus value there are infinitely many possible spectra (metamers) that could produce it. This is analogous to seeing only the 2D projection or “shadow” of a 3D curve. No matter how many high-dimensional samples we obtain, C projects them all in parallel.
Second, if there exists an invertible matrix T s.t. H′ = TH and W′ = WT^-1 satisfy W′ ≥ 0, H′ ≥ 0, and W′·1 = 1, then W′H′ = WH and (W′, H′) is another solution that generates the same RGB result. In a simple geometric sense, T could be a rotation or scale of the KM coefficients associated with the primary pigments. So long as the set of observed pigment parameters all lie within the polytope whose vertices are the rows of H, then e.g. rotations and scales that maintain that property will also produce solutions. If the colors are near the edges of the gamut, or the pigment parameters are near the edges of the KM space (i.e. have small values in the KM coefficients), then there will be very little “wiggle room” for the pigments to move. Conversely, if the set of observed pigment parameters is compact (i.e. no KM coefficients near zero), then many different gamuts may be possible.
Our naively posed optimization problem (Eq. 6) is too slow to run on an entire, reasonably-sized image at once. To improve performance, we decompose our task into two subproblems: estimating primary pigments and estimating per-pixel mixing weights.
4.1 Estimating Primary Pigments
The first step in our pipeline is to estimate a set of primary pigment coefficients that can reconstruct the painting. Even for small input images of 0.25 megapixels, doing this estimation over every pixel in the image would be computationally very expensive. We observe that it is not necessary to consider every pixel, since many pixels contain redundant information. Therefore, we optimize over a small subset of representative pixels, carefully chosen to well-represent the image’s color properties.
To find a small subset of representative pixels, we find the 3D convex hull of the set of RGB colors in the image using the QHull algorithm. For the images we tested, this usually results in a few hundred unique colors. These pixels are particularly well suited to the task of estimating the primary pigments because they span the full gamut of the painting’s colors. They are guaranteed to include the most extreme combinations of pigments. Conversely, pixels in the interior of the convex hull are less distinct, resulting in less vibrant recovered primary pigments.
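A minimal sketch of this step using SciPy's ConvexHull (which wraps QHull), assuming a float RGB image in [0, 1]:

```python
import numpy as np
from scipy.spatial import ConvexHull

def representative_pixels(image):
    """Unique RGB colors lying on the convex hull of an image's colors.
    image: float array of shape (H, W, 3) with values in [0, 1]."""
    colors = np.unique(image.reshape(-1, 3), axis=0)
    hull = ConvexHull(colors)             # SciPy wraps the QHull library
    return colors[hull.vertices]
```

Extreme colors such as pure black and pure white, when present, are always returned, since they are vertices of the hull.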
The optimization problem as posed in Eq. 6 is very similar to non-negative matrix factorization (which is non-convex), with the added non-linearities of the KM equation and gamma correction. Therefore, we use the Alternating Nonlinear Least Squares (ANLS) method for our optimization.
In the first step, we fix the set of primary pigment coefficients H and solve for the mixing weights W:

min_W ||C(WH) − I||^2 + ||W·1_M − 1_n||^2

with the constraint W ≥ 0, where n is the number of representative pixels, M is the number of pigments, and W is an n × M matrix.
In the second step, we fix W and solve for H. When estimating the primary pigments, we add an additional regularization term to avoid creating physically implausible pigment coefficients. Specifically, KM pigment coefficients K and S should vary smoothly across wavelengths λ, and the ratio K/S (which determines the pigment’s masstone) should also vary smoothly across wavelengths (Figure 5). We encode these smoothness observations as:

E_smooth(H) = Σ_{m,λ} α((K_{m,λ+1} − K_{m,λ})^2 + (S_{m,λ+1} − S_{m,λ})^2) + β((K/S)_{m,λ+1} − (K/S)_{m,λ})^2

over all primary pigments m and wavelengths λ, where α and β control the relative influence of the terms. Putting it all together, our optimization for the second step is:

min_H ||C(WH) − I||^2 + E_smooth(H)

with the constraint H ≥ 0, where M is the number of pigments, L is the number of wavelengths, and H is an M × 2L matrix.
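These smoothness terms reduce to first differences across wavelengths; the [K_1..K_L, S_1..S_L] row layout of H and the weights alpha, beta are assumptions of this sketch:

```python
import numpy as np

def pigment_smoothness(H, L, alpha=1.0, beta=1.0):
    """Smoothness energy for an (M, 2L) pigment matrix whose rows are
    assumed laid out as [K_1..K_L, S_1..S_L]."""
    K, S = H[:, :L], H[:, L:]
    dK = np.diff(K, axis=1)               # K differences across wavelengths
    dS = np.diff(S, axis=1)
    dR = np.diff(K / S, axis=1)           # masstone ratio K/S
    return alpha * ((dK ** 2).sum() + (dS ** 2).sum()) + beta * (dR ** 2).sum()
```

A spectrally flat pigment has zero energy; jagged coefficient curves are penalized.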
As with any non-convex optimization problem, initialization is a key factor in how quickly the solution converges and whether a local minimum is found. In our case, the solution is not unique, so there are many potential minima to converge upon. Therefore, initialization is very important for finding a good solution.
One option is to initialize randomly, which produces somewhat unpredictable results, though we do find that plausible solutions (where the colors of the primary pigments roughly match a painter’s expectations) are often well-represented. In the absence of other information, we can present the user with the results of e.g. ten random initializations to choose their preferred solution.
An alternative is to take advantage of a prior, such as a natural distribution of real pigments in KM space. We use a set of 26 measured acrylic paints from Okumura. When the prior is similar to the pigments used in the painting, reconstruction often finds the approximately correct KM coefficients. When the prior is of a different medium than the painting (e.g. using an acrylic prior with a watercolor painting), then while the result will have low reconstruction error and look plausible, the mixing properties of the pigments may not be correct (e.g. a watercolor painting may have more opaque pigments in the reconstruction than in reality). When multiple such priors are available, a user could select the correct prior to use for a given painting. In our experiments, we rely on the dictionary of Okumura’s 26 acrylic pigments. To boost the size of the dictionary, we also include every pair of pigments mixed 50%, for a total of 351 entries.
To initialize with M pigments using our prior, we start from the convex hull of the RGB colors in the image. We simplify the convex hull to M vertices as in Tan et al. We then match these RGB colors to the closest (Euclidean distance) RGB colors in the dictionary and use the corresponding KM coefficients. The RGB color of a dictionary pigment is obtained by rendering with the same pipeline as Eq. 3 (with thickness d = 1 and substrate Rg = 1). If two convex hull colors match to the same dictionary color, the closer match is used and the other convex hull vertex matches to its second-closest dictionary color.
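The matching step, including the fall-back to the next-closest dictionary entry on a collision, can be sketched as a greedy assignment (function name is ours):

```python
import numpy as np

def match_palette_to_dictionary(palette_rgb, dict_rgb):
    """Greedily assign each palette color a distinct dictionary entry.
    Closer matches claim entries first; on a collision, the loser falls
    back to its next-closest unused entry.
    palette_rgb: (M, 3); dict_rgb: (D, 3). Returns indices into dict_rgb."""
    d = np.linalg.norm(palette_rgb[:, None] - dict_rgb[None], axis=2)
    order = np.argsort(d.min(axis=1))      # best matches go first
    used = set()
    idx = np.full(len(palette_rgb), -1)
    for i in order:
        for j in np.argsort(d[i]):
            if int(j) not in used:
                idx[i] = int(j)
                used.add(int(j))
                break
    return idx
```

The returned indices select the dictionary pigments whose KM coefficients seed the optimization.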
4.2 Estimating Mixing Weights
The second step of our pipeline uses the estimated set of primary pigments to compute per-pixel mixing weights for the entire image. We use observations about the nature of painting construction to add additional regularization terms, improving convergence and making the results more useful for editing applications.
First, we add a term for per-pixel weight sparsity, which encourages each pixel’s pigment weights to be close to 0 or 1:

E_sparse(W) = Σ W ∘ (1 − W)

where 1 is a matrix of ones and ∘ denotes elementwise multiplication. This term has the effect of maximizing color separation throughout the painting, so that each pigment influences as small a portion of the image as possible. This is desirable because it results in more localized pigment editing operations.
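In NumPy this term is a one-liner:

```python
import numpy as np

def sparsity_energy(W):
    # Sum of w(1 - w) over all weights: zero when every weight is exactly
    # 0 or 1, and largest when weights sit near 0.5.
    return float((W * (1.0 - W)).sum())
```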
Second, we add a term for spatial smoothness of the weights:

E_spatial(W) = ||BW||^2

where B is a Laplacian or a bilateral smoothing matrix. We use a bilateral operator in order to preserve edges that appear between brush strokes of different colors of paint. In our experiments, the Laplacian operator blurred edges and caused pigments to incorrectly bleed into neighboring image regions.
With these additional terms, our optimization to reconstruct mixing weights becomes:

min_W ||C(WH) − I||^2 + ||W·1_M − 1_N||^2 + w_sparse E_sparse(W) + w_spatial E_spatial(W)

where w_sparse and w_spatial control the relative influence of the additional terms, subject to W ≥ 0, where N is the number of pixels in the entire image and M is the number of pigments.
This optimization is still very large and difficult to solve directly. Instead, we solve it in a coarse-to-fine manner. We downsample the image by factors of two until the short edge is less than 80 pixels. We solve the optimization on the smallest image, initializing each pixel’s mixing weights uniformly. We upsample each solution (mixing weights) as the initialization for the next larger optimization. We repeat this procedure to obtain a solution for the original image.
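The coarse-to-fine schedule can be sketched as follows; nearest-neighbor upsampling is our choice here for simplicity (the text does not specify the interpolation scheme):

```python
import numpy as np

def pyramid_shapes(h, w, min_edge=80):
    """Shapes from coarsest to finest, halving until the short edge
    drops below min_edge."""
    shapes = [(h, w)]
    while min(shapes[-1]) >= min_edge:
        shapes.append((shapes[-1][0] // 2, shapes[-1][1] // 2))
    return shapes[::-1]

def upsample_weights(Wimg, shape_big):
    """Nearest-neighbor upsample of an (h, w, M) weight image, used to
    initialize the next-finer optimization."""
    ys = (np.arange(shape_big[0]) * Wimg.shape[0]) // shape_big[0]
    xs = (np.arange(shape_big[1]) * Wimg.shape[1]) // shape_big[1]
    return Wimg[ys[:, None], xs[None, :]]
```

The driver loop would solve at `shapes[0]`, upsample, and repeat through `shapes[-1]` (the original resolution).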
Pseudocode for our method can be found in Algorithm 1. Computational complexity is difficult to analyze because our approach is based on iterative nonlinear optimization. Run-time performance is dominated by the optimization for all pixels’ weights (line 15), as discussed in the following section. To evaluate the convergence of our pigment parameter estimation’s two alternating optimization steps, we measure the total energy (Eq. 6) per iteration. Fig. 7 plots this for six of the examples used in Table I. In all examples, the energy decreases rapidly after a few iterations. Some examples reach the maximum number of iterations rather than our strict convergence criterion (Algorithm 1).
To demonstrate our results, we conducted a series of experiments on synthetic and real images, comparing amongst different conditions and with previous work. All tests were run on a single core of either a 2.53 GHz Intel Xeon E5630 or a 2.50 GHz Intel Core i7-4870HQ, implemented in Python using the L-BFGS-B solver. Runtime information is presented in Table I, which shows that we are generally faster than Tan et al. Once the primary pigments and mixing weights are estimated, all of our editing applications occur in realtime.
We used synthetic images to evaluate our pipeline’s recovery performance against ground truth. We used our pipeline to obtain weight maps from a painting. We then created five synthetic paintings by randomly choosing sets of pigments from a dataset of measured multi-spectral KM coefficients of real acrylic paint; mixing them according to our weight maps; and rendering them to sRGB using Eq. 3. These five synthetic paintings appear to depict the same flower painted with different colors. To make our initialization fair, we used a hold-one-out methodology for the pigment dictionary: we removed the five pigments used to construct the synthetic image from the set of candidate pigments for initialization, leaving a dictionary of 21 (plus mixed pigments, so 231 in total). Fig. 8 shows one example of our five synthetic images and its recovery. All reconstruction errors are presented in Table II.
The results of this experiment are that the recovered pigment coefficients K and S have relatively high error. Our recovered reflectance spectra have lower error, because there are many values of K and S that can create the same appearance (metamers). Since the pigments are different from ground truth, the recovered weight maps W are different as well. However, the RGB image’s reconstruction error is small, generally below the noticeable threshold. We also tested the weight map recovery step in isolation by using the ground truth pigments to estimate the weights; this has a smaller but still significant recovery error, while the final RGB reconstruction error stays low. This experiment confirms that there are many solutions to our reconstruction problem, but that we are able to reproduce plausible values.
Influence of Palette Size
Since the number of primary pigments is not automatically determined by our algorithm, we evaluated a set of images over a wide range of palette sizes (Fig. 9). Unsurprisingly, as the number of pigments increases, aggregate reconstruction error decreases. Interestingly, each image seems to have a number of pigments beyond which the RMSE stops decreasing. Intuitively, this would be the natural number of pigments in the painting. For paintings with very large numbers of pigments, it is unlikely that this property would hold, as eventually a large set of primary pigments would be over-complete and no additional information could be gained. However, painters often use relatively small palettes in practice.
We show our pipeline running on scans of physical paintings in Figures 6 and 10: extracted primary pigments, weight maps and reconstructed RGB images. Reconstruction errors are reported in Table I. We reconstructed with 4 to 6 pigments for every example, but only show the result with the smallest palette that produced low reconstruction error. Painting four_colors_2 is known to have been created with only four paints: titanium white, cadmium yellow lemon, cadmium red, and ultramarine blue. Our extracted palette’s RGB colors are very similar, though the yellow is a bit greenish and the blue is dark.
Comparison to Tan et al.
Our algorithm uses multispectral pigments with the nonlinear KM model, in contrast to previous work like Tan et al., which solves a similar problem using a linear RGB compositing model. Intuitively, we would expect our model to be able to reconstruct paintings at lower error with fewer parameters. The experiment we show in Fig. 10 confirms this. For two paintings, our technique is able to reconstruct the images with low error using only four pigments. The algorithm of Tan et al. results in much higher error for the same number of colors. In order to achieve a similar RGB reconstruction error, Tan et al. must increase the palette to six colors. In general, it is easier to edit a painting with a smaller palette.
Comparison to Aksoy et al.
Figures 10 and 11 show the same examples decomposed using the approach of Aksoy et al. . Their approach extracts additive linear RGB mixing layers. The layers contain color distributions, not single colors, though their approach guarantees zero reconstruction error. The number of layers is automatically selected, balancing the colorfulness of layers against smaller palettes. The colorful layers are difficult to edit, since color distributions must be modified.
Comparison to Chang et al.
Figure 10 shows palettes for the same input images extracted by the approach of Chang et al. In their approach, the palette size is manually chosen by the user. For the four_colors_2 image with four known ground truth pigments, even very large palettes never contain a color similar to ultramarine blue. Instead, “redundant” colors are added.
Influence of Wavelengths
Our pipeline recovers 8-wavelength pigment absorption and scattering coefficients, because the experiment in Figure 4 shows a limited RGB gamut for 3-wavelength rendering. For completeness, we compare with 3-wavelength recovery in Figure 12. As 3-wavelength recovery is no longer multispectral, we slightly amend our model equation, Eq. 3: we change the illuminant from D65 to pure white, and we set the color matching function to be the identity matrix. This has the effect of directly mapping the K and S coefficients to the RGB color channels, as done by Curtis et al.
We find that 3 wavelength recovery has larger RGB reconstruction RMSE for the same size palettes in all of our experiments, though many of the achieved errors are still low enough to be generally unnoticeable. For some images, such as the two pictured in Fig. 12, there is obvious color distortion. We believe this is due to the restricted gamut of the 3 wavelength pigment model, which has a significant (visible) impact only on paintings that include colors in those extreme portions of the gamut, notably certain greens and reds. For paintings with colors entirely within the 3 wavelength gamut, the differences will be negligible.
Once we have analyzed a painting to extract its primary pigments (inset for most figures) and mixing weight maps, we can re-pose a number of image editing operations in pigment space to enable interesting paint-aware applications.
Selection masking in images of paintings can be improved by optimizing on pigment weights instead of RGB colors. Semantic image boundaries are likely to correspond with changes in paint, whereas RGB edges may be less obvious when different paint mixtures are used to paint distinct objects. Also, paint thickness can create lighting variations across the surface of the painting that can confuse RGB boundary analysis. We demonstrate a standard GrabCut implementation on two paintings, run on pigment maps vs. RGB values, in Figure 13, which clearly shows improved localization of painted objects in the black rectangular regions. GrabCut was performed on the red pigment for the top painting, and on the black pigment for the bottom painting. No background or foreground scribbles were provided to the GrabCut algorithm.
The pigment mixing maps provide a novel parameterization for image edits that may be difficult in RGB, by adjusting the relative values of the mixing weights, or adjusting the coefficients of the extracted pigments. First, we can vary the weight of any of the extracted pigments by scaling its map up or down and optionally re-normalizing the per-pixel weight sum to one. For a painting that has e.g. yellow pigment, this change corresponds to varying the amount of yellow in the image, in a way that would be difficult to reproduce using the features of a digital image manipulation program (Fig. 14).
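This weight-scaling edit can be sketched directly on the weight matrix (function name and signature are ours):

```python
import numpy as np

def scale_pigment_weight(W, pigment, factor, renormalize=True):
    """Scale one pigment's weight map; W is (N, M) with rows summing to 1.
    Renormalizing keeps each pixel a convex combination of pigments."""
    W_edit = W.copy()
    W_edit[:, pigment] *= factor
    if renormalize:
        W_edit /= W_edit.sum(axis=1, keepdims=True)
    return W_edit
```

The edited weights are then re-rendered through the KM pipeline to produce the final image.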
Similarly, most extracted palettes include some white and black pigments for creating tints and shades. Adjusting the relative weights of these pigments is akin to adjusting the brightness and contrast of an image, but again with different results. For example, in Fig. (a), the result of increasing the black weight is more like emphasizing shadows and detail, instead of just darkening, while the result of increasing the white weight is a desaturation of the colors.
The KM coefficients of the pigments can also be adjusted for interesting effects. Fig. 15(b) shows scaling the per-wavelength scattering coefficients of the green pigment while keeping the absorption coefficients constant. Increased scattering means more light is reflected back, which is similar to brightening the green and making it more opaque; decreased scattering creates a darker green that absorbs more than it scatters, more like stained glass. Changing the scattering coefficients produces different hues of green than manipulating the pigment mixing weight does (rightmost image in Fig. 15(b)).
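The trend can be seen directly in the Kubelka-Munk reflectance formula. For simplicity this sketch uses the opaque (infinite-thickness) form, R = 1 + K/S - sqrt((K/S)^2 + 2K/S), rather than the finite-thickness equations used in our pipeline; the spectra below are hypothetical three-sample stand-ins for a green pigment:

```python
import numpy as np

def km_reflectance(K, S):
    """Kubelka-Munk reflectance of an opaque layer, per wavelength:
    R = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    r = K / S
    return 1.0 + r - np.sqrt(r * r + 2.0 * r)

# Hypothetical absorption (K) and scattering (S) spectra for a green pigment.
K = np.array([0.8, 0.2, 0.6])
S = np.array([0.3, 0.5, 0.2])

R_base     = km_reflectance(K, S)
R_brighter = km_reflectance(K, 2.0 * S)  # more scattering: more light reflected
R_darker   = km_reflectance(K, 0.5 * S)  # less scattering: absorption dominates

assert np.all(R_brighter > R_base) and np.all(R_darker < R_base)
```

Since R depends on K and S only through the ratio K/S, and is monotonically decreasing in that ratio, scaling S up at every wavelength brightens the pigment, while scaling it down darkens it, exactly the behavior described above.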
Previous work focused on recoloring images by changing the extracted palette colors. Tan et al. [10] reconstruct each image pixel as a set of RGB layers, so changing a palette color has a straightforward effect on the resulting image. Our recoloring result is similar to that of Tan et al. [10], with the difference that we replace KM pigments in the extracted palette with other KM pigments (from Okumura [16]) and re-render the image, creating different mixed colors in the style of real traditional-media paints. Fig. 16(a) shows three examples. To enable a more direct comparison, we use our extracted palette's RGB colors as the layer colors for Tan et al. [10]. In the cat painting, the KM mixing weight map for the blue pigment is sparse, so the recoloring effect is localized on the body of the cat. The weight map from Tan et al. [10] has non-zero values in the background, resulting in recoloring artifacts. For the rooster painting, our KM model obtains a more vibrant green from mixing yellow and blue in the circled region. For Starry Night, when swapping the extracted yellow pigment for a different yellow, the KM recoloring result reveals the green hue of the new yellow pigment, whereas the RGB recoloring result is similar to the original painting because the two yellow pigments have similar masstones in RGB space. Fig. 16(b) shows a different recoloring comparison between Tan et al. [10], Chang et al. [14], and our approach. Both Tan et al. [10] and Chang et al. [14] exhibit color artifacts when their own pipelines are used to recolor the painting to match our result.
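A sketch of the recolor-by-pigment-swap idea, using Duncan's mixing rule (per-pixel K and S are weight-averaged over the palette) and, for brevity, the opaque-layer KM reflectance rather than the finite-thickness equations. All spectra, weights, and sizes are hypothetical:

```python
import numpy as np

def km_reflectance(K, S):
    """Opaque-layer Kubelka-Munk reflectance, per wavelength."""
    r = K / S
    return 1.0 + r - np.sqrt(r * r + 2.0 * r)

def render(W, Ks, Ss):
    """Duncan mixing: K_mix = sum_i w_i K_i, S_mix = sum_i w_i S_i.
    W: (num_pixels, num_pigments); Ks, Ss: (num_pigments, num_wavelengths)."""
    return km_reflectance(W @ Ks, W @ Ss)

# Hypothetical 2-pigment palette over 3 wavelength samples, 4 pixels.
Ks = np.array([[0.9, 0.1, 0.1], [0.1, 0.1, 0.9]])
Ss = np.array([[0.4, 0.4, 0.4], [0.3, 0.3, 0.3]])
W  = np.array([[1, 0], [0, 1], [0.5, 0.5], [0.2, 0.8]], dtype=float)

original = render(W, Ks, Ss)
# Recolor: swap pigment 0's absorption spectrum for a different pigment's,
# keep the per-pixel mixing weights fixed, and re-render.
Ks2 = Ks.copy()
Ks2[0] = [0.1, 0.9, 0.1]
recolored = render(W, Ks2, Ss)
```

Because mixing happens in K/S space before the nonlinearity, a swapped pigment changes every pixel it participates in, including mixtures, in a physically motivated way; a pixel with zero weight on the swapped pigment is untouched.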
6.4 Cut, Copy, Paste
From a selection mask, we can use the pigment weight maps to perform painterly cut, copy, and paste operations on images as well. For copy-paste, the user specifies a mask (using any mechanism) and the subset of pigments to copy. The selected region can then be pasted elsewhere as a new layer of paint on top of the image and re-composited. The paste operation can simultaneously adjust paint properties, such as the thickness of the pasted layer, to achieve different compositing results; alternatively, the pasted pigments can be added to the mixture model as additional paint mixed into the painting layer, with relative scaling and renormalization. These options produce different painterly variations on standard image copy-paste (Fig. 17 and Fig. 1).
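The "new layer on top" option corresponds to Kubelka's layering formula for a translucent layer over a substrate. A sketch with hypothetical three-sample spectra (a thin pasted paint layer over a bright canvas):

```python
import numpy as np

def composite(R_top, T_top, R_bottom):
    """Kubelka's layering formula: reflectance of a translucent layer with
    reflectance R_top and transmittance T_top over a substrate R_bottom:
    R = R_top + T_top^2 * R_bottom / (1 - R_top * R_bottom)."""
    return R_top + (T_top ** 2) * R_bottom / (1.0 - R_top * R_bottom)

# Hypothetical spectra: a thin, partially transparent pasted layer over canvas.
R_paste  = np.array([0.30, 0.10, 0.05])
T_paste  = np.array([0.50, 0.30, 0.20])
R_canvas = np.array([0.80, 0.80, 0.80])

R_result = composite(R_paste, T_paste, R_canvas)
assert np.all(R_result > R_paste)  # the bright substrate shows through
```

Adjusting the thickness of the pasted layer changes R_top and T_top (thicker paint reflects more of its own color and transmits less), which is what lets the same paste produce opaque or glaze-like results.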
6.5 Palette Summarization
The first stage of our algorithm can also be seen as another method for extracting a small palette from an arbitrary image, not necessarily of a painting. Tan et al. [10] and Chang et al. [14] both present palette-extraction methods, as does Adobe’s Color CC app [40] (formerly Kuler). We compare these results in Fig. 18(a): Chang et al. [14] and Kuler attempt to find “salient” or meaningful colors in some sense, whereas Tan et al. [10] and our work favor colors that reconstruct the image. We achieve palettes similar to those of Tan et al. [10], but, as we showed earlier, our reconstructions have much lower error for the same number of colors, because we better capture paint-like color mixtures such as green and cyan.
We can also use our palette extraction method to analyze collections of images, by amending our method to jointly reconstruct the pixels of multiple images. We use this approach to extract aggregate palettes from paintings by Van Gogh, organized by year (Fig. 18(b)). Two trends are clear from this analysis. First, the range of colors Van Gogh painted with expanded over the 1880s: we needed to grow the palette from eight to ten pigments to maintain good reconstruction error. Second, the vibrancy of the palette increased dramatically as well.
6.6 Edge Detection and Enhancement
Our weight maps can improve edge-based image analysis (Fig. 19). We apply an existing edge detection method [41] to each weight map separately and merge the per-pigment responses with a per-pixel max. The resulting paint-edge images can be used to make standard image processing routines paint-aware. For example, we perform edge enhancement by thickening pigments near boundaries according to the edge response, which visually emphasizes painted objects differently than RGB edge enhancement does.
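The per-pigment detect-then-max-merge step can be sketched as follows. For self-containment we substitute a simple gradient-magnitude detector for the learned boundary detector used in the paper; the weight stack is hypothetical:

```python
import numpy as np

def edge_response(W):
    """Run a detector on each pigment weight map separately, then merge the
    per-pigment responses as the per-pixel max. Here the detector is a plain
    gradient magnitude (a stand-in for the learned detector in the paper)."""
    responses = []
    for i in range(W.shape[2]):
        gy, gx = np.gradient(W[..., i])
        responses.append(np.hypot(gx, gy))
    return np.max(responses, axis=0)

# Hypothetical two-pigment stack with a vertical paint boundary at column 3.
W = np.zeros((6, 6, 2))
W[:, :3, 0] = 1.0  # pigment 0 fills the left half
W[:, 3:, 1] = 1.0  # pigment 1 fills the right half
E = edge_response(W)
assert E[:, 2:4].max() > 0       # strong response at the paint boundary
assert np.allclose(E[:, 0], 0)   # flat paint regions give no response
```

Running the detector per pigment, rather than on the composited RGB image, means a boundary between two paints that happen to composite to similar colors still produces a strong response.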
7 Conclusion
We demonstrate a method that recovers plausible physical pigments from only an RGB image of a painting, and then recovers the mixing proportions of those pigments at each pixel. We are able to accurately reconstruct the RGB values of the image, and even closely match the per-pixel multispectral reflectance, though the underlying pigment coefficients may differ from the ground truth. We use this decomposition to enable a number of image editing operations that occur in “pigment space,” producing results in a style more consistent with natural-media imagery than with digital RGB edits.
Our approach has several limitations. First, it requires users to choose a target number of primary pigments. While this is the only user interaction in our entire pipeline, it is still a decision the user must make. As shown in Fig. 9, a larger number of primary pigments yields lower RGB reconstruction error at the cost of more tedious edits and additional processing time. Second, if a ground-truth primary pigment can be mixed from the other primary pigments, our technique will not find it. This is an inherently ambiguous situation, since that pigment is not needed for a perfect reconstruction; still, it may matter for applications like pigment identification. Third, we use the Okumura [16] dataset as a prior to initialize our optimization, though our solutions are not limited to it. We do not have other pigment datasets with which to verify whether this prior causes us to overfit our recovered pigment parameters; for example, it may be more helpful for finding acrylic or oil pigments than watercolor. Fourth, we assume constant pigment thickness. This simplifying assumption speeds up our optimization, since the solution space is already under-constrained. However, if we allowed varying thickness, darker and lighter tones of a pigment could be obtained without black or white pigments. We could then reduce our palette size by computing the 2D convex hull of the chromaticity of the pixels, ignoring brightness (Section 4.1), similar to the approach of Aharoni-Mack et al. [15]. We would also better handle media with varying translucency, such as watercolor. Finally, our approach does not recover ground truth, only plausible results that enable paint-like editing.
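The chromaticity idea behind that fourth point can be made concrete: projecting out brightness collapses a pigment and its tints/shades to (nearly) one point, so a 2D convex hull needs fewer vertices than a 3D one. A self-contained NumPy sketch, with hypothetical pixel data and our own helper names:

```python
import numpy as np

def chromaticity(rgb):
    """Project RGB to 2D chromaticity, discarding brightness: (r, g)/(r+g+b)."""
    return rgb[:, :2] / rgb.sum(axis=1, keepdims=True)

def convex_hull_2d(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(map(tuple, pts))
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

# Hypothetical pixels: a red, a darker shade of the same red, a green,
# and a desaturated red. The shade adds no new chromaticity hull vertex.
rgb = np.array([[0.90, 0.10, 0.10],
                [0.45, 0.05, 0.05],   # same hue, half the brightness
                [0.10, 0.90, 0.10],
                [0.55, 0.50, 0.50]])
c = chromaticity(rgb)
assert np.allclose(c[0], c[1])        # tint/shade collapse to one point
hull = convex_hull_2d(c)
```

Under this projection, black and white pigments stop being hull vertices entirely, their role is absorbed into a per-pixel thickness, which is why varying thickness would shrink the palette.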
In the future, we would like to extend our approach to estimate pigment layers instead of just mixtures. We plan to use our decomposition to help extract brush-stroke-level structure from images of paintings, to enable manipulation of the brush strokes themselves. We predict that interpreting complex image structures with more appropriate models will benefit many applications in computer graphics.
We thank the anonymous reviewers for their inspirational comments and suggestions. We are also grateful to Yağız Aksoy for providing comparison results, and to the artists whose paintings we analyzed. This work was supported in part by the United States National Science Foundation (IIS-1451198, IIS-1453018), a Google research award, and gifts from Adobe Systems, Inc.
-  P. Kubelka and F. Munk, “An article on optics of paint layers,” Zeitschrift für Technische Physik, vol. 12, pp. 593–601, 1931.
-  P. Kubelka, “New contributions to the optics of intensely light-scattering materials. Part I,” Journal of the Optical Society of America, vol. 38, no. 5, pp. 448–457, 1948.
-  D. R. Duncan, “The colour of pigment mixtures,” Proceedings of the Physical Society, vol. 52, no. 3, p. 390, 1940.
-  C. J. Curtis, S. E. Anderson, J. E. Seims, K. W. Fleischer, and D. H. Salesin, “Computer-generated watercolor,” in SIGGRAPH, 1997, pp. 421–430.
-  W. V. Baxter, J. Wendt, and M. C. Lin, “IMPaSTo: A realistic, interactive model for paint,” in NPAR, 2004, pp. 45–56.
-  J. Lu, S. DiVerdi, W. A. Chen, C. Barnes, and A. Finkelstein, “RealPigment: Paint compositing by example,” in NPAR, 2014, pp. 21–30.
-  J. Tan, M. Dvorožňák, D. Sýkora, and Y. Gingold, “Decomposing time-lapse paintings into layers,” ACM Trans. Graph., vol. 34, no. 4, pp. 61:1–61:10, Jul. 2015.
-  G. L. Bernstein and W. Li, “Lillicon: Using transient widgets to create scale variations of icons,” ACM Trans. Graph., vol. 34, no. 4, pp. 144:1–144:11, Jul. 2015. [Online]. Available: http://doi.acm.org/10.1145/2766980
-  K. Kwok and G. Webster, “Project Naptha: highlight, copy, and translate text from any image,” https://projectnaptha.com/, 2016, accessed: 2017-01-15.
-  J. Tan, J.-M. Lien, and Y. Gingold, “Decomposing images into layers via RGB-space geometry,” ACM Trans. Graph., vol. 36, no. 1, pp. 7:1–7:14, Nov. 2016. [Online]. Available: http://doi.acm.org/10.1145/2988229
-  S. Lin, M. Fisher, A. Dai, and P. Hanrahan, “Layerbuilder: Layer decomposition for interactive image and video color editing,” arXiv preprint arXiv:1701.03754, 2017.
-  Y. Aksoy, T. O. Aydin, A. Smolić, and M. Pollefeys, “Unmixing-based soft color segmentation for image manipulation,” ACM Transactions on Graphics (TOG), vol. 36, no. 2, p. 19, 2017.
-  Q. Zhang, C. Xiao, H. Sun, and F. Tang, “Palette-based image recoloring using color decomposition optimization,” IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1952–1964, 2017.
-  H. Chang, O. Fried, Y. Liu, S. DiVerdi, and A. Finkelstein, “Palette-based photo recoloring,” ACM Trans. Graph., vol. 34, no. 4, Jul. 2015.
-  E. Aharoni-Mack, Y. Shambik, and D. Lischinski, “Pigment-based recoloring of watercolor paintings,” in NPAR, July 2017.
-  Y. Okumura, “Developing a spectral and colorimetric database of artist paint materials,” Master’s thesis, Rochester Institute of Technology, 2005.
-  I. Kauvar, S. J. Yang, L. Shi, I. McDowall, and G. Wetzstein, “Adaptive color display via perceptually-driven factored spectral projection,” ACM Trans. Graph., vol. 34, no. 6, pp. 165:1–165:10, Oct. 2015. [Online]. Available: http://doi.acm.org/10.1145/2816795.2818070
-  S. Xu, H. Tan, X. Jiao, F. C. Lau, and Y. Pan, “A generic pigment model for digital painting,” Computer Graphics Forum, 2007.
-  B. Boldrini, W. Kessler, K. Rebner, and R. Kessler, “Hyperspectral imaging: a review of best practice, performance and pitfalls for inline and online applications,” Journal of Near Infrared Spectroscopy, vol. 20, no. 5, pp. 438–508, 2012.
-  R. Berns, L. Taplin, and M. Nezamabadi, “Spectral imaging using a commercial colour-filter array digital camera,” in ICOM-CC, 2005, pp. 743–750.
-  M. Parmar, S. Lansel, and J. Farrell, “An LED-based lighting system for acquiring multispectral scenes,” in Digital Photography VIII, 2012. [Online]. Available: http://dx.doi.org/10.1117/12.912513
-  J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in ICCV, 2007, pp. 1–8.
-  A. Ibrahim, T. Horiuchi, S. Tominaga, and A. Ella Hassanien, “Spectral reflectance images and applications,” in Image Feature Detectors and Descriptors: Foundations and Applications, A. I. Awad and M. Hassaballah, Eds., 2016, pp. 227–254.
-  R. S. Berns, L. A. Taplin, F. H. Imai, E. A. Day, and D. C. Day, “A comparison of small-aperture and image-based spectrophotometry of paintings,” Studies in Conservation, vol. 50, no. 4, pp. 253–266, 2005. [Online]. Available: http://dx.doi.org/10.1179/sic.2005.50.4.253
-  H. Liang, K. Keita, B. Peric, and T. Vajzovic, “Pigment identification with optical coherence tomography and multispectral imaging,” 2008.
-  R. S. Berns and F. H. Imai, “The use of multi-channel visible spectrum imaging for pigment identification,” in ICOM-CC, September 2002, pp. 217–222.
-  Y. Zhao, R. S. Berns, Y. Okumura, and L. A. Taplin, “Improvement of spectral imaging by pigment mapping,” in Color and Imaging Conference, 2005, pp. 40–45.
-  A. Pelagotti, A. D. Mastio, A. D. Rosa, and A. Piva, “Multispectral imaging of paintings,” IEEE Signal Processing Magazine, vol. 25, no. 4, pp. 27–36, July 2008.
-  A. Cosentino, “Identification of pigments by multispectral imaging; a flowchart method,” Heritage Science, vol. 2, no. 1, p. 8, 2014. [Online]. Available: http://dx.doi.org/10.1186/2050-7445-2-8
-  Y. Zhao, R. S. Berns, L. A. Taplin, and J. Coddington, “An investigation of multispectral imaging for the mapping of pigments in paintings,” in Electronic Imaging, 2008.
-  J. K. Delaney, P. Ricciardi, L. D. Glinsman, M. Facini, M. Thoury, M. Palmer, and E. R. d. l. Rie, “Use of imaging spectroscopy, fiber optic reflectance spectroscopy, and x-ray fluorescence to map and identify pigments in illuminated manuscripts,” Studies in Conservation, vol. 59, no. 2, pp. 91–101, 2014.
-  F. M. Abed, Pigment identification of paintings based on Kubelka-Munk theory and spectral images. Rochester Institute of Technology, 2014.
-  N. Ohta and A. R. Robertson, CIE Standard Colorimetric System. John Wiley & Sons, Ltd, 2006, pp. 63–114.
-  C. B. Barber, D. P. Dobkin, and H. Huhdanpaa, “The quickhull algorithm for convex hulls,” ACM Trans. Math. Softw., vol. 22, no. 4, pp. 469–483, Dec. 1996.
-  J. T. Barron and B. Poole, “The fast bilateral solver,” ECCV, pp. 617–632, 2016.
-  C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, “Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization,” ACM Trans. Math. Softw., vol. 23, no. 4, pp. 550–560, Dec. 1997.
-  C. Rother, V. Kolmogorov, and A. Blake, ““GrabCut” — interactive foreground extraction using iterated graph cuts,” ACM Trans. Graph., vol. 23, no. 3, pp. 309–314, 2004.
-  A. Telea, “An image inpainting technique based on the fast marching method,” JGT, vol. 9, no. 1, pp. 23–34, 2004.
-  C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman, “PatchMatch: A randomized correspondence algorithm for structural image editing,” ACM Trans. Graph., vol. 28, no. 3, pp. 24:1–24:11, Aug. 2009.
-  Adobe, “Adobe Color CC,” https://color.adobe.com, 2016.
-  P. Isola, D. Zoran, D. Krishnan, and E. H. Adelson, “Crisp boundary detection using pointwise mutual information,” in ECCV, 2014, pp. 799–814.