Recoding Color Transfer as a Color Homography

08/04/2016 · by Han Gong et al.

Color transfer is an image editing process that adjusts the colors of a picture to match a target picture's color theme. A natural color transfer not only matches the color styles but also prevents after-transfer artifacts due to image compression, noise, and gradient smoothness change. The recently discovered color homography theorem proves that colors across a change in photometric viewing condition are related by a homography. In this paper, we propose a color-homography-based color transfer decomposition which encodes color transfer as a combination of chromaticity shift and shading adjustment. A powerful form of shading adjustment is shown to be a global shading curve by which the same shading homography can be applied elsewhere. Our experiments show that the proposed color transfer decomposition provides a very close approximation to many popular color transfer methods. The advantage of our approach is that the learned color transfer can be applied to many other images (e.g. other frames in a video), rather than on a frame-to-frame basis. We demonstrate two applications for color transfer enhancement and video color grading re-application. This simple model of color transfer is also important for future color transfer algorithm design.



1 Introduction

Adjusting the color style of pictures/frames is one of the most common tasks in professional photo editing as well as video post-production. Artists often choose a desired target picture and manipulate the other pictures to match its color style. This process is called color transfer. An example of color transfer between a source image and a target image is shown in Figure 1. Typically, this color tuning process requires artists to delicately adjust multiple properties such as exposure, brightness, white-point, and color mapping. These adjustments are also interdependent, i.e. aligning an individual property may cause the others to become misaligned. For rendered images, some artifacts (e.g. JPEG block edges) may appear after color adjustment. It is therefore desirable to automate this time-consuming task.

Example-based color transfer was first introduced by Reinhard et al [ReinhardTransfer]. Since then, much further research [Pitie1, Pitie2, Pouli, Nguyen] has been carried out. A recent discovery, the color homography theorem, reveals that colors across a change in viewing condition (illuminant, shading or camera) are related by a homography [PICS2016, CIC2016]. In this paper, we propose a general model, based on the color homography theorem, to approximate different color transfer results. In our model, we decompose any color transfer into a chromaticity mapping component and a shading adjustment component. We also show that the shading adjustment can be reformulated as a global shading curve through which the shading homography can be applied elsewhere. Our experiments show that our model produces very close approximations to the original color transfer results. We believe that our color transfer model is useful and fundamental for developing simple and efficient color transfer algorithms. That is, a trained model can be applied to every frame in a video fragment rather than needing frame-by-frame adjustment. This decomposition also enables users to amend the imperfections of a color transfer result, or simply to extract the desired effect.

Our paper is organized as follows. We review popular color transfer methods and the color homography theorem in §2. Our color transfer decomposition is described in §3. We show our evaluation in §4 and applications in §5. Finally, we conclude in §6.

2 Background

2.1 Color transfer

Example-based color transfer was first introduced by Reinhard et al [ReinhardTransfer]. Their method assumes that the color distribution in the decorrelated lαβ color space is a normal distribution. They map a source image to its target so that their color distributions have the same mean and variance in lαβ space. Pitie et al [Pitie1] proposed an iterative color transfer method that rotates and shifts the color distribution in 3D until the distributions of the two images are aligned; the rotation at each iteration is chosen at random over all possible angular combinations. This method was later improved by adding a gradient preservation constraint to reduce after-transfer artifacts [Pitie2]. Pouli and Reinhard [Pouli] adopted progressive histogram matching in L*a*b* color space. In their color transfer method, users can specify the level of color transfer (i.e. partial color transfer) between two images. Their algorithm also addresses the difference in the dynamic ranges of the two images. Nguyen et al [Nguyen] proposed an illuminant-aware and gamut-based color transfer. A white-balancing step is first performed for both images to remove color casts caused by different illuminations. Luminance matching is then performed by histogram matching along the "gray" axis of RGB. They finally adopt a 3D convex hull mapping, which contains scale and rotation operations, to ensure that the color-transferred RGBs remain in the gamut of the target RGBs. There are some other approaches (e.g. [an2010user, wu2013content, tai2005local, kagarlitsky2009piecewise, chang2015palette]) that solve for several local color transfers rather than a single global color transfer. In this paper, we focus on global color transfer.

Pitie et al [MKL_ct] proposed a color transfer approximation by a 3D affine mapping. The linear transform minimizes the amount of change in color and preserves the monotonicity of intensity changes. However, it is based on the assumption that the color distributions of the two images are both normal distributions, and it does not approximate the shading change of a color transfer well.

2.2 Color homography

The color homography theorem [PICS2016, CIC2016] shows that chromaticities across a change in capture conditions (light color, shading and imaging device) are a homography apart. Let us map an RGB ρ to a corresponding RGI (red-green-intensity) c using a full-rank matrix C:

c = Cρ    (1)

The r and g chromaticity coordinates are written as r = c₁/c₃ and g = c₂/c₃. We interpret the right-hand side of Equation 1 as a homogeneous coordinate, and we have [r, g, 1]ᵀ ∝ c. When the shading is fixed, it is well known that, across a change in illumination or a change in device, the corresponding RGBs are related by a linear transform M such that ρ′ = Mρ, where ρ′ is the corresponding RGB under a second light or captured by a different camera [MARIMONT.WANDELL, MALONEY86B]. Clearly, H = CMC⁻¹ maps colors in RGI form between illuminants. Due to different shading, the RGI triple under a second light is written as c′ = αHc, where α denotes an unknown scaling. Without loss of generality, let us interpret c as a homogeneous coordinate, i.e. assume its third component is 1. Then [r′, g′, 1]ᵀ ∝ Hc (the rg chromaticity coordinates are a homography apart).
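To make the relation concrete, the short Python sketch below (our illustration, not the authors' code) checks numerically that the rg chromaticities before and after an arbitrary linear RGB change M are related by the homography H = CMC⁻¹. The particular C and the random M are assumptions; the theorem only requires C to be full rank.

```python
import numpy as np

# One full-rank choice of the RGB -> RGI matrix C (an assumption; the
# theorem holds for any full-rank C).
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

def rg_chromaticity(rgb):
    """rg chromaticity of an RGB vector: (c1/c3, c2/c3) with c = C @ rgb."""
    c = C @ rgb
    return c[:2] / c[2]

rng = np.random.default_rng(0)
M = rng.uniform(0.2, 1.0, (3, 3))     # linear illuminant/device change
H = C @ M @ np.linalg.inv(C)          # induced homography on RGI coordinates

rho = rng.uniform(0.1, 1.0, 3)        # a random source RGB
rg_direct = rg_chromaticity(M @ rho)  # chromaticity after the change

# Apply H to the homogeneous coordinate [r, g, 1] and re-normalise:
c2 = H @ np.append(rg_chromaticity(rho), 1.0)
rg_homog = c2[:2] / c2[2]

assert np.allclose(rg_direct, rg_homog)  # a homography apart, as claimed
```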

3 Recoding color transfer as a color homography

Color transfer can often be interpreted as re-rendering an image with respect to real physical scene changes (e.g. from summer to autumn) and/or illumination changes. Recent work [MKL_ct] approximates the effect of global color transfer by a 3D affine mapping. We propose that, in general, we can better approximate most global color transfer algorithms as a color homography transfer. The color homography theorem shows that the same scene under an illuminant (or camera) change will result in two images a homography apart. We propose that a global color transfer can be decomposed into a linear chromaticity mapping and a shading adjustment. This concise form enables us to efficiently replicate the originally slow color transfer and re-apply it to many other images (e.g. the frames of a video fragment).

Figure 1: Pipeline of color-homography-based color transfer decomposition. The red dashed line divides the pipeline into two steps: 1) Simple homography. The rg chromaticities of the source image and the original color transfer image (by [Pitie2]) are matched according to their chromaticity locations (e.g. the green lines), from which we estimate a color homography matrix H and use H to transfer the source image. 2) Shading homography. The shadings are aligned between the simple homography result and the original color transfer result by a least-squares method. The per-pixel product of the simple homography result and the shading adjustment gives a close color transfer approximation.

Throughout the paper we denote the source image by I_s and the original color transfer result by I_t. Given I_s and I_t, the aim of color transfer decomposition is to find a general model that reproduces the color theme change from I_s to I_t. Figure 1 shows our two-step color transfer decomposition: 1) Chromaticity mapping estimation (simple homography). The source image is chromaticity-transferred by applying a color homography transform estimated from the corresponding rg chromaticities of the source image and the original color transfer result. 2) Shading adjustment estimation (shading homography). A further shading adjustment is estimated by finding a least-squares solution that aligns the shadings of the original color transfer output and the chromaticity-transferred image. These procedures are explained in detail in the following.

3.1 Color Homography Color Transfer Model

We start with the outputs of the prior-art algorithms. Assuming the source image I_s and the original color transfer result I_t are in pixel-wise correspondence, we represent their RGBs as two n × 3 matrices P and Q respectively, where n is the number of pixels and each row holds one pixel's RGB. These matrices can be reconstituted into the original image grids. The chromaticity mapping is modeled as a linear transform, but because of the relative positions of light and surfaces there might also be per-pixel shading perturbations. Assuming Lambertian image formation is an accurate physical model,

Q ≈ DPH    (2)

where D is an n × n diagonal matrix of shading factors and H is a 3 × 3 chromaticity mapping matrix. A color transfer can thus be decomposed into a diagonal shading matrix D and a homography matrix H. The homography matrix H is a global chromaticity mapping; the matrix D can be seen as a change of surface reflectance or of the position of the illuminant.

According to the color homography model, we define two color transfer decomposition models. In simple homography transfer, the output image is a homography of the input which only contains a chromaticity mapping: it preserves the shading of the source image and does not include the shading of the original color transfer result. In shading homography transfer, the output also incorporates the best shading factors which restore the shading of the original color transfer output. Solving for the homography H (and the shading matrix D) in Equation 2, the simple and shading homography transfers are defined as:

Q_simple = PH    (3)
Q_shading = DPH    (4)

where P holds the source RGBs (one pixel per row), H is the 3 × 3 chromaticity mapping and D is the diagonal shading matrix.

An example of the two color transfer models is shown in Figure 1. When recoding a color transfer, these two recoding models provide additional flexibility, as some users may only want to extract the chromaticity mapping.
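Once D and H are known, both recodings are single matrix products. Below is a minimal numpy sketch of Equations 3 and 4 on synthetic data (the random matrices are stand-ins of our own, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                                  # a tiny "image" of n pixels
P = rng.uniform(0.1, 1.0, (n, 3))      # source RGBs, one pixel per row
H = rng.uniform(0.5, 1.5, (3, 3))      # 3x3 chromaticity mapping
d = rng.uniform(0.5, 1.5, n)           # per-pixel shading factors

Q_simple = P @ H                       # Eq. (3): chromaticity mapping only
Q_shading = np.diag(d) @ P @ H         # Eq. (4): plus shading adjustment

# D rescales whole rows, so it changes shading but not rg chromaticities:
rg = lambda X: X[:, :2] / X.sum(axis=1, keepdims=True)
assert np.allclose(rg(Q_simple), rg(Q_shading))
```

The final check illustrates why the decomposition is clean: the diagonal matrix D only redistributes intensity, leaving the chromaticity mapping entirely to H.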

3.2 Chromaticity mapping estimation

In color correction, Equation 2 is solved by Alternating Least-Squares (ALS) [PICS2016, CIC2016, ALS], as illustrated in Algorithm 1.

1: i = 0, D⁰ = I, H⁰ = I;
2: repeat
3:     i = i + 1;
4:     Hⁱ = argmin_H ‖Dⁱ⁻¹PH − Q‖_F;
5:     Dⁱ = argmin_D ‖DPHⁱ − Q‖_F;
6: until ‖DⁱPHⁱ − Q‖_F converges;
Algorithm 1: Homography from alternating least-squares

The effects of the individual Dⁱ and Hⁱ are accumulated into single matrices D and H (where each product is taken by post-multiplying the matrices). ‖·‖_F denotes the Frobenius norm, and Hⁱ and Dⁱ are found using the closed-form Moore–Penrose inverse. In color transfer, we choose to minimize the rg chromaticity (i.e. normalized RG) difference between the two images, because any non-zero RGB can be mapped to the range [0, 1]. To achieve this, we modify Equation 2 as
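Algorithm 1 can be written in a few lines of numpy. The sketch below is our illustrative implementation, not the authors' released code; the iteration count, tolerance and synthetic data are assumptions. It checks that the recovered D and H reproduce Q:

```python
import numpy as np

def homography_als(P, Q, n_iter=100):
    """Alternating least-squares for Q ~ D P H (a sketch of Algorithm 1)."""
    d = np.ones(P.shape[0])                      # diagonal of D
    H = np.eye(3)
    for _ in range(n_iter):
        # Fix D, solve H by linear least-squares:
        H, *_ = np.linalg.lstsq(d[:, None] * P, Q, rcond=None)
        # Fix H, solve each shading factor in closed form (row-wise):
        PH = P @ H
        d = np.einsum('ij,ij->i', PH, Q) / np.einsum('ij,ij->i', PH, PH)
    return d, H

# Synthetic check: build Q from a known decomposition and recover it.
rng = np.random.default_rng(2)
P = rng.uniform(0.1, 1.0, (100, 3))
Q = rng.uniform(0.5, 1.5, 100)[:, None] * (P @ rng.uniform(0.5, 1.5, (3, 3)))

d, H = homography_als(P, Q)
rel_err = np.linalg.norm(d[:, None] * (P @ H) - Q) / np.linalg.norm(Q)
assert rel_err < 1e-2   # D and H jointly reproduce Q (up to a shared scale)
```

Note that D and H are only identifiable up to a shared scale factor, so the check compares the reconstruction DPH against Q rather than the factors themselves.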

H′ = argmin_H ‖h(PCᵀ)H − h(QCᵀ)‖_F    (5)

where P and Q are the source and target RGB matrices of Equation 2, C is the RGB-to-RGI conversion matrix defined in Equation 1, h() is a function that converts each RGI row to its homogeneous coordinates (dividing each row by its intensity I, which makes all elements of the third column 1), and H′ is the homography matrix that minimizes the rg chromaticity difference. In our row-vector convention, the homography matrix H for color transfer is related to H′ by H = CᵀH′(Cᵀ)⁻¹. As under-saturated pixels with zero RGBs contain no color information, they are excluded from the computation.

To reduce the computational cost, it is possible to estimate H with down-sampled images. We find that image down-sampling barely affects our chromaticity mapping quality. An example is shown in Figure 2. Depending on the content of the image, the minimum down-sampling factor for estimating H may vary.
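Because H has only nine parameters, it can be estimated from a heavily subsampled image pair and then applied at full resolution. The numpy sketch below uses a synthetic, noise-free, purely linear transfer, which is an idealised assumption of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.uniform(0.1, 1.0, (10000, 3))   # full-resolution source RGBs
Q = P @ rng.uniform(0.5, 1.5, (3, 3))   # full-resolution transfer result

# Estimate the chromaticity mapping from "thumbnails" (1 pixel in 100):
idx = np.arange(0, P.shape[0], 100)
H, *_ = np.linalg.lstsq(P[idx], Q[idx], rcond=None)

# ...and apply it at full resolution:
assert np.allclose(P @ H, Q, atol=1e-6)
```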

Figure 2: Down-sampled images for chromaticity mapping estimation (shading adjustment is still estimated using full-resolution images). The sizes of the source and target images are reduced by the corresponding factors. Image down-sampling barely affects the color transfer approximation quality: even two thumbnails are sufficient for the approximation result shown in the right-most figure.

3.3 Shading adjustment estimation

A chromaticity-transferred result may still not be close to the actual color transfer result, because color transfer methods also adjust contrast and intensity mapping. In our approximation pipeline, the shading adjustment matrix D can be obtained directly from the ALS procedure. When the chromaticity mapping matrix H is estimated from down-sampled images, however, the D estimated by ALS is not at full resolution. In this case, according to Equation 4, D can be solved for by least-squares; each diagonal element has the closed-form solution dᵢ = (qᵢ · pᵢH)/(pᵢH · pᵢH), where pᵢ and qᵢ denote the i-th rows of the source and target RGB matrices. We introduce the additional shading reproduction step as follows.

Shading adjustment reproduction – mapped shading homography

Figure 3: Shading reproduction. A brightness-to-shading mapping function f is fitted to the brightness of the simple homography result and the original shading adjustment. The function is used to reproduce a mapped shading adjustment, from which a mapped shading homography result is generated.

A universal color transfer decomposition should be compatible with any input image with a similar color theme. Although the chromaticity mapping matrix H is reusable for adjusting other images, the per-pixel shading adjustment derived from the ALS procedure only works for the source image. To resolve this issue, we propose a brightness-to-shading mapping f that reproduces the shading adjustment for any input image. The mapping is estimated by fitting a smooth curve to the per-pixel brightness and shading data. A direct fit to all data points is computationally costly. Instead, we uniformly divide the brightness range into slots and compute the center point (average brightness and shading) of each slot. The center points summarize the point distribution of each range, which reduces the amount of data for fitting. The smooth piece-wise curve f is modeled as a Piece-wise Cubic Hermite Interpolating Polynomial (PCHIP) [pchip] fitted to the summarized data sites.
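The slot-averaging and curve fit above can be sketched with scipy's PchipInterpolator; the slot count, the synthetic brightness/shading relation and the noise level below are our assumptions, chosen only for illustration:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(4)
brightness = rng.uniform(0.0, 1.0, 5000)                      # per pixel
shading = 0.5 + brightness**2 + rng.normal(0.0, 0.02, 5000)   # noisy

# Summarise the point cloud: one centre point (mean brightness, mean
# shading) per slot, so the curve is fitted to only a few data sites.
n_slots = 10
edges = np.linspace(0.0, 1.0, n_slots + 1)
slot = np.clip(np.digitize(brightness, edges) - 1, 0, n_slots - 1)
bx = np.array([brightness[slot == k].mean() for k in range(n_slots)])
by = np.array([shading[slot == k].mean() for k in range(n_slots)])

f = PchipInterpolator(bx, by, extrapolate=True)   # smooth piece-wise curve
mapped_shading = f(brightness)    # reproduces a shading for any brightness
assert abs(f(0.5) - 0.75) < 0.1   # close to the underlying 0.5 + b^2 curve
```

PCHIP is a natural choice here because it is shape-preserving: it does not overshoot between the slot centers the way an unconstrained cubic spline can.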

Directly applying the mapped shading adjustment may lead to sharp gradient artifacts, because the shading variations of some areas may not follow the global trend. When the shading is not smooth, the overall magnitude of image edges is expected to be large. Inspired by the Laplacian smoothness constraint adopted in [farbman2008edge], we resolve this by minimizing the overall magnitude of the Laplacian edges of the shading image, as shown in Equation 6:

D* = argmin_D ‖D − D_f‖²_F + λ‖L ⊛ S_D‖²_F    (6)

where the first term ensures the similarity between the optimum shading D* and the f-mapped per-pixel shading D_f (f being the brightness-to-shading curve), the second term enforces the smoothness constraint, λ is a smoothness weight with a small value, S_D is the 2D shading image (reshaped from the vector of the diagonal elements of D), and ⊛ denotes convolution by a Laplacian kernel L. See our supplementary material for the details of how to solve this minimization and determine λ adaptively. As shown in Figure 3, the shading adjustment can be reproduced according to f and the brightness of the simple homography transfer result. The result generated from the mapped shading is visually close to the original shading homography result.
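The paper defers the solver to the supplementary material; one standard way to minimise a quadratic of the form of Equation 6 is via its normal equations, (I + λLᵀL)s = s_f, with L a sparse discrete 2D Laplacian. The sketch below is our own formulation under that assumption, not the authors' solver:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def smooth_shading(S_f, lam=1.0):
    """argmin_S ||S - S_f||^2 + lam * ||L (*) S||^2 via normal equations."""
    h, w = S_f.shape
    lap1d = lambda n: sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    # 5-point 2D Laplacian assembled from 1D second differences:
    L = sp.kron(sp.eye(h), lap1d(w)) + sp.kron(lap1d(h), sp.eye(w))
    A = sp.eye(h * w) + lam * (L.T @ L)
    return spsolve(A.tocsc(), S_f.ravel()).reshape(h, w)

# A noisy shading ramp: smoothing should shrink the second differences.
rng = np.random.default_rng(5)
S_f = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
S_f = S_f + rng.normal(0.0, 0.05, S_f.shape)
S = smooth_shading(S_f)

edge_mag = lambda X: np.abs(np.diff(X, 2, axis=0)).sum() \
                   + np.abs(np.diff(X, 2, axis=1)).sum()
assert edge_mag(S) < edge_mag(S_f)   # smoother, yet close to S_f
```

The λ used here is fixed for illustration; the paper determines it adaptively (see its supplementary material).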

4 Results

We first show visual results of color transfer approximations of [Nguyen, Pitie2, Pouli, ReinhardTransfer] in Figure 4. Global 3D affine mapping [MKL_ct] does not reproduce the shading adjustments of color transfer well. Our homography-based method offers a closer color transfer approximation. In Table 1, we also quantitatively evaluate the approximation accuracy of the 3 candidates by PSNR (Peak Signal-to-Noise Ratio). Acceptable values for wireless image transmission quality loss are considered to be over 20 dB (the higher the better) [PSNR_im]. The test is based on 7 classic color transfer image pairs and 4 color transfer methods. The original shading homography produces the best result overall. Mapped shading homography also generally produces higher PSNR scores than 3D affine mapping, especially for [Pitie2, Pouli, ReinhardTransfer].

Method                    | Nguyen [Nguyen] | Pitie [Pitie2] | Pouli [Pouli] | Reinhard [ReinhardTransfer]
3D affine [MKL_ct]        | 26.85           | 26.04          | 26.92         | 28.49
Shading homography        | 31.51           | 31.06          | 36.55         | 35.48
Mapped shading homography | 27.77           | 28.16          | 31.70         | 31.18
Table 1: PSNR measurement between the original color transfer result and its approximation (see our supplementary material for the complete table and their visual results).
Figure 4: Visual results of color transfer approximations (in the order of [Pitie2], [ReinhardTransfer], [Pouli], [Nguyen]). The corresponding PSNR is shown on each approximation result. Our homography-based methods produce closer approximations to the original color transfer results.

5 Applications

In this section, we show that our color transfer decomposition can be applied to color transfer enhancement and video color grading re-application.

Color transfer with reduced artifacts

Figure 5: Imperfection fixing. The imperfections in a color transfer [Pouli] and its shading homography approximation are fixed by mapped shading adjustment.

In Figure 5, the original shading homography approximation retains the noise and over-saturation artifacts of the original color transfer result. These artifacts are fixed by the mapped shading homography.

Video color grading re-application

Figure 6: Video color grading re-application (original video from Cry ©Jeffro). The color grading profile is extracted from two pairs of image samples. Grading profile 1 is applied to the same scene. Grading profile 2 is applied to a scene different from its image samples.

Color transfer methods [Pitie2, ReinhardTransfer, Pouli, Nguyen] cannot be directly applied to video color grading, as per-frame color matching leads to temporal incoherence [farbman2011tonal, bonneel2013example]. The color homography model is a concise representation of the original complex video color grading adjustments. Compared with prior art [bonneel2013example], our model generates stable results in one go, without extra steps for removing artifacts such as flickering and bleeding. As shown in Figure 6, given two sample images profiling the desired color grading adjustment, the complex steps of video color grading can be extracted as a mapped shading homography transfer. The extracted color grading effect can also be re-applied to a different video sequence with a similar color theme (see our supplementary video for more examples).

6 Conclusion

Based on the color homography theorem, we have shown that a global color transfer can be approximated by a combination of chromaticity mapping and shading adjustment. Our experiments show that the proposed color transfer decomposition approximates many popular color transfer methods very well. We also demonstrate two applications: fixing the imperfections in a color transfer result, and video color grading re-application. We believe that this verified model of color transfer is also important for developing simple and efficient color transfer algorithms.

Acknowledgment

This work was supported by EPSRC Grant EP/M001768/1.

References