Portrait photographs sometimes acquire undesirable color casts due to reflected background color, so the skin color and brightness need to be made uniform across photographs. In such a situation, a professional photographer usually arranges the lighting conditions with special equipment, whereas an amateur must expend much more effort to achieve the same quality. This creates demand for an automatic, simple method that unifies color and brightness across multiple images without any special equipment. The undesirable color in portrait photographs is caused by the following factors.
Each camera has its own camera response sensitivity. Thus, the distribution of skin color in a color space also depends on the camera used.
Background color is usually reflected onto faces, resulting in distorted colors.
If each subject wears clothes with more than one color, color correction applied to the whole image distorts the skin color, while skin color correction discolors the clothes.
Therefore, a color correction scheme that affects only the face region is required.
In this paper, we attempt to correct images taken under various color and brightness conditions using a guide image. Our method has various applications, e.g., creating a photo name list such as a yearbook with high-quality facial images without much effort or cost. If one tries to gather photographs taken under different lighting conditions with different cameras, correcting the images becomes very difficult even with off-the-shelf image processing software. In the situation considered, we need to unify several attributes across all images: the size of the cropped facial image, the facial skin color, the brightness, and the background color. Automatic correction of these attributes requires
Face detection and facial skin color extraction.
Facial skin color correction.
A simple combination of these techniques produces boundary artifacts between the corrected and uncorrected regions because the color correction alters only the extracted region. This is described in Sec. 2.
With the algorithm described in this paper, correction of facial skin color and brightness is performed using a hybrid guided image filtering (GIF) method. Because region extraction and color transfer are separate procedures, color transfer applied to part of an image generates a color gap between the original part and the color-transformed part. To address this problem, we propose the hybrid GIF (the red box in Fig. 1), which performs color transfer and segmentation jointly. First, only the face region is extracted (Fig. 1(1)) and adjusted by color grading; then the hybrid GIF corrects the face region of the input image by using the graded image as a guide image while keeping the colors of the other regions (Fig. 1(2)). In other words, the hybrid GIF transforms part of the image as an object-based color transfer. Whereas the original GIF needs a perfectly aligned guide image, ours only needs roughly aligned pixels. For non-aligned regions, filtering is achieved by propagating information from aligned regions, as in colorization and matting. Our method carries out nonlinear correction only on the face region and achieves better results than conventional methods. A preliminary version of this study, without several improvements and new applications, appeared in conference proceedings.
The rest of this paper is organized as follows. Sec. 2 introduces related work on color correction and facial image correction. Sec. 3 describes the proposed guide facial skin color correction method. The proposed algorithm is evaluated in experiments by comparison with various conventional methods in Sec. 4. Finally, this paper is concluded in Sec. 5.
2 Related work
Reference-based color correction is one of the essential research issues in image editing. Color grading methods [23, 11, 12], which match the color attributes (tone curve and per-channel color distribution) of a target image with those of another image, are effective for color correction among multiple images. However, these techniques are global image correction operators and assume that subjects wear the same kinds of clothes. Other related techniques are image matting methods [17, 13]; with these, one can change colors by specifying desired colors at representative pixels. However, obtaining natural colorization requires a large amount of coloring information. Another possible approach is color transfer based on GIF with guide images [22, 8, 24, 14], where coloring information is given as a block in the form of a guide image. However, the guide images used in these methods are limited to perfectly aligned images without any positional gap, which is often not the case in the situation considered.
As for facial image correction methods, a wrinkle removal method has been proposed, but its purpose differs from color grading of facial skin color. Another proposed method can also transfer the style of the original image to the target one; however, it also edits the clothes and performs well only for headshot photos.
3 Proposed method
Unlike general color transfer methods, our algorithm transforms the colors of only part of an input image using the colors of a target image, and corrects the colors surrounding the transformed part. Fig. 1 illustrates the flow chart of our method. Our method mainly consists of two parts.
Face detection and facial skin color extraction (the yellow box in Fig. 1): this step detects the face part and extracts its facial skin color.
Skin color correction (the green box in Fig. 1): after extracting the skin region in Step (1), the distribution of the facial skin color is rectified using image (a), and the color of the face region is modified by using image (b) as the guide image. (We distinguish region and area: area indicates the detected face area in Sec. 3.1, while region indicates the extracted facial skin color region in Sec. 3.2.)
Note that this paper uses the two phrases, skin color and facial skin color. Skin color refers to the skin color of the whole body, while facial skin color is the color of the skin in the face region only.
The novelty of the proposed method lies in Step (2), where the color correction with grading affects only part of the image. For the other steps, such as face detection, we use conventional methods with some modifications. Each procedure is described in more detail hereafter.
3.1 Face detection
In the first step, we detect an area from head to shoulder (Fig. 2), using Haar-cascade detection in OpenCV, also known as the Viola-Jones algorithm. The detector returns a set of candidate rectangular windows, each described by its barycentric coordinates and size (Fig. 2, center); the blue boxes in Fig. 2 indicate the detected candidate rectangles. We then compute the median of the barycentric coordinates over all candidates, and the rectangle whose barycentric coordinates match the median values is adopted as the final face area. Since the size of the detected face area depends on the image, it is adjusted by a prescribed scale factor.
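The candidate-selection step can be sketched as follows; the function name, the `(x, y, w, h)` box format (as returned by OpenCV's `detectMultiScale`), and the tie-breaking rule of picking the candidate nearest the per-coordinate median center are our assumptions rather than the paper's exact specification.

```python
import numpy as np

def select_face_rect(candidates, scale=1.0):
    """Pick the candidate rectangle whose barycenter is closest to the
    per-coordinate median of all candidate centers, then rescale its size.
    `candidates` is a list of (x, y, w, h) boxes, e.g. from OpenCV's
    detectMultiScale; `scale` plays the role of the prescribed factor."""
    boxes = np.asarray(candidates, dtype=float)
    centers = boxes[:, :2] + boxes[:, 2:] / 2.0        # barycentric coordinates
    med = np.median(centers, axis=0)                   # median center
    idx = int(np.argmin(np.linalg.norm(centers - med, axis=1)))
    x, y, w, h = boxes[idx]
    cx, cy = x + w / 2.0, y + h / 2.0
    w2, h2 = w * scale, h * scale                      # adjust the area size
    return (cx - w2 / 2.0, cy - h2 / 2.0, w2, h2)
```

In practice the candidate list would come from `cv2.CascadeClassifier.detectMultiScale` on the grayscale image.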
3.2 Skin color extraction
Rough extraction of the skin color region near the face is performed by a common approach using morphological labeling in the image space and clustering in the color space:
We classify the color of each pixel by clustering the color distribution of the entire image in the HSV color space, allocating to each pixel the label of the cluster it belongs to.
Connected regions are formed by merging neighboring pixels with the same label in the image space, and the regions lying mainly inside the detected face area (Eq. (2)) are extracted and assigned to the facial skin region.
Note that, since the skin color in the detected face area (Sec. 3.1) is known, satisfactory results may be obtained with simpler approaches, such as the k-means method with stable seed cluster selection (i.e., k-means++), for clustering the color distributions.
In this paper, for simplicity, we apply k-means clustering to the 1-D distribution of the hue values of the entire image and perform the above-mentioned procedures (i) and (ii) to detect the face region. We then threshold the saturation and value components of every pixel in the region. The skin color region and the skin color condition at each pixel are defined experimentally, with the median saturation in the face area used as the reference value. The saturation and value are normalized, and the threshold values are chosen so as to extract the facial skin color region well on our dataset, which is available on the project page. The number of clusters depends on the photographic environment and is fixed experimentally in this paper.
For our guide image filtering, we define two auxiliary regions using the dilation function, a type of morphological operation: the skin region dilated with a structuring element consisting of a disk with a 20-pixel radius, and the set difference between the dilated region and the original region (the overline denotes the set complement). An example of each region can be seen in Fig. 3.
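The region construction can be sketched with a brute-force disk dilation; production code would use `cv2.dilate` or `scipy.ndimage.binary_dilation`, and the region names here follow our reading of the text rather than the paper's notation.

```python
import numpy as np

def disk_dilate(mask, radius):
    """Morphological dilation of a boolean mask with a disk structuring
    element, via a brute-force distance check (fine for illustration)."""
    H, W = mask.shape
    yy, xx = np.mgrid[0:H, 0:W]
    out = np.zeros_like(mask)
    for y, x in zip(*np.nonzero(mask)):
        out |= ((yy - y) ** 2 + (xx - x) ** 2) <= radius ** 2
    return out

def guide_regions(skin_mask, radius=20):
    """Regions used by the guide filtering: the skin region dilated with a
    disk, and the boundary band obtained as the set difference between the
    dilation and the original mask."""
    dilated = disk_dilate(skin_mask, radius)
    band = dilated & ~skin_mask          # set difference: dilation minus mask
    return dilated, band
```
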
3.3 Color grading
Color grading (Fig. 4) is performed so as to bring the skin color of the extracted region (more specifically, the shape of its color distribution) close to the target facial skin color in Fig. 4(a). (Note that here we use the RGB color space instead of the HSV color space used in the preceding section, because the color grading method was developed for the RGB color space and the color-line image feature used in the next section is defined in RGB.) Color grading transforms the shapes of color distributions in the 3-D (r, g, b) coordinates. One might use simpler techniques, such as estimating a tone curve for each RGB component; however, most such methods need pixel correspondences between the two images, which is not suitable in our case, where colors are transferred between the faces of different persons. We therefore use Pitié et al.'s method, which does not require pixel correspondences. This method considers the width of the color distribution along a particular axis and iteratively reshapes it to match the target distribution along that axis, realizing nonlinear reshaping of the color distribution. Since the method yields some artifacts, such as blurring around edges, we need to address this problem, as described hereafter.
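A simplified stand-in for this axis-wise reshaping is the following sliced quantile-matching iteration; Pitié et al.'s actual algorithm uses structured axis selection and histogram-based 1-D matching, so this is only an illustrative sketch.

```python
import numpy as np

def match_1d(ps, pt):
    """Map the 1-D samples ps so their distribution matches pt
    (rank-based quantile mapping)."""
    ranks = np.argsort(np.argsort(ps)) / max(len(ps) - 1, 1)
    return np.interp(ranks, np.linspace(0.0, 1.0, len(pt)), np.sort(pt))

def color_distribution_transfer(src, tgt, n_iter=30, seed=0):
    """Iteratively reshape the (N, 3) source color cloud toward the target
    cloud: project both onto a random axis, match the 1-D marginals, and
    apply the correction along that axis.  A simplified stand-in for
    Pitie et al.'s method, not their exact algorithm."""
    rng = np.random.default_rng(seed)
    out = src.astype(float).copy()
    for _ in range(n_iter):
        a = rng.normal(size=3)
        a /= np.linalg.norm(a)                 # random projection axis
        proj = out @ a
        out += np.outer(match_1d(proj, tgt @ a) - proj, a)
    return out
```

After enough iterations the source cloud's marginals match the target's along many axes, which reshapes the full 3-D distribution nonlinearly.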
3.4 Guide image filtering via optimization
This section formulates our GIF, which uses a guide image to correct the input image, as an optimization-based method. Our energy function partly resembles that of the image matting method because we focus on and adopt its data fidelity term (also known as the local linear model), but the design objective is different. Additionally, a similar method has been proposed for high dynamic range imaging.
Our guide image filtering reconstructs an image in which the facial skin color region has the corrected color, using the input image and the color grading result. In the RGB color space, at each pixel we consider the pixel value of the color-corrected image to be solved, that of the input image, and that of the guide image given by the color grading in Sec. 3.3; Fig. 5 shows each image. Using them, we formulate our guide image filtering as the following convex optimization problem
Here, a scaling matrix and an offsetting vector approximate the mapping from the input to the output within a square window around each pixel, and they are given by
The terms in the top row correspond to the data fidelity (measured with vector and Frobenius norms), which reflects the textures and local contrasts of the input image onto the result. The second and third terms are constraints expressed by the following indicator functions:
The second term brings the facial skin color close to that of the guide image in the facial skin color region, and the third term keeps the color of the background the same as in the original image. To reduce undesirable artifacts arising from the guide image, we add a constraint that bounds the color difference at each pixel.
Note that we purposely express the second and third terms as constraints rather than as regularizers; e.g., the second term could be replaced with a regularizer weighted by a balancing parameter. The constraint bound can be controlled more intuitively than such a parameter, and it can be changed adaptively depending on the area of the region (see Sec. 4). Such advantages of the constrained formulation have been addressed in the literature on image restoration based on convex optimization [7, 9, 1, 28, 19, 6, 20].
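For intuition about the local linear model underlying the data-fidelity term, here is the single-channel guided filter of He et al. in its closed (non-optimization) form; it is not the solver for Eq. (5), and the box-filter implementation is deliberately naive.

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window with edge-replicating padding
    (a simple O(N r^2) sliding mean, fine for a sketch)."""
    padded = np.pad(img, r, mode='edge')
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """Single-channel guided filter: in each window fit q ~ a*I + b by
    linear regression, then average the per-window (a, b).  This is the
    local linear model that our data-fidelity term builds on."""
    mI, mp = box(I, r), box(p, r)
    cov = box(I * p, r) - mI * mp          # window covariance of (I, p)
    var = box(I * I, r) - mI * mI          # window variance of I
    a = cov / (var + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)
```

With the guide equal to the input and a tiny `eps`, the filter approaches the identity, which is the behavior the fidelity term preserves.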
Among convex optimization algorithms, we adopt a monotone version of the fast iterative shrinkage-thresholding algorithm (MFISTA) to solve (5) because it is a first-order method with a sublinear global rate of convergence, i.e., it is computationally efficient and fast. The algorithm solves an optimization problem of the form:
where the smooth part is a differentiable convex function with a Lipschitz continuous gradient and the other part is a proper lower semicontinuous convex function.
Problem (8) is solved with iterations involving the proximity operator, which for a proper lower semicontinuous convex function $g$ and a step size $\gamma > 0$ is defined by
$$\operatorname{prox}_{\gamma g}(\mathbf{x}) := \arg\min_{\mathbf{y}}\; g(\mathbf{y}) + \frac{1}{2\gamma}\|\mathbf{x}-\mathbf{y}\|^2.$$
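A generic MFISTA loop built on this proximity operator can be sketched as follows; the function names and the objective/Lipschitz-constant interface are our own, not the paper's.

```python
import numpy as np

def mfista(grad_f, prox_g, F, x0, L, n_iter=100):
    """Monotone FISTA (Beck & Teboulle): a gradient step on the smooth part
    f, a proximity step on g, and momentum applied to the better of the new
    and previous iterates.  F evaluates the full objective; L is a Lipschitz
    constant of grad_f."""
    x = x0.copy()          # best iterate so far (x_{k-1})
    y = x0.copy()          # extrapolated point
    t = 1.0
    for _ in range(n_iter):
        z = prox_g(y - grad_f(y) / L, 1.0 / L)     # forward-backward step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        x_next = z if F(z) <= F(x) else x          # monotone choice
        y = x_next + (t / t_next) * (z - x_next) \
            + ((t - 1.0) / t_next) * (x_next - x)  # momentum update
        x, t = x_next, t_next
    return x
```

As a sanity check, minimizing a quadratic subject to a nonnegativity constraint (whose prox is clipping) recovers the projected solution.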
To apply MFISTA to our problem (5), we take the data fidelity term as the smooth part; its gradient involves the mean value vectors of each color in a square window, the number of pixels in the window, the covariance matrix of the colors in the window, and an identity matrix.
The proximity operator in (9) involves two functions, the second and third terms of (5), which handle disjoint regions. Since the regions do not overlap, the proximity operator separates over them and can be calculated region-wise. This process corresponds to a ball projection for the color-difference bound combined with the region constraint.
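The region-wise proximity step can be sketched as below; treating the color-difference bound as a per-pixel l2-ball around the guide color is our assumption for the elided norm, and the function name is ours.

```python
import numpy as np

def prox_region_ball(x, guide, original, skin_mask, eps):
    """Region-wise proximity step: inside the skin region, project each
    pixel's color onto an l2-ball of radius eps centered at the guide
    color; outside, pin pixels to the original image.  x, guide, original
    are (H, W, 3) arrays; skin_mask is (H, W) boolean."""
    d = x - guide
    norm = np.linalg.norm(d, axis=-1, keepdims=True)
    scale = np.minimum(1.0, eps / np.maximum(norm, 1e-12))
    proj = guide + d * scale                  # per-pixel ball projection
    out = x.copy()
    out[skin_mask] = proj[skin_mask]
    out[~skin_mask] = original[~skin_mask]
    return out
```
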
Finally, we iterate the update procedure in (9), and the resulting solution becomes our GIF result.
4 Results and discussion
In this section, we show the results obtained through the proposed process. (The experiments test our algorithm on various image sets, which are available from our website http://vig.is.env.kitakyu-u.ac.jp/MIF/.) The range of RGB values is normalized. The prescribed scale factor in Sec. 3.1 and the filter window sizes used in Sec. 3.4 and Appendix A are set experimentally, and the color-difference bounds depend on the number of pixels contained in the facial skin color region. The maximum number of MFISTA iterations is set to 500.
Our filtering often flattens the gradation caused by shadows in the input image, which yields unnatural results, as shown at the top of Fig. 6, where the input image is fed to our algorithm and the filtered image is the result of our guide image filtering. For luminance correction, each pixel color value is decomposed into a color component and an intensity component as follows, where the superscripts indicate the color components; this decomposition procedure is the same as in previous work.
The input image and the filtered image are decomposed into the two components by (15), and then the intensity component of the input image and the color component of the filtered image are combined as follows
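The decomposition and recombination can be sketched as follows; taking the channel mean as the intensity component is our assumption for the elided definition, and the function names are ours.

```python
import numpy as np

def decompose(img):
    """Split an RGB image into an intensity component (channel mean, our
    assumed definition) and a chromatic component (per-channel ratio to
    the intensity)."""
    intensity = img.mean(axis=-1, keepdims=True)
    color = img / np.maximum(intensity, 1e-6)
    return intensity, color

def luminance_corrected(inp, filtered):
    """Keep the input image's intensity (shading and gradation) but take
    the filtered image's color component, then recombine."""
    i_in, _ = decompose(inp)
    _, c_filt = decompose(filtered)
    return i_in * c_filt
```

By construction the recombined image inherits the input's per-pixel intensity, so shadow gradation is preserved while the corrected colors are kept.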
Figure 6 shows this procedure and its effectiveness.
Figure 8 (hereafter, indexes are swapped due to figure positions) shows the results of our skin color correction. The area around the face in the input image has color distortion due to the lighting conditions and background color, whereas our result has a white-balanced facial skin color similar to the target facial skin color. When a photograph is taken in dark surroundings, the result often has unnatural face colors due to the camera flash (Fig. 8(b)). Hence, we apply our proposed method to flash images taken in dark surroundings to reduce the undesirable effects of artificial light. Figure 8(c) shows the flash image editing results using our method; the unnatural colors of the original image are corrected to natural colors.
4.1 Automatic yearbook style photo generation
This section presents automatic yearbook style photo generation using our guided facial skin correction method together with pre- and post-processing procedures. Manually processing a large number of images takes a long time, whereas our algorithm generates a yearbook style photo in a short time.
We first crop the photo with our face detection procedure as pre-processing (the red box in Fig. 9) and then correct the facial skin color. Finally, the noisy background is replaced with a clean background as post-processing (the blue boxes in Fig. 9).
4.1.1 Face area cropping
In Sec. 3.1, we detect the face area of a photo. We then unify the sizes of the cropped images by expansion and reduction; the crop size is roughly adjusted according to the image size, and the image size after cropping and resizing is fixed in this experiment.
4.1.2 Background replacement by alpha blending
We separate the image information into foreground and background regions and assign a blending value to each pixel. Using these values as labels, the relationship between the foreground $\mathbf{F}$, the background $\mathbf{B}$ (different from that in (5)), and the original image $\mathbf{I}$ at each pixel $p$ is given by
$$\mathbf{I}_p = \alpha_p \mathbf{F}_p + (1-\alpha_p)\,\mathbf{B}_p.$$
The label $\alpha_p$ at each pixel is the blending rate, called the alpha-mat. Replacement of the background with another background $\tilde{\mathbf{B}}$ is performed by
$$\hat{\mathbf{I}}_p = \alpha_p \mathbf{F}_p + (1-\alpha_p)\,\tilde{\mathbf{B}}_p.$$
An estimation of the alpha-mats is described in Appendix A.
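The background replacement by alpha blending can be sketched as below; using the original image directly as the foreground plate (instead of an explicitly estimated foreground) is a common simplification and our assumption here.

```python
import numpy as np

def replace_background(image, alpha, new_bg):
    """Composite the soft foreground of `image` over a new background:
    out = alpha * image + (1 - alpha) * new_bg, with the alpha-mat
    broadcast over the color channels."""
    a = alpha[..., None] if alpha.ndim == image.ndim - 1 else alpha
    return a * image + (1.0 - a) * new_bg
```

Because the alpha-mat is soft (real values in [0, 1]), boundary pixels blend the two images smoothly instead of producing hard cut-out edges.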
Panels: (a) Original, (b) Guide, (c) Colorization, (d) JBU, (e) Ours.
Figure 10 shows the results of our automatic yearbook style photo generation. Our algorithm generates the yearbook style photos from the original images using the target image.
We implement the whole algorithm in MATLAB and OpenCV (C++), and the total execution time is within 11 s on a 3.20 GHz Core i5 CPU: face detection (Sec. 3.1) takes 5 s, facial color extraction (Sec. 3.2) 1 s, color grading (Sec. 3.3) 2 s, GIF (Sec. 3.4) 2 s, and matting (Sec. 4.1.2) 1 s.
4.2 Comparison with various conventional methods
Panels: (a) Target, (b) Original, (c)–(e) conventional methods, (f) Ours.
Figure 11 compares our method with the colorization [16] and joint bilateral upsampling (JBU) [15] methods. For these methods, we process each RGB color layer, because they were proposed for the color components of the YUV color space. Since colorization [16] just spreads colors to boundaries, its result lacks the details of the original image. JBU [15] computes pixel values in hole regions by joint filtering, and a contrast reduction can be seen in the top and middle rows of Fig. 11. The blue and red arrows indicate the artifacts of each method. Meanwhile, our guide image filtering outputs the sharp details of the original image while keeping the guide image color.
Panels: (a) Target, (b) Each region, (c) Source, (d) Each region, (e) Result.
Figure 12 shows a comparison with existing color transfer methods. For results (c), Pitié et al.'s method transforms the color distribution of the input image into that of the target image, and the gain noise artifacts of the color-transformed image are then removed by Rabin et al.'s method. In the results of the color grading method, the face and the background of the resultant image have a similar color, since it is a uniform color transfer method; in addition, it takes a long time because it needs iterative bilateral filtering. For the non-rigid dense correspondence (NRDC) method and Park et al.'s method, the improvement in facial color is small, and the regions around the clothes are discolored. In our method (e), by contrast, color correction is successfully performed over the entire face, and the results look more natural. From the above results, the proposed process gives satisfactory results despite being fully automatic.
Figure 13 shows a comparison between a plain background replacement result for the original image and Shih et al.'s method. The plain background replacement result has a distorted facial skin color due to the background color. Shih et al.'s method transfers a style from an input image to a target image; although the facial skin color of its result is almost the same as that of the target image, the method cannot correct the skin color of a person wearing glasses to that of the target image. Our algorithm generates an image with the target facial skin color even if the person wears glasses.
4.3 Semi-automatic color correction
Our method can also correct other images, such as animal photographs. Our color correction method requires specified regions: one as foreground and one as background. Many methods have been proposed to detect face regions, but accurate object detection in natural scenes is still a challenging problem.
For color correction in natural scenes, such as Fig. 14(c), we draw each region manually, as in (b) and (d), where the colors indicate the regions corresponding to Fig. 3. Our method then adjusts the object color of the source image to that of the target automatically. Figure 14(e) shows natural scene color correction results, and the red box shows that our method corrects the colors of the main object without boundary artifacts. By incorporating an automatic object detection method, this application could be made fully automatic.
Panels: (a) Original images, (b) Extracted skin, (c) Result.
Figure 15 shows example images where our method does not work well. The face and the hair have similar colors; hence, the facial skin color extraction process captures regions other than the face region (Fig. 15(b)). As a result, our method outputs an image in which the face and hair have a similar color (Fig. 15(c)). The facial skin color extraction process seldom captures other regions, even if the subject does not have light skin, but when it does, the guide image filtering assigns the same color to the face and the other extracted regions.
5 Conclusion
This paper has presented a guided facial skin color correction method. The color grading method used in the procedure can be regarded as correction guided by a target (guide) image in a color space, whereas our GIF can be regarded as correction guided by a guide image in the image space. In future work, we will consider combining both methods to extend the range of constraint expressions for color correction so as to obtain more natural results.
Appendix A: Fore/Background segmentation by matting
To obtain alpha-mats that give natural blending results, we employ a closed-form matting method, because its algorithm is also used in Sec. 3.4 and can be reused. The method is based on continuous optimization, in which labels are obtained as real numbers (soft labels) in the range [0, 1]. However, this method requires seeds (user-assisted information) for the fore/background regions around their boundaries. To avoid user assistance, we also employ a region growing approach used in an earlier matting method and perform matting iteratively. The details are shown in Fig. 16 and described as follows:
As the initial foreground, in addition to the skin color region, the hair region above the face (a large black region is roughly selected) and the clothes region below the face are used. As the initial background, two rectangular regions on the left and right sides of the face are used.
Matting with the preconditioned conjugate gradient method is performed, and a soft label is obtained at each pixel.
Pixels that are strongly regarded as foreground or background (i.e., whose soft labels are close to 1 or 0) are added to the corresponding initial regions generated in (a) for the next iteration.
Steps (b) and (c) are repeated a few times (4 times in our experiment), and the radius of the window is halved at each repetition to implement a coarse-to-fine approach.
To reduce neutral values and push them toward 0 or 1, a sigmoid function is applied to the alpha-mat.
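The enhancement can be sketched as below; the gain value and the centering at 0.5 are illustrative choices, since the paper's exact sigmoid parameters are not given here.

```python
import numpy as np

def sharpen_alpha(alpha, gain=10.0):
    """Push soft alpha values toward 0 or 1 with a logistic sigmoid
    centered at 0.5; `gain` (an assumed parameter) controls how hard
    neutral values are suppressed."""
    return 1.0 / (1.0 + np.exp(-gain * (alpha - 0.5)))
```

Values at exactly 0.5 are unchanged, while values closer to either extreme are pushed further toward it.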
-  M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo. An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE Trans. Image Process. (TIP), 20(3):681–695, Mar. 2011.
-  David Arthur and Sergei Vassilvitskii. K-means++: The advantages of careful seeding. In Proc. Annual ACM-SIAM Symp. Discrete Algorithms, pages 1027–1035. Society for Industrial and Applied Mathematics, 2007.
-  Tatsuya Baba, Paul Perrotin, Yusuke Tatesumi, Keiichiro Shirai, and Masahiro Okuda. An automatic yearbook style photo generation method using color grading and guide image filtering based facial skin color correction. In Proc. IAPR Asian Conf. Pattern Recognit. (ACPR), pages 1–6, 2015.
-  Nazre Batool and Rama Chellappa. Detection and inpainting of facial wrinkles using texture orientation fields and Markov random field modeling. IEEE Trans. Image Process. (TIP), 23(9):3773–3788, 2014.
-  Amir Beck and Marc Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. (TIP), 18(11):2419–2434, 2009.
-  G. Chierchia, N. Pustelnik, J.-C. Pesquet, and B. Pesquet-Popescu. Epigraphical projection and proximal tools for solving constrained convex optimization problems. Signal, Image Video Process., 9(8):1737–1749, Nov. 2015.
-  P. L. Combettes and J. C. Pesquet. Image restoration subject to a total variation constraint. IEEE Trans. Image Process. (TIP), 13(9):1213–1222, Sept. 2004.
-  Elmar Eisemann and Frédo Durand. Flash photography enhancement via intrinsic relighting. ACM Trans. Graph. (TOG), 23(3):673–678, 2004.
-  J. M. Fadili and G. Peyré. Total variation projection with first order schemes. IEEE Trans. Image Process. (TIP), 20(3):657–669, Mar. 2011.
-  Peter Vincent Gehler, Carsten Rother, Andrew Blake, Tom Minka, and Toby Sharp. Bayesian color constancy revisited. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 1–8, 2008.
-  Yoav HaCohen, Eli Shechtman, Dan B Goldman, and Dani Lischinski. Non-rigid dense correspondence with applications for image enhancement. ACM Trans. Graph. (TOG), 30(4):70:1–70:10, 2011.
-  Yoav HaCohen, Eli Shechtman, Dan B. Goldman, and Dani Lischinski. Optimizing color consistency in photo collections. ACM Trans. Graph. (TOG), 32(4):38:1–38:10, 2013.
-  Kaiming He, Jian Sun, and Xiaoou Tang. Fast matting using large kernel matting Laplacian matrices. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 2165–2172, 2010.
-  Kaiming He, Jian Sun, and Xiaoou Tang. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intelli. (TPAMI), 35(6):1397–1409, 2013.
-  Johannes Kopf, Michael F. Cohen, Dani Lischinski, and Matt Uyttendaele. Joint bilateral upsampling. ACM Trans. Graph. (TOG), 26(3):96:1–96:5, 2007.
-  Anat Levin, Dani Lischinski, and Yair Weiss. Colorization using optimization. ACM Trans. Graph. (TOG), 23(3):689–694, 2004.
-  Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intelli. (TPAMI), 30(2):228–242, 2008.
-  Stuart P. Lloyd. Least squares quantization in PCM. IEEE Trans. Info. Theory, 28(2):129–137, 1982.
-  Shunsuke Ono and Isao Yamada. Second-order total generalized variation constraint. In Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), pages 4938–4942, May 2014.
-  Shunsuke Ono and Isao Yamada. Signal recovery with certain involved convex data-fidelity constraints. IEEE Trans. Signal Process., 63(22):6149–6163, Nov. 2015.
-  Jaesik Park, Yu-Wing Tai, Sudipta Sinha, and In So Kweon. Efficient and robust color consistency for community photo collections. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 430–438, 2016.
-  Georg Petschnigg, Richard Szeliski, Maneesh Agrawala, Michael Cohen, Hugues Hoppe, and Kentaro Toyama. Digital photography with flash and no-flash image pairs. ACM Trans. Graph. (TOG), 23(3):664–672, 2004.
-  François Pitié, Anil C. Kokaram, and Rozenn Dahyot. Automated colour grading using colour distribution transfer. Comput. Vis. Image Underst., 107(1–2):123–137, 2007.
-  Julien Rabin, Julie Delon, and Yann Gousseau. Regularization of transportation maps for color and contrast transfer. In Proc. IEEE Int. Conf. Image Process. (ICIP), pages 1933–1936, 2010.
-  Qi Shan, Jiaya Jia, and Michael S. Brown. Globally optimized linear windowed tone mapping. IEEE Trans. Vis. Comput. Graph., 16(4):663–675, 2010.
-  YiChang Shih, Sylvain Paris, Connelly Barnes, William T. Freeman, and Frédo Durand. Style transfer for headshot portraits. ACM Trans. Graph. (TOG), 33(4):148:1–148:14, July 2014.
-  Jian Sun, Jiaya Jia, Chi-Keung Tang, and Heung-Yeung Shum. Poisson matting. ACM Trans. Graph. (TOG), 23(3):315–321, 2004.
-  T. Teuber, G. Steidl, and R. H. Chan. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints. Inverse Problems, 29(3), 2013.
-  Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), volume 1, pages I–511–I–518, 2001.