Epitome for Automatic Image Colorization

10/08/2012 · by Yingzhen Yang, et al.

Image colorization adds color to grayscale images. It not only increases the visual appeal of grayscale images, but also enriches the information contained in scientific images that lack color information. Most existing methods of colorization require laborious user interaction for scribbles or image segmentation. To eliminate the need for human labor, we develop an automatic image colorization method using epitome. Built upon a generative graphical model, epitome is a condensed image appearance and shape model which also proves to be an effective summary of color information for the colorization task. We train the epitome from the reference images and perform inference in the epitome to colorize grayscale images, rendering better colorization results than a previous method in our experiments.


1 Introduction

Colorization adds color to grayscale images by assigning color values to images which only contain a grayscale channel. It not only increases the visual appeal, but also enhances the information conveyed by scientific images. For example, the grayscale images acquired by scanning electron microscopy (SEM) can be made more illustrative by adding different colors to different parts of the images. However, manual colorization is tedious and time-consuming, so it is not suitable for batch processing. To overcome this problem, we propose an automatic colorization method by epitome. Figure 4 shows the colorization result for the Nano Mushroom-like image. We train the epitome from one manually colorized Nano Mushroom-like image, and use that epitome to automatically colorize the other Nano Mushroom-like image, which eliminates the need for human labor and makes batch colorization possible.

Based on the source of the color information used to colorize the grayscale images, existing colorization techniques fall into two main categories: user scribble based methods and color transfer methods. The user scribble based method in [8] asked users to draw color scribbles in the grayscale image, and the algorithm propagated the user-provided color to the whole image under the constraint that similar neighboring pixels should receive similar colors. Later, Luan et al. [9] proposed a method which required less human intervention: user scribbles were employed for texture segmentation, and the user-provided color was propagated within each segment. Using a similar color image as a reference, color transfer methods such as [11] performed colorization by transferring the color from the reference image to the grayscale image, either automatically or with user intervention. However, the pixel-level matching based on luminance value and neighborhood statistics adopted by [11] suffered from spatial inconsistency, and user-provided swatches were required to guide the matching process in many cases. [5] improved the spatial consistency by an image space voting scheme. Their method first transferred color to a few pixels in the target image with high confidence, then applied the method in [8] to colorize the whole image, treating the pixels colorized in the first step as the scribbles. However, their method required a robust segmentation of the reference image, which was difficult in many cases without user intervention.

Similar to [11], our automatic colorization method transfers the color information from the reference image to the target grayscale image. Since most existing colorization methods need user interaction for color selection or segmentation, a robust and automatic colorization algorithm is preferable. In order to approach this problem, it is worthwhile to exploit the biological characteristics of the human visual system. The average human retina contains many more rods than cones [3] (92 million rods versus 4.6 million cones). Rods are more sensitive to light than cones, but they are not sensitive to color, so most visually significant variation arises only from luminance differences. This fact suggests that we do not need to search the whole reference image for the color patches used to colorize the target image; instead we can reduce the search space for color patches, or equivalently find an effective color summary of the reference image, to improve the efficiency and alleviate color assignment ambiguity. In [11], such a summary is a set of randomly sampled source color pixels, which is, however, subject to noise in the raw pixels.

In order to find an effective and compact summary of the color information in the reference image, we adopt the condensed image appearance and shape representation, i.e. the epitome [6]. Epitome consolidates self-similar patches in the spatial domain, and the size of the epitome is much smaller than that of the image it models. By virtue of its generative graphical model, epitome can be interpreted as a tradeoff between a template and a histogram for image representation, and it has been applied to many computer vision tasks such as object detection, location recognition and synthesis [10, 2]. Epitome summarizes a large number of raw patches in the reference image by representing only the most constitutive elements. In our epitomic colorization scheme, the color patches used to colorize the target grayscale image are retrieved from the epitome trained on the reference image, rather than from the raw image patches. Epitome proves to be an effective summary of the color information in the reference image, which produces more satisfactory colorization results than [11] in our experiments.

The paper is organized as follows. Section 2 describes the process of automatic colorization by epitome, together with the detailed formulation of training the epitome and of inference in the epitome graphical model: in particular, how the epitome summarizes the raw image patches of the reference image into a condensed representation, and how inference is performed in the epitome to automatically colorize the target grayscale image. Section 3 shows the colorization results, and we conclude the paper in Section 4.

2 Formulation

2.1 Description of Automatic Colorization by Epitome

Given a reference color image $R$ and a target grayscale image $G$, we aim to automatically colorize $G$ with the color information from $R$. We achieve this goal by first training an epitome $e$ from the reference image $R$, then performing inference in $e$ so as to transfer the color information of the color patches of $R$ to the corresponding grayscale patches of $G$. Note that the grayscale channel of $G$ is retained as the luminance channel after the color transfer process. We illustrate the training and inference processes in detail in the following subsections.

2.2 Training the Epitome

Epitome is a latent representation of an image, which comprises the hidden variables and parameters required to generate the image patches according to the epitome graphical model. Epitome summarizes a large set of raw image patches into a condensed representation whose size is much smaller than that of the original image, and it approaches this goal in a manner similar to a Gaussian Mixture Model with overlapping means and variances.

The epitome $e$ of an image of size $M \times N$ is a condensed representation of size $M_e \times N_e$, where $M_e < M$ and $N_e < N$. The epitome contains two parameters: $e = (\mu, \phi)$. $\mu$ and $\phi$ represent the Gaussian mean and variance respectively, and both of them are of size $M_e \times N_e$. Suppose $P$ patches $\{Z_k\}_{k=1}^{P}$ are sampled from the reference image $R$, and each patch $Z_k$ contains pixels with image coordinates $S_k$. Similar to [6], the patches are square and we use a fixed patch size throughout this paper. These patches are densely sampled and they can overlap with each other to cover the entire image. We associate each patch $Z_k$ with a hidden mapping $T_k$ which maps the image coordinates $S_k$ to the epitome coordinates, and all the patches are generated independently from the epitome parameters and the corresponding hidden mappings as below:

$$p(Z_k \mid T_k, e) = \prod_{i \in S_k} \mathcal{N}\left(z_{i,k};\, \mu_{T_k(i)},\, \phi_{T_k(i)}\right) \qquad (1)$$

and

$$p\left(\{Z_k\}_{k=1}^{P} \mid e\right) = \prod_{k=1}^{P} \sum_{T_k} p(T_k)\, p(Z_k \mid T_k, e) \qquad (2)$$

where $z_{i,k}$ is the pixel with image coordinates $i$ from the $k$-th patch. Since the distribution of $T_k$ is independent of the patch index $k$, we simply denote it as $T$ in the following text. $\mathcal{N}(z; \mu, \phi)$ represents a Gaussian distribution with mean $\mu$ and variance $\phi$.
Figure 1: The mapping $T_k$ maps the image patch $Z_k$ to its corresponding epitome patch of the same size, and $Z_k$ can be mapped to any possible epitome patch according to $T_k$.
Figure 2: The epitome graphical model

Based on (1), the hidden mapping $T_k$ can be interpreted as a hidden variable that indicates the location of the epitome patch from which the observed image patch is generated, and it behaves similarly to the hidden variable in traditional Gaussian mixture models that specifies the Gaussian component from which a specific data point is generated. Also, $T_k$ maps the image patch $Z_k$ to its corresponding epitome patch, and the number of possible mappings that each $T_k$ can take, denoted as $L$, is determined by all the discrete locations in the epitome ($L = M_e N_e$ in our setting). Figure 1 illustrates the role that the hidden mapping variables play in the generative model, and Figure 2 shows the epitome graphical model, which again demonstrates its similarity to Gaussian mixture models. $p(T_k)$ indicates the prior distribution of the hidden mapping. Suppose $T_j$ is the $j$-th mapping that $T_k$ can take; then

$$p(T_k = T_j) = \pi_j, \quad 1 \le j \le L,$$

which holds for any $k$, i.e. the prior is shared by all patches. $\delta(\cdot)$ is an indicator function that equals $1$ when its argument is true, and $0$ otherwise.
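As an illustration, the per-patch likelihood in Eqs. (1) and (2) can be evaluated for every candidate mapping. The sketch below is our own illustrative code, not the authors' implementation: it assumes a single-channel epitome stored as NumPy arrays of means and variances, and it restricts mappings to positions where the patch fits entirely inside the epitome; the function name `patch_log_likelihood` is hypothetical.

```python
import numpy as np

def patch_log_likelihood(z, mu, phi):
    """log p(z | T, e) for every candidate mapping T (cf. Eq. (1)).

    z   : (p, p) single-channel patch
    mu  : (Me, Ne) epitome means
    phi : (Me, Ne) epitome variances
    Returns ll where ll[r, c] is the log-likelihood of generating z from
    the epitome patch whose top-left corner is (r, c).
    """
    p = z.shape[0]
    Me, Ne = mu.shape
    ll = np.empty((Me - p + 1, Ne - p + 1))
    for r in range(Me - p + 1):
        for c in range(Ne - p + 1):
            m = mu[r:r + p, c:c + p]
            v = phi[r:r + p, c:c + p]
            # sum over pixels i of log N(z_i; mu_{T(i)}, phi_{T(i)})
            ll[r, c] = -0.5 * np.sum(np.log(2 * np.pi * v) + (z - m) ** 2 / v)
    return ll
```

In practice the epitome mapping space allows wraparound; the bounded version above keeps the indexing easy to follow.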

Our goal is to find the epitome $e$ that maximizes the log-likelihood function:

$$\hat{e} = \operatorname*{argmax}_{e} \sum_{k=1}^{P} \log \sum_{j=1}^{L} p(T_k = T_j)\, p(Z_k \mid T_k = T_j, e) \qquad (3)$$

Given the epitome $e$, the likelihood function for the complete data, i.e. the image patches $\{Z_k\}$ and the hidden mappings $\{T_k\}$, is derived below according to the epitome graphical model:

$$p\left(\{Z_k\}, \{T_k\} \mid e\right) = \prod_{k=1}^{P} p(T_k)\, p(Z_k \mid T_k, e) \qquad (4)$$

We use the Expectation-Maximization (EM) algorithm [4] to maximize the likelihood function (3) and learn the epitome $e$, following the procedure introduced in [1].

The E-step: The posterior distribution of the hidden variables, i.e. the hidden mappings, is

$$q(T_k = T_j) = p(T_k = T_j \mid Z_k, e) = \frac{p(T_k = T_j)\, p(Z_k \mid T_k = T_j, e)}{\sum_{j'=1}^{L} p(T_k = T_{j'})\, p(Z_k \mid T_k = T_{j'}, e)} \qquad (5)$$

We observe that $q(T_k = T_j)$ corresponds to the responsibility in Gaussian mixture models.

The M-step: We obtain the expectation of the log-likelihood function for the complete data with respect to the posterior distribution of the hidden mappings from the E-step as below:

$$Q(e) = \sum_{k=1}^{P} \sum_{j=1}^{L} q(T_k = T_j) \log\left[ p(T_k = T_j)\, p(Z_k \mid T_k = T_j, e) \right] \qquad (6)$$

Maximizing (6) with respect to the epitome parameters $\mu$, $\phi$ and the prior $\pi$, we get the following updates:

$$\mu_n = \frac{\sum_{k=1}^{P} \sum_{j=1}^{L} q(T_k = T_j) \sum_{i \in S_k} \delta\left(T_j(i) = n\right)\, z_{i,k}}{\sum_{k=1}^{P} \sum_{j=1}^{L} q(T_k = T_j) \sum_{i \in S_k} \delta\left(T_j(i) = n\right)} \qquad (7)$$

$$\phi_n = \frac{\sum_{k=1}^{P} \sum_{j=1}^{L} q(T_k = T_j) \sum_{i \in S_k} \delta\left(T_j(i) = n\right)\, (z_{i,k} - \mu_n)^2}{\sum_{k=1}^{P} \sum_{j=1}^{L} q(T_k = T_j) \sum_{i \in S_k} \delta\left(T_j(i) = n\right)} \qquad (8)$$

$$\pi_j = \frac{1}{P} \sum_{k=1}^{P} q(T_k = T_j) \qquad (9)$$

The index $n$ indicates the epitome coordinates in (7) and (8). We alternate between the E-step and the M-step until convergence or until the maximum number of iterations (20 in our experiments) is reached, and then obtain the resultant epitome $e$ from the reference image $R$.
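The E-step/M-step alternation above can be sketched as a minimal single-channel EM loop. Everything below is an illustrative simplification rather than the authors' implementation: we assume a uniform prior over mappings, restrict mappings to positions where the patch fits inside the epitome, measure the variance around the previous means, and clamp the variance for numerical stability; the function name `train_epitome` is hypothetical.

```python
import numpy as np

def train_epitome(patches, Me, Ne, n_iter=20, rng=None):
    """Minimal single-channel EM sketch of the E-step and M-step updates.

    patches : (P, p, p) array of densely sampled patches
    Returns epitome means mu and variances phi, each of shape (Me, Ne).
    """
    rng = np.random.default_rng(rng)
    P, p, _ = patches.shape
    mu = rng.random((Me, Ne))
    phi = np.full((Me, Ne), 0.5)
    positions = [(r, c) for r in range(Me - p + 1) for c in range(Ne - p + 1)]
    for _ in range(n_iter):
        num = np.zeros((Me, Ne))    # responsibility-weighted pixel sums
        num2 = np.zeros((Me, Ne))   # ... of squared deviations
        den = np.zeros((Me, Ne))    # responsibility mass per epitome pixel
        for z in patches:
            # E-step: posterior q(T) over mappings (uniform prior cancels)
            ll = np.array([
                -0.5 * np.sum(np.log(2 * np.pi * phi[r:r+p, c:c+p])
                              + (z - mu[r:r+p, c:c+p]) ** 2
                              / phi[r:r+p, c:c+p])
                for r, c in positions])
            q = np.exp(ll - ll.max())
            q /= q.sum()
            # M-step accumulators: numerators/denominators of the updates
            for (r, c), w in zip(positions, q):
                num[r:r+p, c:c+p] += w * z
                num2[r:r+p, c:c+p] += w * (z - mu[r:r+p, c:c+p]) ** 2
                den[r:r+p, c:c+p] += w
        mask = den > 1e-12
        mu[mask] = num[mask] / den[mask]
        phi[mask] = np.maximum(num2[mask] / den[mask], 1e-4)
    return mu, phi
```

Each epitome pixel pools evidence from every patch pixel mapped onto it, which is what lets a small epitome summarize many overlapping patches.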

Note that the above training process is applicable to a single type of feature of $R$. We use two types of features to train the epitome, i.e. the YIQ channels and the dense SIFT feature [7]. We convert $R$ from the RGB color space to the YIQ color space, where the Y channel represents the luminance and the IQ channels represent the chrominance information. Moreover, a dense SIFT feature is computed for each sampled patch: a patch is evenly divided into $n \times n$ grids, and an orientation histogram of the gradients with 8 bins is calculated for each grid, which results in an $8n^2$-dimensional dense SIFT feature vector for each patch. $n$ is typically set as 3 or 4. We then train an epitome for the YIQ channels and another for the dense SIFT feature, and the epitome for the YIQ channels ($e^{c}$) shares the same hidden mapping with the epitome for the dense SIFT feature ($e^{s}$) in the inference process [10]:

$$p\left(Z_k^{c}, Z_k^{s} \mid T_k, e^{c}, e^{s}\right) = p\left(Z_k^{c} \mid T_k, e^{c}\right)^{\lambda}\, p\left(Z_k^{s} \mid T_k, e^{s}\right)^{1-\lambda} \qquad (10)$$

where $Z_k^{c}$ and $Z_k^{s}$ represent the YIQ channels and the dense SIFT feature of patch $Z_k$ respectively, and $e^{c}$ and $e^{s}$ represent the epitomes trained from the YIQ channels and the dense SIFT feature of $R$ respectively. $\lambda$ is a parameter balancing the preference between the color and the dense SIFT feature.
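Because the two epitomes share one hidden mapping, the posterior over mappings can be computed once from the combined evidence. The sketch below assumes the $\lambda$-weighted combination of Eq. (10) applied in the log domain with a uniform prior; the function name `combined_posterior` is our own.

```python
import numpy as np

def combined_posterior(ll_color, ll_sift, lam):
    """Posterior over the shared mappings from two feature likelihoods.

    ll_color, ll_sift : log p(patch | T) under the color / SIFT epitomes,
                        one entry per candidate mapping (same ordering).
    lam               : balance between color and dense SIFT evidence.
    """
    ll = lam * ll_color + (1.0 - lam) * ll_sift  # log of Eq. (10)
    q = np.exp(ll - ll.max())                    # uniform prior cancels
    return q / q.sum()
```

The most probable mapping of Eq. (11) is then simply `np.argmax` of the returned posterior.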

2.3 Colorization by Epitome

With the epitome $e$ learnt from the reference image, we colorize the target grayscale image $G$ by inference in the epitome graphical model. Similar to the epitome training process, we densely sample patches $\{X_k\}$ from $G$ (these patches cover the entire image $G$). With the hidden mapping associated with patch $X_k$ denoted as $T_k$, the most probable mapping of the patch $X_k$, i.e. $T_k^{*}$, is formulated as below:

$$T_k^{*} = \operatorname*{argmax}_{T_j} p(T_k = T_j \mid X_k, e) \qquad (11)$$

which is essentially the same as the E-step in Section 2.2. We take the grayscale channel of $G$ as the luminance channel (Y channel) of itself. Since the color information (IQ channels) is absent in $G$, we only use the epitomes corresponding to the Y channel and the dense SIFT feature to evaluate the right-hand side of (11). The color information is then transferred from the epitome patch, whose location is specified by $T_k^{*}$, to the grayscale patch $X_k$. We denote the target image after colorization as $\tilde{G}$. Since the patches $\{X_k\}$ can overlap with each other, the final color (the value of the IQ channels) of a pixel $i$ in image $\tilde{G}$ is averaged according to:

$$\tilde{G}_{IQ}(i) = \frac{\sum_{k} \delta(i \in S_k)\, \mu_{IQ}\left(T_k^{*}(i)\right)}{\sum_{k} \delta(i \in S_k)} \qquad (12)$$

where $S_k$ is the set of image coordinates of patch $X_k$, and $\mu_{IQ}\left(T_k^{*}(i)\right)$ represents the value of the IQ channels of the epitome mean at location $T_k^{*}(i)$.
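The overlap-averaging of Eq. (12) amounts to accumulating IQ values and per-pixel counts over all patches. This is a minimal sketch under our own conventions (patches and mappings given as top-left corners, a hypothetical `transfer_color` name, and uncovered pixels left at zero chroma):

```python
import numpy as np

def transfer_color(shape, patch_coords, best_maps, mu_iq, p):
    """Average per-pixel IQ values over overlapping patches (cf. Eq. (12)).

    shape        : (H, W) of the target image
    patch_coords : top-left image coordinates of the patches X_k
    best_maps    : top-left epitome coordinates selected by T_k*
    mu_iq        : (Me, Ne, 2) IQ means of the epitome
    p            : patch side length
    """
    acc = np.zeros(shape + (2,))   # summed IQ contributions
    cnt = np.zeros(shape)          # number of patches covering each pixel
    for (ir, ic), (er, ec) in zip(patch_coords, best_maps):
        acc[ir:ir+p, ic:ic+p] += mu_iq[er:er+p, ec:ec+p]
        cnt[ir:ir+p, ic:ic+p] += 1
    cnt = np.maximum(cnt, 1)       # avoid division by zero off-coverage
    return acc / cnt[..., None]
```

The returned IQ planes are combined with the target's own Y channel to form the colorized image.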

3 Experimental Results

We show colorization results in this section. As mentioned in Section 2, we use square patches of fixed size, and the size of the epitome is half of the size of the reference image. We densely sample patches with a horizontal and vertical gap of $g$ pixels, where $g$ is a parameter that controls the number of sampled patches.
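Dense sampling with a gap can be sketched as follows. The helper name `sample_patches` and the choice to always include the last row and column (so the whole image is covered even when the gap does not divide the image size) are our own assumptions:

```python
import numpy as np

def sample_patches(img, p, g):
    """Densely sample p x p patches with horizontal/vertical gap g pixels,
    forcing the final row/column of patches so the image is fully covered."""
    H, W = img.shape[:2]
    rows = sorted(set(list(range(0, H - p + 1, g)) + [H - p]))
    cols = sorted(set(list(range(0, W - p + 1, g)) + [W - p]))
    coords = [(r, c) for r in rows for c in cols]
    patches = np.stack([img[r:r+p, c:c+p] for r, c in coords])
    return patches, coords
```

Smaller $g$ yields more (and more overlapping) patches, trading computation for smoother overlap-averaged color.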

Figure 3 shows the result of colorization for the dog image. We convert the original image to grayscale as the target image. The parameter $\lambda$ balancing between the color and the dense SIFT feature is set to 0.5. We compare our method to [11], which transfers color from the reference image to the target image by pixel-level matching. The result produced by [11] lacks spatial continuity, and we observe small artifacts throughout the whole image. In contrast, our method renders a colorized image very similar to the ground truth. This example also demonstrates that the learnt epitome, which is a summary of a large number of sampled patches, contains sufficient color information for colorization.

Figures 4 and 5 show the colorization results for the Nano Mushroom-like images and the cheetah, with $\lambda$ set to 0.8 in both cases. [11] still generates artifacts around the top and bottom of the mushroom-like structure, while our method produces a much more spatially coherent result. Moreover, we transfer the correct color for the cheetah to the target image, which results in a more natural colorization result than that of [11].

4 Conclusion

We present an automatic colorization method using epitome in this paper. While most existing colorization methods require tedious and time-consuming user intervention for scribbles or segmentation, our epitomic colorization method is automatic. Epitomic colorization exploits color redundancy by summarizing the color information in the reference image into a condensed image shape and appearance representation. Experimental results show the effectiveness of our method.

Figure 3: The result of colorizing the dog. From left to right: the reference image, the target image (obtained by converting the reference image to grayscale), the result by [11], and our result.
Figure 4: The result of colorizing the Nano Mushroom-like images
Figure 5: The result of colorizing the cheetah

References

  • [1] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
  • [2] X. Chu, S. Yan, L. Li, K. L. Chan, and T. S. Huang. Spatialized epitome and its applications. In CVPR, pages 311–318, 2010.
  • [3] C. A. Curcio, K. R. Sloan, R. E. Kalina, and A. E. Hendrickson. Human photoreceptor topography. Journal of Comparative Neurology, 292(4):497–523, Feb. 1990.
  • [4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society: Series B, 39(1):1–38, 1977.
  • [5] R. Irony, D. Cohen-Or, and D. Lischinski. Colorization by example. In Rendering Techniques, pages 201–210, 2005.
  • [6] N. Jojic, B. J. Frey, and A. Kannan. Epitomic analysis of appearance and shape. In ICCV, pages 34–43, 2003.
  • [7] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, volume 2, pages 2169–2178, 2006.
  • [8] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. ACM Trans. Graph., 23(3):689–694, 2004.
  • [9] Q. Luan, F. Wen, D. Cohen-Or, L. Liang, Y.-Q. Xu, and H.-Y. Shum. Natural image colorization. In Rendering Techniques, pages 309–320, 2007.
  • [10] K. Ni, A. Kannan, A. Criminisi, and J. Winn. Epitomic location recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12):2158–2167, Dec. 2009.
  • [11] T. Welsh, M. Ashikhmin, and K. Mueller. Transferring color to greyscale images. ACM Trans. Graph., 21(3):277–280, 2002.