CAMS: Color-Aware Multi-Style Transfer

06/26/2021 · by Mahmoud Afifi, et al.

Image style transfer aims to manipulate the appearance of a source image, or "content" image, so that it shares the texture and colors of a target "style" image. Ideally, the style transfer manipulation should also preserve the semantic content of the source image. A commonly used approach to assist in transferring styles is based on Gram matrix optimization. One problem with Gram matrix-based optimization is that it does not consider the correlation between colors and their styles. Specifically, certain textures or structures should be associated with specific colors. This is particularly challenging when the target style image exhibits multiple style types. In this work, we propose a color-aware multi-style transfer method that generates aesthetically pleasing results while preserving the style-color correlation between style and generated images. We achieve this desired outcome by introducing a simple but efficient modification to classic Gram matrix-based style transfer optimization. A nice feature of our method is that it enables users to manually select the color associations between the target style and content image for more transfer flexibility. We validated our method with several qualitative comparisons, including a user study conducted with 30 participants. Compared with prior work, our method is simple, easy to implement, and achieves visually appealing results when targeting images that have multiple styles. Source code is available at https://github.com/mahmoudnafifi/color-aware-style-transfer.


1 Introduction

Style transfer aims to manipulate colors and textures of a given source image to share a target image’s “look and feel”. The source and target images are often so-called “content” and “style” images, respectively, where style transfer methods aim to generate an image that has the content of the source image and the style from the target image.

One of the first deep learning methods to produce images with artistic style was Deep Dream [deepdream]. Deep Dream works by reversing a convolutional neural network (CNN), trained for image classification, via image-based optimization. The process begins with a noise image, which is iteratively updated through an optimization process to make the CNN predict a certain output class. Inspired by Deep Dream, Gatys et al. [gatys2015neural] proposed a neural style transfer (NST) method that minimizes statistical differences between the deep features of content and style images, extracted from intermediate layers of a pre-trained CNN (e.g., VGG net [simonyan2014very]). After the impressive results achieved by the NST work in [gatys2015neural], many methods have been proposed to perform style transfer leveraging the power of CNNs (e.g., [gatys2016preserving, gupta2017characterizing, li2017universal, luan2017deep, snelgrove2017high, ruder2018artistic, shen2018neural, Tesfaldet2018, park2019arbitrary, yang2019controllable, svoboda2020two, wang2020collaborative, wang2020diversified, liu2021learning]).

The work presented in this paper extends the idea of image-optimization NST to achieve multi-style transfer through a color-aware optimization loss (see Figure 1). We begin with a brief review of image-optimization NST in Section 2 and then elaborate on our method in Section 3.

2 Background

NST-based methods use similarity measures between CNN latent features at different layers to transfer the style statistics from the style image to the content image. In particular, the methods of [gatys2015neural, berger2016incorporating] utilize the feature space provided by the 16 convolutional and 5 pooling layers of the 19-layer VGG network [simonyan2014very]. The max-pooling layers of the original VGG are replaced by average pooling, which has been found to be useful for the NST task. For a pre-trained VGG network with fixed weights and a given content image, the goal of NST is to optimize a generated image so that the difference in feature map responses between the generated and content images is minimized.
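As a minimal sketch of this pooling modification (assuming PyTorch and torchvision as the model source; this is not the authors' code), the max-pooling layers of a pre-trained VGG-19 can be swapped for average pooling as follows:

```python
import torch.nn as nn
import torchvision.models as models

# Load the ImageNet-pretrained VGG-19 feature extractor and freeze it.
vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Replace every max-pooling layer with average pooling of the same geometry,
# as is commonly done for NST.
for i, layer in enumerate(vgg):
    if isinstance(layer, nn.MaxPool2d):
        vgg[i] = nn.AvgPool2d(kernel_size=layer.kernel_size,
                              stride=layer.stride,
                              padding=layer.padding)
```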

Formally, let $I_c$ and $I_g$ be the content and generated images, respectively. Both $I_c$ and $I_g$ share the same image dimensions, and each pixel in $I_g$ is initialized randomly. Let $F_c^{(l)}$ and $F_g^{(l)}$ be the feature map responses at VGG-layer $l$ for $I_c$ and $I_g$, respectively. Then $I_g$ is optimized by minimizing the content loss as follows:

$\mathcal{L}_{\text{content}} = \sum_{l} \left\| F_g^{(l)} - F_c^{(l)} \right\|_F^2$    (1)

where $\left\| \cdot \right\|_F^2$ is the squared Frobenius norm. Gatys et al. [gatys2015neural] leveraged the Gram matrix to calculate the correlations between the different filter responses to build a style representation of a given feature map response. The Gram matrix is computed by taking the inner product between the feature maps:

$G_{ij}^{(l)} = \frac{1}{n_l} \left\langle F_i^{(l)}, F_j^{(l)} \right\rangle$    (2)

where $F_i^{(l)}$ and $F_j^{(l)}$ are the $i$-th and $j$-th vectorized feature maps of VGG-layer $l$, $n_l$ is the number of elements in each map of that layer, and $\langle \cdot, \cdot \rangle$ denotes the inner product. To make the style of the generated image $I_g$ match the style of a given style image $I_s$, the difference between the Gram matrices of $I_g$ and $I_s$ is minimized as follows:

$\mathcal{L}_{\text{style}} = \sum_{l} w_l \left\| G_g^{(l)} - G_s^{(l)} \right\|_F^2$    (3)

where $\mathcal{L}_{\text{style}}$ is the style loss, $G_g^{(l)}$ and $G_s^{(l)}$ are the Gram matrices of $I_g$ and $I_s$, respectively, and $w_l$ are scalar weighting parameters that determine the contribution of each layer to $\mathcal{L}_{\text{style}}$.

To generate an image $I_g$ such that the general image content is preserved from $I_c$ and the texture style statistics are transferred from $I_s$, the optimization process jointly minimizes the final loss function:

$\mathcal{L} = \alpha \mathcal{L}_{\text{content}} + \beta \mathcal{L}_{\text{style}}$    (4)

where $\alpha$ and $\beta$ are scale factors that control the strength of content reconstruction and style transfer, respectively.
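To make the background concrete, the following PyTorch sketch expresses Equations 1-4 in code. It is an illustrative implementation of the classic Gram-based losses, not the authors' code; the feature maps are assumed to be given as lists of tensors of shape (channels, height, width), one per selected VGG layer, and the default values of alpha and beta are placeholders.

```python
import torch

def gram_matrix(feat):
    """Eq. 2: Gram matrix of one layer's feature maps (C x H x W)."""
    c, h, w = feat.shape
    f = feat.view(c, h * w)            # vectorize each feature map
    return f @ f.t() / (h * w)         # inner products, normalized by n_l

def content_loss(feats_g, feats_c):
    """Eq. 1: squared Frobenius distance between generated and content features."""
    return sum(((fg - fc) ** 2).sum() for fg, fc in zip(feats_g, feats_c))

def style_loss(feats_g, feats_s, layer_weights):
    """Eq. 3: weighted Gram-matrix distance between generated and style features."""
    loss = 0.0
    for fg, fs, w in zip(feats_g, feats_s, layer_weights):
        loss = loss + w * ((gram_matrix(fg) - gram_matrix(fs)) ** 2).sum()
    return loss

def total_loss(feats_g_c, feats_c, feats_g_s, feats_s, layer_weights,
               alpha=1.0, beta=1e4):
    """Eq. 4: alpha * content + beta * style (alpha, beta are illustrative)."""
    return (alpha * content_loss(feats_g_c, feats_c)
            + beta * style_loss(feats_g_s, feats_s, layer_weights))
```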

From Equation 2, it is clear that the Gram matrix measures the correlation of feature channels over the entire image. As a result, the Gram-based loss in Equation 3 transfers the averaged global image style statistics to the generated image. That means that if the style image has multiple styles, traditional Gram-based optimization often fails to sufficiently convey all styles from the style image to the generated image; instead, it generates images with mixed styles. Figure 2 illustrates this limitation. As shown in Figure 2-(A), the style image has more than a single style, which results in an unpleasing mixed style when transferred using traditional Gram-based NST optimization (Figure 2-[B]). Our style transfer result is shown in Figure 2-(C).

Figure 2: Traditional Gram matrix optimization does not consider the correlation between style image’s colors and their styles. Therefore, if the style image has more than a single style, like the case shown in (A), this optimization often results in a mixed style as shown in (B). Our method, in contrast, considers the color-style correlation in both images, as shown in (C). Dashed lines in purple refer to the goal of the optimization process.

Most existing image-optimization NST methods introduce different variations on this main idea of transferring the averaged global image style statistics to the generated image [gatys2015neural, gatys2016preserving, berger2016incorporating, li2017demystifying, gupta2017characterizing]. However, these methods are restricted to a single set of average style statistics per content and style image pair, and they lack artistic control. While the method of [risser2017stable] proposed a procedure to control the style of the output image artistically, it requires tedious human effort, asking users to annotate semantic segmentation masks and correspondences in both the style and content images.

Unlike other existing methods, we introduce the first Color-Aware Multi-Style (CAMS) transfer method that enables style transfer locally based on nearest colors, where multiple styles can be transferred from the style image to the generated one. Our proposed method extracts a color palette from both the content and style images, and automatically constructs the region/color associations. The CAMS method performs style transfer, in which the texture of a specific color in the style image is transferred to the region that has the nearest color in the content image. Figure 1 shows multiple examples of the generated images (bottom row) from a single input content image with different style images (top row). The regions highlighted in yellow and blue indicate two example styles that were transferred from the style image to regions in the generated image based on the nearest color in the content image.

Our proposed framework allows multi-style transfer to be applied in a meaningful way. In particular, styles are transferred in association with colors. By correlating styles and colors, we offer another artistic dimension that preserves the content color statistics together with the transferred texture. To further allow artistic control, we show how our method allows users to manually select the color associations between the reference style and content image for more transfer options. We believe that our proposed framework and the interactive tool are useful for the research community and enable more aesthetically pleasing outputs. Our source code will be publicly released upon acceptance.

3 Our Method

Figure 3:

Our method extracts a color palette from both the content and style images. This color palette is then used to generate color weighting masks, which weight the extracted deep features of both the style and input images. This color-aware separation results in multiple Gram matrices, which are then used to compute our color-aware style loss. This loss, along with the content loss, is used for optimization.

Figure 3 illustrates an overview of our method. As shown in the figure, we use color palettes to guide the optimization process. Given two color palettes extracted from the content and style images, respectively, we merge them to generate a single input color palette, $P$, which is then used to generate a set of color masks. We use these masks to weight the deep features of both the input and style images, extracted from different layers of a pre-trained CNN. This color-aware separation of deep features results in multiple Gram matrices used to compute our style loss during image optimization. In this section, we elaborate on each step of our algorithm.

3.1 Mask Generation

Given an image, $I$, and a target palette, $P$, our goal is to compute a color mask, $M_t$, for each color $t$ in our color palette $P$, such that the final mask reflects the similarity of each pixel in $I$ to color $t$. We generate $M_t$ by computing a radial basis function (RBF) between each pixel in $I$ and our target color $t$ as follows:

$M_t(p) = \exp\left(-\frac{\left\| I(p) - t \right\|_2^2}{2\sigma^2}\right)$    (5)

where $\sigma$ is the RBF fall-off factor, $I(p)$ is the $p$-th pixel in $I$, and $t$ is the target color. Next, we blur the generated mask by applying a Gaussian blur kernel with a fixed standard deviation (in pixels). This smoothing step is optional but was empirically found to improve the final results in most cases.

For each of $I_s$ and $I_g$, we generate a set of masks, each of which is computed for a color in our color palette, $P$. Here, $I_g$ refers to the current value of the image we optimize, not the final generated image. Note that computing our color masks is performed through differentiable operations and, thus, can be easily integrated into our optimization.
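A minimal sketch of this mask generation is shown below. It assumes an image tensor of shape (3, H, W) with values in [0, 1], a palette given as a list of RGB triplets, and a Gaussian RBF as in Equation 5; the default fall-off factor and blur strength are illustrative choices, not the paper's settings.

```python
import torch
import torchvision.transforms.functional as TF

def color_masks(img, palette, sigma=0.25, blur_sigma=2.0):
    """Compute one soft mask per palette color (Eq. 5, plus optional blur).

    img:     (3, H, W) tensor in [0, 1]
    palette: iterable of RGB triplets in [0, 1]
    """
    masks = []
    for color in palette:
        c = torch.tensor(color, dtype=img.dtype, device=img.device).view(3, 1, 1)
        dist2 = ((img - c) ** 2).sum(dim=0)           # squared RGB distance per pixel
        m = torch.exp(-dist2 / (2.0 * sigma ** 2))    # Gaussian RBF similarity
        # Optional smoothing; kernel size chosen to cover roughly 3 sigma per side.
        k = int(2 * round(3 * blur_sigma) + 1)
        m = TF.gaussian_blur(m[None, None], kernel_size=k, sigma=blur_sigma)[0, 0]
        masks.append(m)
    return torch.stack(masks)                         # (num_colors, H, W)
```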

3.2 Color-Aware Loss

Figure 4: In many scenarios, the style image can include multiple styles. Traditional Gram matrix-based optimization (e.g., Gatys et al. [gatys2015neural]) cannot capture these styles and, as a result, may produce noisy images. In contrast, our color-aware optimization produces more pleasing results while preserving the style-color matching.

After computing the color mask sets $\{M_{s,t}\}$ and $\{M_{g,t}\}$ for $I_s$ and $I_g$, respectively, we compute two sets of weighted Gram matrices, one for $I_s$ and one for $I_g$. According to the given mask weights, each set of weighted Gram matrices captures the correlation between deep features (extracted from selected layers of the network) of the pixels of interest in the image. These weighted Gram matrices help our method focus on transferring styles only between corresponding pixels of interest in the style and generated images during optimization.

For a color $t$ in our color palette $P$, this weighted Gram matrix, $\hat{G}_t^{(l)}$, is computed as follows:

$\hat{F}_i^{(l)} = F_i^{(l)} \circ \hat{M}_t^{(l)}$    (6)
$\hat{G}_{t,ij}^{(l)} = \frac{1}{n_l} \left\langle \hat{F}_i^{(l)}, \hat{F}_j^{(l)} \right\rangle$    (7)

where $\hat{F}_i^{(l)}$ and $\hat{F}_j^{(l)}$ are the $i$-th and $j$-th vectorized feature maps of network layer $l$ after weighting, $n_l$ is the number of elements in each map of that layer, $\circ$ is the Hadamard product, and $\hat{M}_t^{(l)}$ represents the computed mask for color $t$ after the following processing. First, we linearly interpolate the width and height of our computed mask for color $t$ to match the width and height of the original feature map, $F^{(l)}$, before vectorization. Second, we duplicate the computed mask to have the same number of channels as $F^{(l)}$.

For each layer $l$ in the pre-trained classification network and based on Equation 7, we compute $\hat{G}_{s,t}^{(l)}$ and $\hat{G}_{g,t}^{(l)}$ for our style and generated image, respectively. Finally, our color-aware style loss is computed as follows:

$\mathcal{L}_{\text{CA-style}} = \sum_{t \in P} \sum_{l} w_l \left\| \hat{G}_{g,t}^{(l)} - \hat{G}_{s,t}^{(l)} \right\|_F^2$    (8)

By generating different weighted Gram matrices, our method is able to convey the different styles present in the style image, which is not feasible with classic Gram matrix optimization. As shown in Figure 4, the style image includes different styles and textures. NST using the Gram matrix (e.g., Gatys et al. [gatys2015neural]) fails to capture these multiple styles in the reference style image and produces an unpleasing result, as shown in the third column of Figure 4. In contrast, our color-aware loss considers these styles and effectively transfers them to the generated image, as shown in the last column of Figure 4. For example, the text was transferred from the letter (white background) in the style image to the man’s white beard in the generated image.
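The sketch below illustrates Equations 6-8 under the same assumptions as the earlier snippets: per-layer feature maps of shape (C, H, W), per-color masks of shape (num_colors, H, W), and the same Gram normalization shown in Section 2. It is our reading of the loss, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def weighted_gram(feat, mask):
    """Eqs. 6-7: mask-weighted Gram matrix for one layer and one palette color."""
    c, h, w = feat.shape
    # Resize the (H, W) color mask to the layer's spatial size and broadcast it
    # over the channel dimension (the mask "duplication" step).
    m = F.interpolate(mask[None, None], size=(h, w),
                      mode='bilinear', align_corners=False)[0]
    weighted = (feat * m).view(c, h * w)          # Hadamard weighting, then vectorize
    return weighted @ weighted.t() / (h * w)

def color_aware_style_loss(feats_g, feats_s, masks_g, masks_s, layer_weights):
    """Eq. 8: sum the weighted Gram differences over palette colors and layers."""
    loss = 0.0
    for t in range(masks_g.shape[0]):             # loop over palette colors
        for fg, fs, w in zip(feats_g, feats_s, layer_weights):
            gg = weighted_gram(fg, masks_g[t])
            gs = weighted_gram(fs, masks_s[t])
            loss = loss + w * ((gg - gs) ** 2).sum()
    return loss
```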

3.3 Optimization and Implementation Details

The flow of our method is shown in Algorithm 1. First, we initialize each pixel in $I_g$ with the corresponding pixel in $I_c$. Afterward, we generate two color palettes for $I_c$ and $I_s$, respectively. We used the algorithm proposed in [chang2015palette] to extract the color palette of each image. The number of colors per palette is a hyperparameter that can be changed to obtain different results. In our experiments, we extracted a five-color palette from each of our content and style images. Then, we merge them to generate the final color palette, $P$. After merging, the final color palette has at most ten colors, as we exclude redundant colors.
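As a rough illustration of this step (the paper uses the palette extraction of [chang2015palette]; the k-means stand-in and the deduplication threshold below are our own simplifications), the two five-color palettes could be merged as follows:

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_palette(img, n_colors=5):
    """Stand-in palette extraction: k-means over RGB pixels (not [chang2015palette])."""
    pixels = img.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    return km.cluster_centers_                      # (n_colors, 3)

def merge_palettes(palette_c, palette_s, dedupe_thresh=0.1):
    """Concatenate both palettes and drop near-duplicate colors (illustrative threshold)."""
    merged = []
    for color in np.concatenate([palette_c, palette_s]):
        if all(np.linalg.norm(color - m) > dedupe_thresh for m in merged):
            merged.append(color)
    return np.stack(merged)                         # at most 10 colors
```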

Input: Style image $I_s$, content image $I_c$, a pre-trained image-classification network $N$, layer indices $l_s$ and $l_c$ for style and content features, respectively, and loss term weighting factors $\alpha$ and $\beta$
Result: Generated image $I_g$ that shares the styles in $I_s$ and the content in $I_c$.
$I_g \leftarrow I_c$
$P \leftarrow$ merge(palette($I_c$), palette($I_s$))
while not converged do
       compute color masks of $I_s$ and $I_g$ using $P$
       extract deep features of $I_c$, $I_s$, and $I_g$ from layers $l_c$ and $l_s$ of $N$
       compute the weighted Gram matrices of $I_s$ and $I_g$ (Equation 7)
       $\mathcal{L} \leftarrow \alpha \mathcal{L}_{\text{content}} + \beta \mathcal{L}_{\text{CA-style}}$
       update $I_g$ to minimize $\mathcal{L}$
end while
Algorithm 1 Color-aware optimization.

After constructing $P$, we generate color masks $M_s$ and $M_g$ for $I_s$ and $I_g$, respectively. Then, we extract deep features from $I_s$ and $I_c$, which represent our target style latent representation and content latent representation, respectively. We adopted VGG-19 [simonyan2014very] as our backbone to extract these deep features, where we used two of its conv layers to extract deep features for the content loss and the first five conv layers to extract deep features for our color-aware style loss. We then construct the weighted Gram matrices, as described in Section 3.2, using the deep features of the style and generated images. The weighted Gram matrices of the generated and style images, and the deep features of the generated and content images, are used to compute our color-aware style loss (Equation 8) and the content loss (Equation 1), respectively. The final loss is then computed as:

$\mathcal{L} = \alpha \mathcal{L}_{\text{content}} + \beta \mathcal{L}_{\text{CA-style}}$    (9)

where $\alpha$ and $\beta$ are fixed weighting factors for the content and color-aware style terms, respectively. After each iteration, we update the color masks of our generated image to track changes in $I_g$ during optimization. To minimize Equation 9, we adopted the L-BFGS algorithm [liu1989limited] for 300 iterations with a learning rate of 0.5.
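Putting the pieces together, a condensed sketch of the optimization loop might look like the following. It reuses the helper functions sketched earlier (`color_masks`, `content_loss`, `color_aware_style_loss`); the feature-extraction callback, its layer choices, and the exact point at which the generated image's masks are refreshed are assumptions based on the description above rather than the authors' exact settings.

```python
import torch

def color_aware_transfer(img_c, img_s, extract_feats, palette,
                         alpha, beta, layer_weights,
                         n_iters=300, lr=0.5):
    """Sketch of Algorithm 1: optimize the generated image with L-BFGS."""
    img_g = img_c.clone().requires_grad_(True)         # initialize I_g from I_c
    masks_s = color_masks(img_s, palette)              # fixed style masks
    feats_c = [f.detach() for f in extract_feats(img_c, content=True)]
    feats_s = [f.detach() for f in extract_feats(img_s, content=False)]

    optimizer = torch.optim.LBFGS([img_g], lr=lr, max_iter=n_iters)

    def closure():
        optimizer.zero_grad()
        masks_g = color_masks(img_g, palette)          # refreshed each evaluation
        feats_gc = extract_feats(img_g, content=True)
        feats_gs = extract_feats(img_g, content=False)
        loss = (alpha * content_loss(feats_gc, feats_c)
                + beta * color_aware_style_loss(feats_gs, feats_s,
                                                masks_g, masks_s, layer_weights))
        loss.backward()
        return loss

    optimizer.step(closure)                            # L-BFGS drives the iterations
    return img_g.detach()
```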

To generate our masks, we have a hyperparameter, $\sigma$, that can be adjusted interactively to control the RBF fall-off, which consequently affects the final result of the optimization. Our experiments found that a single default value of $\sigma$ works well in most cases.

A nice feature of our method is that it allows more transfer flexibility by enabling the user to determine color associations between the style and content images. To that end, we follow the same procedure explained in Algorithm 1, with one exception: we do not update the color masks of the generated image, $I_g$, so that the user's selection continues to be respected. Figure 5 shows two use cases that reflect the benefit of our manual selection tool. As shown in the first case (top row), the user associates the reddish color in the content image’s color palette with different colors in the style image’s color palette. Based on this selection, our generated image has transferred styles according to this color-style correlation. In particular, the change happens only to the style of pixels associated with reddish pixels in the face region. As can also be seen in the top row, the transferred style is constrained to those styles associated with the selected colors in the style image’s color palette.

For the second use case in Figure 5 (bottom row), the auto mode struggled to transfer multiple styles because the given style image has a limited range of colors (i.e., only gray-level variations). Such style images, which have limited style options to offer, may result in less appealing outputs. Nevertheless, our manual color-association tool gives the user the flexibility to modify the generated image for a more aesthetically pleasing output by associating the colors and restricting the modified region, as shown in Figure 5 (bottom row).

Figure 5: Our method allows artistic controls, where the user can manually select color association or discard some colors from the generated palettes. In this figure, we present our results of the auto and user-selection modes for two different reference style images.
Figure 6: Qualitative comparisons between our method and other style transfer methods: neural style transfer (NST) [gatys2015neural], adaptive instance normalization (AdaIN) [huang2017arbitrary], Avatar-Net [sheng2018avatar], linear style transfer (LST) [li2018learning], and relaxed optimal transport (ROT) [kolkin2019style]. See the supplementary materials for a high-quality version.

4 Evaluation

Evaluating NST techniques is a challenging problem for the NST community, as indicated in [li2017universal, jing2019neural]. That said, user studies have been widely adopted in the literature to evaluate subjective results. For the sake of completeness, we conducted a user study to evaluate our proposed color-aware NST against other relevant techniques. For each image pair (style/content), we compared six different NST methods: our CAMS, neural style transfer (NST) by Gatys et al. [gatys2015neural], adaptive instance normalization (AdaIN) [huang2017arbitrary], Avatar-Net [sheng2018avatar], linear style transfer (LST) [li2018learning], and relaxed optimal transport (ROT) [kolkin2019style]. The subjects were asked to anonymously give a score from one to five to the result of each method, where a higher score indicates a more appealing result. We evaluated these methods on eight different image pairs (shown in Figure 6). We collected answers from 30 subjects (60% female, 40% male).

Table 1 shows the results of the user study. As can be seen, 38% of the subjects rated the results of our method as highly appealing (a score of five). In contrast, the second-best method (i.e., NST [gatys2015neural]) received a score of five from only 16% of the subjects. The study emphasizes the superiority of our proposed method compared with the other methods, especially in capturing multiple styles from the style images (see Figure 6).

Method                          Rating: 1      2      3      4      5
NST [gatys2015neural]                  34%    13%    23%    14%    16%
AdaIN [huang2017arbitrary]             16%    27%    32%    19%     6%
Avatar-Net [sheng2018avatar]           50%    27%    17%     4%     2%
LST [li2018learning]                   23%    35%    22%    15%     5%
ROT [kolkin2019style]                  35%    23%    20%    16%     6%
CAMS (ours)                            10%    12%    24%    16%    38%
Table 1: Results of the user study conducted with 30 subjects to evaluate different NST methods. A rating of five represents the most aesthetically appealing result.

5 Conclusion

We have shown that Gram matrix-based optimization methods often fail to produce pleasing results when the target style image contains multiple styles. To address this limitation, we have presented a color-aware multi-style loss that captures the correlations between different styles and colors in both the style and generated images. Our method is efficient, simple, and easy to implement, and it achieves pleasing results while capturing the different styles in the given reference image. We have also illustrated how our method can be used interactively by enabling users to manually control how styles are transferred from the given style image. Finally, through a user study, we showed that our method achieves the most visually appealing results compared with alternative style transfer methods.

References