Reference code for the paper CAMS: Color-Aware Multi-Style Transfer.
Image style transfer aims to manipulate the appearance of a source image, or "content" image, to share the texture and colors of a target "style" image. Ideally, the style transfer manipulation should also preserve the semantic content of the source image. A commonly used approach to assist in transferring styles is based on Gram matrix optimization. One problem of Gram matrix-based optimization is that it does not consider the correlation between colors and their styles. Specifically, certain textures or structures should be associated with specific colors. This is particularly challenging when the target style image exhibits multiple style types. In this work, we propose a color-aware multi-style transfer method that generates aesthetically pleasing results while preserving the style-color correlation between the style and generated images. We achieve this desired outcome by introducing a simple but efficient modification to classic Gram matrix-based style transfer optimization. A nice feature of our method is that it enables users to manually select the color associations between the target style and content image for more transfer flexibility. We validated our method with several qualitative comparisons, including a user study conducted with 30 participants. In comparison with prior work, our method is simple, easy to implement, and achieves visually appealing results when targeting images that have multiple styles. Source code is available at https://github.com/mahmoudnafifi/color-aware-style-transfer.
Style transfer aims to manipulate the colors and textures of a given source image to share a target image's "look and feel". The source and target images are commonly called the "content" and "style" images, respectively, and style transfer methods aim to generate an image that has the content of the source image and the style of the target image.
An early milestone in this direction is Deep Dream, which works by reversing a convolutional neural network (CNN), trained for image classification, via image-based optimization. The process begins with a noise image, which is iteratively updated through an optimization process to make the CNN predict a certain output class. Inspired by Deep Dream, Gatys et al. [gatys2015neural]
proposed a neural style transfer (NST) method that minimizes the statistical differences of deep features, extracted from intermediate layers of a pre-trained CNN (e.g., VGG net [simonyan2014very]), of content and style images. After the impressive results achieved by the NST work in [gatys2015neural], many methods have been proposed to perform style transfer leveraging the power of CNNs (e.g., [gatys2016preserving, gupta2017characterizing, li2017universal, luan2017deep, snelgrove2017high, ruder2018artistic, shen2018neural, Tesfaldet2018, park2019arbitrary, yang2019controllable, svoboda2020two, wang2020collaborative, wang2020diversified, liu2021learning]).
NST-based methods use similarity measures between CNN latent features at different layers to transfer the style statistics from the style image to the content image. In particular, the methods of [gatys2015neural, berger2016incorporating] utilize the feature space provided by the 16 convolutional and 5 pooling layers of the 19-layer VGG network [simonyan2014very]. The max-pooling layers of the original VGG were replaced by average pooling, as this was found to be beneficial for the NST task. For a pre-trained VGG network with fixed weights and a given content image, the goal of NST is to optimize a generated image so that the difference in feature map responses between the generated and content images is minimized.
Formally, let $I_c$ and $I_g$ be the content and generated images, respectively. Both $I_c$ and $I_g$ share the same image dimensions, and each pixel in $I_g$ is initialized randomly. Let $F_c^l$ and $F_g^l$ be the feature map responses at VGG-layer $l$ for $I_c$ and $I_g$, respectively. Then $I_g$ is optimized by minimizing the content loss as follows:

$$\mathcal{L}_{content} = \sum_l \big\| F_g^l - F_c^l \big\|_F^2,$$
where $\|\cdot\|_F^2$ is the squared Frobenius norm. Gatys et al. [gatys2015neural] leveraged the Gram matrix to calculate the correlations between the different filter responses to build a style representation of a given feature map response. The Gram matrix is computed by taking the inner product between the feature maps:

$$G_{ij}^l = \frac{1}{n_l}\,\big\langle F_i^l,\; F_j^l \big\rangle,$$
where $F_i^l$ and $F_j^l$ are the $i$-th and $j$-th vectorized feature maps of VGG-layer $l$, $n_l$ is the number of elements in each map of that layer, and $\langle\cdot,\cdot\rangle$ denotes the inner product. To make the style of the generated image $I_g$ match the style of a given style image $I_s$, the difference between the Gram matrices of $I_g$ and $I_s$ is minimized as follows:

$$\mathcal{L}_{style} = \sum_l w_l \big\| G_g^l - G_s^l \big\|_F^2,$$
where $\mathcal{L}_{style}$ is the style loss, $G_g^l$ and $G_s^l$ are the Gram matrices of $I_g$ and $I_s$, respectively, and $w_l$ are scalar weighting parameters that determine the contribution of each layer to $\mathcal{L}_{style}$.
To generate an image $I_g$ such that the general image content is preserved from $I_c$ and the texture style statistics are transferred from $I_s$, the optimization is jointly performed to minimize the final loss function:

$$\mathcal{L} = \alpha\,\mathcal{L}_{content} + \beta\,\mathcal{L}_{style},$$

where $\alpha$ and $\beta$ are scale factors that control the strength of content reconstruction and style transfer, respectively.
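The classic NST objective above can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation (which optimizes over VGG features with autograd); the feature maps are assumed to be given as `(C, H, W)` arrays, and the `1/n` normalization of the Gram matrix is one common convention.

```python
import numpy as np

def gram_matrix(F):
    """Gram matrix of a feature map F with shape (C, H, W):
    normalized inner products between vectorized channel responses (Eq. 2)."""
    C = F.shape[0]
    V = F.reshape(C, -1)          # vectorize each of the C feature maps
    n = V.shape[1]                # number of elements per map
    return (V @ V.T) / n          # (C, C) channel-correlation matrix

def content_loss(F_g, F_c):
    """Squared Frobenius distance between generated/content features (Eq. 1)."""
    return float(np.sum((F_g - F_c) ** 2))

def style_loss(feats_g, feats_s, weights):
    """Layer-weighted sum of Gram-matrix differences (Eq. 3)."""
    return float(sum(w * np.sum((gram_matrix(Fg) - gram_matrix(Fs)) ** 2)
                     for w, Fg, Fs in zip(weights, feats_g, feats_s)))

def total_loss(F_g_content, F_c, feats_g, feats_s, weights,
               alpha=1.0, beta=1e3):
    """Joint objective (Eq. 4); alpha/beta values here are illustrative."""
    return (alpha * content_loss(F_g_content, F_c)
            + beta * style_loss(feats_g, feats_s, weights))
```

In a full pipeline these losses are evaluated on deep features and minimized with respect to the generated image's pixels.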
From Equation 2, it is clear that the Gram matrix measures the correlation of feature channels over the entire image. As a result, the Gram-based loss in Equation 3 transfers the averaged global image style statistics to the generated image. That means that if the style image has multiple styles, traditional Gram-based optimization often fails to sufficiently convey all of them to the generated image; instead, it generates images with mixed styles. Figure 2 illustrates this limitation. As shown in Figure 2-(A), the style image has more than a single style, which results in an unpleasing mixed style when transferred using traditional Gram-based NST optimization (Figure 2-(B)). Our style transfer result is shown in Figure 2-(C).
Most existing image-optimization NST methods introduce variations on this main idea of transferring the averaged global image style statistics to the generated image [gatys2015neural, gatys2016preserving, berger2016incorporating, li2017demystifying, gupta2017characterizing]. However, these methods are restricted to a single set of averaged style statistics per content/style image pair and lack artistic control. While the method of [risser2017stable] proposed a procedure to artistically control the style of the output image, it requires tedious human effort: users must annotate semantic segmentation masks and correspondences in both the style and content images.
Unlike other existing methods, we introduce the first Color-Aware Multi-Style (CAMS) transfer method that enables style transfer locally based on nearest colors, where multiple styles can be transferred from the style image to the generated one. Our proposed method extracts a color palette from both the content and style images, and automatically constructs the region/color associations. The CAMS method performs style transfer, in which the texture of a specific color in the style image is transferred to the region that has the nearest color in the content image. Figure 1 shows multiple examples of the generated images (bottom row) from a single input content image with different style images (top row). The regions highlighted in yellow and blue indicate two example styles that were transferred from the style image to regions in the generated image based on the nearest color in the content image.
Our proposed framework allows multi-style transfer to be applied in a meaningful way. In particular, styles are transferred in association with colors. By correlating styles and colors, we offer another artistic dimension that preserves the content's color statistics together with the transferred texture. To further allow artistic control, we show how our method lets users manually select the color associations between the reference style and content image for more transfer options. We believe that our proposed framework and the interactive tool are useful for the research community and enable more aesthetically pleasing outputs. Our source code will be publicly released upon acceptance.
Figure 3 illustrates an overview of our method. As shown in the figure, we use color palettes to guide the optimization process. Given two color palettes extracted from the content and style images, respectively, we merge them to generate a single input color palette, $P$, which is then used to generate a set of color masks. We use these masks to weight deep features of both the input and style images from different layers of a pre-trained CNN. This color-aware separation of deep features results in multiple Gram matrices used to compute our style loss during image optimization. In this section, we elaborate on each step of our algorithm.
Given an image, $I$, and a target palette, $P$, our goal is to compute a color mask, $M_i$, for each color $c_i$ in our color palette $P$, such that the final mask reflects the similarity of each pixel in $I$ to color $c_i$. We generate $M_i$ by computing a radial basis function (RBF) between each pixel in $I$ and our target color $c_i$ as follows:

$$M_i(p) = \exp\!\left(-\frac{\big\| I(p) - c_i \big\|_2^2}{2\sigma^2}\right),$$

where $\sigma$ is the RBF fall-off factor, $I(p)$ is the $p$-th pixel in $I$, and $c_i$ is the target color. Next, we blur the generated mask by applying a Gaussian blur kernel whose standard deviation (in pixels) is a hyperparameter. This smoothing step is optional but was empirically found to improve the final results in most cases.
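The mask-generation step above can be sketched as follows. This is a NumPy/SciPy approximation rather than the paper's differentiable implementation; the default `sigma` and the exact RBF normalization are assumptions to be tuned.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def color_masks(image, palette, sigma=0.3, blur_std=None):
    """Soft per-color masks via an RBF on pixel-to-palette-color distance.

    image:    (H, W, 3) array with values in [0, 1]
    palette:  (K, 3) array of palette colors
    sigma:    RBF fall-off factor (hypothetical default; tune per image)
    blur_std: optional Gaussian-blur std in pixels (the optional smoothing)
    Returns a (K, H, W) array of masks, one per palette color.
    """
    masks = []
    for color in palette:
        d2 = np.sum((image - color) ** 2, axis=-1)   # squared color distance
        m = np.exp(-d2 / (2.0 * sigma ** 2))         # RBF similarity in [0, 1]
        if blur_std is not None:                     # optional smoothing step
            m = gaussian_filter(m, blur_std)
        masks.append(m)
    return np.stack(masks)
```

A pixel exactly matching a palette color receives mask weight 1 for that color, and the weight decays smoothly with color distance.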
For each of $I_s$ and $I_g$, we generate a set of masks, each of which is computed for a color in our color palette, $P$. Here, $I_g$ refers to the current value of the image we optimize, not the final generated image. Note that computing our color masks is performed through differentiable operations and, thus, can be easily integrated into our optimization.
After computing the color mask sets $\{M_i^s\}$ and $\{M_i^g\}$ for $I_s$ and $I_g$, respectively, we compute two sets of weighted Gram matrices, one for $I_s$ and one for $I_g$. According to the given mask weights, each set of weighted Gram matrices captures the correlation between deep features (extracted from several layers of the network) of the pixels of interest in the image. These weighted Gram matrices help our method focus on transferring styles only between corresponding pixels of interest in the style image and our generated image during optimization.
For a color $c_i$ in our color palette $P$, this weighted Gram matrix, $\hat{G}_i^l$, is computed as follows:

$$\hat{G}^l_{i,(j,k)} = \frac{1}{n_l}\,\big\langle F_j^l \circ \hat{M}_i,\; F_k^l \circ \hat{M}_i \big\rangle,$$

where $F_j^l$ and $F_k^l$ are the $j$-th and $k$-th vectorized feature maps of network layer $l$ after weighting, $n_l$ is the number of elements in each map of that layer, $\circ$ is the Hadamard product, and $\hat{M}_i$ represents the computed mask for $c_i$ after the following processing. First, we linearly interpolate the width and height of the computed mask for color $c_i$ to match the width and height of the original feature map, $F^l$, before vectorization. Second, we duplicate the mask to have the same number of channels as $F^l$.
For each layer in the pre-trained classification network and based on Equation 7, we compute $\hat{G}_{i,s}^l$ and $\hat{G}_{i,g}^l$ for our style and generated images, respectively. Finally, our color-aware style loss is computed as follows:

$$\mathcal{L}_{CA} = \sum_i \sum_l w_l \big\| \hat{G}_{i,g}^l - \hat{G}_{i,s}^l \big\|_F^2.$$
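The two steps above (mask resizing plus Hadamard weighting, then the loss over colors and layers) can be sketched as below. This is a NumPy/SciPy illustration under the same assumptions as before (`(C, H, W)` feature maps, `1/n_l` normalization); the real implementation performs these operations differentiably on VGG features.

```python
import numpy as np
from scipy.ndimage import zoom

def weighted_gram(F, mask):
    """Mask-weighted Gram matrix (Eq. 7): the color mask is resized to the
    feature-map resolution, broadcast across channels (Hadamard product),
    and the Gram matrix is computed on the weighted features."""
    C, H, W = F.shape
    mh, mw = mask.shape
    m = zoom(mask, (H / mh, W / mw), order=1)  # linear interpolation resize
    m = m[None, :, :]                          # duplicate across channels
    Fw = (F * m).reshape(C, -1)                # Hadamard weighting + vectorize
    n = Fw.shape[1]
    return (Fw @ Fw.T) / n

def color_aware_style_loss(feats_g, feats_s, masks_g, masks_s, weights):
    """Eq. 8 sketch: sum weighted-Gram differences over colors and layers."""
    loss = 0.0
    for k in range(len(masks_g)):              # one mask per palette color
        for w, Fg, Fs in zip(weights, feats_g, feats_s):
            Gg = weighted_gram(Fg, masks_g[k])
            Gs = weighted_gram(Fs, masks_s[k])
            loss += w * np.sum((Gg - Gs) ** 2)
    return float(loss)
```

Because each palette color contributes its own Gram matrices, style statistics are matched per color region rather than averaged over the whole image.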
By generating different weighted Gram matrices, our method is able to convey the different styles present in the style image, which is not feasible using classic Gram matrix optimization. As shown in Figure 4, the style image includes different styles and textures. NST using the Gram matrix (e.g., Gatys et al. [gatys2015neural]) fails to capture these multiple styles in the reference style image and produces an unpleasing result, as shown in the third column of Figure 4. In contrast, our color-aware loss considers these styles and effectively transfers them to the generated image, as shown in the last column of Figure 4. For example, the text is transferred from the letter (white background) in the style image to the man's white beard in the generated image.
The flow of our method is shown in Algorithm 1. First, we initialize each pixel in $I_g$ with the corresponding pixel in $I_c$. Afterward, we generate two color palettes for $I_c$ and $I_s$, respectively. We used the algorithm proposed in [chang2015palette] to extract the color palette of each image. The number of colors per palette is a hyperparameter that can be changed to obtain different results. In our experiments, we extracted a five-color palette from each of our content and style images. Then, we merge them to generate the final color palette, $P$. After merging, the final color palette has at most ten colors, as we exclude redundant colors.
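The palette-merging step can be sketched as follows. The paper only states that redundant colors are excluded after merging, so the distance threshold `tol` below is a hypothetical choice, not a value from the paper.

```python
import numpy as np

def merge_palettes(palette_c, palette_s, tol=0.1):
    """Merge content/style palettes, dropping near-duplicate colors.

    palette_c, palette_s: (K, 3) arrays (e.g., K = 5 colors each)
    tol: Euclidean distance below which two colors are treated as
         redundant (hypothetical threshold)
    Returns the merged palette with at most 2K colors.
    """
    merged = []
    for color in np.concatenate([palette_c, palette_s]):
        # keep a color only if it is not too close to an already-kept one
        if all(np.linalg.norm(color - kept) >= tol for kept in merged):
            merged.append(color)
    return np.array(merged)
```

With two five-color palettes, the merged palette therefore has between five and ten colors, depending on how many near-duplicates are removed.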
After constructing $P$, we generate color masks $\{M_i^g\}$ and $\{M_i^s\}$ for $I_g$ and $I_s$, respectively. Then, we extract deep features from $I_s$ and $I_c$, which represent our target style latent representation and content latent representation, respectively. We adopted the VGG-19 net [simonyan2014very] as our backbone to extract these deep features, using intermediate conv layers for the content loss and the first five conv layers for our color-aware style loss. We then construct the weighted Gram matrices, as described in Section 3.2, using the deep features of the style and generated images. The weighted Gram matrices of the generated and style images, and the deep features of the generated and content images, are used to compute our color-aware style loss (Equation 8) and the content loss (Equation 1), respectively. Then, the final loss is computed as:

$$\mathcal{L} = \alpha\,\mathcal{L}_{content} + \beta\,\mathcal{L}_{CA},$$
where $\alpha$ and $\beta$ are weighting factors set to fixed values in our experiments. After each iteration, we update the color masks of our generated image to track changes in $I_g$ during optimization. To minimize Equation 9, we adopted the L-BFGS algorithm [liu1989limited] for 300 iterations with a learning rate of 0.5.
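The optimization loop can be illustrated with a toy stand-in. The sketch below uses SciPy's L-BFGS on a simple quadratic surrogate of the content-plus-style objective; the targets, weights, and dimensionality are all illustrative, and a real implementation would instead run `torch.optim.LBFGS` over the generated image's pixels with VGG-based losses.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-ins for the two loss terms' targets (illustrative values only).
target_content = np.full(12, 0.25)
target_style = np.full(12, 0.75)
alpha, beta = 1.0, 1.0               # loss weights (illustrative)

def loss_and_grad(x):
    """Quadratic surrogate of alpha*L_content + beta*L_style and its gradient."""
    g_c = x - target_content
    g_s = x - target_style
    loss = alpha * np.sum(g_c ** 2) + beta * np.sum(g_s ** 2)
    grad = 2 * alpha * g_c + 2 * beta * g_s
    return loss, grad

x0 = np.zeros(12)                    # the paper initializes I_g from I_c
res = minimize(loss_and_grad, x0, jac=True, method="L-BFGS-B",
               options={"maxiter": 300})
```

With equal weights, the minimizer lands midway between the two targets, mirroring how alpha and beta trade off content reconstruction against style transfer.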
To generate our masks, we have a hyperparameter, $\sigma$, that can be interactively adjusted to control the RBF fall-off, which consequently affects the final result of the optimization. Our experiments found that a single fixed value of $\sigma$ works well in most cases.
A nice feature of our method is that it allows more transfer flexibility by enabling the user to determine the color associations between our style and content images. To that end, we follow the same procedure explained in Algorithm 1 with one exception: we do not update the color masks of the generated image, $I_g$, so that the user's selection is preserved. Figure 5 shows two use cases that reflect the benefit of our manual selection tool. As shown in the first case (top row), the user associates the reddish color in the content image's color palette with different colors in the style image's color palette. Based on this selection, our generated image transfers styles according to this color-style correlation. In particular, the change affects only the style of pixels associated with reddish pixels in the face region. As can also be seen in the top row, the transferred style is constrained to the styles associated with the selected colors in the style image's color palette.
For the second use case in Figure 5 (bottom row), the automatic mode struggled to transfer multiple styles because the given style image has a limited range of colors (i.e., only gray-level variations). Such style images, which have limited style options to offer, may result in less appealing outputs. Nevertheless, our manual color-association tool gives the user the flexibility to modify the generated image for more aesthetically pleasing outputs by associating colors and restricting the modified region, as shown in Figure 5 (bottom row).
Evaluating NST techniques is a challenging problem facing the NST community, as indicated in [li2017universal, jing2019neural]. With that said, user studies have been widely adopted in the literature to evaluate subjective results. For the sake of completeness, we conducted a user study to evaluate our proposed color-aware NST against other relevant techniques. For each image pair (style/content), we compared six different NST methods: our method and five prior techniques, namely neural style transfer (NST) by Gatys et al. [gatys2015neural], adaptive instance normalization (AdaIN) [huang2017arbitrary], Avatar-Net [sheng2018avatar], linear style transfer (LST) [li2018learning], and relaxed optimal transport (ROT) [kolkin2019style]. The subjects were asked to anonymously score the result of each method from one to five, where a higher score indicates a more appealing result. We evaluated these methods on eight different image pairs (shown in Figure 6) and collected answers from 30 subjects (60% female, 40% male).
Table 1 shows the results of the user study. As can be seen, 38% of the subjects rated the results of our method as highly appealing (a score of five). In contrast, the second-best method (NST [gatys2015neural]) received a score of five from only 16% of the votes. The study emphasizes the superiority of our proposed method over other methods, especially in capturing multiple styles from the style images (see Figure 6).
Table 1 (excerpt): score distribution for Avatar-Net [sheng2018avatar]: 50%, 27%, 17%, 4%, 2%.
We have shown that Gram matrix-based optimization methods often fail to produce pleasing results when the target style image has multiple styles. To address this limitation, we have presented a color-aware multi-style loss that captures correlations between different styles and colors in both the style and generated images. Our method is efficient, simple, and easy to implement, achieving pleasing results while capturing the different styles in the given reference image. We have also illustrated how our method can be used interactively, by enabling users to manually control how styles are transferred from the given style image. Finally, through a user study, we showed that our method achieves the most visually appealing results compared to alternative style transfer methods.