Artistic style transfer recomposes a content image using the global and local style patterns of a style image, spread evenly over the content image while preserving the content image's original structure. The seminal work of Gatys et al. [5] is flexible enough to combine the content and style of arbitrary images, but it is prohibitively slow due to its iterative optimization process.
Significant efforts have been made to reduce this computational cost. Several approaches [1, 8, 12, 22, 3, 14, 19, 26, 29] have been developed based on feed-forward networks. The feed-forward methods can synthesize stylized images efficiently, but are limited to a fixed set of styles or suffer from insufficient visual quality.
The AdaIN simply adjusts the mean and variance of the content image to match those of the style image. Although the AdaIN effectively combines the structure of the content image with the style patterns by transferring feature statistics, its output suffers in quality because of the over-simplified nature of this method. The WCT transforms the content features into the style feature space through a whitening and coloring process that uses the covariance instead of the variance. By embedding these stylized features within a pre-trained encoder-decoder module, the style-free decoder synthesizes the stylized image. However, when the features have a large dimension, the WCT requires computationally expensive operations. The AvatarNet proposed a patch-based style decorator module that maps the content features onto the characteristics of the style patterns while maintaining the content structure. The AvatarNet considers not only the holistic style distribution but also the local style patterns. However, despite these valuable efforts, these methods still cannot reflect the detailed texture of the style image; they distort content structures or fail to balance the local and global style patterns.
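As a reference point, the mean-variance matching at the heart of the AdaIN can be sketched in a few lines of PyTorch (a minimal illustration under our own naming, not the original implementation):

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adjust the channel-wise mean and standard deviation of the
    content features to match those of the style features."""
    b, c = content_feat.shape[:2]
    c_flat = content_feat.reshape(b, c, -1)
    s_flat = style_feat.reshape(b, c, -1)
    c_mean, c_std = c_flat.mean(-1, keepdim=True), c_flat.std(-1, keepdim=True) + eps
    s_mean, s_std = s_flat.mean(-1, keepdim=True), s_flat.std(-1, keepdim=True) + eps
    normalized = (c_flat - c_mean) / c_std
    return (normalized * s_std + s_mean).reshape_as(content_feat)
```

Because the transform uses only two scalars per channel, it captures the global color and texture statistics but, as discussed above, cannot represent local style patterns.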
In this work, we propose a novel arbitrary style transfer algorithm that synthesizes high-quality stylized images in real time while preserving the content structure. This is achieved by a new style-attentional network (SANet) and a novel identity loss function. For arbitrary style transfer, our feed-forward network, composed of SANets and decoders, learns the semantic correlations between the content features and the style features by spatially rearranging the style features according to the content features.
Our SANet is closely related to the style feature decorator of the AvatarNet [20]. However, the biggest difference between the two approaches is that the SANet can flexibly decorate the style features by learning through the conventional style reconstruction loss and our identity loss. In addition, our identity loss function helps the SANet maintain as much of the original content structure as possible while keeping the diversity of global and local style patterns. The main contributions of our work are:
We propose a new style-attentional network that flexibly matches the semantically nearest style features onto the content features.
We present a learning approach for a feed-forward network composed of style-attentional networks and decoders, which can be optimized using a conventional style reconstruction loss and a new identity loss.
Our experiments show that our method is highly efficient (about 100-150 fps) at synthesizing high-quality stylized images, balancing the global and local style patterns while preserving the content structure.
2 Related Work
Arbitrary Style Transfer. The ultimate goal of arbitrary style transfer is to simultaneously achieve generalization, quality, and efficiency. Despite recent advances, existing works [5, 4, 1, 8, 12, 22, 3, 6, 10, 11, 23, 24, 28, 18] present a trade-off among these three criteria. Recently, several methods [13, 20, 2, 7] have been proposed to achieve arbitrary style transfer. The AdaIN algorithm simply adjusts the mean and variance of the content image to match those of the style image by transferring global feature statistics. The WCT performs a pair of feature transforms, whitening and coloring, for feature embedding within a pre-trained encoder-decoder module. The AvatarNet proposed a patch-based feature decorator that transfers the content features to the semantically nearest style features while minimizing the difference between their holistic feature distributions. In many cases, we observe that the results of the WCT and the AvatarNet fail to sufficiently represent the detailed texture or maintain the content structure. We conjecture that the WCT and the AvatarNet fail to synthesize detailed texture styles because of their pre-trained general encoder-decoder networks, which are trained on general images, such as the MS-COCO dataset, that differ greatly from artworks in style characteristics. These methods map the style features onto the content features in the feature space, but offer no way to control the global statistics of the style or the content structure. Although the AvatarNet can capture local style patterns through the patch-based style decorator, the scale of the captured style patterns depends on the patch size; therefore, the global and local style patterns cannot both be taken into consideration. On the other hand, the AdaIN transfers the texture and color distribution well but does not represent local style patterns well.
There also exists a trade-off between content and style due to the combined, scale-adapted content and style losses. In this paper, we address these problems using the SANets and the identity loss. In this way, the proposed style transfer network can represent both global and local style patterns and maintain the content structure without losing the richness of the style.
Self-Attention Mechanism. Our style-attentional module is related to the recent self-attention methods [25, 30] for image generation and machine translation. These models calculate the response at a position in a sequence or an image by attending to all positions and taking their weighted average in an embedding space. The proposed style-attentional network learns the mapping between the content features and the style features by slightly modifying the self-attention mechanism.
3 Proposed Method
In this paper, a novel style transfer network is proposed. The style transfer network is composed of an encoder-decoder module and a style-attentional module, as shown in Fig. 2. The proposed feed-forward network effectively generates high-quality stylized images that appropriately reflect global and local style patterns. Our new identity loss function helps to maintain the detailed structure of the content while reflecting the style sufficiently.
3.1 Network Architecture
Our style transfer network takes a content image and an arbitrary style image as inputs and synthesizes a stylized image with the semantic structures of the former and the characteristics of the latter. In this work, the pre-trained VGG-19 network is employed as the encoder, and a symmetric decoder and two SANets are jointly trained for arbitrary style transfer. Our decoder follows the setting of [7].
To adequately combine global and local style patterns, we integrate two SANets by taking the VGG feature maps encoded at different layers (Relu_4_1 and Relu_5_1) as inputs and combining both output feature maps. From a pair of a content image $I_c$ and a style image $I_s$, we first extract their VGG feature maps $F_c = E(I_c)$ and $F_s = E(I_s)$ at a certain layer (e.g., Relu_4_1) of the encoder.
After encoding the content and style images, we feed both feature maps to a SANet module that maps correspondences between the content feature maps $F_c$ and the style feature maps $F_s$, producing the output feature maps $F_{cs}$:

$$F_{cs} = \mathrm{SANet}(F_c, F_s).$$
After applying a $1 \times 1$ convolution to $F_{cs}$ and taking the element-wise sum of the two matrices, we obtain $F_{csc}$:

$$F_{csc} = F_c \oplus \mathrm{Conv}_{1\times1}(F_{cs}),$$

where $\oplus$ denotes the element-wise sum.
We combine the two output feature maps from the two SANets as

$$F_{csc}^{m} = \mathrm{Conv}_{3\times3}\big(F_{csc}^{r4\_1} \oplus \mathrm{upsampling}(F_{csc}^{r5\_1})\big),$$

where $F_{csc}^{r4\_1}$ and $F_{csc}^{r5\_1}$ are the output feature maps obtained from the two SANets, $\mathrm{Conv}_{3\times3}$ denotes a $3 \times 3$ convolution that combines the two feature maps, and $F_{csc}^{r5\_1}$ is added to $F_{csc}^{r4\_1}$ after upsampling.
Then, the stylized output image $I_{cs}$ is synthesized by feeding $F_{csc}^{m}$ into the decoder: $I_{cs} = D(F_{csc}^{m})$.
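The combination of the two SANet outputs described in this subsection can be sketched in PyTorch as follows. This is our own reading of the description, assuming 512 channels at both layers (the VGG-19 width at Relu_4_1 and Relu_5_1); the module and parameter names are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SANetFusion(nn.Module):
    """Sketch: at each layer, add a 1x1 convolution of the SANet output to the
    content features; then merge the two layers with a 3x3 convolution after
    upsampling the coarser branch."""
    def __init__(self, channels=512):
        super().__init__()
        self.conv1x1_r41 = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv1x1_r51 = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv3x3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, fc_r41, fcs_r41, fc_r51, fcs_r51):
        fcsc_r41 = fc_r41 + self.conv1x1_r41(fcs_r41)   # element-wise sum
        fcsc_r51 = fc_r51 + self.conv1x1_r51(fcs_r51)
        # upsample the coarser Relu_5_1 branch to the Relu_4_1 resolution
        up = F.interpolate(fcsc_r51, size=fcsc_r41.shape[-2:], mode='nearest')
        return self.conv3x3(fcsc_r41 + up)
```

The fused feature map would then be passed to the decoder to synthesize the stylized image.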
3.2 Style-Attentional Network for Style Feature Embedding
Fig. 3 shows style feature embedding using the SANet module. The content feature maps $F_c$ and style feature maps $F_s$ from the encoder are normalized and then transformed into two feature spaces to calculate the attention between $\bar{F}_c^i$ and $\bar{F}_s^j$ as:

$$F_{cs}^i = \frac{1}{C(F)} \sum_{\forall j} \exp\!\big(f(\bar{F}_c^i)^{T} g(\bar{F}_s^j)\big)\, h(F_s^j),$$

where $f(\bar{F}_c) = W_f \bar{F}_c$, $g(\bar{F}_s) = W_g \bar{F}_s$, $h(F_s) = W_h F_s$, and $\bar{F}$ denotes a channel-wise normalized version of $F$. The response is normalized by the factor $C(F) = \sum_{\forall j} \exp\!\big(f(\bar{F}_c^i)^{T} g(\bar{F}_s^j)\big)$. Here, $i$ is the index of an output position and $j$ is the index that enumerates all possible positions. In the above formulation, $W_f$, $W_g$, and $W_h$ are learned weight matrices, which are implemented as $1 \times 1$ convolutions as in [30].
Our SANet has a network structure similar to the existing non-local block [27], but the number of inputs is different (the inputs of the SANet are $F_c$ and $F_s$). The SANet module can appropriately embed a local style pattern at each position of the content feature maps by learning the relationship (such as an affinity) between the content feature maps and the style feature maps.
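One possible PyTorch sketch of the SANet block follows; this is our own reading of the formulation, not the authors' released code, and the softmax plays the role of the normalization factor $C(F)$:

```python
import torch
import torch.nn as nn

def channel_norm(feat, eps=1e-5):
    """Channel-wise mean-variance normalization (the bar notation)."""
    b, c = feat.shape[:2]
    flat = feat.reshape(b, c, -1)
    mean = flat.mean(-1, keepdim=True)
    std = flat.std(-1, keepdim=True) + eps
    return ((flat - mean) / std).reshape_as(feat)

class SANet(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # W_f, W_g, W_h implemented as learnable 1x1 convolutions
        self.f = nn.Conv2d(channels, channels, kernel_size=1)
        self.g = nn.Conv2d(channels, channels, kernel_size=1)
        self.h = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, fc, fs):
        b, c, h, w = fc.shape
        q = self.f(channel_norm(fc)).reshape(b, c, -1)   # from normalized content
        k = self.g(channel_norm(fs)).reshape(b, c, -1)   # from normalized style
        v = self.h(fs).reshape(b, c, -1)                 # unnormalized style values
        # each content position attends over all style positions
        attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)
        out = torch.bmm(v, attn.transpose(1, 2))
        return out.reshape(b, c, h, w)
```

Because the attention runs over all style positions, the content and style feature maps may have different spatial sizes.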
3.3 Full Objective
We train the network using the full objective

$$\mathcal{L} = \lambda_c \mathcal{L}_c + \lambda_s \mathcal{L}_s + \mathcal{L}_{identity},$$

which is composed of the content loss $\mathcal{L}_c$, the style loss $\mathcal{L}_s$, and the identity loss $\mathcal{L}_{identity}$; $\lambda_c$ and $\lambda_s$ are the weights of the different losses.
Similar to [7], the content loss is the Euclidean distance between the channel-wise normalized target features $\bar{F}_c^{r4\_1}$ and $\bar{F}_c^{r5\_1}$ and the channel-wise normalized VGG features of the output image, $\bar{F}(I_{cs})^{r4\_1}$ and $\bar{F}(I_{cs})^{r5\_1}$:

$$\mathcal{L}_c = \lVert \bar{F}(I_{cs})^{r4\_1} - \bar{F}_c^{r4\_1} \rVert_2 + \lVert \bar{F}(I_{cs})^{r5\_1} - \bar{F}_c^{r5\_1} \rVert_2.$$
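A sketch of this content loss in PyTorch (assuming the feature maps are supplied by an external VGG encoder; the helper names are ours):

```python
import torch

def channel_norm_flat(feat, eps=1e-5):
    """Channel-wise mean-variance normalization, returned flattened."""
    flat = feat.flatten(2)
    return (flat - flat.mean(-1, keepdim=True)) / (flat.std(-1, keepdim=True) + eps)

def content_loss(out_feats, content_feats):
    """Euclidean distance between channel-wise normalized features,
    summed over the Relu_4_1 and Relu_5_1 layers."""
    return sum(torch.norm(channel_norm_flat(o) - channel_norm_flat(t))
               for o, t in zip(out_feats, content_feats))
```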
The style loss is defined as:

$$\mathcal{L}_s = \sum_{i=1}^{L} \lVert \mu(\phi_i(I_{cs})) - \mu(\phi_i(I_s)) \rVert_2 + \lVert \sigma(\phi_i(I_{cs})) - \sigma(\phi_i(I_s)) \rVert_2,$$

where each $\phi_i$ denotes a layer in the encoder used to compute the style loss, and $\mu$ and $\sigma$ are the channel-wise mean and standard deviation. We use the Relu_1_1, Relu_2_1, Relu_3_1, Relu_4_1, and Relu_5_1 layers with equal weights. We tried both the Gram matrix loss [5] and the AdaIN style loss [7], and found the AdaIN style loss more satisfactory.
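The AdaIN-style loss used here can be sketched as follows (a minimal version; `phi_out` and `phi_style` stand for lists of the five encoder activations, one per layer):

```python
import torch

def mean_std(feat, eps=1e-5):
    """Per-channel mean and standard deviation of a (B, C, H, W) tensor."""
    flat = feat.flatten(2)
    return flat.mean(-1), flat.std(-1) + eps

def style_loss(phi_out, phi_style):
    """Match the per-channel mean and standard deviation of the output and
    style activations at each encoder layer, with equal layer weights."""
    loss = 0.0
    for o, s in zip(phi_out, phi_style):
        mu_o, sig_o = mean_std(o)
        mu_s, sig_s = mean_std(s)
        loss = loss + torch.norm(mu_o - mu_s) + torch.norm(sig_o - sig_s)
    return loss
```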
When $W_f$, $W_g$, and $W_h$ are fixed as identity matrices, each position in the content feature maps is transformed into the semantically nearest feature in the style feature maps; in this case, the network cannot parse sufficient style features. In the SANet, although $W_f$, $W_g$, and $W_h$ are learnable matrices, our style transfer model can still be trained to consider only the global statistics through the style loss $\mathcal{L}_s$.
To consider both the global statistics and the semantically local mapping between the content features and the style features, we define a new identity loss function as:

$$\mathcal{L}_{identity} = \lambda_{id1}\big(\lVert I_{cc} - I_c \rVert_2 + \lVert I_{ss} - I_s \rVert_2\big) + \lambda_{id2}\sum_{i=1}^{L}\big(\lVert \phi_i(I_{cc}) - \phi_i(I_c) \rVert_2 + \lVert \phi_i(I_{ss}) - \phi_i(I_s) \rVert_2\big),$$

where $I_{cc}$ (or $I_{ss}$) denotes the output image synthesized from two identical content (or style) images, each $\phi_i$ denotes a layer in the encoder, and $\lambda_{id1}$ and $\lambda_{id2}$ are the identity loss weights. The weighting parameters $\lambda_c$, $\lambda_s$, $\lambda_{id1}$, and $\lambda_{id2}$ are simply set to fixed values in our experiments.
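The identity loss above can be sketched directly (our own naming; `phi_*` are lists of encoder activations, and the weights are passed in explicitly since the paper fixes them experimentally):

```python
import torch

def identity_loss(i_cc, i_c, i_ss, i_s,
                  phi_cc, phi_c, phi_ss, phi_s,
                  lam_id1, lam_id2):
    """I_cc (I_ss) is the image synthesized from two identical copies of the
    content (style) image; the loss penalizes any deviation from the input
    in both pixel space and encoder feature space."""
    pixel = torch.norm(i_cc - i_c) + torch.norm(i_ss - i_s)
    feat = sum(torch.norm(a - b) for a, b in zip(phi_cc, phi_c)) \
         + sum(torch.norm(a - b) for a, b in zip(phi_ss, phi_s))
    return lam_id1 * pixel + lam_id2 * feat
```

Because both inputs of each pair are the same image, the loss is zero exactly when the network acts as an identity on same-style pairs, which is what pushes the SANet to preserve content structure.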
The content loss and the style loss control the trade-off between the structure of the content image and the style patterns. Unlike these two losses, the identity loss is calculated from a pair of identical input images, between which there is no gap in style characteristics. Therefore, the identity loss concentrates on keeping the structure of the content image rather than changing the style statistics. As a result, the identity loss makes it possible to maintain the structure of the content image and the style characteristics of the reference image at the same time.
Table 1. Execution time comparison.

| Method | Time (256px) | Time (512px) |
|---|---|---|
| Gatys et al. | 15.863 | 50.804 |
4 Experimental Results
Fig. 2 shows an overview of our style transfer network based on the proposed SANets. Code and pretrained models (in PyTorch) will be made available to the public.
4.1 Experimental Settings
We train the network using MS-COCO [15] as content images and WikiArt [17] as style images; both datasets contain roughly 80,000 training images. We use the Adam optimizer [9] with a learning rate of 0.0001 and a batch size of 5 content-style image pairs. During training, we first rescale the smaller dimension of both images to 512 while preserving the aspect ratio, and then randomly crop a region of 256x256 pixels. At test time, our network can handle any input size because it is fully convolutional.
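The resizing and cropping step described above can be sketched in plain PyTorch (a hypothetical helper, not the authors' training script):

```python
import random
import torch
import torch.nn.functional as F

def preprocess(img, load_size=512, crop_size=256):
    """img: (3, H, W) tensor. Rescale the smaller side to load_size while
    preserving the aspect ratio, then take a random crop_size crop."""
    _, h, w = img.shape
    scale = load_size / min(h, w)
    nh = max(round(h * scale), load_size)
    nw = max(round(w * scale), load_size)
    img = F.interpolate(img.unsqueeze(0), size=(nh, nw), mode='bilinear',
                        align_corners=False).squeeze(0)
    top = random.randint(0, nh - crop_size)
    left = random.randint(0, nw - crop_size)
    return img[:, top:top + crop_size, left:left + crop_size]
```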
4.2 Comparison with Prior Works
To evaluate our method, we compare it with three types of arbitrary style transfer methods: iterative optimization [5], feature-transformation-based methods [13, 7], and the patch-based method [20].
Qualitative examples. In Fig. 11, we show style transfer results synthesized by the state-of-the-art methods; more results are given in the supplementary materials. Note that none of the test style images were observed during the training of our model. The optimization-based method [5] allows arbitrary style transfer but is likely to encounter a bad local minimum (e.g., rows 2 and 4) and is computationally expensive because of its iterative optimization process (see Table 1).
The AdaIN [7] simply adjusts the mean and variance of the content features to synthesize the stylized image. However, its results are less appealing and often retain some of the color distribution of the content because of the trade-off between content and style (e.g., rows 1, 2, and 8). Moreover, both the AdaIN [7] and the WCT [13] sometimes yield distorted local style patterns because they holistically adjust the content features to match the second-order statistics of the style features, as shown in Fig. 11. Although the AvatarNet [20] decorates the style patterns according to the semantic spatial distribution of the content image and applies multi-scale style transfer, it frequently cannot represent the local and global style patterns at the same time because of its dependency on the patch size; it also fails to keep the content structure in most cases (see the 4th column in Fig. 11). In contrast, our method can parse diverse style patterns, such as the global color distribution, texture, and local style patterns, while maintaining the content structure in most examples, as shown in Fig. 11.
Unlike other algorithms, our learnable SANets can flexibly parse sufficient style features without maximally aligning the content and style features, regardless of any large domain gap (see rows 1 and 6 in Fig. 11). The proposed SANet semantically distinguishes the content structures and transfers similar style patterns onto regions with the same semantic meaning; in other words, our method transfers different styles for each semantic content. In Fig. 11 (row 3), our stylized image shows the sky and the buildings stylized by different style patterns, whereas in the other results the style boundaries between the sky and the buildings are ambiguous.
We also provide close-ups of the results in Fig. 4. Our results represent the multi-scale style patterns well (e.g., the color distribution, brush strokes, and white and red patterns of rough textures in the style image). The AvatarNet and the WCT distort the brush strokes and blur the hair texture, and the facial appearance is not maintained. The AdaIN cannot even maintain the color distribution.
User study. We used 14 content images and 70 style images to synthesize 980 images in total. We randomly selected 30 content-style combinations for each subject and showed the stylized images produced by the 5 compared methods side-by-side in a random order. We then asked each subject to vote for his/her favorite result for each style. We collected 2,400 votes from 80 users; the percentage of votes for each method is shown in Fig. 5. The study shows that our method is favored for producing better stylized results.
Efficiency. Table 1 shows the run times of the proposed method and other methods at two image scales: 256 and 512. We measured the run time including the time for style encoding. Our algorithm runs at 165 fps and 125 fps for the single-scale (Relu_4_1 only) and multi-scale (Relu_4_1 and Relu_5_1) models, respectively, and the performance at 256 x 256 and 512 x 512 is nearly the same. Our method can therefore process style transfer in real time, and our model is approximately 40 times faster than the matrix-computation-based methods (the WCT [13] and the AvatarNet [20]).
4.3 Ablation Studies
Loss analysis. In this section, we show the influence of the content-style loss and the identity loss. Fig. 6 (a) shows the results obtained by fixing $\lambda_{id1}$, $\lambda_{id2}$, and $\lambda_s$ at 0, 0, and 5, respectively, and increasing $\lambda_c$ from 1 to 50. Fig. 6 (b) shows the results obtained by fixing $\lambda_c$ and $\lambda_s$ at 0 and 5, respectively, and increasing $\lambda_{id1}$ from 1 to 100 and $\lambda_{id2}$ from 50 to 5000. Without the identity loss, if we increase the weight of the content loss, the content structure is preserved, but the characteristics of the style patterns disappear because of the trade-off between the content loss and the style loss. On the other hand, increasing the weights of the identity loss without the content loss preserves the content structure as much as possible while maintaining the style patterns; however, distortion of the content structure cannot be completely avoided. We therefore apply a combination of the content-style loss and the identity loss to maintain the content structure while enriching the style patterns.
Multi-level feature embedding. Fig. 7 shows the two stylized outputs obtained from Relu_4_1 and Relu_5_1, respectively. When only Relu_4_1 is used for style transfer, the global statistics of the style features and the content structure are maintained well, but the local style patterns do not appear well. On the other hand, Relu_5_1 helps add local style patterns, such as circular patterns, because its receptive field is relatively wider; however, the content structures are distorted and textures such as brush strokes disappear. In our work, to enrich the style patterns, we integrate two SANets by taking the VGG feature maps encoded at different layers (Relu_4_1 and Relu_5_1) as inputs and combining both output feature maps.
4.4 Runtime Controls
In this section, we present the flexibility of our method through several applications.
Content-style trade-off. The degree of stylization can be controlled during training by adjusting the style weight $\lambda_s$ in Equ. 6, or at test time by interpolating between the feature maps that are fed to the decoder. For runtime control, we adjust the stylized features as $F_{csc} \leftarrow (1-\alpha) F_{cc} + \alpha F_{csc}$, where $F_{cc}$ is obtained by feeding two copies of the content image to our model. The network tries to reconstruct the content image when $\alpha = 0$ and synthesizes the most stylized image when $\alpha = 1$ (results shown in Fig. 8).
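This runtime interpolation is a one-liner; a sketch with our own naming:

```python
import torch

def content_style_tradeoff(f_cc, f_csc, alpha):
    """Blend the content-reconstruction features with the stylized features.
    alpha = 0 reconstructs the content image; alpha = 1 is fully stylized."""
    return (1.0 - alpha) * f_cc + alpha * f_csc
```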
Style interpolation. To interpolate between several style images, a convex combination of the feature maps from different styles is fed into the decoder (results shown in Fig. 9).
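The convex combination can be sketched as follows (the helper name is ours):

```python
import torch

def interpolate_styles(feature_maps, weights):
    """Convex combination of stylized feature maps from different styles;
    the weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-6, "weights must sum to 1"
    return sum(w * f for w, f in zip(weights, feature_maps))
```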
Spatial control. Fig. 10 shows an example of spatially controlling the stylization. A set of masks (Fig. 10, column 3) is additionally required as input to map the spatial correspondence between content regions and styles. We can assign a different style to each spatial region by replacing $F_{cs}$ with $M_k F_{cs}$, where $M_k$ is a simple mask-out operation.
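One way to realize such masked composition (a sketch under our assumption that the masks form a binary partition of the spatial grid):

```python
import torch

def spatial_style_control(stylized_feats, masks):
    """stylized_feats: one stylized feature map per style; masks: binary maps
    (broadcastable to the feature shape) that partition the content regions.
    Each stylized feature map contributes only inside its own region."""
    out = torch.zeros_like(stylized_feats[0])
    for f, m in zip(stylized_feats, masks):
        out = out + f * m
    return out
```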
In this work, we propose a new arbitrary style transfer algorithm that consists of style-attentional networks and decoders. Our algorithm is both effective and efficient. Unlike the patch-based style decorator in the AvatarNet [20], our SANet can flexibly decorate the style features by learning through the conventional style reconstruction loss and the identity loss. Furthermore, the proposed identity loss helps the SANet keep the content structure while enriching the local and global style patterns. Experimental results demonstrate that the proposed method performs favorably against the state-of-the-art algorithms for arbitrary style transfer.
-  D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stylebank: An explicit representation for neural image style transfer. In Proc. CVPR, volume 1, page 4, 2017.
-  T. Q. Chen and M. Schmidt. Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337, 2016.
-  V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. Proc. of ICLR, 2017.
-  L. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems, pages 262–270, 2015.
-  L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
-  L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, and E. Shechtman. Controlling perceptual factors in neural style transfer. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  X. Huang and S. J. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, pages 1510–1519, 2017.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  C. Li and M. Wand. Combining markov random fields and convolutional neural networks for image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2479–2486, 2016.
-  C. Li and M. Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. In European Conference on Computer Vision, pages 702–716. Springer, 2016.
-  Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Diversified texture synthesis with feed-forward networks. In Proc. CVPR, 2017.
-  Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems, pages 386–396, 2017.
-  Y. Li, N. Wang, J. Liu, and X. Hou. Demystifying neural style transfer. arXiv preprint arXiv:1701.01036, 2017.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
-  A. Paszke, S. Chintala, R. Collobert, K. Kavukcuoglu, C. Farabet, S. Bengio, I. Melvin, J. Weston, and J. Mariethoz. PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration, May 2017.
-  F. Phillips and B. Mackintosh. Wiki art gallery, inc.: A case for critical thinking. Issues in Accounting Education, 26(3):593–608, 2011.
-  E. Risser, P. Wilmot, and C. Barnes. Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv preprint arXiv:1701.08893, 2017.
-  F. Shen, S. Yan, and G. Zeng. Meta networks for neural style transfer. arXiv preprint arXiv:1709.04111, 2017.
-  L. Sheng, Z. Lin, J. Shao, and X. Wang. Avatar-net: Multi-scale zero-shot style transfer by feature decoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8242–8250, 2018.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  D. Ulyanov, V. Lebedev, A. Vedaldi, and V. S. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, pages 1349–1357, 2016.
-  D. Ulyanov, A. Vedaldi, and V. Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
-  D. Ulyanov, A. Vedaldi, and V. S. Lempitsky. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In CVPR, volume 1, page 3, 2017.
-  A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
-  H. Wang, X. Liang, H. Zhang, D.-Y. Yeung, and E. P. Xing. Zm-net: Real-time zero-shot image manipulation network. arXiv preprint arXiv:1703.07255, 2017.
-  X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. arXiv preprint arXiv:1711.07971, 10, 2017.
-  X. Wang, G. Oxholm, D. Zhang, and Y.-F. Wang. Multimodal transfer: A hierarchical deep convolutional neural network for fast artistic style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2, page 7, 2017.
-  H. Zhang and K. Dana. Multi-style generative network for real-time transfer. arXiv preprint arXiv:1703.06953, 2017.
-  H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena. Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318, 2018.