1 Introduction
A variety of methods for neural style transfer have been proposed since the seminal work of Gatys et al. [8]. These methods can be roughly categorized into image optimization and model optimization [13]. Methods based on image optimization directly obtain the stylized output by minimizing the content loss and style loss. The style loss can be defined by the Gram matrix [8], histograms [24], or Markov Random Fields (MRFs) [15]. In contrast, methods based on model optimization train neural networks on large datasets such as COCO [21]. The training loss can be defined as a perceptual loss [14] or an MRFs loss [16]. Subsequent works [3, 6, 31] further study the problem of training one network for multiple styles. Recently, [12] proposed to use AdaIN as the feature transform to train one network for arbitrary styles. Apart from image and model optimization, many other works study the problems of semantic style transfer [22, 20, 1], video style transfer [11, 2, 25, 26], portrait style transfer [27], and stereoscopic style transfer [4]. [13] provides a thorough review of the work on style transfer.
In this paper, we study the problem of universal style transfer [18]. Our motivation is to explicitly minimize the losses defined by Gatys et al. [8]. Therefore, our approach does not require training on any predefined styles. Similar to WCT [18], our method is based on a multi-scale encoder-feature transform-decoder framework. We use different layers of the VGG network [30] as encoders and train decoders to invert features back into images. The effect of style transfer is achieved by the feature transform between encoder and decoder; the feature transform is thus the key to universal style transfer. In this work, we focus on the theoretical analysis of the feature transform and propose a new closed-form solution.
Although AdaIN [12] trains its decoder on a large dataset of style images, AdaIN itself is also a feature transform method. It treats the feature of each channel as a Gaussian distribution and assumes the channels are independent. For each channel, AdaIN first normalizes the content feature and then matches it to the style feature. Therefore, it only matches the diagonal elements of the covariance matrices. WCT [18] proposes to use whitening and coloring as the feature transform. Compared with AdaIN, WCT improves the results by matching all the elements of the covariance matrices. Since the channels of Deep Neural Networks (DNNs) are correlated, the non-diagonal elements are essential for representing the style. However, WCT only matches the covariance matrices, which shares a similar spirit with minimizing the style loss of Gatys et al. It does not consider the content loss and cannot well preserve the image structure. Moreover, as pointed out by [17], multiplying an orthogonal matrix between the whitening and coloring matrices also matches the covariance matrices.
[19] shows that matching Gram matrices is equivalent to minimizing the Maximum Mean Discrepancy (MMD) with the second-order polynomial kernel, but it does not give a closed-form solution. Instead, our work reformulates style transfer as an optimal transport problem. Optimal transport seeks a transformation that matches two high-dimensional distributions. For neural style transfer, considering the neural feature at each activation as a high-dimensional sample, we assume the samples of the content and style images are drawn from two Multivariate Gaussian (MVG) distributions. Style transfer is then equivalent to transforming the content samples to fit the distribution of the style samples. Assuming the transformation is linear, we find that both AdaIN and WCT are special cases of our formulation. Although [17] also assumes a linear transformation, it still follows the whitening and coloring pipeline and trains two meta networks to predict the whitening and coloring matrices. In contrast, we directly derive the transformation from the optimal transport formulation.
As described above, there are still infinitely many transformations; for example, multiplying an orthogonal matrix between the whitening and coloring matrices also yields a solution. Therefore, we seek the transformation that additionally minimizes the difference between the transformed feature and the original content feature. This shares a similar spirit with minimizing the content loss of Gatys et al. [8]. We prove that a unique closed-form solution exists once the content loss is considered, and give the detailed proof in the method section. Since the closed-form solution further accounts for the content loss, it preserves structure better than WCT.
Our contributions can be summarized as follows:
1. We present a novel interpretation of neural style transfer by treating it as an optimal transport problem and demonstrate the theoretical relations between our interpretation and former feature transform methods such as AdaIN and WCT.
2. We derive the unique closed-form solution under the optimal transport interpretation by additionally considering the content loss.
3. Our closed-form solution preserves structure better and achieves visually pleasing results.
2 Related Work
Image Optimization Methods based on image optimization directly obtain the stylized output by minimizing the content loss and style loss defined in the feature space. The optimization is usually carried out by backpropagation. [7, 8] propose to use the Gram matrix to define the style of an example image. [15] improves the results by combining MRFs with Convolutional Neural Networks. [1] uses semantic masks to define the style losses within corresponding regions. To improve portrait style transfer, [27] proposes to modify the feature maps to transfer the local color distributions of the example painting onto the content image, which is similar to the gain map proposed by [29]. [9] studies the problem of controlling the perceptual factors during style transfer. [24] improves the results of neural style transfer by incorporating a histogram loss. [25] incorporates a temporal consistency loss into the optimization for video style transfer. Since all the above methods solve the optimization by backpropagation, they are intrinsically time-consuming.
Model Optimization To overcome the speed bottleneck of backpropagation, [14, 16] propose to train a feed-forward network to approximate the optimization process: instead of optimizing the image, they optimize the parameters of a network. Since it is tedious to train one network per style, [3, 6, 31] further study the problem of training one network for multiple styles. Later, [5] presents a method based on patch swap for arbitrary style transfer: the content and style images are first forwarded through a deep neural network to extract features; the style transfer is then formulated as a neural patch swap that produces a reconstructed feature map, which a decoder network inverts back to image space. Since then, the encoder-feature transform-decoder framework has been widely explored for arbitrary style transfer. [12] uses AdaIN as the feature transform and trains the decoder over large collections of content and style images. [17] trains two meta networks for the whitening and coloring matrices, following the formulation of WCT [18]. Many other works extend neural style transfer to video [2, 11, 26] and stereoscopic style transfer [4]; these works usually train additional networks jointly with the style transfer network.
Universal Style Transfer Universal style transfer [18] is also based on the encoder-feature transform-decoder framework. Unlike AdaIN [12], it does not require network training on any style image. It directly uses different layers of the VGG network as encoders and trains decoders to invert the features back into images. The style transfer effect is achieved by the feature transform. [5] replaces the patches of the content feature by the most similar patches of the style feature. However, the nearest neighbor search achieves a weaker transfer effect since it tends to preserve the original appearance. AdaIN treats the activations of each channel as a Gaussian distribution and matches the content and style images through the mean and variance. However, since the channels of a CNN are correlated, AdaIN cannot achieve a visually pleasing transfer effect. WCT [18] proposes to use feature whitening and coloring to match the covariance matrices of the style and content images. However, as pointed out by [17], WCT is not the only way to match the covariance matrices. [28] proposes a method that combines patch match with WCT and AdaIN: instead of finding the nearest neighbor with the original feature, it finds the nearest neighbor with a projected feature generated by AdaIN or WCT. However, none of the above methods gives a theoretical analysis of the feature transform. The key observation of current works such as WCT is matching the covariance matrices, which is not enough to determine a unique, good solution.
3 Motivation
The pipeline of our method is summarized in Figure 1 and is similar to that of WCT [18]. We use different layers of the pretrained VGG network as encoders. For every encoder, we train the corresponding decoder to invert the feature back into an image, as illustrated in Figure 1 (b, c). Although [10, 28] propose to train the decoder to invert a feature to the feature of its bottom layer, which might be more efficient, we use the image decoder of [18] in this work since the framework itself is not our contribution.
We begin our study of the feature transform by reformulating neural style transfer as an optimal transport problem. We denote the content image as $I_c$ and the style image as $I_s$, and their features as $F_c \in \mathbb{R}^{C \times N_c}$ and $F_s \in \mathbb{R}^{C \times N_s}$, where $N_c$ and $N_s$ are the numbers of activations and $C$ is the number of channels. We view the columns of $F_c$ and $F_s$ as samples from two Multivariate Gaussian distributions $N(\mu_c, \Sigma_c)$ and $N(\mu_s, \Sigma_s)$, where $\mu_c, \mu_s \in \mathbb{R}^{C}$ are the mean vectors and $\Sigma_c, \Sigma_s \in \mathbb{R}^{C \times C}$ are the covariance matrices. We further denote a sample from the content distribution as $f_c$ and a sample from the style distribution as $f_s$. Therefore, $f_c \sim N(\mu_c, \Sigma_c)$ and $f_s \sim N(\mu_s, \Sigma_s)$. Assuming the optimal transformation is linear, we can represent it as follows:

$f_{cs} = T(f_c - \mu_c) + \mu_s, \quad (1)$

where $T \in \mathbb{R}^{C \times C}$ is the transformation matrix. Since we assume the features come from two MVG distributions, $T$ must satisfy the following equation to match the two distributions:

$T \Sigma_c T^\top = \Sigma_s, \quad (2)$

where $T^\top$ is the transpose of $T$. When Eq. 2 is satisfied, we obtain $f_{cs} \sim N(\mu_s, \Sigma_s)$. We now demonstrate the relations of our formulation to AdaIN [12] and WCT [18]. We denote the diagonal parts of $\Sigma_c$ and $\Sigma_s$ as $D_c$ and $D_s$, respectively. For AdaIN, the transformation matrix is $T_{AdaIN} = D_s^{1/2} D_c^{-1/2}$, i.e., an element-wise division of the per-channel standard deviations. Therefore, AdaIN does not satisfy Eq. 2 since it ignores the correlation between channels; only the diagonal elements are matched. As for WCT, the transformation matrix is $T_{WCT} = \Sigma_s^{1/2} \Sigma_c^{-1/2}$. Since both $\Sigma_s^{1/2}$ and $\Sigma_c^{-1/2}$ are symmetric matrices, WCT satisfies Eq. 2. However, WCT is not the only solution to Eq. 2, because $T = \Sigma_s^{1/2} U \Sigma_c^{-1/2}$, where $U$ is any orthogonal matrix, is another solution. This has also been pointed out by [17]. Theoretically, there are infinitely many solutions when only Eq. 2 is considered. We show the style transfer results of multiplying a random orthogonal matrix into the whitening matrix in Figure 2. As can be seen, although each such $T$ satisfies Eq. 2, the style transfer results vary significantly.
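To make the discussion concrete, the following sketch (our own illustration with random covariance matrices and NumPy, not code from the paper) constructs the AdaIN, WCT, and orthogonally perturbed transformation matrices and checks them against Eq. 2:

```python
import numpy as np

def sqrtm_spd(M):
    """Symmetric square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

rng = np.random.default_rng(0)
C = 4
A = rng.standard_normal((C, C)); sigma_c = A @ A.T + C * np.eye(C)
B = rng.standard_normal((C, C)); sigma_s = B @ B.T + C * np.eye(C)

# AdaIN: per-channel standard-deviation matching -> diagonal transform
T_adain = np.diag(np.sqrt(np.diag(sigma_s) / np.diag(sigma_c)))

# WCT: whitening followed by coloring
T_wct = sqrtm_spd(sigma_s) @ np.linalg.inv(sqrtm_spd(sigma_c))

# Any T = sigma_s^{1/2} U sigma_c^{-1/2} with U orthogonal also satisfies Eq. 2
Q, _ = np.linalg.qr(rng.standard_normal((C, C)))
T_orth = sqrtm_spd(sigma_s) @ Q @ np.linalg.inv(sqrtm_spd(sigma_c))

wct_ok = np.allclose(T_wct @ sigma_c @ T_wct.T, sigma_s)
orth_ok = np.allclose(T_orth @ sigma_c @ T_orth.T, sigma_s)
adain_diag_ok = np.allclose(np.diag(T_adain @ sigma_c @ T_adain.T),
                            np.diag(sigma_s))
adain_full_ok = np.allclose(T_adain @ sigma_c @ T_adain.T, sigma_s)
```

As expected, both the WCT transform and the orthogonally perturbed transform satisfy Eq. 2, while AdaIN only matches the diagonal elements.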
Our motivation is to find a unique solution by additionally considering the content loss of Gatys et al. Our formulation can therefore be written as follows, where $E[\cdot]$ denotes the expectation:

$\min_{T} \; E\left[\|f_{cs} - f_c\|_2^2\right] \quad \text{s.t.} \quad T \Sigma_c T^\top = \Sigma_s. \quad (3)$
4 Method
In this section, we derive the closed-form solution to Eq. 3. Substituting Eq. 1 into the expectation of Eq. 3, we obtain Eq. 4:

$E\left[\|T(f_c - \mu_c) + \mu_s - f_c\|_2^2\right]. \quad (4)$

We denote $x = f_c - \mu_c$, $y = Tx$, and $d = \mu_s - \mu_c$. Therefore, $E[x] = 0$ and $E[y] = 0$. Besides, $d$ is a constant $C$-dimensional vector. Using $x$, $y$, and $d$, we can rewrite Eq. 4 as Eq. 5:

$E\left[\|y - x + d\|_2^2\right], \quad (5)$

which expands to

$E\left[\|y - x\|_2^2\right] + 2\,E\left[(y - x)^\top\right] d + \|d\|_2^2. \quad (6)$

Since $E[x] = 0$ and $E[y] = 0$, and $d$ is a constant $C$-dimensional vector, we get $E[(y - x)^\top] d = 0$. Besides, $\|d\|_2^2$ is also constant. Therefore, minimizing Eq. 6 is equivalent to minimizing Eq. 7:

$E\left[\|y - x\|_2^2\right] = E\left[\|Tx - x\|_2^2\right]. \quad (7)$

Using the matrix trace, Eq. 7 can be rewritten as follows:

$E\left[\|Tx - x\|_2^2\right] = \mathrm{tr}(T \Sigma_c T^\top) - 2\,\mathrm{tr}(T \Sigma_c) + \mathrm{tr}(\Sigma_c), \quad (8)$

where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix. Here we use $E[xx^\top] = \Sigma_c$ and $E[yx^\top] = T\Sigma_c$, where $T\Sigma_c$ is the cross-covariance matrix of $y$ and $x$. Under the constraint of Eq. 2, $\mathrm{tr}(T \Sigma_c T^\top) = \mathrm{tr}(\Sigma_s)$ is constant, and $\mathrm{tr}(\Sigma_c)$ is constant as well. Therefore, the solution to Eq. 3 can be reformulated as follows:

$\max_{T} \; \mathrm{tr}(T \Sigma_c) \quad \text{s.t.} \quad T \Sigma_c T^\top = \Sigma_s. \quad (9)$
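The trace identity behind Eq. 8 can be verified numerically. The sketch below (our own illustration with a random transformation and covariance, hypothetical variable names) compares a Monte-Carlo estimate of the expectation $E\|Tx - x\|_2^2$ with the closed-form trace expression:

```python
import numpy as np

rng = np.random.default_rng(1)
C = 3
A = rng.standard_normal((C, C)); sigma_c = A @ A.T + np.eye(C)
T = rng.standard_normal((C, C))

# Monte-Carlo estimate of E||T x - x||^2 for x ~ N(0, sigma_c)
L = np.linalg.cholesky(sigma_c)
x = L @ rng.standard_normal((C, 200_000))
mc = np.mean(np.sum(((T - np.eye(C)) @ x) ** 2, axis=0))

# Closed form (Eq. 8): tr(T sigma_c T^T) - 2 tr(T sigma_c) + tr(sigma_c)
closed = (np.trace(T @ sigma_c @ T.T) - 2 * np.trace(T @ sigma_c)
          + np.trace(sigma_c))
```

With 200,000 samples, the two values agree up to Monte-Carlo noise.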
Next, we introduce a lemma that has been proved by [23]; we do not repeat the proof due to limited space. The lemma can be stated as follows.
Lemma 4.1
Given two high-dimensional distributions $u \sim N(\mu_1, \Sigma_1)$ and $v \sim N(\mu_2, \Sigma_2)$, where $\mu_1, \mu_2 \in \mathbb{R}^{C}$ and $\Sigma_1, \Sigma_2 \in \mathbb{R}^{C \times C}$, we define the distance between $u$ and $v$ as $D(u, v)$, which can be represented as follows:

$D(u, v) = E\left[\|u - v\|_2^2\right]. \quad (10)$

The problem of minimizing $D(u, v)$ over linear maps $v = T(u - \mu_1) + \mu_2$ has a unique solution, which can be represented as Eq. 11:

$T = \Sigma_1^{-1/2}\left(\Sigma_1^{1/2} \Sigma_2 \Sigma_1^{1/2}\right)^{1/2} \Sigma_1^{-1/2}. \quad (11)$
With the above lemma, letting $u = f_c$, $v = f_{cs}$, $\Sigma_1 = \Sigma_c$, and $\Sigma_2 = \Sigma_s$, we obtain the solution to Eq. 9, which can be represented as $T = \Sigma_1^{-1/2}(\Sigma_1^{1/2} \Sigma_2 \Sigma_1^{1/2})^{1/2} \Sigma_1^{-1/2}$. Substituting the covariance matrices of the content and style features back, the final $T$ can be represented as Eq. 12:

$T = \Sigma_c^{-1/2}\left(\Sigma_c^{1/2} \Sigma_s \Sigma_c^{1/2}\right)^{1/2} \Sigma_c^{-1/2}. \quad (12)$
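As a sanity check of the closed-form solution, the following sketch (illustrative NumPy code with random covariances, not the paper's implementation) verifies that the transform of Eq. 12 satisfies Eq. 2, is symmetric, and attains a larger $\mathrm{tr}(T\Sigma_c)$, i.e., a smaller expected distance to the content feature, than the WCT transform:

```python
import numpy as np

def sqrtm_spd(M):
    """Symmetric square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

rng = np.random.default_rng(2)
C = 5
A = rng.standard_normal((C, C)); sigma_c = A @ A.T + np.eye(C)
B = rng.standard_normal((C, C)); sigma_s = B @ B.T + np.eye(C)

c_half = sqrtm_spd(sigma_c)
c_half_inv = np.linalg.inv(c_half)

# Closed-form optimal-transport transform (Eq. 12)
T_ot = c_half_inv @ sqrtm_spd(c_half @ sigma_s @ c_half) @ c_half_inv

# WCT transform for comparison
T_wct = sqrtm_spd(sigma_s) @ c_half_inv

constraint_ok = np.allclose(T_ot @ sigma_c @ T_ot.T, sigma_s)  # Eq. 2
symmetric = np.allclose(T_ot, T_ot.T)
# Larger tr(T sigma_c) <=> smaller E||f_cs - f_c||^2 (Eq. 9)
closer = np.trace(T_ot @ sigma_c) >= np.trace(T_wct @ sigma_c) - 1e-8
```

For generic (non-commuting) covariance pairs, the closed-form transform strictly dominates WCT under the content objective of Eq. 9, while both satisfy the style constraint.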
Remarks: The final solution of our method is very simple, although the derivation is somewhat involved. Since our method additionally considers the content loss, it preserves structure better than WCT. In contrast to former works, we provide a complete theoretical derivation of the proposed method and demonstrate its relations to former works. We believe both the closed-form solution and the theoretical analysis will inspire future work on neural style transfer.
5 Results
In this section, we first qualitatively compare our method with Gatys et al. [8], Patch Swap [5], AdaIN (with our decoder) [12], AdaIN+ (with their decoder) [12], and WCT [18] in Section 5.1. Then we provide a quantitative comparison of our method against Gatys, Patch Swap, AdaIN, AdaIN+, and WCT in Section 5.2. Following former works, we also show results of linear interpolation and semantic style transfer in Section 5.3. Finally, we discuss the limitations of our method in Section 5.4.
Parameters: We train the decoders on the COCO dataset [21]. The weight balancing the feature loss and the reconstruction loss is set to 1, as in [18]. For the results in this work, the resolution of the input is fixed.
Performance: We implement the proposed method on a server with an NVIDIA Titan Xp graphics card. The performance evaluation under this fixed input resolution is listed in Table 1. We run the evaluation with the published implementations on our server, which might result in slight differences from the performance reported in the papers.
Table 1:
Method | Gatys   | Patch Swap | AdaIN | AdaIN+ | WCT   | Ours
Time   | 207.12s | 13.15s     | 0.49s | 0.16s  | 3.47s | 4.06s
5.1 Qualitative Results
Our Method versus Gatys: Gatys et al. [8] is the pioneering work on neural style transfer and can handle arbitrary styles. Although it relies on backpropagation, which is very time-consuming, to minimize the content loss and style loss, we still compare against it since its formulation is the foundation of our method. As shown in Figure 3, Gatys can usually achieve reasonable results; however, these results are not strongly stylized since the iterative solver cannot reach the optimal solution within a limited number of iterations. Instead, our method computes the closed-form solution, which explicitly minimizes the style loss and content loss. As shown in Figure 3, compared with Gatys, our results are more stylized and also well preserve the structures of the content images.
Our Method versus Patch Swap: To our knowledge, Patch Swap [5] is the first work to use the encoder-feature transform-decoder framework. It chooses a certain layer of the VGG network as the encoder and trains the corresponding decoder. The feature transform is formulated as a neural patch swap. However, a neural patch swap on the original feature tends to simply reconstruct the feature, so the results are not stylized. Besides, Patch Swap only transfers the style at a single layer, which further reduces the transfer effect. [28] proposes to match neural patches in projected domains, for example, on the whitened feature [18]; it also uses multiple layers to transfer the style, achieving more stylized results. Our work does not use the idea of neural patch matching; instead, we focus on the theoretical analysis that delivers the closed-form solution. As can be seen in Figure 3, our results are more stylized than those of Patch Swap.
Our Method versus AdaIN and AdaIN+: As discussed in the motivation, AdaIN [12] assumes the channels of the CNN feature are independent. For each channel, AdaIN matches two one-dimensional Gaussian distributions. However, the channels of the CNN feature are actually correlated. Therefore, using AdaIN as the feature transform cannot achieve visually stylized results. Instead of using AdaIN only as the feature transform, AdaIN+ [12] trains a decoder on large collections of content and style images. Although AdaIN+ only transforms the feature at a single layer, it trains the decoder with style losses defined on multiple layers. We compare with both AdaIN and AdaIN+. As illustrated in Figure 3, the results of AdaIN and AdaIN+ are similar, and both fail to achieve visually pleasing transfer results. We therefore believe that AdaIN and AdaIN+ fail because they ignore the correlation between the channels of the CNN feature. Our work accounts for this correlation and achieves more stylized results, as shown in Figure 3.
Our Method versus WCT: WCT [18] proposes to use feature whitening and coloring as the solution to style transfer. It chooses ZCA whitening in the paper; we test several other whitening methods on the ReLU3_1 feature, as shown in Figure 4. As can be seen, only ZCA whitening achieves reasonable results. This is because ZCA whitening is the optimal choice in the sense that it minimizes the difference between the content feature and the whitened feature. However, although the ZCA-whitened image preserves the structure of the content image, there is no constraint on the final transformed feature. In contrast, we minimize the difference between the content feature and the final transformed feature. As analyzed in the motivation section, WCT satisfies Eq. 2 and therefore perfectly matches two high-dimensional Gaussian distributions, but it ignores the content loss of Gatys et al. Instead, we derive the closed-form solution that additionally minimizes the content loss. As can be seen in Figure 3, our transformation preserves structures better (see the red rectangles).
We also note that the final feature can be a linear combination of the original content feature and the transformed feature, as shown in Eq. 13:

$\hat{F} = \alpha F_{cs} + (1 - \alpha) F_c, \quad (13)$

where $\alpha$ is the weight of the transformed feature.
We show the results for different values of $\alpha$ in Figure 5. As illustrated, adjusting the weight changes the degree of style transfer. With smaller $\alpha$, WCT can preserve more structure of the content image; however, obvious artifacts remain even with small $\alpha$. In contrast, our method consistently achieves visually pleasing results.
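The interpolation of Eq. 13 can be sketched as follows (toy feature maps, illustrative only):

```python
import numpy as np

def blend(f_cs, f_c, alpha):
    """Eq. 13: interpolate between transformed and content features."""
    return alpha * f_cs + (1.0 - alpha) * f_c

f_c = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy content feature (C x N)
f_cs = np.array([[5.0, 6.0], [7.0, 8.0]])  # toy transformed feature

full = blend(f_cs, f_c, 1.0)   # full stylization
none = blend(f_cs, f_c, 0.0)   # reconstructs the content feature
half = blend(f_cs, f_c, 0.5)   # partial stylization
```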
5.2 Quantitative Results
Table 2:
Method | Gatys | Patch Swap | AdaIN | AdaIN+ | WCT  | Ours
Score  | 2.17  | 1.05       | 2.00  | 1.94   | 2.67 | 3.07
User Study: Style transfer is a very subjective research topic. Although we have theoretically shown the advantages of our method, we further conduct a user study to quantitatively compare our work against Gatys, Patch Swap, AdaIN, AdaIN+, and WCT. This study uses 16 content images and 35 style images collected from published implementations; thus, 560 stylized images are generated by each method. We show the content, style, and stylized images to testers and ask them to choose a score from 1 (worst) to 5 (best) to evaluate the quality of the style transfer. We conducted the user study online with 50 testers. The average scores are listed in Table 2. The study shows that our method improves upon former works.
Table 3:
Method       | Gatys | Patch Swap | AdaIN | AdaIN+ | WCT   | Ours
Content loss | 0.096 | 0.086      | 0.167 | 0.151  | 0.296 | 0.255
Style loss   | 23.77 | 100.8      | 15.85 | 15.9   | 3.89  | 3.60
Content Loss and Style Loss: In addition to the user study, we also evaluate the content loss and style loss defined by Gatys et al. [8]. For each method, we calculate the average content loss and style loss over the images of the user study. We normalize the content loss by the number of neural activations and, for simplicity, use only the style loss of the first layer. The average losses are listed in Table 3. As can be seen, compared with WCT, our method achieves a lower content loss and a similar style loss. As for Gatys, Patch Swap, AdaIN, and AdaIN+, they fail to achieve stylized results, as analyzed in the qualitative comparison.
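The evaluated losses can be sketched as follows; this is a simplified stand-in for the evaluation code, and the exact normalization conventions here are our assumption:

```python
import numpy as np

def content_loss(f_out, f_c):
    """Squared feature difference, normalized by the number of activations."""
    return np.mean((f_out - f_c) ** 2)

def gram(f):
    """Gram matrix of a C x N feature map, normalized by N."""
    return f @ f.T / f.shape[1]

def style_loss(f_out, f_s):
    """Squared Frobenius distance between Gram matrices (single layer)."""
    return np.sum((gram(f_out) - gram(f_s)) ** 2)

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 32))  # toy C x N feature maps
g = rng.standard_normal((8, 32))
```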
5.3 More Results
We show more results demonstrating the generalization of our method in Figure 6, where $\alpha$ is set to 1. To further evaluate the linear interpolation, we show two samples with different values of $\alpha$ in Figure 7. We also combine our method with semantic style transfer, as shown in Figure 7. Although our proof assumes the neural features are sampled from MVG distributions, these results are all visually pleasing, which demonstrates the generalization ability of the proposed method.
5.4 Limitations
Our method still has some limitations. For example, we evaluated frame-by-frame video style transfer. Although our method preserves structure better than former works, the frame-by-frame results still contain obvious jittering. We find that the temporal jittering is caused not only by the feature transform but also by the information loss of the encoder networks: a deep encoder network causes obvious temporal jittering even without any feature transform.
Besides, style transfer is a very subjective problem. Although the Gram matrix representation proposed by Gatys et al. has been widely used, mathematically modeling what people actually perceive as style remains an unsolved problem. Exploring the relation between deep neural networks and image style is an interesting topic.
6 Conclusion
In this paper, we first present a novel interpretation of neural style transfer by treating it as an optimal transport problem. We then demonstrate the theoretical relations between our interpretation and former works such as AdaIN and WCT. Based on our formulation, we derive the unique closed-form solution by additionally considering the content loss. Our solution preserves structure better than former works owing to the minimization of the content loss. We hope this paper will inspire future work on style transfer.
References
[1] A. J. Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016.

[2] D. Chen, J. Liao, L. Yuan, N. Yu, and G. Hua. Coherent online video style transfer. In Proceedings of the IEEE International Conference on Computer Vision, pages 1105–1114, 2017.
[3] D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stylebank: An explicit representation for neural image style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1897–1906, 2017.
[4] D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stereoscopic neural style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6654–6663, 2018.
[5] T. Q. Chen and M. Schmidt. Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337, 2016.
[6] V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. In International Conference on Learning Representations, 2017.
 [7] L. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In Advances in neural information processing systems, pages 262–270, 2015.
 [8] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
 [9] L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, and E. Shechtman. Controlling perceptual factors in neural style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3985–3993, 2017.

[10] S. Gu, C. Chen, J. Liao, and L. Yuan. Arbitrary style transfer with deep feature reshuffle. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8222–8231, 2018.
[11] H. Huang, H. Wang, W. Luo, L. Ma, W. Jiang, X. Zhu, Z. Li, and W. Liu. Real-time neural style transfer for videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 783–791, 2017.
[12] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pages 1501–1510, 2017.
 [13] Y. Jing, Y. Yang, Z. Feng, J. Ye, Y. Yu, and M. Song. Neural style transfer: A review. arXiv preprint arXiv:1705.04058, 2017.

[14] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.
[15] C. Li and M. Wand. Combining markov random fields and convolutional neural networks for image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2479–2486, 2016.
[16] C. Li and M. Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. In European Conference on Computer Vision, pages 702–716. Springer, 2016.
[17] X. Li, S. Liu, J. Kautz, and M.-H. Yang. Learning linear transformations for fast arbitrary style transfer. arXiv preprint arXiv:1808.04537, 2018.
[18] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Universal style transfer via feature transforms. In Advances in neural information processing systems, pages 386–396, 2017.
 [19] Y. Li, N. Wang, J. Liu, and X. Hou. Demystifying neural style transfer. arXiv preprint arXiv:1701.01036, 2017.
 [20] J. Liao, Y. Yao, L. Yuan, G. Hua, and S. B. Kang. Visual attribute transfer through deep image analogy. arXiv preprint arXiv:1705.01088, 2017.
 [21] T.Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
 [22] M. Lu, H. Zhao, A. Yao, F. Xu, Y. Chen, and L. Zhang. Decoder network over lightweight reconstructed feature for fast semantic style transfer. In Proceedings of the IEEE International Conference on Computer Vision, pages 2469–2477, 2017.
 [23] I. Olkin and F. Pukelsheim. The distance between two random vectors with given dispersion matrices. Linear Algebra and its Applications, 48:257–263, 1982.
 [24] E. Risser, P. Wilmot, and C. Barnes. Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv preprint arXiv:1701.08893, 2017.
 [25] M. Ruder, A. Dosovitskiy, and T. Brox. Artistic style transfer for videos. In German Conference on Pattern Recognition, pages 26–36. Springer, 2016.
 [26] M. Ruder, A. Dosovitskiy, and T. Brox. Artistic style transfer for videos and spherical images. International Journal of Computer Vision, 126(11):1199–1219, 2018.
 [27] A. Selim, M. Elgharib, and L. Doyle. Painting style transfer for head portraits using convolutional neural networks. ACM Transactions on Graphics (ToG), 35(4):129, 2016.
 [28] L. Sheng, Z. Lin, J. Shao, and X. Wang. Avatarnet: Multiscale zeroshot style transfer by feature decoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8242–8250, 2018.
 [29] Y. Shih, S. Paris, C. Barnes, W. T. Freeman, and F. Durand. Style transfer for headshot portraits. ACM Transactions on Graphics (TOG), 33(4):148, 2014.
[30] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[31] H. Zhang and K. Dana. Multi-style generative network for real-time transfer. In European Conference on Computer Vision, pages 349–365. Springer, 2018.