1 Introduction
A style transfer method takes a content and a style image as inputs to synthesize an image with the look from the former and feel from the latter. In recent years, numerous style transfer methods have been developed. The method by Gatys et al. [11]
iteratively optimizes content and style reconstruction losses between the target image and input images. To reduce the computational cost, a few approaches have since been developed based on feedforward networks
[18, 37]. However, these approaches cannot generalize to arbitrary style images with a single network. For universal style transfer, a number of methods explore second-order statistical transformations from a reference image onto a content image via a linear multiplication between content image features and a transformation matrix [17, 25, 26]. The AdaIN method [17] matches the mean and variance of intermediate features between content and style images. The WCT algorithm [25] further exploits the covariance instead of the variance by embedding a whitening and coloring process within a pretrained encoder-decoder module. However, these approaches directly compute the transformation matrices from features. On one hand, they do not explore general solutions to this problem. On the other hand, such matrix-computation-based methods can be expensive when the feature has a large dimension.
Generalizing from these two methods [17, 25], in this work we present a theoretical analysis of such linear style transfer frameworks. We derive the form of the transformation matrix and draw connections between this matrix and the style reconstruction objective (the squared Frobenius norm of the difference between Gram matrices) widely used in style transfer [11, 18, 37, 17]. Based on this analysis, we learn the transformation matrix with two lightweight CNNs. We show that the learned transformation matrix can be controlled by different levels of style losses and is highly efficient. The contributions of this work are:

We present general solutions to the linear transformation approach and show that optimizing it also minimizes the style reconstruction losses used by the majority of existing approaches.

We propose a transformation matrix learning approach that is highly efficient ( fps), flexible (it allows the combination of multi-level styles in a single transformation matrix), and preserves content affinity during the transformation process.

Owing to this flexibility, we show that the proposed method can be applied to many tasks, including but not limited to artistic style transfer, video and photorealistic style transfer, and domain adaptation, with comparisons to state-of-the-art methods.
2 Related Work
Style transfer has been studied in computer vision with different emphasis for more than two decades. Early methods tackle this problem by nonparametric sampling
[8], non-photorealistic rendering [20], and image analogy [16]. Most of these approaches rely on finding low-level image correspondence and do not capture high-level semantic information well. Recently, Gatys et al. [11] propose an algorithm that matches statistical information between features of content and style images extracted from a pretrained convolutional neural network. Numerous methods have since been developed to improve artistic style transfer quality [22, 39, 40], add user control [40, 12], introduce more diversity [38, 24], or include semantic information [2, 10]. One main drawback of the method by Gatys et al. [11] is the heavy computational cost due to the iterative optimization process. Fast feedforward approaches [18, 37, 23]
address this issue by training a feedforward neural network that minimizes the same feature reconstruction loss and style reconstruction loss as
[11]. However, each feedforward network is trained to transfer exactly one fixed style. Dumoulin et al. [6] introduce an instance normalization layer that allows 32 styles to be represented by one model, and Li et al. [24] encode 1,000 styles by using a binary selection unit for image synthesis. Nevertheless, these models are not able to transfer an arbitrary style onto a content image. Recently, several methods [13, 17, 34] are proposed to match the mean and variance of content and style feature vectors to transfer an arbitrary style onto a content image. However, these methods do not model the covariance of the features and cannot synthesize satisfactory results in certain circumstances. Li et al. [25] resolve this problem by applying a whitening and coloring scheme with pretrained image reconstruction autoencoders. However, this method is costly due to the need for large matrix decompositions, and for autoencoder cascading in order to combine multiple levels of styles. Shen et al. [9] propose to train a meta network that generates a 14-layer network for each content/style image pair; with the expensive cost of extra memory for each pair, it does not explicitly model the covariance shift. Different from existing approaches, we replace the transformation matrix computation in the intermediate layers with feedforward networks, which is more flexible and efficient. We demonstrate that the proposed algorithm can be effectively applied to four different tasks: artistic style transfer, video and photorealistic style transfer, as well as domain adaptation.
3 Style Transfer by Linear Transformation
The proposed model contains two feedforward networks: a symmetric encoder-decoder image reconstruction module and a transformation learning module, as shown in Figure 2. The encoder-decoder is trained to reconstruct any input image faithfully. It is then fixed and serves as a base network in the remaining training procedures. The transformation learning module contains two small CNNs, which take features of the content and style images from the top of the encoder respectively and output a transformation matrix T. The image style is transferred through a linear multiplication between the content features and the transformation matrix in the same layer. We use a pretrained and fixed VGG19 network to compute style losses at multiple levels and one content loss, in a way similar to prior work [18, 17]. The proposed model is a pure feedforward convolutional neural network, which is able to transfer arbitrary styles efficiently ( fps).
3.1 Objectives for Arbitrary Style Transfer
We denote the feature map of the topmost encoder layer as F = Enc(I) \in R^{C \times N}, where I is an input image, N is the number of pixels, and C is the number of channels. For presentation clarity, the feature maps of a content and style image pair are denoted as F_c and F_s. We model the image style transfer problem as a linear transformation between the content feature F_c and the learned matrix T \in R^{C \times C}, with the transformed feature being F_d = T F_c. We use F_t to denote a "virtual" feature map that provides a desired level of style. For several matrix-computation-based methods [25, 17], F_t = F_s. In our work, F_t has multiple choices, which are controlled via different configurations of style losses in the loss module during training. Thus, F_t can be described as a nonlinear mapping of F_s, i.e., F_t = \phi(F_s). We denote \bar{F} as the vectorized feature map with zero mean.
The objective of learning the style transformation is equivalent to minimizing the difference of the centered covariances of F_d and F_t:

(1)  \min_T \| \bar{F}_d \bar{F}_d^\top / N_c - \bar{F}_t \bar{F}_t^\top / N_t \|_F^2

By substituting the linear constraint \bar{F}_d = T \bar{F}_c into (1), the minimum is obtained when

(2)  T ( \bar{F}_c \bar{F}_c^\top / N_c ) T^\top = \bar{F}_t \bar{F}_t^\top / N_t

The centered covariance of F is cov(F) = \bar{F} \bar{F}^\top / N, and the corresponding singular value decomposition (SVD) is cov(F) = V D V^\top. It is easy to show that

(3)  T = ( V_t D_t^{1/2} V_t^\top ) U ( V_c D_c^{-1/2} V_c^\top )

is one set of solutions to (2), where U is any matrix in the C-dimensional orthogonal group. In other words, T is solely determined by the covariances of the content and style image feature vectors. Given T, the transformed feature is obtained as F_d = T \bar{F}_c + \mu_t, where \mu_t is the channel-wise mean of F_t; this simultaneously aligns the transformed feature to the mean and the covariance statistics of the target style, and thus models the style reconstruction used by the majority of existing approaches. In the following, we discuss how to select a proper model for learning T.
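The closed form in (3) can be verified numerically: whitening with the content covariance and re-coloring with the style covariance (choosing U as the identity) yields a T whose transformed features match the target covariance. The following numpy sketch uses random features and a small eps regularizer, both of which are assumptions for the demo rather than part of the method:

```python
import numpy as np

def closed_form_T(Fc, Fs, eps=1e-5):
    """One solution of Eq. (3) with U = I: whiten with the content
    covariance, then color with the style covariance."""
    # Center the C x N feature maps (subtract the per-channel mean).
    Fc = Fc - Fc.mean(axis=1, keepdims=True)
    Fs = Fs - Fs.mean(axis=1, keepdims=True)
    cov_c = Fc @ Fc.T / Fc.shape[1]          # C x C content covariance
    cov_s = Fs @ Fs.T / Fs.shape[1]          # C x C style covariance
    # Eigendecomposition of a symmetric PSD matrix (its SVD).
    Dc, Vc = np.linalg.eigh(cov_c)
    Ds, Vs = np.linalg.eigh(cov_s)
    whiten = Vc @ np.diag(1.0 / np.sqrt(Dc + eps)) @ Vc.T
    color = Vs @ np.diag(np.sqrt(Ds + eps)) @ Vs.T
    return color @ whiten

rng = np.random.default_rng(0)
C, N = 8, 500
Fc, Fs = rng.standard_normal((C, N)), rng.standard_normal((C, N))
T = closed_form_T(Fc, Fs)
Fd = T @ (Fc - Fc.mean(axis=1, keepdims=True))
cov_d = Fd @ Fd.T / N
Fs_bar = Fs - Fs.mean(axis=1, keepdims=True)
cov_s = Fs_bar @ Fs_bar.T / N
print(np.abs(cov_d - cov_s).max())  # close to zero
```

Substituting this T into (2) shows T cov(F_c) T^T = cov(F_t), which the printed residual confirms up to the eps regularization.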
3.2 Model for Learning the Transformation T
Given that T is only conditioned on the content and style images, one feasible approach is to use a network that takes both images and directly outputs the matrix. According to (3), where the content and style terms are independent, we instead use two CNNs for the content and style inputs to isolate their interaction and make the structure more suitable.
The formulation in (2) suggests three choices of input to the CNNs: (i) the content and style images, (ii) their feature maps, and (iii) the covariance matrices of those features. In this work, we use the third option, where each CNN takes the covariance of the feature vectors and outputs an intermediate matrix. These two matrices are then multiplied to form the final transformation T (see Figure 2). We explain these design choices in the following paragraphs.
First, when the covariance matrix is used, T is independent of the dimension of the input, e.g., when an arbitrary region rather than the whole image needs to be transferred (as in photorealistic stylization in Section 5). This property does not hold when the model input is the image or the feature map. For example, it is easy to prove that another set of solutions to (2), expressed directly in terms of the feature maps, requires the content and style features input to the transformation module to have the same dimensions. Second, we need a fully-connected layer at the top of the two CNNs, since T only contains global information; in this case, using covariance matrices as inputs is much more flexible, since using images or features would require all inputs to have a fixed dimension under any circumstance.
Moreover, the above example also suggests that using individual images or feature maps as model input leads to larger solution spaces, which potentially results in unstable local minima. The ablation study in Section 5 shows that using a covariance matrix as the model input leads to better results for general style transfer (see Figure 4).
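The size-invariance argument can be checked directly: whatever the spatial resolution, a C-channel feature map yields a fixed C x C covariance, so a fully-connected layer placed on top of it never sees a varying input dimension. A minimal numpy sketch with assumed channel count and shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 32
shapes = []
for h, w in [(16, 16), (24, 40), (7, 121)]:
    F = rng.standard_normal((C, h * w))      # C x N feature map; N varies
    Fbar = F - F.mean(axis=1, keepdims=True)
    cov = Fbar @ Fbar.T / Fbar.shape[1]      # covariance is always C x C
    shapes.append(cov.shape)
print(shapes)  # [(32, 32), (32, 32), (32, 32)]
```

By contrast, feeding the raw feature map (C x N) into a fully-connected layer would tie the layer's weight shape to N, breaking inference on arbitrary image or region sizes.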
3.3 Efficient Model
Our method achieves significant advantages in terms of computational efficiency (see Table 1). We show why the proposed model is more efficient and flexible as follows:
Replacing matrix decomposition with a feedforward network. The key to the speedup is that our model removes all time-consuming matrix computations on GPUs. Instead of using the SVD, e.g., as in WCT [25], which is not as GPU-friendly as a CNN, we use a feedforward network to approximate the transformation matrix T. We show in our experiments that T can be easily learned with a factorized CNN block and one fully-connected layer (the transformation module in Figure 2).
On the other hand, the AdaIN [17] method partially addresses this issue by replacing the covariance matrix with the variance vector, but at the expense of image quality. We show that while AdaIN only produces favorable results with a deeper autoencoder, ours achieves similar effects with a much shallower autoencoder, since the covariance shift is accurately modeled; this brings a further speedup of the proposed algorithm.
Learning multi-level style transfer via a single T. As analyzed in [11], Gram matrices of features from different layers capture different details for style transfer. Gram matrices using features from lower layers (e.g., relu1_1 and relu2_1) capture color and textures, while those based on features from higher layers (e.g., relu3_1 and relu4_1) capture common patterns. Since the learnable F_t is not necessarily equal to F_s, a single T can express rich styles by using a combination of multiple style reconstruction losses in the loss module, e.g., via {relu1_1, relu2_1, relu3_1, relu4_1} in a way similar to [17, 18], as shown in Figure 5. This demonstrates a clear advantage over the matrix-computation-based methods [25, 17], in which the transform can only be computed via a certain layer in the encoder.
To address this issue, the WCT method [25] transfers richer styles by cascading several autoencoders with different layers, but at the expense of additional computational load. In contrast, the proposed method only needs to use different loss networks (see the bottom row in Figure 5) and achieves the same goal without any computational overhead during the inference stage.
3.4 Undistorted Style Transfer
We show that general linear-transformation-based style transfer methods enjoy an affinity preservation property, as introduced below. By utilizing this property and a shallower autoencoder, our method can be generalized to undistorted style transfer.
Affinity preservation for linear transformation models.
Affinity describes the pairwise relations of pixels, features, or other image elements. Aside from applying a learned style transformation, in some applications, such as photorealistic style transfer, the resulting images or videos should also not contain visually unpleasant distortions or artifacts. As such, preserving the affinity of content images, i.e., ensuring that the dense pairwise relations among pixels in the content image are well preserved in the result, is equally important for style transfer. Many methods, such as guided filtering [15, 29], matting Laplacian [21, 30, 26] and non-local means [1], aim to preserve the affinity of a guidance image, such that the output is well aligned with the input image structures (e.g., contours).
In principle, style transfer and affinity manipulation correspond to different forms of matrix multiplication: the former is a pre-multiplication and the latter a post-multiplication of the feature vector (see (1)). We show that the general linear transformation method is capable of preserving the feature affinity of the content image. Given (1), we define the normalized affinity for a vectorized feature map \bar{F} as:

(4)  aff(F) = \bar{F}^\top ( \bar{F} \bar{F}^\top )^{-1} \bar{F}

It is easy to prove via the following equations that the affinities of F_d and F_c are equal:

(5)  aff(F_d) = \bar{F}_c^\top T^\top ( T \bar{F}_c \bar{F}_c^\top T^\top )^{-1} T \bar{F}_c = \bar{F}_c^\top ( \bar{F}_c \bar{F}_c^\top )^{-1} \bar{F}_c = aff(F_c)
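The invariance in (5) holds for any invertible linear transformation of the centered features and can be checked numerically. This numpy sketch uses random features and a random matrix as stand-ins for the content feature and the learned T:

```python
import numpy as np

def centered(F):
    return F - F.mean(axis=1, keepdims=True)

def affinity(F):
    """Normalized affinity: N x N pairwise relations of the feature
    vectors, invariant to invertible linear recoloring of the channels."""
    Fb = centered(F)
    cov = Fb @ Fb.T / Fb.shape[1]
    return Fb.T @ np.linalg.inv(cov) @ Fb

rng = np.random.default_rng(1)
C, N = 6, 50
Fc = rng.standard_normal((C, N))
T = rng.standard_normal((C, C))            # any invertible transform
Fd = T @ centered(Fc)                      # transformed content feature
print(np.abs(affinity(Fd) - affinity(Fc)).max())  # ~ 0
```

The T-dependent terms cancel exactly as in (5), so the pairwise structure of the content feature survives the transformation unchanged.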
Utilizing a shallower model.
There are three more factors that can cause distortions in the stylized image: (i) the decoder, in which more nonlinear layers usually cause more distortions, (ii) the choice of F_t, which affects T and is determined by how the loss is enforced (see a similar analysis in [14]), and (iii) the spatial resolution of the top layer, which determines at what scale the affinity is preserved. Therefore, we propose to use a shallower model, with an encoder up to relu3_1, that better preserves the affinity. Specifically, our shallower model can still express rich stylization when enforcing a relatively deeper style loss network (e.g., using layers up to relu4_1 in the loss module), which, however, cannot be implemented in the same way in either AdaIN [17] or WCT [26], as shown in Figure 3. In other words, neither AdaIN nor WCT can effectively exploit the affinity preservation property, due to the limitation that they do not generalize well to a shallower network. Aside from using a shallow encoder-decoder and a proper loss configuration, our method requires no extra manipulation, e.g., optical flow warping [14] or matting Laplacian alignment [30], to render stylized images and videos. We show that the stylized results contain fewer distortions and form more stable sequences than existing methods [11, 18] that are not based on the formulation in (1) (see Figures 7 and 9). Note that for a shallower encoder, the corresponding F_t (e.g., at relu2_1) is not expressive enough to transfer more abstract styles.
[Figure 3: (a) Content (b) Style (c) WCT (d) AdaIN (e) Ours]
4 Network Implementation
We describe the main modules in the proposed model as shown in Figure 2.
Encoder-decoder module. Our encoder contains the first few layers of the VGG19 model [35]
pretrained on the ImageNet dataset
[5], and a symmetric decoder. We train the decoder on MS-COCO [28] from scratch to reconstruct any input image. This module is then fixed throughout the rest of the network training procedure. We show various design options for the encoder-decoder architectures with respect to different applications in Section 5, where the differences lie in the model depth.
Transformation module. Our transformation module comprises two CNNs, each of which takes either the content or the style feature as input and outputs a transformation matrix. These two matrices are then multiplied to yield the final transformation matrix T. We produce the transferred feature by multiplying the content feature with T, and then feed it into the decoder to generate the stylized image.
Within each CNN in the transformation module, as discussed in Section 3, we compute the transformation matrix from the feature covariance to handle images of any size during inference. In order to learn a nonlinear mapping from feature covariance to transformation matrix, a group of fully-connected layers is preferred. However, this leads to large memory requirements and model size, since the covariance matrix usually has high dimensions, e.g., at relu4_1. Thus, we factorize the model by first encoding the input features to reduced dimensions via three consecutive convolutional units (denoted as "CONVs" in Figure 2), each equipped with a conv and a relu layer. The covariance matrix of the encoded feature is then fed into one fc unit to produce the transformation matrix. We further adopt a pair of convolutional layers to "compress" the content feature and "uncompress" the transformed feature to the corresponding dimensions. Overall, our transformation module is compact, efficient, and easy to converge for any combination of styles.
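The factorized design above can be sketched in PyTorch as follows; the channel widths (128, 64, 32), kernel sizes, and single fc unit are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class TransformModule(nn.Module):
    """Sketch of the transformation branch: CONVs reduce channels, one fc
    unit maps the small covariance to a matrix, compress/uncompress convs
    bracket the linear transformation of the content feature."""
    def __init__(self, in_ch=256, mid_ch=32):
        super().__init__()
        def convs():  # three conv+relu units ("CONVs" in Figure 2)
            return nn.Sequential(
                nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(),
                nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, mid_ch, 3, padding=1), nn.ReLU())
        self.content_convs, self.style_convs = convs(), convs()
        self.content_fc = nn.Linear(mid_ch * mid_ch, mid_ch * mid_ch)
        self.style_fc = nn.Linear(mid_ch * mid_ch, mid_ch * mid_ch)
        self.compress = nn.Conv2d(in_ch, mid_ch, 1)
        self.uncompress = nn.Conv2d(mid_ch, in_ch, 1)
        self.mid_ch = mid_ch

    @staticmethod
    def covariance(x):
        b, c, h, w = x.shape
        f = x.view(b, c, h * w)
        f = f - f.mean(dim=2, keepdim=True)
        return f @ f.transpose(1, 2) / (h * w)   # b x c x c, size-invariant

    def forward(self, Fc, Fs):
        m = self.mid_ch
        Tc = self.content_fc(self.covariance(self.content_convs(Fc)).flatten(1))
        Ts = self.style_fc(self.covariance(self.style_convs(Fs)).flatten(1))
        T = Ts.view(-1, m, m) @ Tc.view(-1, m, m)  # final transformation
        z = self.compress(Fc)
        b, c, h, w = z.shape
        zd = (T @ z.view(b, c, h * w)).view(b, c, h, w)
        return self.uncompress(zd)

Fc = torch.randn(1, 256, 32, 32)   # content feature (assumed shape)
Fs = torch.randn(1, 256, 24, 48)   # style feature with a different size
out = TransformModule()(Fc, Fs)
print(out.shape)  # torch.Size([1, 256, 32, 32])
```

Note that the style input has a different spatial size than the content input, which works because only covariances reach the fc units.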
Loss module. Similar to existing methods [11, 18, 37], we adopt a pretrained VGG19 as our loss network. We compute the style loss at different layers as the squared Frobenius norm of the difference between the Gram matrices, and the content loss as the Frobenius norm of the difference between the content features and the transferred features:

(6)  L_s = \sum_i \| G(\phi_i(I_d)) - G(\phi_i(I_s)) \|_F^2 , \quad L_c = \| \phi_c(I_d) - \phi_c(I_c) \|_F

where \phi_i denotes the features from the i-th style loss layer, \phi_c the features from the content loss layer, and G(\cdot) the Gram matrix. We train all models using features from relu4_1 to compute the content loss. The final loss is a weighted sum of the content loss L_c and the style loss L_s: L = L_c + \lambda L_s. We present a detailed analysis of the combination of style losses in Section 5.
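A minimal sketch of this loss computation; the Gram normalization and the weight `lam` are assumptions for the demo, and random arrays stand in for the VGG feature extractors:

```python
import numpy as np

def gram(F):
    """Gram matrix of a C x N feature map (normalization is assumed)."""
    return F @ F.T / F.shape[1]

def total_loss(feats_d, feats_s, Fd_c, Fc_c, lam=1.0):
    """Multi-level style loss (squared Frobenius norm between Gram
    matrices) plus a content loss, as in Eq. (6)."""
    style = sum(np.sum((gram(fd) - gram(fs)) ** 2)
                for fd, fs in zip(feats_d, feats_s))
    content = np.linalg.norm(Fd_c - Fc_c)    # Frobenius norm
    return content + lam * style

rng = np.random.default_rng(0)
# Stand-ins for features at four loss layers (channel counts assumed).
feats = [rng.standard_normal((c, 64)) for c in (64, 128, 256, 512)]
Fc_c = rng.standard_normal((512, 64))
zero = total_loss(feats, feats, Fc_c, Fc_c)  # identical inputs -> zero loss
print(zero)  # 0.0
```

Adding or removing entries from the per-layer feature lists is all it takes to change which levels of style the single transformation is trained to express.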
5 Experiment Results
We discuss the experimental settings and present ablation studies to understand how the main modules of the proposed algorithm contribute. We evaluate the proposed algorithm against the stateoftheart methods on three style transfer tasks (artistic style transfer, video style transfer and photorealistic style transfer) and domain adaptation.
5.1 Experimental Settings
We use the MS-COCO dataset [28] for our content images and the WikiArt database [31] for our styles. Both datasets contain roughly 80,000 images. We keep the image aspect ratio and rescale the smaller dimension of each training image to 300 pixels. We then apply patch-based training by randomly cropping a 256 × 256 region as one training sample and randomly flipping this sample. At the inference stage, our model is able to handle any input size for both content and style images, as discussed in Section 3. We train our network using the Adam solver [19] with a batch size of 8 for a fixed number of iterations. The training roughly takes 2 hours on a single Titan XP GPU for each model in our PyTorch [32] implementation. The source code, trained models and real-time demos will be made available to the public.
5.2 Ablation Studies
Architecture. We discuss in Section 3.2 that, for learning T, using two separate CNNs is more suitable than sharing a single CNN for the content and style images, according to (3). To verify this, we train a single CNN that takes both content and style images as inputs and outputs the transformation matrix. However, this model does not converge during training, which is consistent with our discussion.
Inputs to the transformation module. We discuss in Section 3 that the formulation in (2) suggests three input choices for the CNNs in the transformation module. We implement three models for these choices, with the same architecture as the proposed model, except that the image CNN utilizes five convolutional units instead of three. Note that neither the image CNN nor the feature CNN has the matrix multiplication between the "CONVs" and the "fc" modules shown in Figure 2. We show comparisons in Figure 4. In general, the image CNN does not produce faithful stylized results, e.g., the patches in the second row of Figure 4(c) still retain color from the original content image. On the other hand, the covariance CNN (our proposed model) produces stylized details more similar to the style (see the close-ups in Figure 4(e)) than all the other choices.
Combining multi-level style losses. Features from different layers capture different scales of style. For instance, lower-layer features capture simpler styles such as color, while higher-layer features are able to capture more complex styles such as textures or patterns. We show that our algorithm allows flexible combination of multiple levels of styles within a single transformation matrix T, by making use of different style reconstruction losses in the loss module. We apply two types of autoencoders, with encoders up to relu3_1 and relu4_1 in the VGG19 model, and equip them with different style reconstruction losses: a single loss layer on relu1_1, relu2_1, relu3_1, or relu4_1, and multiple loss layers {relu1_1, relu2_1, relu3_1, relu4_1}, as shown in Figure 5.
The results show that both transferring content features from lower layers (relu3_1 in row 1) and using a single style reconstruction loss from lower layers (relu1_1 and relu2_1 in columns 2 and 3) lead to a more photorealistic visual effect. On the other hand, using style reconstruction losses from higher layers (e.g., columns 3 and 4 of Figure 5) produces more stylized images. Specifically, the autoencoder with relu3_1 as the bottleneck (the first column in Figure 5) produces more undistorted results for all types of style losses than the one with relu4_1 as the bottleneck (the second column in Figure 5).
We also show stylized results of single-encoder WCT in the last column of Figure 5 (relu3_1 on top and relu4_1 on bottom), where neither shows a satisfying style transfer result (e.g., the color and texture are not well aligned, and the edges are blurry). In contrast, our method produces visually pleasing style transfer results for richer styles, as shown in the fourth column of Figure 5, where no cascading of modules is required, in contrast to the WCT method. Thanks to the flexibility of our algorithm, we are able to choose different settings for different tasks without any computational overhead. Specifically, we transfer content image features from the relu4_1 layer for artistic style transfer, while for photorealistic and video style transfer we transfer content image features from relu3_1 to avoid distortions and instabilities. For all tasks, we compute style reconstruction losses using features from relu1_1, relu2_1, relu3_1 and relu4_1 to capture different scales of style.
[Figure 6 columns: Content, Style, Gatys [11], Huang [17], Li [25], Sheng [34], Ours]
[Figure 7 columns: Frames, Gatys [11], Johnson [18], Li [25], Ours; rows: Frame 1, Frame 5, Difference]
Table 1. Run time (in seconds) at three image scales.

Image Size            256     512     1024
Ulyanov et al. [37]   0.013   0.028   0.092
Gatys et al. [12]     16.51   59.45   N/A
Huang et al. [17]     0.019   0.071   N/A
Li et al. [25]        0.922   1.080   N/A
Ours (relu3_1)        0.007   0.025   0.100
Ours (relu4_1)        0.010   0.036   0.146
5.3 Artistic Style Transfer
We evaluate the proposed algorithm against three classes of state-of-the-art methods for artistic style transfer: the optimization-based method [11], fast feedforward networks [18], and feature-transformation-based approaches [17, 25].
Qualitative results. We present stylized results of the evaluated methods in Figure 6 and more results in supplementary materials.
The proposed algorithm performs favorably against the stateoftheart methods.
Although the optimization-based method [11] allows arbitrary style transfer, it is computationally expensive due to the iterative optimization process (see Table 1).
In addition, it suffers from local minima, which can lead to over-exposed images (e.g., row 5).
The fast feedforward method [18] improves the efficiency of the optimization-based method, but it requires training one network for each style and adjusting style weights for the best performance.
The AdaIN method presents an efficient solution for arbitrary style transfer by matching the mean and variance between the style and target images. However, it generates less appealing results (e.g., rows 2, 3, 5) because it replaces the covariance matrix with the variance vector, which is less effective at capturing second-order statistics.
The WCT method largely improves the visual quality of the transferred images (see columns 4 and 5 in Figure 6) by fully modeling the covariance matrices, but at the expense of inference speed (see Table 1).
In contrast, our method models the covariance-based transformation and shows favorable transfer results for arbitrary styles (see the last column in Figure 6), while still maintaining high efficiency (see Table 1), and it applies to a much wider range of applications due to its flexibility (see the remainder of this section).
User study. We conduct a user study to evaluate the proposed algorithm against the stateoftheart style transfer methods [11, 38, 17, 25]. We use 6 content images and 40 style images to synthesize 240 images in total, and show 15 randomly chosen content and style combinations to each subject. For each combination, we present 5 synthesized images by each aforementioned method in a random order and ask the subject to select the most visually pleasant one. We collect 540 votes from 36 users and show the percentage of votes for each method in Figure 8(a). Overall, the proposed algorithm is favored among all evaluated methods.
Efficiency. Our method is efficient thanks to the feedforward architecture. Table 1 shows the run-time performance of the proposed algorithm and other state-of-the-art methods at three input image scales: 256, 512, and 1024. All methods listed in this table allow arbitrary style transfer except for the algorithm by Ulyanov et al. [37] (row 1). Even the slower variant of the proposed algorithm (trained to transfer content features from relu4_1) runs at 100 FPS and 27 FPS for 256 and 512 images respectively, making real-time style transfer feasible. Furthermore, our model has a clear efficiency advantage over the optimization-based method [11] (about 3 orders of magnitude faster) as well as WCT [25] (about 2 orders of magnitude faster), and is comparable to the fast feedforward methods [37, 17].
5.4 Video and Photorealistic Style Transfer
Another crucial property of our algorithm is its ability to preserve content affinity during style transfer when proper architectures are selected. This is particularly important for tasks such as video and photorealistic style transfer, since these tasks require that both the global structure and the detailed contours of the content video or images be preserved during the style transfer process. We discuss the results below.
[Figure columns: Content, Style, Gatys [11], Johnson [18], Li [25], Huang [17], Ours]
Video style transfer. Video style transfer is conducted between a content video and a style image in a frame-wise manner, using a shallower autoencoder with relu3_1 as the bottleneck. This computation is more efficient than image-based style transfer, since the transformation output of the style branch can be computed only once during initialization (see Figure 2) and then directly applied to the rest of the frames. Since our approach preserves the affinity of content videos, which are naturally consistent and stable, our stylized videos are also visually stable without the aid of any auxiliary techniques, such as optical flow warping [14]. Figure 7 shows our direct frame-based style transfer results compared with existing style transfer methods [11, 18, 25]. To visualize the stability of a synthesized video clip, we draw heat maps of the differences between two frames (row 3). The difference heat map of the proposed algorithm is closest to that of the original frames, which implies that our algorithm preserves content affinity during the style transfer process. To further evaluate the stability of our algorithm in video style transfer, we conduct a user study with 5 video clips synthesized frame-wise by our method and 4 state-of-the-art methods [11, 37, 17, 25]. For each group of videos, we ask each subject to select the most stable video clip. We collect 106 votes from 27 subjects and show the percentage of votes for each method in Figure 8(b). Overall, the proposed algorithm performs well against the other methods, which indicates its ability to preserve affinity during style transfer. More video transfer results can be found in the supplementary material.
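The caching argument can be sketched as follows: the style branch of the transformation module depends only on the style image, so its output is computed once at initialization and reused for every frame; only the content branch and the multiplication run per frame. `branch_out` is a stand-in for the learned CNN branches (here it simply returns the covariance):

```python
import numpy as np

rng = np.random.default_rng(0)
C, N = 16, 1024

def branch_out(F):
    """Stand-in for one CNN branch of the transformation module: a
    function of one feature's covariance alone."""
    Fb = F - F.mean(axis=1, keepdims=True)
    return Fb @ Fb.T / F.shape[1]

Fs = rng.standard_normal((C, N))
Ts = branch_out(Fs)                  # style branch: computed once, cached

stylized = []
for _ in range(5):                   # per-frame: content branch + multiply
    Ff = rng.standard_normal((C, N))
    T = Ts @ branch_out(Ff)          # final transformation for this frame
    stylized.append(T @ (Ff - Ff.mean(axis=1, keepdims=True)))
print(len(stylized), stylized[0].shape)
```

In the matrix-decomposition baselines, the per-style work cannot be factored out this cleanly, since the whitening step still depends on each frame's features.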
Photorealistic style transfer. Our algorithm can also produce undistorted image style transfer results, due to the affinity preservation, which is particularly useful for photorealistic style transfer. We transfer the corresponding regions w.r.t. a given mask for every content/style pair. This is done by separating the masked regions only within the transformation module (see Figure 2) and then combining the transformed features of the different regions according to the mask. We show qualitative comparisons with several recent works [30, 26] in Figure 10. When applying an autoencoder with relu3_1 as the bottleneck, the model is able to generate undistorted stylized images (see column 5 in Figure 10) whose visual quality is comparable to that of methods specifically targeting photorealistic stylization [30, 26]. Note that both [30] (186 seconds for 512 × 256 images) and [26] (2.95 seconds for 512 × 256 images) are more than 2 orders of magnitude slower than the proposed method (0.025 seconds for 512 × 256 images), due to the time-consuming propagation modules designed to explicitly preserve image affinity. Our results can be further processed by a simple filter, such as a joint bilateral filter or the bilateral guided upsampling [3] method, to render final photorealistic results with better object boundaries (see Figure 10).
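The masked-region transfer can be sketched as follows; a simple per-channel mean/std match (`match_stats`) stands in for the learned covariance-based transformation, and the two-region masks are hypothetical. The key point is that each region's transform is built only from the features of the matching region, whatever its size:

```python
import numpy as np

def match_stats(Fc, Fs, eps=1e-5):
    """Per-channel mean/std matching, a stand-in for the learned
    per-region transformation."""
    mc, ms = Fc.mean(1, keepdims=True), Fs.mean(1, keepdims=True)
    sc, ss = Fc.std(1, keepdims=True) + eps, Fs.std(1, keepdims=True)
    return (Fc - mc) / sc * ss + ms

def masked_transfer(Fc, Fs, mask_c, mask_s, transform=match_stats):
    """For each label shared by both masks, transform the masked content
    features using only the style features with the same label."""
    Fd = Fc.copy()
    for label in np.intersect1d(mask_c, mask_s):
        ic, js = mask_c == label, mask_s == label
        Fd[:, ic] = transform(Fc[:, ic], Fs[:, js])
    return Fd

rng = np.random.default_rng(0)
C = 8
Fc = rng.standard_normal((C, 100))
Fs = rng.standard_normal((C, 120)) + 3.0     # style regions, different size
mask_c = np.repeat([0, 1], 50)               # two regions in the content
mask_s = np.repeat([0, 1], 60)               # matching regions in the style
Fd = masked_transfer(Fc, Fs, mask_c, mask_s)
# each transferred region matches the mean of its style region
print(np.allclose(Fd[:, mask_c == 0].mean(1), Fs[:, mask_s == 0].mean(1)))
```

Non-shared labels would simply be left untouched here; the paper instead groups them into an additional class (see Section 5.5).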
[Figure 10 columns: Content, Style, Luan [30], Li [26], Ours, +Bilateral]
[Figure 11 columns: Game, Game seg, Transfer, Transfer seg, Ground truth]
5.5 Domain Adaptation: From Game to Real
Linear-transformation-based arbitrary style transfer bears a strong resemblance to typical high-level ideas in domain adaptation (e.g., by Sun et al. [36]), in that both align the second-order statistics between two different domains, e.g., style and content images [27]. A pioneering work by Dundar et al. [7] successfully utilizes WCT-based style transfer for game-to-real domain adaptation in an unsupervised setting. In this section, we validate that the proposed model can significantly narrow the domain gap between game images and real images for semantic segmentation.
First, we apply the PSPNet semantic segmentation model [41], which is well-trained on the Cityscapes dataset [4], to segment images from the GTA dataset [33], which contains similar traffic scenes but with the image quality of computer games. We then transfer the game images to "real" images with the proposed method. Specifically, we implement the task as photorealistic style transfer by randomly taking one image from the GTA dataset as the content input and another image from the Cityscapes dataset as the style input. The transformation is conducted w.r.t. the corresponding semantic masks, where all non-shared labels from both images are considered as belonging to an additional class. We then run semantic segmentation with the same PSPNet model on the transferred results, as shown in Figure 11. The segmentation results in column 2 (before domain adaptation) and column 4 (after domain adaptation) demonstrate that the proposed algorithm effectively adapts the domain of game images to real images in the semantic space. We note that narrowing the gap between these two datasets (especially adapting game images to real-world scenes) is practical for increasing the number of training samples for semantic/instance-level segmentation, flow and depth estimation, among other tasks, because dense annotations for these tasks in real-world scenes are difficult to obtain. Our algorithm provides an efficient and low-cost way to generate numerous training samples, which could potentially benefit various vision problems.
6 Conclusions
In this work, we develop a theoretical foundation for analyzing arbitrary style transfer frameworks and present an effective and efficient algorithm by learning linear transformations. Our algorithm is computationally efficient, flexible for numerous tasks, and effective for stylizing images and videos. Experimental results demonstrate that the proposed algorithm performs favorably against state-of-the-art methods on style transfer of images and videos.
References
 [1] A. Buades, B. Coll, and J.-M. Morel. A non-local algorithm for image denoising. In CVPR, 2005.
 [2] A. J. Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016.
 [3] J. Chen, A. Adams, N. Wadhwa, and S. W. Hasinoff. Bilateral guided upsampling. ACM Transactions on Graphics (TOG), 2016.
 [4] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
 [5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
 [6] V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. arXiv preprint arXiv:1610.07629, 2016.
 [7] A. Dundar, M.-Y. Liu, T.-C. Wang, J. Zedlewski, and J. Kautz. Domain stylization: A strong, simple baseline for synthetic to real image domain adaptation. arXiv preprint arXiv:1807.09384, 2018.
 [8] A. A. Efros and W. T. Freeman. Image quilting for texture synthesis and transfer. In SIGGRAPH, 2001.
 [9] F. Shen, S. Yan, and G. Zeng. Neural style transfer via meta networks. In CVPR, 2018.
 [10] O. Frigo, N. Sabater, J. Delon, and P. Hellier. Split and match: Example-based adaptive patch sampling for unsupervised style transfer. In CVPR, 2016.
 [11] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
 [12] L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, and E. Shechtman. Controlling perceptual factors in neural style transfer. In CVPR, 2017.
 [13] G. Ghiasi, H. Lee, M. Kudlur, V. Dumoulin, and J. Shlens. Exploring the structure of a real-time, arbitrary neural artistic stylization network. In BMVC, 2017.
 [14] A. Gupta, J. Johnson, A. Alahi, and L. FeiFei. Characterizing and improving stability in neural style transfer. In ICCV, 2017.
 [15] K. He, J. Sun, and X. Tang. Guided image filtering. TPAMI, 2013.
 [16] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin. Image analogies. In SIGGRAPH, 2001.
 [17] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017.
 [18] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
 [19] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
 [20] J. E. Kyprianidis, J. Collomosse, T. Wang, and T. Isenberg. State of the "art": A taxonomy of artistic stylization techniques for images and video. TVCG, 2013.
 [21] A. Levin, D. Lischinski, and Y. Weiss. A closedform solution to natural image matting. TPAMI, 2008.
 [22] C. Li and M. Wand. Combining Markov random fields and convolutional neural networks for image synthesis. In CVPR, 2016.
 [23] C. Li and M. Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. In ECCV, 2016.
 [24] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Diversified texture synthesis with feed-forward networks. In CVPR, 2017.
 [25] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Universal style transfer via feature transforms. In NIPS, 2017.
 [26] Y. Li, M.-Y. Liu, X. Li, M.-H. Yang, and J. Kautz. A closed-form solution to photorealistic image stylization. arXiv preprint arXiv:1802.06474, 2018.
 [27] Y. Li, N. Wang, J. Liu, and X. Hou. Demystifying neural style transfer. In IJCAI, 2017.
 [28] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár. Microsoft coco: Common objects in context. In ECCV, 2014.
 [29] S. Liu, S. D. Mello, J. Gu, G. Zhong, M.-H. Yang, and J. Kautz. Learning affinity via spatial propagation networks. In NIPS, 2017.
 [30] F. Luan, S. Paris, E. Shechtman, and K. Bala. Deep photo style transfer. In CVPR, 2017.
 [31] K. Nichol. Painter by numbers, wikiart. https://www.kaggle.com/c/painterbynumbers, 2016.
 [32] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
 [33] S. R. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. In ECCV, 2016.
 [34] L. Sheng, Z. Lin, J. Shao, and X. Wang. Avatar-net: Multi-scale zero-shot style transfer by feature decoration. In CVPR, 2018.
 [35] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition. CoRR, abs/1409.1556, 2014.
 [36] B. Sun, J. Feng, and K. Saenko. Return of frustratingly easy domain adaptation. In AAAI, 2016.
 [37] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. S. Lempitsky. Texture networks: Feedforward synthesis of textures and stylized images. In ICML, 2016.
 [38] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Improved texture networks: Maximizing quality and diversity in feedforward stylization and texture synthesis. In CVPR, 2017.
 [39] X. Wang, G. Oxholm, D. Zhang, and Y.-F. Wang. Multimodal transfer: A hierarchical deep convolutional neural network for fast artistic style transfer. In CVPR, 2017.
 [40] P. Wilmot, E. Risser, and C. Barnes. Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv preprint arXiv:1701.08893, 2017.
 [41] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In CVPR, 2017.