Style transfer is a widely used technique for stylizing a content image in the style of another input image. Recently, image-to-image translation methods based on conditional generative adversarial networks have played a pivotal role in addressing the problem of style transfer. While these pioneering techniques have shown promising results for generating a single stylized output from a reference image, two interesting problems have been raised, namely, multi-domain and multimodal stylization. Multi-domain stylization methods seek better controllability during the style transfer process, i.e., to generate different styles based on guidance from multiple domains. Multimodal methods, on the other hand, focus on the diversity of generated stylization results, i.e., to synthesize multiple different results that are all consistent with the same reference style.
The majority of existing multi-domain methods, e.g., StarGAN, are inherited from unpaired conditional image-to-image translation, which learns a one-to-one mapping between two domains and thus loses the ability to synthesize multimodal results. Multimodal methods such as MUNIT, on the other hand, are usually limited to handling only two domains at a time and do not support multi-domain stylization. Although several methods [29, 38] have been proposed to address multimodal and multi-domain stylization simultaneously, they are restricted to generating images only by random sampling from the style space, or suffer from mode collapse due to the use of the Kullback-Leibler divergence.
The key to addressing these problems is to construct a style embedding space that (1) preserves style information from reference style images for exemplar-based multimodal stylization; (2) is smooth enough for random-sampling-based multimodal stylization; (3) has a uniformly covered style distribution to avoid mode collapse; and (4) provides flexible control using domain labels for multi-domain stylization.
To overcome these challenges, we propose a unified framework that achieves multi-domain and multimodal stylization simultaneously with both exemplar-based and randomly sampled guidance, while reducing the possibility of mode collapse. The aligned space has the following properties: (1) a reference style can be extracted by a trained encoder to support exemplar-based stylization; (2) each conditioned space supports multimodal stylization via random sampling; (3) the style features sufficiently cover the sampling space; and (4) the space is conditioned on domains, so multi-domain stylization is available as well. We demonstrate the strength of our method on painting style transfer with a variety of artistic styles and genres. Both qualitative and quantitative comparisons with state-of-the-art methods indicate that our approach generates high-quality results for multi-domain and multimodal stylization.
II Related Work
Gatys et al. first adopt convolutional neural networks to address single-image stylization via an iterative optimization procedure. To obtain more diversity, several arbitrary style transfer methods have been proposed. WCT progressively repeats whitening and coloring operations at multiple scales to transfer arbitrary style patterns. Huang et al. use the adaptive instance normalization (AdaIN) layer to align the feature statistics of content and style images. AvatarNet proposes a style decorator to semantically make up the content feature with the style feature in multi-scale layers. Li et al. introduce a transformation matrix to transfer style across different levels flexibly and efficiently. Sanakoyeu et al. emphasize a style-aware content constraint to achieve real-time HD style transfer. Kotovenko et al. use a content transformation module to focus on details. Kotovenko et al. exchange the content and style of stylized images to disentangle the two elements for better style mixing. Moreover, several attention-aware approaches [37, 28] have been proposed, in which the models learn to adjust the influence of the style feature on the content feature. However, none of the above methods support multi-domain style transfer, and they thus lack controllability.
Closely related to style transfer, image-to-image (I2I) translation addresses a more general synthesis problem which shifts the style distribution from one domain to another while maintaining semantic features between images. CGAN performs a primitive translation process conditioned on noise. Pix2Pix uses conditional generative adversarial networks to transfer images between two domains. These methods are further improved by CycleGAN, which uses a dual-learning approach and eliminates the requirement of paired data. While showing promising results, these methods are intrinsically limited to learning a mapping between two domains.
Based on these explorations, several methods attempt to address either multi-domain or multimodal I2I translation. ACGAN proposes to append an auxiliary classifier to the discriminator to support multi-domain generation. For multi-domain translation, ComboGAN leverages multiple encoder-decoders for switching between different styles. StarGAN uses a unified conditional generator for multi-domain synthesis. SGN explores the influence of mixed domains. For multimodal generation, MSGAN introduces a new constraint that emphasizes the ratio of the distances between images and their corresponding latent codes. EGSC-IT controls the AdaIN parameters of the image generator with a style coding branch. FUNIT deals with multi-domain translation in a few-shot setting. However, none of these methods can perform multimodal and multi-domain translation simultaneously.
Recently, several methods [29, 38, 35, 21, 36, 15] propose to achieve multi-domain and multimodal synthesis within a single framework. SMIT uses a combination of random noise and a domain condition as guidance for image translation. DMIT separates content, style, and domain information with different encoders. However, these methods are limited by their guidance mode (only random sampling is supported) and by the difficulty of controlling the style space with a KL-divergence constraint. Concurrent to our work, StarGAN v2 uses a mapping network to transform a latent code into style codes of multiple domains. The multi-branch strategy is also adopted by its discriminator. As a result, the number of parameters inevitably increases as more domains are added.
Disentangled latent representations.
Building a mapping between the latent space and the image space promotes the quality and controllability of the synthesized output. VAE uses the reparameterization trick to construct the relationship between the two spaces. CVAE takes a one-hot label as the condition to construct multiple clusters in the latent space. AAE proposes an adversarial strategy to force the latent space distribution to be close to a prior distribution. UNIT adopts two VAEs to encode the latent vectors into a shared latent space. MUNIT and DRIT further disentangle the content feature and style feature into disparate manifolds. To disentangle multi-domain features, UFDN aligns domain representations with an adversarial domain classifier. Kotovenko et al. use a fixpoint loss to decouple the content and style spaces. Similarly, we also encode the two properties with respective encoders.
Table I summarizes different properties of our method and other related techniques. Most existing methods focus on either multi-domain or multimodal synthesis. The few that explore both offer only limited support for style guidance.
III Our Method
The input to our method includes a natural content image that the user wants to stylize, as well as a style code associated with its domain label, i.e., a one-hot vector indicating its style domain. The style code can either be generated from a reference style image of a given style domain, or directly sampled from a normal Gaussian distribution. When a style image is provided, the corresponding style code is extracted by our style encoder (Section III-A). The style code and the style domain label are then converted to parameters which control the AdaIN layers of our image generator (Section III-A). The output image is finally synthesized by our image generator (Section III-B) from the content image and the above style information. Figure 1 illustrates our entire framework.
III-A Style Space Embedding
The key to integrating multimodal and multi-domain style transfer into a unified framework, without losing the ability of either exemplar-guided or randomly sampled stylization, is an embedded style space that can be both controlled at the inter-domain level and randomly traversed at the intra-domain level. In other words, the style space should (1) be clearly separated between different styles via domain label control, and (2) form a smooth space that can be interpolated within a given style domain. To this end, inspired by CVAE-GAN, we design a style alignment module for style space embedding. The embedded style code is then further converted into parameters that control the AdaIN layers of our image generation network.
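As a concrete reference, AdaIN normalizes each feature channel and replaces its statistics with style-derived scale and bias. The following NumPy sketch is illustrative only; the function and variable names are our own, and the actual implementation operates on learned network features:

```python
import numpy as np

def adain(content_feat, scale, bias, eps=1e-5):
    """Adaptive instance normalization (illustrative sketch).

    content_feat: (N, C, H, W) feature maps from the generator.
    scale, bias:  (N, C) channel-wise parameters derived from the style code.
    """
    # Per-image, per-channel statistics over spatial positions.
    mean = content_feat.mean(axis=(2, 3), keepdims=True)
    std = content_feat.std(axis=(2, 3), keepdims=True) + eps
    normalized = (content_feat - mean) / std
    # Re-style the normalized feature with the style-derived statistics.
    return normalized * scale[:, :, None, None] + bias[:, :, None, None]
```

With scale 1 and bias 0, the output reduces to the instance-normalized content feature.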
Style alignment module.
Our style alignment module is an encoder-decoder network that constructs the embedded style space from style images of multiple domains. As shown in Figure 1(a), we feed a style image and its corresponding domain label into the style encoder to form a one-dimensional style embedding. The style encoder consists of multiple down-sampling blocks, and global average pooling (GAP) is applied to the final layer to squeeze the output style features.
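A schematic sketch of this encoding step is given below. The learned down-sampling blocks are replaced with plain average pooling and all names are hypothetical, so this only illustrates how the domain label joins the input and how GAP squeezes the feature maps into a one-dimensional embedding:

```python
import numpy as np

def global_average_pool(features):
    # features: (N, C, H, W) -> (N, C), one embedding vector per image.
    return features.mean(axis=(2, 3))

def encode_style(style_image, domain_onehot, num_blocks=3):
    # Schematic stand-in for the learned style encoder: broadcast the
    # one-hot domain label as extra input channels, then repeatedly halve
    # the spatial resolution (2x2 average pooling replaces learned convs).
    n, c, h, w = style_image.shape
    label_maps = np.broadcast_to(
        domain_onehot[:, :, None, None], (n, domain_onehot.shape[1], h, w))
    x = np.concatenate([style_image, label_maps], axis=1)
    for _ in range(num_blocks):
        x = 0.25 * (x[:, :, ::2, ::2] + x[:, :, 1::2, ::2]
                    + x[:, :, ::2, 1::2] + x[:, :, 1::2, 1::2])
    return global_average_pool(x)
```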
Unlike CVAE-GAN, our style alignment module does not have to accurately reconstruct a given style image. Instead, its goal is to eliminate the explicit distribution gaps among various style domains and align them, i.e., different artist styles are controlled by the domain label, and the style space of each domain is aligned to a Gaussian distribution to enable sampling and smooth interpolation. We therefore avoid the reconstruction loss of CVAE-GAN, since a pixel-level reconstruction loss would cause more content-related information to be encoded into the style code, which conflicts with our goal of extracting style information only.
Figure 2(a) and (b) show two situations in which the KL-divergence loss has an inappropriate weight: (1) a weak KL constraint makes the variance of the style features quickly converge to zero, resulting in inadequate coverage; (2) a strong KL constraint makes the style features indistinguishable from the prior, resulting in excessive coverage. That is, the style space is destroyed in both situations. Thus, we also remove the KL-divergence term and train our style alignment module with only a style adversarial loss:
where the reference style code is randomly sampled from a Gaussian distribution. The style alignment discriminator determines whether an unknown style feature point is drawn from the Gaussian distribution or produced by the style encoder.
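A minimal sketch of this adversarial alignment is given below, written with a generic non-saturating GAN objective (the paper's exact formulation of the style adversarial loss is not reproduced here); `disc` stands in for the style alignment discriminator:

```python
import numpy as np

def style_alignment_losses(encoded_codes, prior_codes, disc):
    # disc: maps a batch of style codes (N, d) to realness scores in (0, 1).
    # Binary-cross-entropy GAN objectives: the discriminator labels prior
    # samples as real and encoded style codes as fake; the encoder tries
    # to make its codes indistinguishable from the Gaussian prior.
    eps = 1e-8
    d_loss = (-np.mean(np.log(disc(prior_codes) + eps))
              - np.mean(np.log(1.0 - disc(encoded_codes) + eps)))
    e_loss = -np.mean(np.log(disc(encoded_codes) + eps))  # encoder fools disc
    return d_loss, e_loss
```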
The adversarial loss aligns the joint distribution of all domain styles (i.e., the unconditioned style space) to a Gaussian distribution. Consequently, each conditioned space is arranged to cover a different region, and their union spans the full Gaussian distribution, as illustrated in Figure 2(c). A real case of the aligned feature distribution of our trained style space is illustrated in Figure 3(a) via t-SNE visualization and in Figure 3(b) via the distribution of L1 distances between randomly sampled style feature pairs. Apparently, our style space has complete and disjoint coverage.
Controllable image synthesis.
Our style alignment module provides two possible ways to obtain a style code, i.e., exemplar guided and randomly sampled. As shown in Figure 1(b), an exemplar-guided style code is extracted by the style alignment encoder, providing precise control information, and the corresponding stylized result is expected to match the color distribution and texture appearance of the style image. A randomly sampled style code enables multimodal stylization within a certain domain by sampling the style code and choosing an arbitrary domain label. We use the style code and its corresponding domain label for stylized image generation by controlling the parameters of the AdaIN layers: the style code is concatenated with the style label and transformed into channel-wise feature scales and biases for the AdaIN layers by a multi-layer perceptron network.
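The mapping from style code and domain label to AdaIN parameters can be sketched as a small multi-layer perceptron; the layer sizes, weight names, and two-layer depth below are hypothetical:

```python
import numpy as np

def adain_params_from_style(style_code, domain_onehot, w1, b1, w2, b2):
    # Two-layer MLP mapping the concatenated [style code; domain label]
    # to channel-wise (scale, bias) pairs for the generator's AdaIN layers.
    x = np.concatenate([style_code, domain_onehot], axis=1)
    h = np.maximum(x @ w1 + b1, 0.0)           # ReLU hidden layer
    params = h @ w2 + b2                       # (N, 2 * C)
    scale, bias = np.split(params, 2, axis=1)  # one (scale, bias) per channel
    return scale, bias
```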
III-B Stylized Image Generation
Given a content image and the controlling AdaIN parameters described above, the output image is synthesized by the stylized image generator. Figure 1(c) illustrates our generator framework. Inspired by CycleGAN, our network adopts an encoder-decoder architecture which contains several down-sampling layers, residual blocks, and up-sampling layers. Different from other image-to-image translation and stylization methods [8, 14], which use the same normalization method for most layers, we employ Instance Normalization (IN) in the down-sampling layers and the first half of the residual blocks, AdaIN in the second half of the residual blocks, and Layer Normalization (LN) in the up-sampling layers to avoid irregular artifacts.
The goal of our stylized image generator is to produce images that both preserve fidelity to the original content image and remain consistent with the style code. To ensure the stylized output preserves the semantic content of the content image, we use a content preserving loss which constrains the output stylized image to have the same encoded content features as the input content image:
where the output is the stylized image and the distance is measured between the encoded content features.
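A sketch of this constraint is shown below, assuming an L1 metric between encoded features; `content_encoder` is a stand-in for the trained content encoder:

```python
import numpy as np

def content_preserving_loss(content_encoder, content_image, stylized_image):
    # Compare the encoded features of the input content image with those of
    # the stylized output; content_encoder is any callable feature extractor.
    f_content = content_encoder(content_image)
    f_output = content_encoder(stylized_image)
    return np.abs(f_content - f_output).mean()  # mean L1 distance
```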
To ensure the consistency between the style image and the synthesized output during training, we apply a style preserving loss, which computes the distance between Gram matrices on multi-scale feature layers of a pre-trained VGG-16 classification network:
where the Gram matrices are computed on the selected feature maps of the VGG network pretrained on ImageNet.
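The Gram-matrix computation underlying this loss can be sketched as follows; the normalization constant and the squared-difference aggregation are common choices and may differ from the paper's exact formulation:

```python
import numpy as np

def gram_matrix(feat):
    # feat: (N, C, H, W) -> (N, C, C) channel-correlation matrices,
    # normalized by the feature size C*H*W.
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    return f @ f.transpose(0, 2, 1) / (c * h * w)

def style_preserving_loss(style_feats, output_feats):
    # Sum squared Gram-matrix differences over multi-scale VGG feature maps.
    return sum(np.mean((gram_matrix(fs) - gram_matrix(fo)) ** 2)
               for fs, fo in zip(style_feats, output_feats))
```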
Furthermore, we introduce a conditional identity loss to preserve content fidelity without affecting output quality. Specifically, we constrain an identity mapping when the same style image is used as both the style and the content input:
where the content and style encoders extract features from the content image and the style image, respectively. Conditioned on the style label, the generator reconstructs the style image under the chosen distance metric.
where the discriminator is a multi-scale network, and the LSGAN loss is used for adversarial training. An auxiliary classification loss is applied to constrain the output stylized image and the input style guidance to the same style domain:
where the classification term uses the cross-entropy loss.
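These two terms can be sketched as below: the least-squares objectives follow the standard LSGAN formulation (real targets 1, fake targets 0), and the auxiliary classifier term is a plain cross-entropy over domain labels; the discriminator outputs and logits here are placeholders:

```python
import numpy as np

def lsgan_losses(d_real, d_fake):
    # Least-squares GAN objectives (Mao et al.): the discriminator pushes
    # real scores toward 1 and fake scores toward 0; the generator pushes
    # fake scores toward 1.
    d_loss = 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)
    g_loss = 0.5 * np.mean((d_fake - 1.0) ** 2)
    return d_loss, g_loss

def classification_loss(logits, domain_labels):
    # Cross-entropy of the auxiliary classifier over style domains.
    z = logits - logits.max(axis=1, keepdims=True)      # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(domain_labels)), domain_labels].mean()
```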
The final objective for our generator, discriminator, style alignment module and auxiliary classifier is formulated as:
where the weights denote the relative importance among these objectives.
IV-A Experimental Setup
We implement the proposed framework using PyTorch. The network operates at a fixed input resolution and style-code dimension. For network training, we use Gaussian weight initialization and the Adam optimizer.
Random sampling guided stylization results of different domains (artists). Random style codes are sampled from a standard normal distribution. (a): Content images; (b)-(d): Our stylization results using the styles of three artists.
Training and test data.
For multi-domain training, we collect a total of 1,303 paintings from five artists on Wikiart (https://www.wikiart.org/), including Monet (458), Van Gogh (184), Cézanne (257), Gauguin (245), and Picasso (159), as style reference images. Each artist corresponds to one style domain. The content images for training are from the photo2art dataset of CycleGAN. The test images are collected from Pexels (https://www.pexels.com/) using the keywords "landscape" and "nature"; these natural photos serve as input content images.
To demonstrate the controllability and diversity of our method, we compare with 7 recent style transfer methods (i.e., Gatys, AdaIN, WCT, AvatarNet, AAMS, SANet, and LinearST), as well as 5 image-to-image translation methods (i.e., CycleGAN, MUNIT, DRIT, StarGAN, and UFDN). For a fair comparison, we use the authors' released source code whenever possible and train all methods with their default configurations on the same training set for the same number of iterations, except CycleGAN. We train CycleGAN with dropout layers of probability 0.5 to make it feasible for multimodal image generation; this new model is denoted as CycleGAN_D. We evaluate the performance of all models on the same Pexels test set mentioned above.
IV-B Qualitative Evaluation
Figure 4 compares our exemplar-guided style transfer results with those of other approaches. Each row corresponds to one artist's style (domain) and different columns represent different methods. The corresponding content and style images are shown in the leftmost columns. Overall, our method achieves more visually plausible results than the others. For example, Gatys et al. (Figure 4c) fails to preserve semantic content well and also fails to reproduce the sky in the first and third content images (1st and 3rd rows). AdaIN (Figure 4d) achieves high fidelity w.r.t. the input content images, but the results are often over-blurred. WCT (Figure 4e) cannot produce satisfactory results, with severely distorted contents and less consistent style w.r.t. the style exemplars. AvatarNet (Figure 4f) and AAMS (Figure 4h) tend to generate either blurry results or images with granular artifacts. MUNIT (Figure 4g) performs better than WCT, but also suffers from blurring and dirty appearance artifacts (e.g., 2nd and 3rd rows). Finally, while SANet (Figure 4i) and LinearST (Figure 4j) present a balanced appearance between content and style, they still suffer from content distortions (Figure 4i, 2nd row) and color deviations (Figure 4j, 4th row). Compared to these approaches, our results have fewer artifacts and achieve better visual quality, i.e., both content similarity and style consistency are well preserved.
Our method can generate diverse multimodal results under different forms of style guidance. To demonstrate this advantage, for each domain representing the style of an artist, we generate multiple stylized images from the same content image (1) by random sampling in our learned style embedding space (see Figure 5), and (2) by using different reference images of the corresponding artist (see Figure 6). In both cases, our method generates vivid stylized images that are consistent with the unique style of each artist. For example, the results guided by Picasso's artwork are composed of large color blocks in an abstract style, while the ones guided by Monet's work appear hazy and are full of subtle strokes.
Our style space decouples multi-domain control from multimodal generation. Figure 7 shows style transfer results with a fixed style code and different artist domain labels. In general, our method generates images of different styles which preserve the unique brush strokes and customary color collocations of each artist. For instance, the stylized photos in Picasso's style are more vivid and abstract, while the ones in Van Gogh's style contain many tiny strokes.
To validate the smoothness of our latent style space, we present interpolation results under different guidance modes. Figure 8 shows image sequences generated by linear interpolation between two randomly sampled style codes in the latent space. We observe smooth and plausible style changes as the interpolation weight varies. Figure 9 demonstrates interpolation results between multiple styles defined by the reference images shown in the four corners. We obtain satisfactory intra-domain (vertical) and inter-domain (horizontal) interpolation results.
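The latent-space interpolation amounts to a simple linear blend of two style codes; each intermediate code would then be decoded by the generator with a domain label (names below are illustrative):

```python
import numpy as np

def interpolate_style_codes(code_a, code_b, num_steps=5):
    # Linearly interpolate between two style codes; feeding each
    # intermediate code to the generator renders the image sequence.
    weights = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - w) * code_a + w * code_b for w in weights]
```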
IV-C Quantitative Evaluation
Method          | Photo2Monet (P2M) | Photo2Van Gogh (P2V) | Photo2Cézanne (P2C) | Photo2Gauguin (P2G) | Photo2Picasso (P2P) | Avg.
Ours + Sample   | 0.76 | 0.26 | 0.41 | 0.33 | 0.31 | 0.414
Ours + Exemplar | 0.47 |
To evaluate the stylized image quality, we conduct a quantitative comparison using two different metrics, i.e., the Learned Perceptual Image Patch Similarity (LPIPS) score and the Inception score (IS). The LPIPS metric is defined as a perceptual distance between image pairs and is calculated as a weighted difference between their embeddings on a pretrained VGG16 network, where the weights are fit so that the metric agrees with human perceptual similarity judgments. The Inception score is the expectation of the KL divergence between two sets of generated images in the feature space of a pretrained Inception-V3 network, and measures generation quality and diversity. To measure the quality of each artist style, we finetune a specific embedding network for each domain separately on our training data. We select photos from the test set and stylize them in a specific domain with randomly sampled style codes for the IS metric and with different style code pairs for the LPIPS metric. We report the mean and standard deviation of both LPIPS and IS in Table II. As indicated by the scores, our method outperforms the other methods in most test cases. Specifically, CycleGAN_D obtains the lowest LPIPS score and cannot generate sufficiently diverse results, since the dropout layer only provides stochastic noise, which leads to limited changes in the output. StarGAN obtains the lowest IS score, which indicates that it cannot generate high-quality stylized images. We argue that the structure of StarGAN is designed for general multi-domain image translation and is not specifically optimized for style transfer. UFDN also does not perform well in most cases and fails to decouple content and style. Without specific designs and constraints, it is difficult to pass different style information into a single generator to synthesize plausible stylization results. DRIT and MUNIT demonstrate better results than StarGAN and CycleGAN_D in terms of the two metrics.
Our method outperforms these methods in most cases. The evaluation scores also show that our random-sample based generation is consistently better than exemplar-guided generation, which indicates that our random-sample approach is able to synthesize diverse and high-quality stylized images.
Finally, to demonstrate that our method preserves consistent domain information for multimodal generation, we conduct an experiment to re-classify stylized images into their corresponding style domains. We train a classification network using style labels from our training data. We compare our exemplar-guided and random-sample based approaches with two unified methods (i.e., StarGAN  and UFDN ) and report the corresponding results in Table III. In most cases (4 out of 5), our method leads to higher classification accuracy, which indicates our output distribution is closer to the reference style.
IV-D Ablation Study
To analyze the effect of each component in our framework, we conduct a series of ablation studies with certain components turned off and report the IS and LPIPS scores for evaluation. Table IV shows the results. The first row shows the effect of replacing the multi-scale discriminator with a standard one. As shown, the corresponding model performs well in terms of neither quality nor diversity. The result without the style preserving loss shows a significant decrease in LPIPS score, because there is no longer a requirement for style consistency. Without the content preserving loss, the network no longer preserves the content of the input, and thus performs worst on both IS and LPIPS. When the conditional identity loss is turned off, we do observe a small increase in LPIPS, which indicates slightly improved diversity; however, the image quality decreases, as indicated by the lower IS score. Figure 10 shows the visual quality under different ablation setups. Our full model in Figure 10g achieves the best trade-off between content fidelity and style consistency.
IV-E User Study
TABLE V: User study results. For each baseline (WCT, AdaIN, MUNIT, AAMS, and SANet), we report the percentage of pairwise comparisons in which our result is selected, in terms of content fidelity and style preference.
To further evaluate our method, we conduct a user study to measure the preference for different stylization methods. We select five exemplar-guided style transfer methods as baselines, including WCT, AdaIN, MUNIT, AAMS, and SANet. We hire annotators of different backgrounds and from different regions to answer randomly generated questions. Given the content image and the exemplar, we show the subjects two stylization results in random order, one by our method and the other by a baseline approach. For each pair of stylized images, the annotator is asked to answer two questions: (1) which one has higher fidelity to the content image; and (2) which one has the preferred style. Our user study results are reported in Table V. Each row provides the average percentage of comparisons in which our results are selected over those of the corresponding baseline, for the two aforementioned questions. As shown in Table V, our results are preferred over those of the baseline approaches by both measures.
In this paper, we present a new framework that achieves multimodal and multi-domain style transfer. To enable image stylization via both exemplar-based and randomly sampled guidance, we propose a novel style alignment module to construct an embedded style space. The constructed space eliminates the explicit distribution gaps among various style domains and enables both image-guided style feature extraction and random style code generation, while reducing the risk of mode collapse caused by an improper KL-divergence constraint. Our framework shows superior performance on multimodal and multi-domain style transfer tasks. Extensive qualitative and quantitative evaluations demonstrate that our method outperforms previous style transfer methods.
-  (2018) ComboGAN: unrestrained scalability for image domain translation. In , Cited by: §II.
-  (2016) Layer normalization. arXiv preprint arXiv:1607.06450. Cited by: §III-B.
-  (2017) CVAE-GAN: fine-grained image generation through asymmetric training. In IEEE International Conference on Computer Vision, Cited by: §III-A, §III-A.
-  (2019) Sym-parameterized dynamic inference for mixed-domain image translation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4803–4811. Cited by: §II.
-  (2018) StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §II, §III-B, §IV-A, TABLE II.
-  (2016) Image style transfer using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. , pp. 2414–2423. Cited by: §II, §IV-A.
-  (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, Cited by: §II, §III-A, §IV-A, §IV-E.
-  (2018) Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §I, §II, §III-A, §III-B, §IV-A, §IV-C, §IV-E, TABLE II.
-  (2017) Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §II.
-  (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §IV-A.
-  (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §II.
-  (2019) Content and style disentanglement for artistic style transfer. In Proceedings of the IEEE International Conference on Computer Vision, Cited by: §II, §II.
-  (2019) A content transformation block for image style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10032–10041. Cited by: §II.
-  (2018) Diverse image-to-image translation via disentangled representations. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §II, §III-B, §IV-A, TABLE II.
-  (2020) Drit++: diverse image-to-image translation via disentangled representations. International Journal of Computer Vision, pp. 1–16. Cited by: §II.
-  (2018) Learning linear transformations for fast arbitrary style transfer. arXiv preprint arXiv:1808.04537. Cited by: §II, §IV-A.
-  (2017) Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems, pp. 386–396. Cited by: §II, §IV-A, §IV-E.
-  (2018) A unified feature disentangler for multi-domain image translation and manipulation. In Advances in Neural Information Processing Systems, Cited by: §II, §IV-A, §IV-C, TABLE II.
-  (2017) Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, Cited by: §II.
-  (2019) Few-shot unsupervised image-to-image translation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 10551–10560. Cited by: §II.
-  (2020) GMM-UNIT: unsupervised multi-domain and multi-modal image-to-image translation via attribute gaussian mixture modeling. arXiv preprint arXiv:2003.06788. Cited by: §II.
-  (2019) Exemplar guided unsupervised image-to-image translation with semantic consistency. Proceedings of ICLR. Cited by: §II.
-  (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644. Cited by: §II.
-  (2019) Mode seeking generative adversarial networks for diverse image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §II.
-  (2017) Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Cited by: §III-B.
-  (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: §II.
-  (2017) Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the 34th International Conference on Machine Learning, pp. 2642–2651. Cited by: §II.
-  (2019) Arbitrary style transfer with style-attentional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5880–5888. Cited by: §II, §IV-A, §IV-E.
-  (2019) SMIT: stochastic multi-label image-to-image translation. In IEEE International Conference on Computer Vision Workshops, Cited by: §I, §II.
-  (2018) A style-aware content loss for real-time hd style transfer. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §II.
-  (2018) Avatar-net: multi-scale zero-shot style transfer by feature decoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8242–8250. Cited by: §II, §IV-A.
-  (2015) Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, Cited by: §II.
-  (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §IV-C.
-  (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §III-B.
-  (2020) StyleGAN2 distillation for feed-forward image manipulation. arXiv preprint arXiv:2003.03581. Cited by: §II.
-  (2019) A multi-domain and multi-modal representation disentangler for cross-domain image manipulation and classification. IEEE Transactions on Image Processing. Cited by: §II.
-  (2019) Attention-aware multi-stroke style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1467–1475. Cited by: §II, §IV-A, §IV-E.
-  (2019) Multi-mapping image-to-image translation via learning disentanglement. In Advances in Neural Information Processing Systems, Cited by: §I, §II.
-  (2018) The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §IV-C.
-  (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision, Cited by: §I, §II, §III-B, §IV-A, TABLE II.