Distribution Aligned Multimodal and Multi-Domain Image Stylization

06/02/2020 ∙ by Minxuan Lin, et al. ∙ Beijing Kuaishou Technology Co., Ltd.

Multimodal and multi-domain stylization are two important problems in the field of image style transfer. Currently, there are few methods that can perform both multimodal and multi-domain stylization simultaneously. In this paper, we propose a unified framework for multimodal and multi-domain style transfer with the support of both exemplar-based reference and randomly sampled guidance. The key component of our method is a novel style distribution alignment module that eliminates the explicit distribution gaps between various style domains and reduces the risk of mode collapse. The multimodal diversity is ensured by either guidance from multiple images or a random style code, while multi-domain controllability is directly achieved by using a domain label. We validate our proposed framework on painting style transfer with a variety of different artistic styles and genres. Qualitative and quantitative comparisons with state-of-the-art methods demonstrate that our method can generate high-quality results of multi-domain styles and multimodal instances with reference style guidance or randomly sampled styles.


I Introduction

Style transfer is a typical technique to stylize a content image in the style of another input. Recently, image-to-image translation methods based on conditional generative adversarial networks [9] have played a pivotal role in addressing the problem of style transfer. While these pioneering techniques have shown promising results for generating a single stylized output from a reference image, two interesting problems have been raised, namely multi-domain and multimodal stylization. Multi-domain stylization methods seek better controllability during the style transfer process, i.e., generating different styles based on guidance from multiple domains. Multimodal methods, on the other hand, focus on the diversity of the generated stylization results, i.e., synthesizing multiple different results which are all consistent with the same reference style.

The majority of existing multi-domain methods, e.g., StarGAN [5], are inherited from unpaired conditional image-to-image translation [40], which learns a one-to-one mapping between two domains and thus loses the ability to synthesize multimodal results. Multimodal methods such as MUNIT [8], on the other hand, are usually limited to handling only two domains at a time and do not support multi-domain stylization. Although several methods [29, 38] have been proposed to address both multimodal and multi-domain stylization simultaneously, they are restricted to generating images only by random sampling from the style space, or suffer from mode collapse due to the use of the Kullback-Leibler divergence.

The key to addressing these problems is to construct a style embedding space that (1) preserves style information from reference style images for exemplar-based multimodal stylization; (2) is smooth enough for random-sampling based multimodal stylization; (3) has a uniformly covered style distribution to avoid mode collapse; and (4) provides flexible control via domain labels for multi-domain stylization.

To overcome these challenges, we propose a unified framework that achieves multi-domain and multimodal stylization simultaneously with both exemplar-based and randomly sampled guidance, while reducing the possibility of mode collapse. The aligned space has the following properties: (1) a reference style can be extracted by the trained encoder to support exemplar-based stylization; (2) each conditioned space supports multimodal stylization via random sampling; (3) the style features sufficiently cover the sampling space; and (4) the space is conditioned on domains, so multi-domain stylization is available as well. We demonstrate the strength of our method on painting style transfer with a variety of artistic styles and genres. Both qualitative and quantitative comparisons with state-of-the-art methods indicate that our approach can generate high-quality results for multi-domain and multimodal stylization.

II Related Work

Style transfer.

Gatys et al. [6] first adopt a convolutional neural network for single-image stylization via an iterative optimization procedure. For more diversity, several arbitrary style transfer methods have been proposed. WCT [17] progressively repeats whitening and coloring operations at multiple scales to transfer arbitrary style patterns. Huang et al. [7] use the adaptive instance normalization (AdaIN) layer to align the feature statistics of content and style images. AvatarNet [31] proposes a style decorator to semantically make up the content feature with the style feature in multi-scale layers. Li et al. [16] introduce a transformation matrix to transfer style across different levels flexibly and efficiently. Sanakoyeu et al. [30] emphasize a style-aware content constraint to achieve real-time HD style transfer. Kotovenko et al. [13] use a content transformation module to focus on details. Kotovenko et al. [12] exchange the content and style of stylized images to disentangle the two elements for a better style mix. Moreover, several attention-aware approaches [37, 28] have been proposed, where the models learn to adjust the influence of the style feature on the content feature. However, none of the above methods supports multi-domain style transfer, and thus they lack controllability.

TABLE I: Comparisons with recent methods for image-to-image translation and style transfer. Pix2Pix, CycleGAN, UNIT, StarGAN, MUNIT, DRIT, UFDN, EGSC-IT, DMIT, SMIT, and our method are compared in terms of whether they use a unified generator and whether they support multiple modals, multiple domains, sample-guided synthesis, exemplar-guided synthesis, and feature adaptation.

Image-to-image translation.

Closely related to style transfer, image-to-image (I2I) translation addresses a more general synthesis problem which shifts the style distribution from one domain to another while maintaining semantic features between images. CGAN [26] performs a primitive translation process conditioned on noise. Pix2Pix [9] uses conditional generative adversarial networks to transfer images between two domains. These methods are further improved by CycleGAN [40], which uses a dual-learning approach and eliminates the requirement for paired data. While showing promising results, these methods are intrinsically limited to learning a mapping between two domains.

Based on these explorations, several methods attempt to address either multi-domain or multimodal I2I translation. ACGAN [27] proposes to append an auxiliary classifier to the discriminator to support multi-domain generation. For multi-domain translation, ComboGAN [1] leverages multiple encoder-decoders for switching between different styles. StarGAN [5] uses a unified conditional generator for multi-domain synthesis. SGN [4] explores the influence of mixed domains. For multimodal generation, MSGAN [24] introduces a new constraint which emphasizes the ratio of the distances between images and their corresponding latent codes. EGSC-IT [22] controls the AdaIN parameters of the image generator with a style coding branch. FUNIT [20] deals with multi-domain translation in a few-shot setting. However, none of them can perform multimodal and multi-domain translation simultaneously.

Recently, several methods [29, 38, 35, 21, 36, 15] propose to achieve multi-domain and multimodal synthesis within a single framework. SMIT [29] uses a combination of random noise and a domain condition as guidance for image translation. DMIT [38] separates content, style, and domain information with different encoders. However, their support for only randomly sampled guidance and the difficulty of controlling the style space with a KL-divergence loss remain obstacles. Concurrent to our work, StarGAN v2 [35] uses a mapping network to transform a latent code into a style code for multiple domains; a multi-branch strategy is also adopted by its discriminator. As a result, the number of parameters inevitably increases as more domains are added.

Disentangled latent representations.

Building a mapping between the latent space and the image space promotes the quality and controllability of the synthesized output. VAE [11] uses the reparameterization trick to construct the relationship between the two spaces. CVAE [32] takes a one-hot label as the condition to construct multiple clusters in the latent space. AAE [23] proposes an adversarial strategy to force the latent space distribution to be close to the prior distribution. UNIT [19] adopts two VAEs to encode latent vectors into a shared latent space. MUNIT [8] and DRIT [14] further disentangle the content feature and style feature into separate manifolds. To disentangle multi-domain features, UFDN [18] aligns domain representations via an adversarial domain classifier. Kotovenko et al. [12] use a fixpoint loss to decouple the content and style spaces. Similarly, we also encode the two properties with respective encoders.

Table I summarizes different properties of our method and other related techniques. Most existing methods focus on either multi-domain or multimodal synthesis; the few that explore both provide limited support for style guidance.

III Our Method

Fig. 1: Illustration of our entire framework.

The input of our method includes a natural content image x that the user wants to stylize, as well as a style code z associated with a domain label d, i.e., a one-hot vector indicating its style domain. The style code can either be generated from a reference style image s of a certain style domain, or be directly sampled from a normal Gaussian distribution N(0, I). When a style image is provided, the corresponding style code is extracted by our style encoder E_s (Section III-A). The style code and the style domain label are then converted to parameters which control the AdaIN layers of our image generator (Section III-A). The output image is finally synthesized by our image generator G based on the content image (Section III-B) and the above style information. Figure 1 illustrates our entire framework.
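As a rough sketch of this pipeline, the forward pass can be written as follows. The function and argument names (`stylize`, `generator`, `style_encoder`) and the 8-dimensional style code are illustrative assumptions, not the authors' released code.

```python
import torch

def stylize(content, generator, style_encoder, domain_label,
            style_image=None, z_dim=8):
    """Stylize `content` with either an exemplar or a sampled style code."""
    if style_image is not None:
        # Exemplar-guided: extract the style code from the reference image.
        z = style_encoder(style_image, domain_label)
    else:
        # Sampling-guided: draw the style code from a standard Gaussian.
        z = torch.randn(content.size(0), z_dim)
    # The generator consumes the content image together with (z, label);
    # internally (z, label) set the AdaIN parameters.
    return generator(content, z, domain_label)
```

Either branch yields a style code of the same shape, so the generator is agnostic to which guidance mode produced it.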

Fig. 2: Comparison of the KL loss and our style alignment module. An improper KL constraint will lead to excessive or inadequate coverage; we avoid this by adversarial training, achieving complete and disjoint coverage.

III-A Style Space Embedding

The key to integrating multimodal and multi-domain style transfer into a unified framework, without losing either exemplar-guided or random-sampling guidance, is an embedded style space that can be controlled at the inter-domain level and randomly traversed at the intra-domain level. In other words, the style space should (1) be clearly separated between different styles via the domain label d, and (2) form a smooth space that can be interpolated within a given style domain. To this end, inspired by CVAE-GAN [3], we design a style alignment module for style space embedding. The embedded style code is then further converted into parameters that control the AdaIN layers of our image generation network, as in [7].

Style alignment module.

Our style alignment module is an encoder-decoder network which constructs the embedded style space from style images of multiple domains. As shown in Figure 1(a), we feed a style image s and its corresponding domain label d into the style encoder E_s to form a one-dimensional style embedding z. The style encoder consists of multiple down-sampling blocks, and global average pooling (GAP) is applied to the final layer to squeeze the output style features.

Unlike CVAE-GAN, our style alignment module does not have to accurately reconstruct a given style image. Instead, its goal is to eliminate the explicit distribution gaps among the various style domains and align them, i.e., different artist styles are controlled by the domain label d, and the style space w.r.t. each domain is aligned to a Gaussian distribution to enable sampling and smooth interpolation. Thus, we avoid the reconstruction loss in [3], since a pixel-level reconstruction loss causes more content-related information to be encoded into the style code, which obstructs our goal of extracting style information only.
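A minimal sketch of such a style encoder is given below; the channel widths, depth, and the choice of tiling the one-hot label into extra input channels are assumptions for illustration.

```python
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Down-sampling blocks + GAP -> one-dimensional style code."""

    def __init__(self, num_domains=5, z_dim=8, base=32):
        super().__init__()
        layers, ch = [], 3 + num_domains  # RGB + tiled one-hot label
        for mult in (1, 2, 4, 8):         # four stride-2 down-sampling blocks
            layers += [nn.Conv2d(ch, base * mult, 4, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
            ch = base * mult
        self.conv = nn.Sequential(*layers)
        self.fc = nn.Linear(ch, z_dim)

    def forward(self, img, label):
        # Tile the one-hot domain label into extra input channels.
        b, _, h, w = img.shape
        lab = label.view(b, -1, 1, 1).expand(b, label.size(1), h, w)
        feat = self.conv(torch.cat([img, lab], dim=1))
        feat = feat.mean(dim=(2, 3))      # global average pooling (GAP)
        return self.fc(feat)
```

GAP discards spatial layout, which fits the goal above: spatial (content) structure should not leak into the style code.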

Additionally, the KL-divergence loss used in CVAE-GAN without a reconstruction constraint will lead to a trivial solution. Figures 2(a) and 2(b) show two situations when the KL-divergence loss is given an inappropriate weight: (1) a weak KL constraint makes the variance of the style features quickly converge to zero, resulting in inadequate coverage; (2) a strong KL constraint makes the style features indistinguishable from the Gaussian prior, resulting in excessive coverage. In both situations the style space is destroyed. Thus, we also remove the KL-divergence term and train our style alignment module with only the style adversarial loss L_align:

L_align = E_{z∼N(0,I)}[log D_z(z)] + E_{s,d}[log(1 − D_z(E_s(s, d)))]    (1)

where z is randomly sampled from the Gaussian distribution N(0, I). The style alignment discriminator D_z determines whether a style feature point is drawn from the Gaussian prior or generated by the style encoder E_s.
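One way to realize this adversarial alignment is sketched below, assuming a small discriminator over the code space; the non-saturating BCE formulation is a common choice and an assumption here, not a detail taken from the paper.

```python
import torch
import torch.nn.functional as F

def style_alignment_losses(d_z, encoded_z):
    """Return (discriminator loss, encoder loss) for one batch of codes."""
    prior_z = torch.randn_like(encoded_z)      # z ~ N(0, I)
    real_logit = d_z(prior_z)                  # "real" = prior samples
    fake_logit = d_z(encoded_z.detach())       # "fake" = encoder outputs
    d_loss = (F.binary_cross_entropy_with_logits(
                  real_logit, torch.ones_like(real_logit))
              + F.binary_cross_entropy_with_logits(
                  fake_logit, torch.zeros_like(fake_logit)))
    # The encoder tries to make its codes indistinguishable from the prior.
    e_logit = d_z(encoded_z)
    e_loss = F.binary_cross_entropy_with_logits(
        e_logit, torch.ones_like(e_logit))
    return d_loss, e_loss
```

The `detach()` keeps the discriminator update from back-propagating into the encoder, while the separate `e_loss` term drives the encoder toward the prior.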

The adversarial loss tends to align the joint distribution of all domain styles (i.e., the unconditioned style space) to a Gaussian distribution. Consequently, each conditioned space is arranged to cover a different region, and their union spans the full Gaussian distribution, as illustrated in Figure 2(c). A real case of the aligned feature distribution of our trained style space is illustrated in Figure 3(a) via t-SNE visualization and in Figure 3(b) via the distribution of L1 distances between randomly sampled style feature pairs. Evidently, our style space has complete and disjoint coverage.

Fig. 3: (a) The t-SNE embedding visualization of style features from different artist domains. (b) The L1 distance distribution of style code pairs. To show the effect of alignment, two style features from the same domain are extracted by the style encoder as a pair to calculate the distribution of Manhattan distances.

Controllable image synthesis.

Our style alignment module provides two possible ways to produce a style code, i.e., exemplar-guided and randomly sampled. As shown in Figure 1(b), an exemplar-guided style code is extracted from the style alignment encoder, providing precise control information; the corresponding stylized result is expected to achieve the same color distribution and texture appearance as the style image s. A randomly sampled style code enables multimodal stylization in a given domain by sampling the style code z together with an arbitrary domain label d. Similar to [8], we use the style code and its corresponding domain label for stylized image generation by controlling the parameters of the AdaIN layers: the style code is concatenated with the style label and transformed into channel-wise feature scales and biases for the AdaIN layers by a multi-layer perceptron network.
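The mapping from (style code, domain label) to AdaIN parameters can be sketched as follows; the MLP width and the per-layer channel counts are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AdaINParamNet(nn.Module):
    """MLP turning (style code, one-hot label) into AdaIN scales/biases."""

    def __init__(self, z_dim=8, num_domains=5, adain_channels=(256, 256)):
        super().__init__()
        # Each AdaIN layer needs one scale and one bias per channel.
        self.splits = [2 * c for c in adain_channels]
        self.mlp = nn.Sequential(
            nn.Linear(z_dim + num_domains, 256), nn.ReLU(inplace=True),
            nn.Linear(256, sum(self.splits)))

    def forward(self, z, label):
        params = self.mlp(torch.cat([z, label], dim=1))
        out = []
        for p in params.split(self.splits, dim=1):
            scale, bias = p.chunk(2, dim=1)
            out.append((scale, bias))
        return out  # one (scale, bias) pair per AdaIN layer
```

Concatenating the label with the code is what lets a single MLP serve all domains, in contrast to multi-branch designs.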

Fig. 4: Exemplar-guided stylization results from different methods: (a) Photo, (b) Exemplar, (c) Gatys, (d) AdaIN, (e) WCT, (f) AvatarNet, (g) MUNIT, (h) AAMS, (i) SANet, (j) LinearST, (k) Ours. The content and style images are shown in the left two columns. The remaining columns demonstrate output images generated by several popular style transfer approaches and our method.

III-B Stylized Image Generation

Image generator.

Given a content image x and controlling parameters derived from z and d, the output image is synthesized by the stylized image generator G. Figure 1(c) illustrates our generator framework. Inspired by CycleGAN [40], our network uses an encoder-decoder architecture containing several down-sampling layers, residual blocks, and up-sampling layers. Different from other image-to-image translation and stylization methods [8, 14], which use consistent normalization methods for most layers, we employ Instance Normalization (IN) [34] in the down-sampling layers and the first half of the residual blocks, AdaIN in the second half of the residual blocks, and Layer Normalization (LN) [2] in the up-sampling layers to avoid irregular artifacts.
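For reference, the AdaIN operation used in the later residual blocks, with externally supplied scale and bias, can be written explicitly as a small sketch:

```python
import torch

def adain(feat, scale, bias, eps=1e-5):
    """Adaptive instance normalization with external style statistics.

    Normalizes each (sample, channel) spatial plane to zero mean and unit
    variance, then re-scales and shifts with style-derived parameters.
    """
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    normed = (feat - mean) / std
    return scale[..., None, None] * normed + bias[..., None, None]
```

With scale and bias produced by an MLP from the style code and domain label, each residual block's statistics are steered toward the target style.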

Loss functions.

The goal of our stylized image generator is to produce images that both preserve fidelity with the original content image and remain consistent with the style code z. To ensure that the stylized output preserves the semantic content of the content image x, we use a content preserving loss L_c which constrains the output stylized image to have the same encoded content feature as the input content image:

L_c = ‖ E_c(G(x, z, d)) − E_c(x) ‖    (2)

where G(x, z, d) is the output stylized image and E_c denotes the content encoder; the distance is measured in the encoded feature space.

To ensure consistency between the style image and the synthesized output during training, we apply a style preserving loss L_s, which computes the distance between Gram matrices on multi-scale feature layers of a pre-trained VGG-16 classification network:

L_s = Σ_i ‖ Gram_i(G(x, z, d)) − Gram_i(s) ‖    (3)

where Gram_i(·) indicates the Gram matrix of the i-th feature map of the VGG network pretrained on ImageNet.
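A sketch of this Gram-matrix loss is given below, assuming a `vgg_feats` callable that returns a list of feature maps from a pretrained VGG-16 (the extractor itself is not shown).

```python
import torch

def gram(feat):
    """Normalized Gram matrix of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f.bmm(f.transpose(1, 2)) / (c * h * w)

def style_loss(vgg_feats, output, style_image):
    """Sum of Gram-matrix discrepancies across multi-scale VGG layers."""
    loss = 0.0
    for fo, fs in zip(vgg_feats(output), vgg_feats(style_image)):
        loss = loss + (gram(fo) - gram(fs)).pow(2).mean()
    return loss
```

The Gram matrix captures channel-wise feature correlations and discards spatial arrangement, which is why it measures style rather than content.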

Furthermore, we introduce a conditional identity loss L_idt to preserve content fidelity without affecting output quality. Specifically, we constrain an identity mapping when the same style image is used both as the style and the content input:

L_idt = ‖ G(s, E_s(s, d), d) − s ‖    (4)

where E_c and E_s encode features from the content image and the style image, respectively. Conditioned on the style label d, the generator G reconstructs the style image under this distance metric.

To generate realistic results, we use multi-scale patch-based discriminators D for adversarial training and an auxiliary classifier C for domain classification (Figure 1(d)), similar to [5]:

L_adv = E_s[(D(s) − 1)^2] + E_{x,z,d}[D(G(x, z, d))^2]    (5)

where D is the multi-scale discriminator network and the LSGAN [25] loss is used for adversarial training. The auxiliary classification loss L_cls is applied to constrain the output stylized image and the input style guidance into the same style domain:

L_cls = CE(C(G(x, z, d)), d) + CE(C(s), d)    (6)

where CE is the cross-entropy loss.
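The LSGAN and auxiliary-classification terms can be sketched as follows; the 0/1 least-squares targets are a common LSGAN choice and an assumption here.

```python
import torch
import torch.nn.functional as F

def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: real -> 1, fake -> 0."""
    return 0.5 * ((d_real - 1).pow(2).mean() + d_fake.pow(2).mean())

def lsgan_g_loss(d_fake):
    """Least-squares generator loss: push fake outputs toward 1."""
    return 0.5 * (d_fake - 1).pow(2).mean()

def domain_cls_loss(cls_logits, domain_idx):
    """Cross-entropy pulling an image's predicted domain toward the label."""
    return F.cross_entropy(cls_logits, domain_idx)
```

In a multi-scale setup these losses would simply be summed over the discriminator outputs at each scale.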

The final objective for our generator, discriminator, style alignment module and auxiliary classifier is formulated as:

L = L_adv + L_align + λ_cls L_cls + λ_c L_c + λ_s L_s + λ_idt L_idt    (7)

where the weights λ denote the relative importance among these objectives.

IV Experiments

IV-A Experimental Setup

Implementation details.

We implement the proposed framework using PyTorch. The input resolution of the network and the dimension of the style code are fixed across all experiments. For network training, we use Gaussian weight initialization and the Adam [10] optimizer with fixed learning rates, and train the full model for a fixed number of iterations.


Fig. 5: Random-sampling guided stylization results of different domains (artists): (a) Photo, (b) Monet, (c) Picasso, (d) Cézanne. Random style codes are sampled from a standard normal distribution. (a): content images; (b)-(d): our stylization results using the styles of three artists.

Training and test data.

For multi-domain training, we collect a total of 1,303 paintings from five artists on Wikiart (https://www.wikiart.org/), including Monet (458), Van Gogh (184), Cézanne (257), Gauguin (245), and Picasso (159), as style reference images. Each artist corresponds to one style domain. The content images for training are from the photo2art dataset of CycleGAN. The test images are collected from Pexels (https://www.pexels.com/) using the keywords landscape and nature; these natural photos serve as input content images.

Baseline methods.

To demonstrate the controllability and diversity of our method, we compare with 7 recent style transfer methods (i.e., Gatys [6], AdaIN [7], WCT [17], AvatarNet [31], AAMS [37], SANet [28], and LinearST [16]), as well as 5 image-to-image translation methods (i.e., CycleGAN [40], MUNIT [8], DRIT [14], StarGAN [5], and UFDN [18]). For a fair comparison, we use author-released source code whenever possible and train all methods with their default configurations on the same training set for the same number of iterations, except CycleGAN. We train CycleGAN with dropout layers of probability 0.5 to make it feasible for multimodal image generation; the new model is denoted as CycleGAN_D. We evaluate all models on the same Pexels test set mentioned above.


Fig. 6: Style transfer results using artwork of different artists as exemplars: (a) Photo, (b) Cézanne, (c) Van Gogh, (d) Gauguin, (e) Picasso. The first column shows input content images and the first row shows the guiding exemplars. For each artist (domain), two exemplars are used to demonstrate intra-domain discrepancy.

IV-B Qualitative Evaluation

Fig. 7: Results of changing artist domains: (a) Photo, (b) Cézanne, (c) Gauguin, (d) Monet, (e) Picasso, (f) Van Gogh. We fix the content image and style code to conduct style transfer with different artists' labels.
Fig. 8: Linear interpolation results of two random styles. The content images are shown on the left, while the two randomly sampled styles are shown in the second left-most and right-most columns, respectively.

Qualitative comparison.

Figure 4 compares our exemplar-guided style transfer results with those of other approaches. Each row corresponds to one artist's style (domain) and different columns represent different methods; the corresponding content and style images are shown in the two leftmost columns. Overall, our method achieves more visually plausible results than the others. For example, Gatys et al. (Figure 4c) fail to preserve semantic content well and also fail to reproduce the sky in the first and third content images (1st and 3rd rows). AdaIN (Figure 4d) achieves high fidelity w.r.t. the input content images, but its results are often over-blurred. WCT (Figure 4e) cannot produce satisfactory results, with severely distorted content and less consistent style w.r.t. the style exemplars. AvatarNet (Figure 4f) and AAMS (Figure 4h) tend to generate either blurry results or images with granular artifacts. MUNIT (Figure 4g) performs better than WCT, but also suffers from blurring and dirty appearance artifacts (e.g., 2nd and 3rd rows). Finally, while SANet (Figure 4i) and LinearST (Figure 4j) present a balanced appearance between content and style, they still suffer from content distortions (Figure 4i, 2nd row) and color deviations (Figure 4j, 4th row). Compared to these approaches, our results contain fewer artifacts and achieve better visual quality, i.e., both content similarity and style consistency are well preserved.

Multimodal generation.

Our method can generate diverse multimodal results with different forms of style guidance. To demonstrate this advantage, for each domain representing the style of an artist, we generate multiple stylized images from the same content image (1) by random sampling in our learned style embedding space (see Figure 5), and (2) by using different reference images of the corresponding artist (see Figure 6). In both cases, our method generates vivid stylized images consistent with the unique style of each artist. For example, the results guided by Picasso's artwork are composed of large color blocks in an abstract style, while the ones guided by Monet's work appear hazy and are full of subtle strokes.

Multi-domain generation.

Our style space decouples multi-domain control from multimodal generation. Figure 7 shows style transfer results with a fixed style code and different artist domain labels. In general, our method generates images of different styles which preserve the unique brush strokes and customary color collocations of each artist. For instance, the stylized photos in Picasso's domain are more vivid and abstract, while those in Van Gogh's domain contain many tiny strokes.

Style interpolation.

To validate the smoothness of our latent style space, we present interpolation results using different guidance methods. Figure 8 shows image sequences generated by linear interpolation between two randomly sampled style codes in latent space; we observe smooth and plausible style changes as the interpolation weight varies. Figure 9 demonstrates interpolation results between multiple styles defined by the reference images shown in the four corners. We obtain satisfactory intra-domain (vertically) and inter-domain (horizontally) interpolation results.
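The interpolation in Figure 8 amounts to linear blending in code space; a minimal sketch (the helper name is illustrative):

```python
import torch

def interpolate_styles(z_a, z_b, steps=5):
    """Linearly blend two style codes; each blend is decoded separately."""
    weights = torch.linspace(0.0, 1.0, steps)
    return [torch.lerp(z_a, z_b, w.item()) for w in weights]
```

Each blended code would then be fed to the generator with a fixed content image and domain label to produce the image sequence.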

IV-C Quantitative Evaluation

TABLE II: Comparisons of IS and LPIPS scores of different stylization methods (CycleGAN_D [40], StarGAN [5], UFDN [18], DRIT [14], MUNIT [8], Ours+Exemplar, and Ours+Sample) for each painter domain: Photo2Monet (P2M), Photo2Van Gogh (P2V), Photo2Cézanne (P2C), Photo2Gauguin (P2G), and Photo2Picasso (P2P). One hundred content images are respectively stylized 100 times at random for calculating the IS score. The LPIPS metric is measured using 1,900 pairs of stylized images for the domain of each artist. For both metrics, higher is better. The last row presents the results of real data.
Method | P2M | P2V | P2C | P2G | P2P | Overall
StarGAN | — | 0.46 | 0.14 | 0.15 | 0.11 | 0.348
UFDN | 0.92 | 0.56 | 0.71 | 0.16 | 0.10 | 0.490
Ours + Sample | 0.76 | 0.26 | 0.41 | 0.33 | 0.31 | 0.414
Ours + Exemplar | — | — | — | — | — | 0.47
TABLE III: The artist classification accuracy of methods with a unified generator. For each domain, one hundred test images from Pexels.com are used to perform stylization. The results are categorized by a ResNet-18 model finetuned on the test set.

To evaluate the quality of our stylized images, we conduct a quantitative comparison using two metrics: the Learned Perceptual Image Patch Similarity (LPIPS) score and the Inception score (IS). The LPIPS metric [39] is a perceptual distance between image pairs, calculated as a weighted difference between their embeddings on a pretrained VGG16 network, where the weights are fit so that the metric agrees with human perceptual similarity judgments. The Inception score is the exponentiated expectation of the KL divergence between the conditional and marginal class distributions predicted by a pretrained Inception-V3 [33] network, and measures both generation quality and diversity. To measure the quality for each artist style, we finetune a specific embedding network for each domain separately on our training data. We select photos from the test set and stylize them in each domain, with randomly sampled style codes for the IS metric and different style code pairs for the LPIPS metric. We report the mean and standard deviation of both LPIPS and IS in Table II.

As indicated by the scores, our method outperforms the other methods in most test cases. Specifically, CycleGAN_D gets the lowest LPIPS score and cannot generate sufficiently diverse results, since its dropout layers only provide stochastic noise, which leads to limited changes in the output. StarGAN gets the lowest IS score, which indicates that it cannot generate high-quality stylized images; we argue that the structure of StarGAN is designed for general multi-domain image translation and is not specifically optimized for style transfer. UFDN also does not perform well in most cases and fails to decouple content and style: without any specific design and constraints, it is difficult to pass different style information into a single generator to synthesize plausible stylization results. DRIT and MUNIT demonstrate better results than StarGAN and CycleGAN_D in terms of the two metrics, but our method outperforms them in most cases. The evaluation scores also show that our random-sampling based generation is consistently better than exemplar-guided generation, which indicates that our random-sampling approach is able to synthesize diverse and high-quality stylized images.
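For concreteness, the Inception Score can be computed from class-probability vectors as sketched below; this assumes the softmax outputs of a pretrained Inception-V3 over the generated images have already been collected, and the helper name is hypothetical.

```python
import torch

def inception_score(probs, eps=1e-12):
    """IS = exp(E_x[KL(p(y|x) || p(y))]) from an (N, classes) tensor."""
    marginal = probs.mean(dim=0, keepdim=True)  # marginal label dist p(y)
    kl = (probs * (probs.add(eps).log()
                   - marginal.add(eps).log())).sum(dim=1)
    return kl.mean().exp().item()
```

A score of 1 means the per-image predictions match the marginal (no confidence or no diversity); higher scores require predictions that are both confident and varied.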

Finally, to demonstrate that our method preserves consistent domain information for multimodal generation, we conduct an experiment to re-classify stylized images into their corresponding style domains. We train a classification network using style labels from our training data. We compare our exemplar-guided and random-sample based approaches with two unified methods (i.e., StarGAN [5] and UFDN [18]) and report the corresponding results in Table III. In most cases (4 out of 5), our method leads to higher classification accuracy, which indicates that our output distribution is closer to the reference style.

IV-D Ablation Study

Fig. 9: Linear interpolation results of multiple styles. The input style images are shown in the four corners.
Fig. 10: Qualitative results of the ablation study: (a) style, (b) photo, (c) w/o ms, (d)-(f) w/o individual loss terms, (g) full model. The influencing factors are respectively removed and the corresponding results are compared with our full model.

To analyze the effect of each component in our framework, we conduct a series of ablation studies with individual components turned off and report the IS and LPIPS scores for evaluation. Table IV shows the results. The first row shows the effect of replacing the multi-scale discriminator with a standard one; the corresponding model performs well in terms of neither quality nor diversity. The result without the style preserving loss L_s shows a significant decrease in LPIPS score, because there is no longer a requirement for style consistency. Without the content preserving loss L_c, the network no longer preserves the content of the input reference and thus performs worst on both IS and LPIPS scores. When the conditional identity loss L_idt is turned off, we observe a small increase in LPIPS score, indicating slightly improved diversity; however, image quality decreases, as indicated by the lower IS score. Figure 10 shows the visual quality under the different ablation setups. Our full model (Figure 10g) achieves the best trade-off between content fidelity and style consistency.

IV-E User Study

TABLE IV: Ablation study of our method, reporting IS and LPIPS for the full model and for variants without the multi-scale discriminator and without individual loss terms. Different factors are removed separately to evaluate the corresponding impact on the results.
TABLE V: Our user study results, comparing our method against WCT, AdaIN, MUNIT, AAMS, and SANet in terms of content fidelity and style preference. In each row, we report the average percentage that our results are selected when compared to those from the corresponding baseline method.

To further evaluate our method quantitatively, we conduct a user study to measure the preference for different stylization methods. We select five exemplar-guided style transfer methods as baselines: WCT [17], AdaIN [7], MUNIT [8], AAMS [37], and SANet [28]. We hire annotators of different backgrounds and from different regions to answer randomly generated questions. Given the content image and the exemplar, we show the subjects two stylization results in random order, one by our method and the other from a baseline approach. For each pair of stylized images, the annotator is asked two questions: (1) which one has higher fidelity to the content image; and (2) which one has the preferred style. The results are reported in Table V: each row gives the average percentage that our results are selected when compared to those from the corresponding baseline method for the two aforementioned questions. As shown in Table V, our results are preferred over those from the baseline approaches by both measures.

V Conclusion

In this paper, we present a new framework that achieves multimodal and multi-domain style transfer. To enable image stylization via both exemplar-based and randomly sampled guidance, we propose a novel style alignment module to construct an embedded style space. The constructed space eliminates the explicit distribution gaps among various style domains and enables both image-guided style feature extraction and random style code generation, while reducing the risk of mode collapse caused by an improperly constrained KL-divergence loss. Our framework shows superior performance on multimodal and multi-domain style transfer tasks, and extensive qualitative and quantitative evaluations demonstrate that it outperforms previous style transfer methods.

References

  • [1] A. Anoosheh, E. Agustsson, R. Timofte, and L. Van Gool (2018) ComboGAN: unrestrained scalability for image domain translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Cited by: §II.
  • [2] J. L. Ba, J. R. Kiros, and G. E. Hinton (2016) Layer normalization. arXiv preprint arXiv:1607.06450. Cited by: §III-B.
  • [3] J. Bao, D. Chen, F. Wen, H. Li, and G. Hua (2017) CVAE-GAN: fine-grained image generation through asymmetric training. In IEEE International Conference on Computer Vision, Cited by: §III-A, §III-A.
  • [4] S. Chang, S. Park, J. Yang, and N. Kwak (2019) Sym-parameterized dynamic inference for mixed-domain image translation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4803–4811. Cited by: §II.
  • [5] Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo (2018) StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §II, §III-B, §IV-A, TABLE II.
  • [6] L. A. Gatys, A. S. Ecker, and M. Bethge (2016) Image style transfer using convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2414–2423. Cited by: §II, §IV-A.
  • [7] X. Huang and S. Belongie (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, Cited by: §II, §III-A, §IV-A, §IV-E.
  • [8] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §I, §II, §III-A, §III-B, §IV-A, §IV-C, §IV-E, TABLE II.
  • [9] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §II.
  • [10] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §IV-A.
  • [11] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §II.
  • [12] D. Kotovenko, A. Sanakoyeu, S. Lang, and B. Ommer (2019) Content and style disentanglement for artistic style transfer. In Proceedings of the IEEE International Conference on Computer Vision, Cited by: §II, §II.
  • [13] D. Kotovenko, A. Sanakoyeu, P. Ma, S. Lang, and B. Ommer (2019) A content transformation block for image style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10032–10041. Cited by: §II.
  • [14] H. Lee, H. Tseng, J. Huang, M. Singh, and M. Yang (2018) Diverse image-to-image translation via disentangled representations. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §II, §III-B, §IV-A, TABLE II.
  • [15] H. Lee, H. Tseng, Q. Mao, J. Huang, Y. Lu, M. Singh, and M. Yang (2020) Drit++: diverse image-to-image translation via disentangled representations. International Journal of Computer Vision, pp. 1–16. Cited by: §II.
  • [16] X. Li, S. Liu, J. Kautz, and M. Yang (2018) Learning linear transformations for fast arbitrary style transfer. arXiv preprint arXiv:1808.04537. Cited by: §II, §IV-A.
  • [17] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M. Yang (2017) Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems, pp. 386–396. Cited by: §II, §IV-A, §IV-E.
  • [18] A. H. Liu, Y. Liu, Y. Yeh, and Y. F. Wang (2018) A unified feature disentangler for multi-domain image translation and manipulation. In Advances in Neural Information Processing Systems, Cited by: §II, §IV-A, §IV-C, TABLE II.
  • [19] M. Liu, T. Breuel, and J. Kautz (2017) Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, Cited by: §II.
  • [20] M. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz (2019) Few-shot unsupervised image-to-image translation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 10551–10560. Cited by: §II.
  • [21] Y. Liu, M. De Nadai, J. Yao, N. Sebe, B. Lepri, and X. Alameda-Pineda (2020) GMM-UNIT: unsupervised multi-domain and multi-modal image-to-image translation via attribute Gaussian mixture modeling. arXiv preprint arXiv:2003.06788. Cited by: §II.
  • [22] L. Ma, X. Jia, S. Georgoulis, T. Tuytelaars, and L. Van Gool (2019) Exemplar guided unsupervised image-to-image translation with semantic consistency. Proceedings of ICLR. Cited by: §II.
  • [23] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644. Cited by: §II.
  • [24] Q. Mao, H. Lee, H. Tseng, S. Ma, and M. Yang (2019) Mode seeking generative adversarial networks for diverse image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §II.
  • [25] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley (2017) Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Cited by: §III-B.
  • [26] M. Mirza and S. Osindero (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: §II.
  • [27] A. Odena, C. Olah, and J. Shlens (2017) Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 2642–2651. Cited by: §II.
  • [28] D. Y. Park and K. H. Lee (2019) Arbitrary style transfer with style-attentional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5880–5888. Cited by: §II, §IV-A, §IV-E.
  • [29] A. Romero, P. Arbeláez, L. Van Gool, and R. Timofte (2019) SMIT: stochastic multi-label image-to-image translation. In IEEE International Conference on Computer Vision Workshops, Cited by: §I, §II.
  • [30] A. Sanakoyeu, D. Kotovenko, S. Lang, and B. Ommer (2018) A style-aware content loss for real-time hd style transfer. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §II.
  • [31] L. Sheng, Z. Lin, J. Shao, and X. Wang (2018) Avatar-net: multi-scale zero-shot style transfer by feature decoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8242–8250. Cited by: §II, §IV-A.
  • [32] K. Sohn, H. Lee, and X. Yan (2015) Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, Cited by: §II.
  • [33] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §IV-C.
  • [34] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §III-B.
  • [35] Y. Viazovetskyi, V. Ivashkin, and E. Kashin (2020) StyleGAN2 distillation for feed-forward image manipulation. arXiv preprint arXiv:2003.03581. Cited by: §II.
  • [36] F. Yang, J. Chang, C. Tsai, and Y. F. Wang (2019) A multi-domain and multi-modal representation disentangler for cross-domain image manipulation and classification. IEEE Transactions on Image Processing. Cited by: §II.
  • [37] Y. Yao, J. Ren, X. Xie, W. Liu, Y. Liu, and J. Wang (2019) Attention-aware multi-stroke style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1467–1475. Cited by: §II, §IV-A, §IV-E.
  • [38] X. Yu, Y. Chen, S. Liu, T. Li, and G. Li (2019) Multi-mapping image-to-image translation via learning disentanglement. In Advances in Neural Information Processing Systems, Cited by: §I, §II.
  • [39] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang (2018) The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §IV-C.
  • [40] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision, Cited by: §I, §II, §III-B, §IV-A, TABLE II.