1 Introduction
Image-to-image translation is a general formulation that covers a wide range of computer vision problems. Just as a sentence may be translated into either English or French, an image may be rendered into another image. Many problems in image processing can be defined as “translating” an input image in one domain into a corresponding output image in another domain. For example, denoising, super-resolution and colorization all pertain to image-to-image translation, where the input is a degraded image (noisy, low-resolution, or grayscale) and the output is a high-quality color image.
Recently, a series of attractive works has ignited renewed interest in the image-to-image translation problem by adopting Convolutional Neural Networks (CNNs). Gatys et al. [3] first studied how to use a CNN to reproduce famous painting styles on natural images. Since the seminal work by Goodfellow et al. [4], GANs have been proposed for a wide variety of problems. Unlike past works, [15, 16, 29, 30] utilize GANs to translate an image from a source domain to a target domain in the absence of paired examples. These algorithms often produce impressive results close to the corresponding target domain, since a joint distribution of the two domains can be learnt by using images from the marginal distributions of the individual domains.
Notwithstanding their demonstrated success, existing approaches basically focus on the one-model, two-domain setting. Specifically, after one full training run, translation is limited to a single pair of domains. After a careful examination of existing image-to-image translation networks, we argue that, in their learnt network structures, different marginal distributions can be projected into a common space. To the best of our knowledge, a translation among N domains with O(N) complexity has not yet been proposed in these previous works.
As a result, such a network is only able to capture the translation between two specific domains at a time. For a new domain, the whole network has to be retrained end-to-end, which leads to an unavoidable burden when N(N−1) transformations are required, given N domains. In practice, this makes these methods unable to scale to a large number of domains, especially when the set of domains needs to be incrementally augmented. Additionally, how to further reduce the training time and network model size, and how to enable more flexibility in controlling translation among domains, remain to be addressed.
To overcome these problems, we explore multi-domain image translation, in which we reconsider the joint distributions of multiple domains. From the perspective of probabilistic modeling, the coupling theory [14] states that, in general, there exists an infinite set of joint distributions that can yield the given marginal distributions. This highly ill-posed problem forces us to make additional assumptions on the structure of the joint distribution. By further considering the interaction of domains, we make a global shared-latent space assumption: every image sampled from any of the N domains can be mapped to a universal shared-latent space. Based on this universal assumption, we propose a compact and easily extended DomainBank framework that learns the joint distribution of every domain pair simultaneously.
In detail, the proposed DomainBank framework is composed of multiple domain-specific component banks, and each component bank represents one specific domain. Specifically, a component bank consists of an encoder and a decoder for a specific domain. For an input image, the corresponding component bank maps the image to the shared-latent space, which is then decoded to the target image.
On several challenging unsupervised multi-domain image translation tasks, such as face image translation, fashion-clothes translation and painting style translation, we comprehensively demonstrate superior efficiency and results at least comparable to the state-of-the-art methods. Furthermore, as more domains' samples are engaged in the DomainBank framework, the performance gain on digit-recognition domain adaptation tasks becomes consistently obvious. More importantly, our framework not only allows us to simultaneously learn the translation among various domains, but also enables very efficient incremental learning of a new image domain. This is achieved by learning a new component bank for the new domain while holding the other encoders/decoders fixed.
Compared with existing models under the unsupervised imagetoimage translation settings, our proposed DomainBank is unique in the following aspects:

Our model is designed with a compact and clean structure, which also obtains a considerable reduction (from a complexity of O(N^2) to O(N)) in training time and model parameters in the case of N (N ≥ 2) domains with N(N−1) transformations.

The universal shared autoencoder subnetwork is trained efficiently and effectively with multi-domain training samples/pairs, thus leading to better generation, which is confirmed in both quantitative and qualitative experimental results.

The shared autoencoder, along with the domain-specific encoders/decoders, provides more functional utilities such as linear combination of domains and incrementally learning a new domain.
The remainder of the paper is organized as follows: Related work is summarized in Section 2. We devote Section 3 to the main technical design of the proposed DomainBank. Section 4 gives the experimental results in both quantitative and qualitative aspects. New characteristics of the proposed framework can be found in Section 5. Finally, we conclude our paper in Section 6.
2 Related Work
Image-to-image translation has been advanced by deep neural networks, with impressive results especially in the domain/style transfer fields. Neural generative models have recently received an increasing amount of attention. Several algorithms, including generative adversarial networks [4], variational autoencoders (VAEs) [10, 11], stochastic backpropagation [23] and diffusion processes [25], have demonstrated that a deep neural network can learn a domain distribution from examples, so the learned networks can be used to generate novel images. We are interested in the image-to-image translation problem. Analyzing image translation from a probabilistic modeling perspective, the key challenge is to learn a joint distribution of images in different domains from the marginal distributions of the individual domains.
Image-to-Image Translation. Many image processing and computer vision tasks can be posed as an image-to-image translation problem, mapping an image in one domain to a corresponding image in another domain, e.g., image segmentation, stylization, super-resolution and abstraction. The idea can be traced back at least to Hertzmann et al.'s Image Analogies [6]. For instance, image segmentation can be considered as the problem of mapping a natural image to a corresponding segmented image. More recent approaches use a dataset of input-output samples to learn a parametric function using CNNs. Similar ideas have also been applied to various tasks, including generating photos from sketches, attributes, semantic layouts, etc.
Unsupervised Image-to-Image Translation. In the unsupervised image-to-image setting, we only have two independent sets of images, where one contains images in one domain and the other contains images in another domain. Note that there are no paired samples guiding how an image should be translated to a corresponding image in the other domain. Several approaches adopt this unpaired setting, where the goal is to relate two data domains X_1 and X_2. More recently, [26] proposed the domain transformation network (DTN) and achieved promising results on translating small-resolution face and digit images. Liu and Tuzel proposed CoGAN [16], which uses a weight-sharing strategy to learn a common representation across two domains. Subsequently, Liu et al. made a shared-latent space assumption and proposed an unsupervised image-to-image translation framework [15], which uses VAEs and GANs to learn a mapping from input to output images. Our approach builds on this framework. However, unlike these prior works, we learn the translation among multiple domains (more than two) without paired training samples, and at the same time enable very efficient incremental learning of a new domain.
Generative Adversarial Networks (GANs). GANs have achieved great success in a wide variety of computer vision applications, enhancing both supervised and unsupervised tasks. The key of GANs is the adversarial loss, which forces the generated images to be indistinguishable from real images. Learning in a GAN stages a zero-sum game between two players, where the discriminator tries to distinguish real samples from fake ones and the generator attempts to fool it. Various GANs have since been proposed for image generation conditioned on class labels [19], attributes [21, 28] and images [13, 15, 30, 16, 29, 22, 7]. A list of training tricks for GANs is given in [24].
Variational Autoencoders (VAEs). A VAE consists of two networks: one encodes a data sample into a latent representation and the other decodes the latent representation back to data space. The key of VAEs is to optimize a variational bound. By enhancing the variational approximation, superior image generation results were obtained [9, 18]. Larsen et al. [11] proposed a VAE-GAN architecture to improve the image generation quality of VAEs. VAEs have also been applied to translating face image attributes [28]. More recently, Liu et al. [15] extended the VAE-GAN framework to unsupervised image-to-image translation problems.
3 DomainBank Networks
Our goal is to learn the translations among N domains. This may offer a new understanding of image domain translation problems, and in turn help design a more elegant architecture for multi-domain translation.
We construct a multi-domain image translation network based on variational autoencoders (VAEs) [10, 11, 23] and generative adversarial networks (GANs) [4, 16, 30], which encodes an input image into the shared-latent space and can also reconstruct or translate it.
Networks:   {E_i, G_i}   | {G_i, D_i}   | {E_i, G_i, D_i} | {E_i, G_j}                   | {E_i, G_j, E_j, G_i}
Functions:  VAE for X_i  | GAN for X_i  | VAE-GAN [11]    | Image Translator (X_i → X_j) | Cycle-consistency [30]
3.1 Network Architecture
Figure 1 shows our multi-domain translation architecture, which is based on the universal shared-latent space assumption. Suppose we consider two arbitrary domains out of the N domains, namely X_i and X_j, which contain training samples {x_i^k} where x_i^k ∈ X_i and {x_j^k} where x_j^k ∈ X_j, respectively. We denote the corresponding marginal data distributions as P_{X_i} and P_{X_j}. We aim to learn a joint distribution of images in domains X_i and X_j by utilizing images from the marginal distributions of the two individual domains. This can be easily extended to the case of N domains; that is, we can learn a joint distribution over all N domains.
Every domain (say X_i) has two functional paths, through which any given image sampled from X_i can be projected into the shared-latent code z and recovered back as well. That is, we suppose there exist functions E_i and G_i (i = 1, …, N) such that, given a pair of corresponding images (x_i, x_j) (where x_i ∈ X_i and x_j ∈ X_j) from the joint distribution, we have z = E_i(x_i) = E_j(x_j) and, conversely, x_i = G_i(z) and x_j = G_j(z). Within this structure, we map domain X_i to domain X_j through the function x_j = F_{i→j}(x_i), which can be represented by the composition F_{i→j}(x_i) = G_j(E_i(x_i)). Equally, we define two reconstruction functions for domain X_i: 1) F_{i→i} and 2) F_{j→i} ∘ F_{i→j}. Function 1 can be equivalently written as G_i(E_i(x_i)), and function 2 as G_i(E_j(G_j(E_i(x_i)))). More notably, for an input image in domain X_i, function 1 directly reconstructs it within domain X_i, whereas in function 2 the input image is first translated from domain X_i to domain X_j, and then the generated image in domain X_j is converted back to domain X_i. A necessary condition for the translation between X_i and X_j to exist is the cycle-consistency constraint [8, 15, 30]: x_i = F_{j→i}(F_{i→j}(x_i)). In other words, we can reconstruct the input image by translating back the translated input image. Therefore, the shared-latent space assumption implies the cycle-consistency assumption.
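To make these compositions concrete, the following Python sketch (with the trained domain-specific encoders/decoders passed in as callables) expresses the translation and cycle-reconstruction functions:

```python
def translate(x_i, E_i, G_j):
    """F_{i->j}: encode x_i into the shared-latent code, decode with domain j."""
    z = E_i(x_i)
    return G_j(z)

def cycle_reconstruct(x_i, E_i, G_i, E_j, G_j):
    """F_{j->i}(F_{i->j}(x_i)): translate i -> j, then translate back j -> i."""
    x_ij = translate(x_i, E_i, G_j)   # generated image in domain X_j
    return translate(x_ij, E_j, G_i)  # converted back to domain X_i
```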
Domain-Specific Encoder and Decoder. Following the architecture used in [15], each image encoder consists of 3 convolutional layers and 3 basic residual blocks [5]; symmetrically, each image decoder consists of 3 basic residual blocks and 3 transposed convolutional layers. In our multi-domain image-to-image translation, different domains have domain-specific encoders E_i and domain-specific decoders G_i. For instance, Monet's paintings need to use Monet's specific encoder, whilst Van Gogh's have to use Van Gogh's specific encoder; the same holds for the domain-specific decoders. Different domains use their domain-specific encoders to extract representations of the input images, and their domain-specific decoders are responsible for decoding those representations to reconstruct images in the respective domains. In other words, an encoder and a decoder can be seen as a domain-specific component, and we only need to train one component per domain. In practice, when a new domain arrives, this also enables us to conduct incremental learning, training only the new encoder and decoder.
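A minimal PyTorch sketch of one component bank under this design follows; the channel widths, normalization and activation choices are illustrative assumptions rather than the exact configuration of [15]:

```python
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

class DomainEncoder(nn.Module):
    """3 convolutional layers + 3 residual blocks -> shared-latent code."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 7, stride=1, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(2 * ch, 4 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            ResBlock(4 * ch), ResBlock(4 * ch), ResBlock(4 * ch))
    def forward(self, x):
        return self.net(x)

class DomainDecoder(nn.Module):
    """3 residual blocks + 3 transposed convolutional layers -> image."""
    def __init__(self, out_ch=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            ResBlock(4 * ch), ResBlock(4 * ch), ResBlock(4 * ch),
            nn.ConvTranspose2d(4 * ch, 2 * ch, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(2 * ch, ch, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, out_ch, 3, stride=1, padding=1), nn.Tanh())
    def forward(self, z):
        return self.net(z)
```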
Universal Shared Autoencoder. Based on the shared-latent assumption, we enforce a weight-sharing constraint to relate the VAEs. Specifically, we further assume a shared intermediate representation h such that the process of generating corresponding images satisfies

z → h → (x_1, x_2, …, x_N).   (1)

Therefore, we assume G_i ≡ g_{L,i} ∘ g_H, where g_H is a common high-level generation function that maps z to h, and the g_{L,i} are low-level generation functions that map h to x_i, respectively. From another view, z can be considered as a high-level representation shared by the different domains, and G_i can be regarded as a domain-specific implementation of the generation process through g_{L,i}. Similarly, the decomposition E_i ≡ e_H ∘ e_{L,i} allows us to represent z by z = e_H(e_{L,i}(x_i)). In the implementation, we share the weights of the last few layers of {E_i}, which are responsible for extracting high-level representations of input images in the N domains. Equally, the first few layers of {G_i} are shared, which are responsible for decoding high-level representations for reconstructing the input images.
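In PyTorch-style pseudocode, the weight sharing can be realized by instantiating the shared layers once and referencing the same module object from every bank; the sketch below makes each shared part a single convolution purely for illustration:

```python
import torch.nn as nn

C = 256  # latent channel width (illustrative)

class BankEncoder(nn.Module):
    """Domain-specific low-level layers followed by layers shared across banks."""
    def __init__(self, shared_top):
        super().__init__()
        self.private = nn.Conv2d(3, C, 3, padding=1)  # e_{L,i}: domain-specific
        self.shared = shared_top                      # e_H: same object in every bank
    def forward(self, x):
        return self.shared(self.private(x))           # z = e_H(e_{L,i}(x))

class BankDecoder(nn.Module):
    """Shared high-level decoding layers followed by domain-specific layers."""
    def __init__(self, shared_bottom):
        super().__init__()
        self.shared = shared_bottom                   # g_H: same object in every bank
        self.private = nn.Conv2d(C, 3, 3, padding=1)  # g_{L,i}: domain-specific
    def forward(self, z):
        return self.private(self.shared(z))           # x = g_{L,i}(g_H(z))

N = 4                                # number of domains (illustrative)
e_H = nn.Conv2d(C, C, 3, padding=1)  # shared last encoder layer(s)
g_H = nn.Conv2d(C, C, 3, padding=1)  # shared first decoder layer(s)
encoders = nn.ModuleList([BankEncoder(e_H) for _ in range(N)])
decoders = nn.ModuleList([BankDecoder(g_H) for _ in range(N)])
```

Because every bank holds a reference to the same shared module, one gradient step on any domain pair updates the shared parameters for all domains.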
Domain-Specific Discriminator. Since we have N different domains, our framework has N adversarial networks: D = {D_1, D_2, …, D_N}. For real images sampled from domain X_i, D_i should output true, while for images generated by G_i it should output false. In our framework, G_i can generate two types of images: 1) images from the reconstruction stream G_i(E_i(x_i)) and 2) images from the translation stream G_i(E_j(x_j)). The reconstruction stream can be trained with supervision, so we apply adversarial training only to images from the translation stream, G_i(E_j(x_j)). Thus, we need to train N domain-specific discriminators for the N different domains.
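A minimal per-domain discriminator sketch (the widths and layer count are our illustrative assumptions, not the paper's exact design):

```python
import torch.nn as nn

class DomainDiscriminator(nn.Module):
    """Per-domain convolutional discriminator: outputs a logit map that should
    be high for real samples of X_i and low for translated samples."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(2 * ch, 4 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(4 * ch, 1, 4, stride=1, padding=1))
    def forward(self, x):
        return self.net(x)

discriminators = nn.ModuleList([DomainDiscriminator() for _ in range(4)])  # N = 4
```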
3.2 Loss Function
To better understand the losses applied in DomainBank, we first give a decomposed perspective of the possible combinations of the key components in Figure 1. Basically, our framework is based on variational autoencoders (VAEs) and generative adversarial networks (GANs), including domain image encoders E_i, domain image generators G_i and domain adversarial discriminators D_i, where i = 1, …, N. Table 1 further explains the various roles inside our framework and their corresponding functions.
In the image-to-image translation problem of our DomainBank, we have three kinds of fundamental information streams, namely the image reconstruction streams, the image translation streams, and the cycle-reconstruction streams. In Table 1, the encoder-decoder pair {E_i, G_i} constitutes a VAE for the image reconstruction stream. For an input domain X_i, an image is translated to another domain X_j by the translation stream {E_i, G_j}. Since the shared-latent space assumption implies the cycle-consistency constraint, we require a cycle-reconstruction stream {E_i, G_j, E_j, G_i} to reconstruct input images. Consequently, we jointly address the VAE and GAN problems to solve the image translation problem.
VAE loss. In our framework, we use variational autoencoders (VAEs) to generate images, where each VAE is supervised by a KL divergence. In the VAE, the encoder outputs a mean vector E_{μ,i}(x_i) for an input image x_i ∈ X_i. The distribution of the latent code z_i is written as q_i(z_i | x_i) ≡ N(z_i | E_{μ,i}(x_i), I), where I is an identity matrix. We treat z_i as a random vector distributed as q_i(z_i | x_i) and sample from it; the reconstructed image is thus x̃_i = G_i(z_i ∼ q_i(z_i | x_i)). In addition, let η be a random vector with a multivariate Gaussian distribution η ∼ N(η | 0, I). In the VAE, the sampling operation z_i ∼ q_i(z_i | x_i) is implemented via the reparameterization z_i = E_{μ,i}(x_i) + η. The aim of the VAE is to minimize a variational upper bound, so the VAE objective is written as

L_VAE_i(E_i, G_i) = λ1 KL(q_i(z_i | x_i) ‖ p_η(z)) − λ2 E_{z_i ∼ q_i(z_i | x_i)}[log p_{G_i}(x_i | z_i)],   (2)

where λ1 and λ2 weight the corresponding objective terms, and the KL divergence term penalizes the deviation of the latent code distribution from the prior p_η(z) = N(z | 0, I). More notably, the negative log-likelihood term reduces to an L1 loss, which ensures similarity between the reconstructed image and the original input image.
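A minimal sketch of this objective, assuming the encoder directly outputs the latent mean and using the closed-form KL against N(0, I) with unit variance (the weights λ1, λ2 are illustrative):

```python
import torch
import torch.nn.functional as F

def vae_loss(x, E, G, lam1=0.1, lam2=100.0):
    """Sketch of Eq. (2): KL(N(E(x), I) || N(0, I)) plus an L1 reconstruction
    term (Laplacian likelihood). The weights lam1/lam2 are illustrative."""
    mu = E(x)                          # encoder outputs the mean vector
    z = mu + torch.randn_like(mu)      # reparameterization: z = mu + eta
    x_rec = G(z)
    kl = 0.5 * (mu ** 2).flatten(1).sum(dim=1).mean()  # KL with fixed unit variance
    rec = F.l1_loss(x_rec, x)
    return lam1 * kl + lam2 * rec
```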
Adversarial loss. The adversarial losses are applied to the translation functions. Recall that we have defined F_{i→j}(x_i) = G_j(E_i(x_i)) and its discriminator D_j, where i ≠ j. We express the objective as:

L_GAN_j(E_i, G_j, D_j) = λ0 E_{x_j ∼ P_{X_j}}[log D_j(x_j)] + λ0 E_{x_i ∼ P_{X_i}}[log(1 − D_j(F_{i→j}(x_i)))],   (3)

where the hyper-parameter λ0 controls the impact of the GAN objective. In the adversarial part, F_{i→j} attempts to generate images that look like images from domain X_j, while D_j tries to distinguish between translated samples F_{i→j}(x_i) and real samples x_j. Finally, F_{i→j} aims to minimize this objective against an adversary D_j that tries to maximize it.
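A minimal sketch of the two sides of this objective in its common binary cross-entropy form (an assumption; the paper does not spell out the exact GAN variant used):

```python
import torch
import torch.nn.functional as F

def gan_loss_d(x_real_j, x_fake_j, D_j):
    """Discriminator side of Eq. (3): real samples of X_j vs. translated samples."""
    logit_real = D_j(x_real_j)
    logit_fake = D_j(x_fake_j.detach())  # block gradients into the generator
    return (F.binary_cross_entropy_with_logits(logit_real, torch.ones_like(logit_real))
            + F.binary_cross_entropy_with_logits(logit_fake, torch.zeros_like(logit_fake)))

def gan_loss_g(x_fake_j, D_j):
    """Generator side: F_{i->j} tries to make D_j label its translations as real."""
    logit_fake = D_j(x_fake_j)
    return F.binary_cross_entropy_with_logits(logit_fake, torch.ones_like(logit_fake))
```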
Cycle-consistency Loss. We use a VAE-like objective to model the cycle-consistency constraint, which is written as

L_CC_i(E_i, G_i, E_j, G_j) = λ3 KL(q_i(z_i | x_i) ‖ p_η(z)) + λ3 KL(q_j(z_j | F_{i→j}(x_i)) ‖ p_η(z)) − λ4 E_{z_j ∼ q_j(z_j | F_{i→j}(x_i))}[log p_{G_i}(x_i | z_j)].   (4)
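A minimal sketch of Eq. (4), under the same assumptions as the VAE loss sketch (the encoder returns the latent mean; λ3 and λ4 are illustrative):

```python
import torch
import torch.nn.functional as F

def cc_loss(x_i, E_i, G_i, E_j, G_j, lam3=0.1, lam4=100.0):
    """Sketch of Eq. (4): translate to domain j, re-encode, and demand that
    decoding back into domain i recovers x_i. The weights are illustrative."""
    mu_i = E_i(x_i)
    x_ij = G_j(mu_i + torch.randn_like(mu_i))   # translated image in domain X_j
    mu_j = E_j(x_ij)
    x_iji = G_i(mu_j + torch.randn_like(mu_j))  # cycle-reconstructed image
    kl = 0.5 * ((mu_i ** 2).flatten(1).sum(dim=1).mean()
                + (mu_j ** 2).flatten(1).sum(dim=1).mean())
    return lam3 * kl + lam4 * F.l1_loss(x_iji, x_i)
```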
Full objective. As a result, our ultimate objective for a domain pair (X_i, X_j) is written as:

L(E_i, G_i, D_i, E_j, G_j, D_j) = L_VAE_i + L_GAN_j + L_CC_i + L_VAE_j + L_GAN_i + L_CC_j,   (5)

where i, j ∈ {1, …, N} and i ≠ j. We aim to solve:

min_{E_1,…,E_N, G_1,…,G_N} max_{D_1,…,D_N} Σ_{i≠j} L(E_i, G_i, D_i, E_j, G_j, D_j).   (6)
3.3 Training Strategy
We employ an alternating training strategy, motivated by GANs' [4] approach to solving a minimax problem, where the optimization aims to find a saddle point. The zero-sum game in our framework consists of two players: the domain-specific discriminators form the first team, and the domain-specific encoders/decoders form the second. During training with a specific pair of images from domains X_i and X_j, we first train the domain-specific discriminators with all other components fixed. Afterwards, X_i's encoder/decoder and X_j's encoder/decoder are updated not only to minimize the VAE losses and the cycle-consistency losses but also to defeat the first player.
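A sketch of one such alternating update for a sampled domain pair (i, j), reusing the hypothetical loss helpers sketched above (E, G, D are lists of encoders, decoders and discriminators; opt_d and opt_g are optimizers over the discriminator and encoder/decoder parameters, respectively):

```python
def train_step(x_i, x_j, E, G, D, opt_d, opt_g, i, j):
    """One alternating update for a sampled domain pair (i, j); a sketch built
    on the hypothetical vae_loss / gan_loss_d / gan_loss_g / cc_loss above."""
    # Player 1: update the discriminators with all other components fixed.
    fake_j = G[j](E[i](x_i))
    fake_i = G[i](E[j](x_j))
    d_loss = gan_loss_d(x_j, fake_j, D[j]) + gan_loss_d(x_i, fake_i, D[i])
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Player 2: update the encoders/decoders to minimize the VAE and
    # cycle-consistency losses while fooling the discriminators.
    g_loss = (vae_loss(x_i, E[i], G[i]) + vae_loss(x_j, E[j], G[j])
              + cc_loss(x_i, E[i], G[i], E[j], G[j])
              + cc_loss(x_j, E[j], G[j], E[i], G[i])
              + gan_loss_g(G[j](E[i](x_i)), D[j])
              + gan_loss_g(G[i](E[j](x_j)), D[i]))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```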
4 Experiments
We first give qualitative results on various tasks along with detailed complexity comparisons. We then present the quantitative performance gains on digit domain adaptation tasks.
4.1 Qualitative Analysis
Face attribute translation. The CelebA dataset [17] is exploited for attribute-based face image translation. Face images carry many different attributes, including hair color, smiling and eyeglasses. In particular, we select the hair attribute with different colors, including blond, brown and black. Specifically, blond hair constitutes the 1st domain, brown hair the 2nd domain, and black hair the 3rd domain. In Figure 2, we visualize the results, displaying transitions between hair of different colors. The translated hair images are impressive, and it is not difficult to see that we obtain results comparable to other algorithms in hair translation.
When training for multi-domain translation, the shared-latent space of our framework accurately captures the invisible inner-similarity of the domains, whereas the encoders/decoders represent the shallow differences. Thanks to the inner-similarity of face images captured by the shared-latent space, besides the change of hair color, our method maintains the original quality of the human face and the facial identity. More importantly, our results are obtained by training the network only once, while for the three different hair colors, UNIT and CycleGAN need to be trained three times.
Painting style translation. We further utilize the landscape photographs downloaded from Flickr and WikiArt, also used in [30]. The dataset sizes for each artist/style are 526, 1073, 400 and 563 for Cezanne, Monet, Van Gogh and Ukiyo-e, respectively. In this experiment, we choose three of them, while Van Gogh is reserved for incremental learning. Figure 3 shows our results in comparison with the other methods. Our results are superior to UNIT's: our generated images are clearer and have higher contrast, whilst being comparable to CycleGAN's.
Fashion-clothes translation. We show several example results achieved on the recently released Fashion-MNIST dataset [27], which contains roughly 5 domains of clothes and 3 domains of shoes. Precisely, the clothes comprise T-shirt, Pullover, Coat, Shirt and Dress, whilst the shoes include Sandals, Sneaker and Ankle boot. We treat different categories of clothes as different domains. Figure 5 shows several results of translation between different clothes. In the texture and details of the generated images, our method keeps the original texture of the clothes and produces clearer clothing details. In general, our method obtains translation results superior to the others.
Summary of Complexity Comparisons. To demonstrate the complexity advantages of our proposed framework, we compare the training time and model parameters with those of the baselines in Table 2. It can be clearly seen that our DomainBank has fewer parameters and less training time. This is because the others are able to capture only the translation of one specific pair of domains, which leads to an unavoidable burden when N(N−1) transformations are needed, given N domains. In our framework, however, we merely require end-to-end training once, after which we can efficiently accomplish the translation between any two domains.
Experiment    | Type  | UNIT [15] | CycleGAN [30] | Ours
Face (3)      | Time  | 6 days    | 13 days       | 3 days
              | Param | 54.06M    | 68.28M        | 25.58M
Painting (4)  | Time  | 12 days   | 27 days       | 4 days
              | Param | 19.62M    | 136.56M       | 5.94M
Clothes (5)   | Time  | 21 days   | 46 days       | 7 days
              | Param | 32.75M    | 227.65M       | 7.28M
4.2 Quantitative Performance
To better understand the performance gained by sharing information across more than two domains, we apply our framework to the domain adaptation task, which adapts a classifier trained using labeled samples in one domain (the source domain) to classify samples in a new domain (the target domain), where labeled samples are unavailable during training. In our case, we append additional auxiliary domains with minimal effort and check whether they boost the system's performance.
More specifically, we utilize three digit datasets: the Street View House Numbers (SVHN) dataset [20], the MNIST dataset [12] and the USPS dataset [2], and perform multi-task learning where our framework is supposed to 1) translate images between any two of the three domains and 2) classify the samples in the source domain using the features extracted by the corresponding discriminator. In practice, we adopt a small network because the digit images have a small resolution.
Method        | CoGAN [16] | UNIT [15] | Ours
SVHN → MNIST  | –          | 90.53%    | 91.46%
MNIST → USPS  | 95.65%     | 95.97%    | 96.45%
USPS → MNIST  | 93.15%     | 93.58%    | 94.12%
In this experiment, we find that the cycle-consistency constraint is not necessary for this problem, which is why we remove the cycle-consistency stream from the framework. In addition, we also tie the weights of the high-level layers of the discriminators in order to adapt a classifier trained in the source domain to the target domain.
Figure 4 visualizes the digit translations, and Table 3 reports the achieved performance in comparison with the competing algorithms. We achieve better performance on the SVHN → MNIST task than the UNIT approach, which is the current state of the art. We also obtain results superior to UNIT on the MNIST → USPS and USPS → MNIST tasks.
5 Capabilities of Our Framework
5.1 Incremental Training
When a new style needs to be added, retraining a whole new model is inconvenient and risks failing to recover the performance of the previously trained model. The framework proposed in this paper has the same capability described in [1], supporting incremental training.
When we need to add a new style, only the images sampled from the specific incremental domain participate in the training process. The incremental domain's encoder/decoder and discriminator layers are the only components not fixed, as shown in Figure 7. Considering the incremental domain X_{N+1} and an existing domain X_a, X_a's encoder/decoder will be fixed, and only samples x_{N+1}^k where x_{N+1}^k ∈ X_{N+1} assist incremental training. The loss function for incremental training is defined as follows (N is the number of existing domains):

L_inc = Σ_{a=1}^{N} L(E_{N+1}, G_{N+1}, D_{N+1}, E_a, G_a, D_a),   (7)

where L is the pairwise objective of Eq. (5) and only the parameters of the new domain's components are updated.
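A minimal sketch of this setup, assuming the existing component banks and discriminators are ordinary PyTorch modules passed in as lists (the helper name is hypothetical):

```python
import itertools
import torch

def setup_incremental(old_modules, new_bank_modules, lr=1e-4):
    """Freeze every existing encoder/decoder/discriminator and return an
    optimizer over the new domain's components only (a sketch)."""
    for m in old_modules:
        for p in m.parameters():
            p.requires_grad_(False)
    new_params = itertools.chain(*(m.parameters() for m in new_bank_modules))
    return torch.optim.Adam(new_params, lr=lr)
```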
Furthermore, we show a few samples in Figure 6 to demonstrate the efficiency.
5.2 Domain Fusion
In this section, we demonstrate an experiment on style fusion: the linear fusion of two different styles.
Style Linear Fusion. Translations between different styles are encoded into different pairs of encoders/decoders (E_i, G_j): E_i serves as the input port for the source domain and G_j as the output port for the target domain. We linearly combine the decoders of different target domains in the DomainBank layers, and the fused decoder can act as a new output port:

G̃ = α G_j + (1 − α) G_k,  α ∈ [0, 1].
Figure 8 shows progressive fusion results of two styles with varying ratio α.
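One simple way to realize such a fused output port, assuming the decoders of the two target styles share an identical architecture, is to interpolate their parameters directly; fuse_decoders below is a hypothetical helper, not an API from the paper:

```python
import copy
import torch

@torch.no_grad()
def fuse_decoders(G_j, G_k, alpha=0.5):
    """Build a fused output port whose weights are alpha*G_j + (1-alpha)*G_k;
    assumes both decoders share an identical architecture (a sketch)."""
    G_fused = copy.deepcopy(G_j)
    for p_f, p_j, p_k in zip(G_fused.parameters(), G_j.parameters(), G_k.parameters()):
        p_f.copy_(alpha * p_j + (1.0 - alpha) * p_k)
    return G_fused
```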
6 Conclusion
In this paper, we have proposed a novel multi-domain image translation framework, namely DomainBank. We show that it learns to translate from multiple domains to multiple domains in one training process. In particular, our DomainBank explicitly reduces the training complexity from O(N^2) to O(N), given N domains. The universal shared autoencoder subnetwork leads to better generation, which is confirmed in both quantitative and qualitative experimental results. Moreover, our framework has fewer parameters and less training time compared to others. In addition, we also provide functional extensions such as domain linear combination and incremental learning.
References
 [1] D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. StyleBank: An explicit representation for neural image style transfer. arXiv preprint arXiv:1703.09210, 2017.
 [2] J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning, volume 1. Springer Series in Statistics New York, 2001.
 [3] L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
 [4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
 [5] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
 [6] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin. Image analogies. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 327–340. ACM, 2001.
 [7] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
 [8] T. Kim, M. Cha, H. Kim, J. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017.
 [9] D. P. Kingma, T. Salimans, and M. Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.
 [10] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 [11] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
 [12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 [13] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
 [14] T. Lindvall. Lectures on the coupling method. Courier Corporation, 2002.
 [15] M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. arXiv preprint arXiv:1703.00848, 2017.
 [16] M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in Neural Information Processing Systems, pages 469–477, 2016.
 [17] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730–3738, 2015.
 [18] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
 [19] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
 [20] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, page 5, 2011.
 [21] G. Perarnau, J. van de Weijer, B. Raducanu, and J. M. Álvarez. Invertible conditional GANs for image editing. arXiv preprint arXiv:1611.06355, 2016.
 [22] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
 [23] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and variational inference in deep latent Gaussian models. In International Conference on Machine Learning, 2014.
 [24] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
 [25] J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015.
 [26] Y. Taigman, A. Polyak, and L. Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016.
 [27] H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
 [28] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2image: Conditional image generation from visual attributes. In European Conference on Computer Vision, pages 776–791. Springer, 2016.
 [29] Z. Yi, H. Zhang, P. Tan, and M. Gong. DualGAN: Unsupervised dual learning for image-to-image translation. arXiv preprint arXiv:1704.02510, 2017.
 [30] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.