1 Introduction
In a short amount of time, the field of image domain adaptation and style transfer has come a long way, with projects such as [11], [8], [5], [6], [9], [3], and more paving the way. The first four are of particular interest, as they do not simply transfer style in terms of texture and color but in terms of semantics, and they maintain realism in their results. However, they must be trained on examples from two specific domains, whereas the other two need not be. The first three are even more noteworthy for doing this without any supervision between the two chosen domains: no pairing of matching data is required. Now what if we want to go beyond two domains?
Naively, we could train a model for each pair of domains we desire. With $n$ domains, this leads to $\binom{n}{2} = \frac{n(n-1)}{2}$ models to train. In this work, we approach this problem by dividing each model into two parts: one that converts a given domain into a common representation and one that converts common representations back into that domain. Having one such pair per domain allows us to mix-and-match by obtaining a common representation for any image and translating it into any other domain. All the while, the number of models increases only linearly with the number of domains, as does the required training time.
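The pairwise-versus-linear model count can be illustrated with a small sketch (plain Python; the function names are ours, for illustration only):

```python
from math import comb

def pairwise_models(n: int) -> int:
    """Translation models needed when every unordered pair of domains
    gets its own model: n choose 2 = n*(n-1)/2."""
    return comb(n, 2)

def combogan_networks(n: int) -> int:
    """One encoder/decoder generator per domain."""
    return n

for n in (2, 4, 14):
    print(n, pairwise_models(n), combogan_networks(n))
```

For the fourteen-domain dataset used later in this paper, the pairwise approach would require 91 models versus 14 encoder/decoder pairs.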
1.1 Generative Adversarial Models
Creating a generative model of natural images is a challenging task. Such a model needs to capture the rich distributions from which natural images come. Generative Adversarial Networks (GANs) [4] have proven to be excellent for this task and can produce images of high visual fidelity. GANs, by default, consist of a pair of models (typically neural networks): a generator $G$ and a discriminator $D$. $D$ is trained to estimate the probability that a sample $x$ comes from the true training data distribution $p_{data}$, while $G$ is simultaneously trained to turn a vector $z$, sampled from its own prior distribution $p_z$, into $G(z)$ in order to maximize the value $D$ outputs when fed $G(z)$. $G$'s outputs receiving higher scores from $D$ implies that the distribution $G$ learns nears the true distribution $p_{data}$. This training procedure is referred to as adversarial training and corresponds to a minimax game between $G$ and $D$. Training itself is executed in two alternating steps: first $D$ is trained to distinguish between one or more pairs of real and generated samples, and then $G$ is trained to fool $D$ with generated samples. Should the discriminator be too powerful at identifying real images to begin with, the generator will quickly learn low-level "tricks" to fool the discriminator that do not lie along any natural image manifold. For example, a noisy matrix of pixels can mathematically be a solution that makes $D$ produce a high real-image probability. To combat this, training is often done with a small handful of examples per turn, allowing $G$ and $D$ to gradually improve alongside each other. The optimization problem at hand can be formulated as:
$$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))] \qquad (1)$$
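As a toy illustration of the minimax objective (1) (this sketch is ours, not the authors' code), the following evaluates the objective's value for hand-picked discriminator outputs; at the theoretical equilibrium, where $D$ outputs 0.5 everywhere, the value is $-2\log 2$:

```python
import math

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of the minimax value in (1):
    mean of log D(x) over real samples plus mean of log(1 - D(G(z)))
    over generated samples. D ascends this value; G descends it."""
    real_term = sum(math.log(p) for p in d_real) / len(d_real)
    fake_term = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return real_term + fake_term

# A discriminator that is maximally unsure (0.5 everywhere) yields -2*log(2);
# a discriminator that scores reals high and fakes low yields a larger value.
print(gan_value([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]))
print(gan_value([0.9, 0.9, 0.9], [0.1, 0.1, 0.1]))
```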
GANs have found applications in many computer vision domains, including image super-resolution [7] and style transfer [3].

1.2 Related Works
Many tasks in computer vision and graphics can be thought of as translation problems, where an input image in a domain $X$ is to be translated into another domain $Y$. Isola et al. [6] introduced an image-to-image translation framework that uses GANs in a conditional setting, where the generator transforms images conditioned on the existing input image $x$. Instead of sampling from a vector distribution to generate images, it simply modifies the given input. Their method requires image data from two domains, and the data must be aligned in corresponding pairs.

Introduced by Zhu et al., CycleGAN [11] extends this framework to unsupervised image-to-image translation, meaning no alignment of image pairs is necessary. CycleGAN consists of two pairs of neural networks, $(G, D_Y)$ and $(F, D_X)$, where the translators between domains $X$ and $Y$ are $G: X \to Y$ and $F: Y \to X$. $D_Y$ is trained to discriminate between real images $y$ and translated images $G(x)$, while $D_X$ is trained to discriminate between images $x$ and $F(y)$. The system is trained using both an adversarial loss, as expressed in (1), and a cycle-consistency loss, expressed in (3). The cycle-consistency loss is a way to regularize the highly unconstrained problem of translating an image in one direction alone, by encouraging the mappings $G$ and $F$ to be inverses of each other, such that $F(G(x)) \approx x$ and $G(F(y)) \approx y$. However, here the traditional negative log-likelihood loss in (1) is replaced by a mean-squared loss (2), which has been shown to be more stable during training and to produce higher-quality results [10]. The full CycleGAN objective is expressed as:
$$\mathcal{L}_{GAN}(G, D_Y, X, Y) = \mathbb{E}_{y \sim p_{data}(y)}\big[(D_Y(y) - 1)^2\big] + \mathbb{E}_{x \sim p_{data}(x)}\big[D_Y(G(x))^2\big] \qquad (2)$$
$$\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x \sim p_{data}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p_{data}(y)}\big[\lVert G(F(y)) - y \rVert_1\big] \qquad (3)$$
$$\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{GAN}(G, D_Y, X, Y) + \mathcal{L}_{GAN}(F, D_X, Y, X) + \lambda \, \mathcal{L}_{cyc}(G, F) \qquad (4)$$
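To make the cycle-consistency idea concrete, here is a toy sketch of the reconstruction term in (3) (ours, not the authors' code; images stand in as flat lists of pixel values, and the toy mappings are arbitrary placeholders). Mappings that are exact inverses of each other incur zero cycle loss:

```python
def l1(a, b):
    """Mean absolute difference between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_loss(x, y, G, F):
    """Cycle-consistency term in the spirit of (3):
    ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 for single samples x and y."""
    return l1(F(G(x)), x) + l1(G(F(y)), y)

# Toy translators that happen to be exact inverses: loss is zero.
G = lambda v: [2.0 * p for p in v]
F = lambda v: [p / 2.0 for p in v]
print(cycle_loss([1.0, 2.0], [4.0, 6.0], G, F))  # 0.0
```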
The reconstruction part of the cycle loss forces the networks to preserve domainagnostic detail and geometry in translated images if they are to be reverted as closely as possible to the original image. Zhu et al. were able to produce very convincing image translations such as ones trained to translate between horses and zebras, between paintings and photographs, and between artistic styles.
Liu et al. [8] implemented a similar approach with UNIT, adding further losses on intermediate activations within the generator instead of purely on the final generated outputs. Using the CycleGAN architecture, they designate the activations of the central layer of the generators as shared latent vectors. Using a variational-autoencoder loss, these vectors from both domains are pushed toward a Gaussian distribution. This is done in addition to the discriminator and cycle losses from CycleGAN, seemingly improving the image translation task over CycleGAN in cases where the domains' geometry varies significantly.
Lastly, there is the concurrent work of StarGAN [1], which aims to solve a similar problem to ours: the scalability of unsupervised image-translation methods. StarGAN melds the generators and discriminators from CycleGAN into a single generator and a single discriminator shared by all domains. As such, the model can handle any number of domains, though this requires passing a vector alongside each input to the generator specifying the desired output domain. Meanwhile, the discriminator is trained to output the detected domain of an image along with a real/fake label, as opposed to only the latter when each domain has its own discriminator. The results suggest that sharing a model across sufficiently similar domains may benefit the learning process. Nevertheless, this method was only applied to the task of face-attribute modification, where all the domains were slight shifts in qualities of the same category of images: human faces.
2 The ComboGAN Model
2.1 Decoupling the Generators
The scalability of setups such as CycleGAN's is hindered by the fact that both networks used are tied jointly to two domains, one translating from some domain $A$ to $B$ and the other from $B$ back to $A$. To add another domain $C$, we would then need to add four new networks: $A$ to $C$, $C$ to $A$, $B$ to $C$, and $C$ to $B$. To solve this issue of exploding model counts, we introduce a new model, ComboGAN, which decouples the domains and networks from each other. ComboGAN's generator networks are identical to those used in CycleGAN (see Appendix A for network specifications), yet we divide each one in half, labeling the frontal halves as encoders and the latter halves as decoders. We can now assign an encoder and decoder to each domain.
As the name ComboGAN suggests, we can combine the encoders and decoders of our trained model like building blocks, taking as input any domain and outputting any other. For example, during inference, to transform an image $x$ from an arbitrary domain $X$ into another domain $Y$, we simply perform $\mathrm{Decoder}_Y(\mathrm{Encoder}_X(x))$. The result of $\mathrm{Encoder}_X(x)$ can even be cached when translating to other domains, so as not to repeat computation.
With only one generator (an encoder-decoder pair) per domain, the number of generators scales exactly linearly with the number of domains, instead of quadratically. The discriminators remain untouched in our experiment; their number already scales linearly when each domain receives its own. Figure 1 displays our full setup.
2.2 Training
Fully utilizing the same losses as CycleGAN involves focusing on two domains at a time, as the generator's cyclic training and the discriminator's real/fake-pair training are not directly adaptable to more domains. ComboGAN's training procedure therefore involves focusing on two of our $n$ domains at a time. At the beginning of each iteration, we select two domains $X$ and $Y$ from our $n$ domains uniformly at random. Then, maintaining the same notation as CycleGAN in (4), we set $G = \mathrm{Decoder}_Y \circ \mathrm{Encoder}_X$ and $F = \mathrm{Decoder}_X \circ \mathrm{Encoder}_Y$ and proceed as CycleGAN would for the remainder of the iteration. Figure 2 shows one of the two forward passes in a training iteration; the other half is simply the symmetric mirroring of the procedure, as if the two domains were swapped.
Randomly choosing two domains per iteration means we eventually cover training between all pairs of domains uniformly, though the training time (number of iterations) required must increase as well. If training between two domains with CycleGAN requires $m$ iterations, then with $n$ domains, the $\frac{n(n-1)}{2}$ CycleGAN setups would require $\frac{n(n-1)}{2} m$ iterations to complete. In our situation, we instead keep the training time linear in the number of domains, since the number of parameters (weights) in our model increases linearly with the number of domains as well. We desire each domain to be chosen for a training iteration the same number of times as in CycleGAN. Note that this is not the same as the number of times a given pair is chosen, as achieving that would require the same number of iterations as the naive method; rather, we only care whether a domain is chosen to be trained alongside any other domain. We observe that since a domain is chosen in each iteration with probability $\frac{2}{n}$, during $t$ training iterations it is chosen in expectation $\frac{2t}{n}$ times. Requiring equality with the two-domain case, $\frac{2t}{n} = m$, we obtain $t = \frac{nm}{2}$ total iterations, which proves satisfactory in practice.
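This expectation argument can be checked numerically. The sketch below (function names ours) simulates the uniform pair sampling and the $t = nm/2$ iteration count:

```python
import random

def total_iterations(n, m):
    """t = n*m/2: total iterations so that each of n domains participates,
    in expectation, as often as in an m-iteration two-domain CycleGAN run."""
    return n * m // 2

def participation_counts(n, t, seed=0):
    """Simulate the schedule: each iteration draws 2 of n domains uniformly
    at random; count how many iterations each domain participates in."""
    rng = random.Random(seed)
    counts = [0] * n
    for _ in range(t):
        for d in rng.sample(range(n), 2):
            counts[d] += 1
    return counts

# With n = 4 domains and a two-domain budget of m = 200, t = 400 iterations,
# and each domain should participate roughly m = 200 times.
print(participation_counts(4, total_iterations(4, 200)))
```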
As for the discriminators, training is the same as CycleGAN’s. After each training iteration for two given generators, the two corresponding discriminators receive a training iteration, as well. For a single discriminator, a real image from its domain and a fake image intended for the same domain (the output of that domain’s decoder) are fed to train the network to better distinguish real and fake images. This is done independently for both discriminators.
2.3 Relation with CycleGAN
It is easy to see that our changes only distinguish our model when more than two domains are present. In the case of two domains, our entire procedure becomes exactly equivalent to CycleGAN. Because of this, we consider ComboGAN a proper extension of the CycleGAN model, one that need not change the underlying foundation.
In the case of more than two domains, for the end result to work as intended, it is implied that the encoders must place input images into a shared representation in which all inputs are equally fit for any domain transformation. Achieving this latent space suggests that the encoders learn to conceal qualities that make an image unique or distinguishable among the domains, with decoders refilling them with the necessary detail that defines each domain's characteristics. As detailed in [2], cycle-consistent image translation schemes are known to hide reconstruction details in often-imperceptible noise. This could theoretically be avoided by strictly enforcing the latent-space assumption with added losses acting upon intermediate values (encoder outputs) instead of the decoder outputs. ComboGAN's decoupled-generator structure allows for enhancements such as this, but for the sake of direct comparison with CycleGAN, we omit tweaks to the objective formulation in this current experiment.
It should be noted, though, that in the case of only two domains (as in CycleGAN), the concept of the images being taken to a shared latent space need not hold at all. In this situation, the output of an encoder is always given to the same decoder, so it will learn to optimize for that specific domain's decoder. In the case of more than two domains, the encoder output has to be suitable for all other decoders, meaning encoders cannot specialize.
3 Experiments and Results
3.1 Datasets
The first of two datasets used in this experiment consists of approximately 6,000 images of the Alps mountain range scraped from Flickr. The photos are individually categorized into four seasons based on the timestamp provided for when each was taken. This way, we can translate among Spring, Summer, Autumn, and Winter.
The other dataset is a collection of approximately 10,000 paintings in total from 14 different artists, obtained from Wikiart.org. The artists, listed alphabetically, are: Zdzislaw Beksinski, Eugene Boudin, David Burliuk, Paul Cezanne, Marc Chagall, Jean-Baptiste-Camille Corot, Eyvind Earle, Paul Gauguin, Childe Hassam, Isaac Levitan, Claude Monet, Pablo Picasso, Ukiyo-e (a style, not a person), and Vincent van Gogh.
3.2 Setup
All images in our trials were scaled to 256×256. Batches are not used (only one image per input), random image flipping is enabled, random crops are disabled, and dropout is not used. The learning rate begins at 0.0002 for generators and 0.0001 for discriminators, held constant for the first half of training and decreased linearly to zero during the second half. The specific architectures for our networks are detailed in Table 1 in the appendix. We run our experiments for $100n$ epochs with $n$ domains, as we consider a normal CycleGAN training with two domains to require 200 epochs for adequate results. The fourteen-painters dataset, for example, ran 1,400 epochs in 220 hours on our nVidia Titan X GPU. Note that pairwise CycleGAN runs would instead have taken about 2,860 hours, or four months. Our code is publicly available at https://github.com/AAnoosheh/ComboGAN
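The learning-rate schedule just described can be sketched as follows (a plain-Python illustration of ours; the function name and exact arithmetic are not taken from the released code):

```python
def learning_rate(epoch, total_epochs, base_lr):
    """Constant for the first half of training, then linearly
    decayed to zero over the second half, as described above."""
    half = total_epochs / 2.0
    if epoch < half:
        return base_lr
    return base_lr * ((total_epochs - epoch) / half)

# Generator schedule for the fourteen-painters run (1400 epochs).
for e in (0, 700, 1050, 1400):
    print(e, learning_rate(e, 1400, 2e-4))
```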
3.3 Discussion
Figure 4 shows validation image results for ComboGAN trained on the Alps seasons photos for 400 epochs. ComboGAN did reasonably well converting among the four domains. Looking closely, one can notice that many examples hide within them information necessary for the reconstruction process (from training). Many are semantically meaningful, such as the cloud inversion in the summer images, while some are easy ways to change color back and forth, such as color inversion. Meanwhile, in Figure 5 we show results from CycleGAN trained on all six combinations of the four seasons to produce the same images, demonstrating that ComboGAN maintains comparable quality while training only four networks for 400 epochs instead of CycleGAN's twelve networks for a total of 1,200 epochs.
Figure 6 shows randomly chosen validation images for our fourteen-painters dataset. The figure contains translations of a single real image from each artist to every other one. Looking at the columns as wholes, one can see texture behavior and color palettes common to the pieces in each artist's column. In addition, we have included further real sample artworks from each artist in Figure 7 to give a better impression of what each artist's style is supposed to be. One piece in the translation results that stands out almost immediately is the tenth item under Chagall's column: this image was styled as a completely black-and-white sketch. The gathered datasets did happen to contain a few artworks that were unfinished, preliminary sketches for paintings; this led to the translation model coincidentally choosing to translate Corot's painting into a monochrome pencil/charcoal sketch. A comparison with CycleGAN is not shown, as it is computationally infeasible.
4 Conclusion
We have shown a novel construction of the CycleGAN model as ComboGAN, which solves the scaling issue inherent in current image translation experiments. ComboGAN still maintains the visual richness of CycleGAN without being constrained to two domains. In theory, additional domains can be appended to an existing ComboGAN model by simply creating a new encoder/decoder pair to train alongside a pretrained model.
The proposed framework is not restricted to CycleGAN; its formulation can easily be extended to UNIT [8], for example. The model also allows for further modifications, such as encoder-decoder layer sharing or latent-space losses on the representations output by the encoders. These were omitted from this work to demonstrate the sole effect of scaling the CycleGAN model, showing that it still compares to the original without introducing scaling-irrelevant adjustments that might improve results on their own.
References
 [1] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. ArXiv e-prints, Nov. 2017.
 [2] C. Chu, A. Zhmoginov, and M. Sandler. CycleGAN: a Master of Steganography. ArXiv e-prints, Dec. 2017.
 [3] L. A. Gatys, A. S. Ecker, and M. Bethge. A Neural Algorithm of Artistic Style. ArXiv e-prints, Aug. 2015.
 [4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
 [5] J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell. CyCADA: Cycle-Consistent Adversarial Domain Adaptation. ArXiv e-prints, Nov. 2017.
 [6] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-Image Translation with Conditional Adversarial Networks. ArXiv e-prints, Nov. 2016.
 [7] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. ArXiv e-prints, Sept. 2016.
 [8] M. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. CoRR, abs/1703.00848, 2017.
 [9] F. Luan, S. Paris, E. Shechtman, and K. Bala. Deep photo style transfer. CoRR, abs/1703.07511, 2017.
 [10] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley. Least Squares Generative Adversarial Networks. ArXiv e-prints, Nov. 2016.
 [11] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. ArXiv e-prints, Mar. 2017.
Appendix A Network Architectures
The network architecture used in our translation experiments is detailed in Table 1. We use the following abbreviations for brevity: N=Neurons, K=Kernel size, S=Stride size. The transposed convolutional layer is denoted DCONV; the residual basic block is denoted RESBLK.
Layer #  Encoders 
1  CONV(N64,K7,S1), InstanceNorm, ReLU 
2  CONV(N128,K3,S2), InstanceNorm, ReLU 
3  CONV(N256,K3,S2), InstanceNorm, ReLU 
4  RESBLK(N256,K3,S1), InstanceNorm, ReLU 
5  RESBLK(N256,K3,S1), InstanceNorm, ReLU 
6  RESBLK(N256,K3,S1), InstanceNorm, ReLU 
7  RESBLK(N256,K3,S1), InstanceNorm, ReLU 
Layer #  Decoders 
1  RESBLK(N256,K3,S1), InstanceNorm, ReLU 
2  RESBLK(N256,K3,S1), InstanceNorm, ReLU 
3  RESBLK(N256,K3,S1), InstanceNorm, ReLU 
4  RESBLK(N256,K3,S1), InstanceNorm, ReLU 
5  RESBLK(N256,K3,S1), InstanceNorm, ReLU 
6  DCONV(N128,K4,S2), InstanceNorm, ReLU 
7  DCONV(N64,K4,S2), InstanceNorm, ReLU 
8  CONV(N3,K7,S1), Tanh 
Layer #  Discriminators 
1  CONV(N64,K4,S2), LeakyReLU 
2  CONV(N128,K4,S2), InstanceNorm, LeakyReLU 
3  CONV(N256,K4,S2), InstanceNorm, LeakyReLU 
4  CONV(N512,K4,S1), InstanceNorm, LeakyReLU 
5  CONV(N1,K4,S1) 