
Reusing Discriminators for Encoding Towards Unsupervised Image-to-Image Translation

Unsupervised image-to-image translation is a central task in computer vision. Current translation frameworks discard the discriminator once training is completed. This paper proposes a novel role for the discriminator by reusing it to encode the images of the target domain. The proposed architecture, termed NICE-GAN, exhibits two advantages over previous approaches: first, it is more compact, since no independent encoding component is required; second, this plug-in encoder is directly trained by the adversarial loss, making it more informative and more effectively trained when a multi-scale discriminator is applied. The main issue in NICE-GAN is the coupling of translation with discrimination through the shared encoder, which can cause training inconsistency when we play the min-max game of GAN training. To tackle this issue, we develop a decoupled training strategy by which the encoder is trained only when maximizing the adversarial loss and is kept frozen otherwise. Extensive experiments on four popular benchmarks demonstrate the superior performance of NICE-GAN over state-of-the-art methods in terms of FID, KID, and human preference. Comprehensive ablation studies are also carried out to validate each proposed component. Our code is publicly available.





1 Introduction

Image-to-image translation, which transforms images from one domain to another, has boosted a variety of applications in vision tasks, from colorization and super-resolution [19] to video generation [35]. Given the extensive effort of collecting paired images between domains, a more practical line of research [40, 24, 11, 20, 15] targets the unsupervised scenario, where no pairing information is available. Due to the non-identifiability problem [24] in unsupervised translation, various methods have been proposed to address this issue by adding regularization, including weight-coupling [24], cycle-consistency [40, 16, 38], forcing the generator toward the identity function [34, 40], or, more commonly, a combination of them.

When we revisit current successful translation frameworks (such as CycleGAN [40]), most of them consist of three components for each domain: an encoder to embed the input image into a low-dimensional hidden space, a generator to translate hidden vectors into images of the other domain, and a discriminator for domain alignment via GAN training [8]. While this modular design is standard, we are still interested in asking: is there any possibility to rethink the role of each component in current translation frameworks? And more importantly, can we change the current formulation (for example, to a more compact architecture) based on this rethinking?

Figure 1: Illustrative difference between CycleGAN-like methods and our NICE-GAN.

The answer is yes, if we examine the relation between the encoder and the discriminator. Basically, the discriminator is meant to distinguish between translated images of the source domain and real images of the target domain. To do so, the discriminator must perform some form of semantic encoding of the input images before it can tell which images are real and which are fake. This, in other words, suggests two roles of the discriminator: encoding and classifying. Indeed, the DCGAN paper [30] has revealed the encoding ability of the discriminator: strong responses to the input image are observed in the first six learned convolutional features from the last convolution layer of the discriminator.

Motivated by the observation above, this paper proposes to reuse the discriminator for encoding. In particular, we reuse a certain number of early layers of the discriminator as the encoder of the target domain, as illustrated in Figure 1. Such reuse exhibits two-fold advantages: I. A more compact architecture is achieved. Since the encoder now becomes part of the discriminator, we no longer require an independent component for encoding. Also, unlike existing methods where the discriminator is abandoned after training, its encoding part is kept for inference in our framework. II. The encoder is trained more effectively. Traditionally, the encoder is trained by back-propagating gradients from the generator, which is indirect. Here, by plugging it into the discriminator, the encoder is trained directly through the discriminative loss. Moreover, modern discriminators have resorted to multi-scale schemes for more expressive power [7, 12, 6, 36]; our encoder naturally inherits this expressive ability whenever a multi-scale discriminator is applied.

A remaining issue of our approach is how to perform adversarial training. In traditional methods [40, 24, 11, 20, 15], the encoder is trained along with the generator to minimize the GAN loss, while the discriminator is trained separately to maximize the objective. In our framework, the encoder and the discriminator overlap, which would cause instability under the traditional training setting: the encoder, as part of the translation, would be trained for minimization, while at the same time, as part of the discriminator, it would also be trained for maximization. To eliminate this inconsistency, we develop a decoupled training paradigm. Specifically, the training of the encoder is associated only with the discriminator, independent of the generator. Our experiments on several benchmarks show that this simple decoupling improves training remarkably (see details in Section 4.7). Another intuition is that disentangling the encoder from the training of translation pushes it toward general-purpose encoding rather than translation alone, thereby enabling better generality.

We summarize our contributions as follows.

  • To the best of our knowledge, we are the first to reuse discriminators for encoding, specifically for unsupervised image-to-image translation. By such reuse, a more compact and more effective architecture is derived, which we dub No-Independent-Component-for-Encoding GAN (NICE-GAN).

  • Given that reusing the discriminator incurs instability under the typical training procedure, this paper develops a decoupled training paradigm, which is simple yet efficient.

  • Extensive experimental evaluations on several popular benchmarks reveal that the proposed method outperforms various state-of-the-art counterparts. Comprehensive ablation studies are also conducted to verify the effectiveness of each proposed component.

2 Related Work

Image-to-image translation. The standard conditional-GAN-based framework proposed by Isola et al. [13] promoted the study of image-to-image translation. Several works extend it to super-resolution [36] or video generation [35]. Despite the promising results they attain, all these approaches need paired data for training, which limits their practical usage.

Unsupervised image-to-image translation. For unsupervised image-to-image translation with unpaired training data, CycleGAN [40], DiscoGAN [16] and DualGAN [38] preserve key attributes between the input and the translated image by using a cycle-consistency loss. Various studies have extended CycleGAN. The first kind of development enables multi-modal generation: MUNIT [11] and DRIT [20] decompose the latent space of images into a domain-invariant content space and a domain-specific style space to obtain diverse outputs. Another enhancement performs translation across multiple (more than two) domains simultaneously, such as StarGAN [5]. A further line of research focuses on translation between domains with larger differences: for example, CoupledGAN [25], UNIT [24], ComboGAN [2] and XGAN [31] use a domain-sharing latent space, and U-GAT-IT [15] resorts to attention modules for feature selection. Recently, TransGaGa [37] and TraVeLGAN [1] were proposed to characterize the latent representation via a Cartesian product of geometry and appearance, and via preserving vector arithmetic, respectively.

Introspective Neural Networks. Exploring the double roles of the discriminator has been conducted by Introspective Neural Networks (INN) [14, 18, 22] and Introspective Adversarial Networks (IAN) [4, 33]. In INN, a single deep convolutional network, also serving as the generator, is used to discriminate between synthesized samples and the training data; in IAN, the deep convolutional network used as the discriminator can be treated as part of the encoder. Although INN shares the purpose of reusing the discriminator for generation, it differs from our NICE-GAN in several respects. First, INN and NICE-GAN tackle different tasks: INN is for pure generation, and its discriminator is reused for generation from hidden vectors to images (decoding); NICE-GAN is for translation, and its discriminator is reused for embedding images into hidden vectors (encoding). In this sense, INN reuses the discriminator for decoding, while NICE-GAN reuses it for encoding. Furthermore, INN requires sequential training even during inference, while NICE-GAN needs only one forward pass to generate a novel image, and is thus more efficient. Besides, it is noteworthy that IAN is also for pure generation and uses one discriminator to increase negative samples, which is an introspective mechanism; our NICE-GAN uses one discriminator to encode one domain so as to generate negative samples for the other domain, which is a mutual introspective mechanism.

Figure 2: Illustration of the flowchart of NICE-GAN. Here we only display one translation stream, from domain X to domain Y (from dog to cat). Note that we apply a decoupled training fashion: the encoder is fixed when minimizing the adversarial loss, the reconstruction loss and the cycle loss, and is trained only when maximizing the adversarial loss.


3 Method

This section presents the detailed formulation of our method. We first introduce the general idea, and then provide the details of each component in NICE-GAN. The decoupled training mechanism is specified as well.

3.1 General Formulation

Problem Definition. Let X and Y be two image domains. While supervised image-to-image translation requires learning the conditional mappings p(y|x) and p(x|y) given samples from the joint distribution p(x, y), unsupervised translation must learn these mappings with only the marginals p(x) and p(y) provided. Unsupervised translation is ill-posed, since infinitely many conditional distributions correspond to the same marginal distributions. To address this issue, current methods resort to adding extra regularization, such as weight-coupling [25, 24, 20], cycle-consistency [40, 16, 38], and identity-mapping enforcement [34, 40], the latter two of which are employed in this paper.

In most existing frameworks, the translation T_{x→y} (resp. T_{y→x}) is composed of an encoder E_x (resp. E_y) and a generator G_{x→y} (resp. G_{y→x}). Combining them gives T_{x→y} = G_{x→y} ∘ E_x (resp. T_{y→x} = G_{y→x} ∘ E_y). The GAN [8] training fashion is usually adopted so that the translated output fits the distribution of the target domain. Namely, a discriminator D_y (resp. D_x) classifies between the real image y and the translated image T_{x→y}(x) (resp. x and T_{y→x}(y)).

No Independent Component for Encoding (NICE). As mentioned in the introduction, our NICE-GAN reuses the discriminators for encoding, delivering both efficiency and effectiveness in training. Formally, we divide the discriminator D_x into an encoding part E_x and a classification part C_x, so that D_x = C_x ∘ E_x. The encoding part E_x replaces the original independent encoder, resulting in a new translation T_{x→y} = G_{x→y} ∘ E_x. Similarly, for the discriminator D_y we define E_y and C_y, and reformulate the translation as T_{y→x} = G_{y→x} ∘ E_y. For the classification components C_x and C_y, we further employ a multi-scale structure to boost the expressive power. Besides, the newly-formulated encoders E_x and E_y appear in the training loops of both translation and discrimination, making them difficult to train; hence we propose a decoupled training flowchart in NICE-GAN. The details of the architecture and its training are presented in Section 3.2 and Section 3.3, respectively. Figure 2 illustrates our framework. Unless otherwise noted, we drop the domain subscripts from E and C for simplicity in what follows.
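The compositional structure above can be sketched in a few lines of plain Python, with toy callables standing in for the real networks (the layer behaviors below are hypothetical placeholders, not the actual NICE-GAN modules):

```python
# Sketch of the NICE formulation: the discriminator D_x = C_x ∘ E_x shares its
# encoder E_x with the translation T_{x→y} = G_{x→y} ∘ E_x.
# All components are toy stand-ins (simple callables), not real networks.

def make_nice_components():
    def E_x(image):                      # encoding part of D_x (toy embedding)
        return [v * 0.5 for v in image]

    def C_x(hidden):                     # classification part of D_x (toy score)
        return sum(hidden)

    def G_xy(hidden):                    # generator x -> y (toy decoding)
        return [v + 1.0 for v in hidden]

    def D_x(image):                      # discriminator: classify ∘ encode
        return C_x(E_x(image))

    def T_xy(image):                     # translation: generate ∘ encode
        return G_xy(E_x(image))          # reuses the SAME E_x as D_x

    return E_x, C_x, G_xy, D_x, T_xy
```

The key point is that `T_xy` and `D_x` close over the same `E_x`, so any update to the encoder affects both translation and discrimination, which is exactly what motivates the decoupled training in Section 3.3.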

3.2 Architecture

Multi-Scale Discriminators D_x and D_y. We discuss only D_x here, since D_y is formulated similarly; full details are provided in the supplementary material. Our usage of multi-scale discriminators is inspired by previous works [7, 12, 6, 36]. In those approaches, a discriminator of each scale is applied to an image of the corresponding size (the small-size images are obtained from the original image by down-sampling). In this paper, we adopt a more efficient way: we regard the feature maps in different layers for a single input as images of different scales, and feed each of them to a classifier with the corresponding input size for discrimination. This is similar to the use of feature pyramid representations in object detection (e.g., SSD [26] and FPN [23]).

We now introduce the idea formally. As mentioned above, the discriminator D_x contains two parts: the encoder E_x and the classifier C_x. To enable multi-scale processing, the classifier is further divided into three sub-classifiers: C_x^0 for the local scale (10 × 10 receptive field), C_x^1 for the middle scale (70 × 70 receptive field), and C_x^2 for the global scale (286 × 286 receptive field). C_x^0 is directly connected to the output of E_x. Then, a down-sampling convolution layer is applied to E_x's output to provide feature maps of a smaller scale, which are fed to two branches: one is linked to C_x^1, and the other is further down-sampled through convolution layers followed by C_x^2. For a single input image, C_x^0, C_x^1, and C_x^2 are all trained to predict whether the image is real or fake. The multi-scale discriminator is also illustrated in Figure 2.
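The branching described above can be sketched as follows; the down-sampling and the sub-classifier are toy stand-ins (pairwise averaging and a caller-supplied scoring function), not the actual convolutional layers:

```python
def downsample(feats):
    # Stand-in for a stride-2 down-sampling convolution: average adjacent pairs.
    return [(feats[i] + feats[i + 1]) / 2 for i in range(0, len(feats) - 1, 2)]

def multi_scale_discriminate(feats, classify):
    # feats:    output of the shared encoder (one flat toy "feature map").
    # classify: a toy sub-classifier applied at each scale (the real model
    #           uses three distinct classifiers for local/middle/global scales).
    scores = []
    scores.append(classify(feats))   # local scale: encoder output directly
    mid = downsample(feats)          # one down-sampling step
    scores.append(classify(mid))     # middle scale
    glob = downsample(mid)           # further down-sampling
    scores.append(classify(glob))    # global scale
    return scores                    # all three scales predict real vs. fake
```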

Besides the multi-scale design, we develop a residual attention mechanism to further facilitate feature propagation in our discriminator. Using attention in the discriminator was originally proposed by U-GAT-IT [15]. Suppose the encoder outputs K feature maps f_1, …, f_K. The idea of U-GAT-IT is to first learn an attention vector a = (a_1, …, a_K), each element of which measures the importance of the corresponding feature map. Then, the attended features a_k · f_k are leveraged for later classification. Upon but beyond U-GAT-IT, this paper further takes a residual connection into account; that is, we use f_k + γ (a_k · f_k), where the trainable parameter γ determines the trade-off between the attended features and the original ones. When γ = 0, the output reduces to f_k, indicating that no attention is used; otherwise, the attention is activated. With this modification, our method becomes more flexible in adjusting the importance of different feature maps and thus trains more effectively, which will be verified by our experiments.
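As a minimal sketch, assuming the residual form f_k + γ·(a_k·f_k) with feature maps and attention weights reduced to plain scalars:

```python
def residual_attention(feature_maps, attention, gamma):
    # feature_maps: per-channel features f_k (toy scalars here);
    # attention:    learned per-channel weights a_k (CAM-style);
    # gamma:        trainable trade-off scalar. With gamma = 0 the output
    #               reduces to the original features, i.e. no attention.
    return [f + gamma * (a * f) for f, a in zip(feature_maps, attention)]
```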

Generators G_{x→y} and G_{y→x}. Both generators are composed of six residual blocks [9] and two sub-pixel convolutional layers for up-sampling [29, 32]. We use a light version of AdaLIN, similar to [15]. In addition, spectral normalization [28] on the discriminators and the cycle-consistency loss are used to prevent the generators from mode collapse. Complete details are presented in the supplementary material.

3.3 Decoupled Training

The training process involves three kinds of losses: the adversarial loss, the identity reconstruction loss, and the cycle-consistency loss. The adversarial loss pursues domain transfer, while both the reconstruction loss and the cycle-consistency loss tackle the non-identifiability issue pointed out before.

Since the encoder E_x is not only part of the discriminator D_x but also provides the input of the generator G_{x→y}, inconsistency would arise if we applied conventional adversarial training. To overcome this defect, we decouple the training of E_x from that of the generator G_{x→y}. The details of each loss are provided below.
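Schematically, the decoupling determines which parameter groups receive gradients in each phase; a toy routing table (with hypothetical group names, not actual optimization code) might look like:

```python
def decoupled_updates(step):
    # Which parameter groups receive gradients in each phase of NICE-GAN's
    # decoupled training. "E_x" is the shared encoder, "C_x" the classifier
    # part of the discriminator, "G" the generator.
    if step == "discriminator":   # maximize the adversarial loss
        return {"E_x": True, "C_x": True, "G": False}
    elif step == "generator":     # minimize adversarial + cycle + recon losses
        return {"E_x": False, "C_x": False, "G": True}  # encoder kept frozen
    raise ValueError(step)
```

The encoder is thus updated only in the discriminator phase, never through the generator's objective.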

Adversarial loss. We make use of the least-squares adversarial loss [27] for its more stable training and higher generation quality. Assuming the discriminator output lies in [0, 1], the min-max game for the translation from X to Y is conducted by

min_{G_{x→y}} max_{E_y, C_y}  L_{adv}^{x→y} = E_{y∼p(y)}[ (D_y(y))² ] + E_{x∼p(x)}[ (1 − D_y(G_{x→y}(E_x(x))))² ],

where E_x is fixed and D_y = C_y ∘ E_y is trained when maximizing L_{adv}^{x→y}, and both of them are fixed when minimizing it (only G_{x→y} is updated).
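A minimal sketch of this least-squares loss, assuming the discriminator scores are already computed and bounded in [0, 1]:

```python
def ls_adv_loss(d_real_scores, d_fake_scores):
    # Least-squares adversarial loss (LSGAN-style) with scores in [0, 1]:
    # maximizing pushes real scores toward 1 and fake scores toward 0;
    # the generator later minimizes the second term, pushing fake scores
    # toward 1.
    real_term = sum(s ** 2 for s in d_real_scores) / len(d_real_scores)
    fake_term = sum((1 - s) ** 2 for s in d_fake_scores) / len(d_fake_scores)
    return real_term + fake_term
```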

Cycle-consistency loss. The cycle-consistency loss was first introduced by CycleGAN [40] and DiscoGAN [16] to force the two translations to be each other's inverse:

L_{cycle}^{x→y} = E_{x∼p(x)}[ || G_{y→x}(E_y(G_{x→y}(E_x(x)))) − x ||_1 ],

where || · ||_1 computes the ℓ1 norm, and both E_x and E_y are again frozen.

Reconstruction loss. Forcing the generator to be close to the identity function is another crucial regularization technique in CycleGAN [40]. Unlike CycleGAN, where the identity loss is based on a domain-similarity assumption, our reconstruction is based on the shared-latent-space assumption. The reconstruction loss regularizes the translation to be near an identity mapping when the hidden vectors of real samples from the source domain are provided as input to the generator of the source domain. Namely,

L_{recon}^{x→y} = E_{x∼p(x)}[ || G_{y→x}(E_x(x)) − x ||_1 ],

where E_x is still kept unchanged.
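Both regularizers are plain ℓ1 penalties on round-trip and identity-like reconstructions; a sketch with toy callables standing in for the encoders and generators:

```python
def l1(a, b):
    # ℓ1 distance between two toy "images" (flat lists of floats).
    return sum(abs(x - y) for x, y in zip(a, b))

def cycle_loss(x, E_x, G_xy, E_y, G_yx):
    # ||G_{y→x}(E_y(G_{x→y}(E_x(x)))) − x||_1 : translate x to the other
    # domain and back, then compare with the original.
    return l1(G_yx(E_y(G_xy(E_x(x)))), x)

def recon_loss(x, E_x, G_yx):
    # ||G_{y→x}(E_x(x)) − x||_1 : feeding x's own hidden code to the
    # generator of x's domain should reproduce x (identity-like mapping).
    return l1(G_yx(E_x(x)), x)
```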

Similarly, we can define the losses from domain Y to domain X: L_{adv}^{y→x}, L_{cycle}^{y→x}, and L_{recon}^{y→x}.

Figure 3: Examples of generated outputs. From top to bottom: dog↔cat, winter↔summer, photo↔vangogh, and zebra↔horse.

Full objective. The discriminators' final objective is

max_{E_x, C_x, E_y, C_y}  L_{adv}^{x→y} + L_{adv}^{y→x},

while the generators' final objective is

min_{G_{x→y}, G_{y→x}}  λ1 (L_{adv}^{x→y} + L_{adv}^{y→x}) + λ2 (L_{cycle}^{x→y} + L_{cycle}^{y→x}) + λ3 (L_{recon}^{x→y} + L_{recon}^{y→x}),

where λ1, λ2, and λ3 are trade-off weights (fixed as λ1 = 1, λ2 = 10, and λ3 = 10 throughout our experiments).

Note again that the encoders E_x and E_y are trained under the discriminators' objective but are decoupled from the training of the generators. The benefit of the proposed decoupled training paradigm is analyzed in our experiments.

Method          | dog→cat        | winter→summer | photo→vangogh | zebra→horse
NICE-GAN        | 48.79 / 1.58   | 76.44 / 1.22  | 122.27 / 3.71 | 149.48 / 4.29
NICE-GAN*       | 51.98 / 1.50   | 79.02 / 1.35  | 122.59 / 3.53 | 150.57 / 4.43
U-GAT-IT-light  | 80.75 / 3.22   | 80.33 / 1.82  | 137.70 / 6.03 | 145.47 / 3.39
CycleGAN        | 119.32 / 4.93  | 79.58 / 1.36  | 136.97 / 4.75 | 156.19 / 5.54
UNIT            | 59.56 / 1.94   | 95.93 / 4.63  | 136.80 / 5.17 | 170.76 / 6.30
MUNIT           | 53.25 / 1.26   | 99.14 / 4.66  | 130.55 / 4.50 | 193.43 / 7.25
DRIT            | 94.50 / 5.20   | 78.61 / 1.69  | 136.24 / 5.43 | 200.41 / 10.12

Method          | cat→dog        | summer→winter | vangogh→photo | horse→zebra
NICE-GAN        | 44.67 / 1.20   | 76.03 / 0.67  | 112.00 / 2.79 | 65.93 / 2.09
NICE-GAN*       | 55.72 / 1.89   | 77.13 / 0.73  | 117.81 / 3.61 | 84.89 / 3.29
U-GAT-IT-light  | 64.36 / 2.49   | 88.41 / 1.43  | 123.57 / 4.91 | 113.44 / 5.13
CycleGAN        | 125.30 / 6.93  | 78.76 / 0.78  | 135.01 / 4.71 | 95.98 / 3.24
UNIT            | 63.78 / 1.94   | 112.07 / 5.36 | 143.96 / 7.44 | 131.04 / 7.19
MUNIT           | 60.84 / 2.42   | 114.08 / 5.27 | 138.86 / 6.19 | 128.70 / 6.92
DRIT            | 79.57 / 4.57   | 81.64 / 1.27  | 142.69 / 5.62 | 111.63 / 7.40
Table 1: FID and KID × 100 for different algorithms, reported as FID / KID×100. Lower is better. All methods are trained for 100K iterations. NICE-GAN* is the variant whose generator network is composed of only four residual blocks.

4 Experiments

4.1 Baselines

We compare the performance of NICE-GAN with state-of-the-art methods, including CycleGAN [40], UNIT [24], MUNIT [11], DRIT [20], and U-GAT-IT [15], given their competitive performance on unsupervised image-to-image translation. All compared methods are run using their public code. Specifically for U-GAT-IT, we use its light version due to the memory limit of our GPU machine. The details of all baselines are introduced in the supplementary material.

4.2 Dataset

The experiments are carried out on four popular benchmarks of unpaired images: horse↔zebra, summer↔winter_yosemite, vangogh↔photo and cat↔dog. The first three datasets are used in CycleGAN [40]; their train/test splits are respectively 1,067/120 (horse), 1,334/140 (zebra); 1,231/309 (summer), 962/238 (winter); 400/400 (vangogh), 6,287/751 (photo). The last dataset is studied in DRIT [20, 21]; its train/test splits are 771/100 (cat), 1,264/100 (dog). All images in all datasets are cropped and resized to 256 × 256 for training and testing.

Method          | Generators     | Discriminators
U-GAT-IT-light  | 21.2M (105.0G) | 112.8M (15.8G)
NICE-GAN        | 16.2M (67.6G)  | 93.7M (12.0G)
NICE-GAN*       | 11.5M (48.2G)  | 93.7M (12.0G)
Table 2: Total number of parameters (and FLOPs) per network module. NICE-GAN* is the variant whose generator network is composed of only four residual blocks.

4.3 Evaluation Metrics

Human Preference. To compare the veracity of translation outputs generated by different methods, we carry out a human perceptual study. Similar to Wang et al. [36], volunteers are shown an input image and three translation outputs from different methods and, given unlimited time, select the translation output that looks best.

The Fréchet Inception Distance (FID), proposed by Heusel et al. [10], contrasts the statistics of generated samples against real samples. FID fits a Gaussian distribution to the hidden activations of an Inception network for each compared image set and then computes the Fréchet distance (also known as the Wasserstein-2 distance) between those Gaussians. Lower FID is better, corresponding to generated images more similar to the real ones.
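For intuition, the univariate case of the Fréchet distance has a simple closed form; the sketch below shows it (the real FID applies the multivariate analogue, ||μ1 − μ2||² + Tr(Σ1 + Σ2 − 2(Σ1Σ2)^{1/2}), to Inception activations):

```python
def frechet_distance_1d(mu1, sigma1, mu2, sigma2):
    # Fréchet (Wasserstein-2) distance between two univariate Gaussians
    # N(mu1, sigma1^2) and N(mu2, sigma2^2).
    return (mu1 - mu2) ** 2 + sigma1 ** 2 + sigma2 ** 2 - 2 * sigma1 * sigma2
```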

Figure 4: Human preference results. The number indicates the percentage of preference on that translation task. Abbreviation: (D)og, (C)at; (W)inter, (S)ummer; (P)hoto, (V)angogh; (Z)ebra, (H)orse.

The Kernel Inception Distance (KID), developed by [3], is a metric similar to FID but uses the squared Maximum Mean Discrepancy (MMD) between Inception representations with the polynomial kernel k(x, y) = ((1/d) xᵀy + 1)³, where d is the representation dimension. It can also be viewed as an MMD directly on input images with the kernel K(x, y) = k(φ(x), φ(y)), where φ is the function mapping images to Inception representations. Unlike FID, KID has a simple unbiased estimator, which makes it more reliable, especially when there are far more Inception feature channels than images. Lower KID indicates more visual similarity between real and generated images. In our implementation of KID, the hidden representations are taken from the Inception-v3 pool3 layer.
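A sketch of the unbiased MMD² estimator behind KID, using the polynomial kernel above and plain lists standing in for Inception feature vectors:

```python
def poly_kernel(x, y, d):
    # Polynomial kernel ((1/d) x·y + 1)^3 on d-dimensional feature vectors.
    dot = sum(a * b for a, b in zip(x, y))
    return (dot / d + 1) ** 3

def kid_mmd2(real, fake):
    # Unbiased squared-MMD estimator between two sets of feature vectors,
    # as used by KID (diagonal terms excluded from the within-set sums).
    d = len(real[0])
    m, n = len(real), len(fake)
    k_rr = sum(poly_kernel(a, b, d) for i, a in enumerate(real)
               for j, b in enumerate(real) if i != j) / (m * (m - 1))
    k_ff = sum(poly_kernel(a, b, d) for i, a in enumerate(fake)
               for j, b in enumerate(fake) if i != j) / (n * (n - 1))
    k_rf = sum(poly_kernel(a, b, d) for a in real for b in fake) / (m * n)
    return k_rr + k_ff - 2 * k_rf
```

Note that, being unbiased, this estimator can be slightly negative on small samples.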

4.4 Setup

We use ReLU as the activation function in the generator and leaky ReLU with a slope of 0.2 in the discriminator. We train all models using the Adam [17] optimizer with a learning rate of 0.0001 and (β1, β2) = (0.5, 0.999) on NVIDIA RTX 2080Ti GPUs. For data augmentation, we flip the images horizontally with a probability of 0.5, resize them to 286 × 286, and randomly crop them to 256 × 256. The batch size is set to 1 for all experiments. We also use weight decay at a rate of 0.0001. All models are trained for 100K iterations. More details on the training process and network architecture are provided in the supplementary material.

4.5 Comparisons with state of the arts

Table 1 shows that our approach generally achieves the lowest FID or KID scores in all cases except zebra→horse, indicating the promising translation ability of our NICE framework across varying tasks. The two metrics are largely consistent in relative scores, which demonstrates that NICE-GAN performs well regardless of the measure used. By contrast, other methods only perform well on certain datasets: for instance, U-GAT-IT-light, UNIT and MUNIT successfully transform the semantics of objects (e.g., animal faces), while CycleGAN is good at modifying low-level characteristics (e.g., colors and textures). U-GAT-IT-light roughly shares the same structures (multi-scale discriminators and generators) as NICE-GAN and differs mainly in its independent formulation of encoders. Table 2 reports the total number of parameters and FLOPs of U-GAT-IT-light and NICE-GAN, showing that our architecture is more compact thanks to reusing discriminators for encoding. To further observe the visual difference, Figure 3 depicts the translated images of different methods on the test sets. The images generated by NICE-GAN are generally more meaningful and have fewer artifacts than the others (see the cat↔dog task for an example).

In addition to our method, we select the two baselines achieving the lowest KID scores on each dataset for a human perceptual study. First, volunteers are shown an example pair consisting of a source-domain image and a target-domain image as a reference, to better convey what style is being translated. Second, they are given an input image and three translated outputs, one from NICE-GAN and the other two from the selected baselines. They have unlimited time to choose the one that looks most real in terms of perceptual realism. The synthesized images are displayed in randomized order to ensure fair comparison. Besides, checkpoint questions are included in each questionnaire to validate the respondents' attention. Of the 123 questionnaires collected, 120 are valid. Figure 4 shows that NICE-GAN wins the majority of votes in all cases except zebra→horse. These results are consistent with the quantitative metrics in Table 1. More examples of the results from our model are included in the supplementary material.

Dataset | Components (NICE, RA, scales) | FID | KID × 100
80.75 3.22
67.60 2.94
63.80 3.27
dog 48.55 1.23
48.79 1.58
cat 53.52 1.84
203.56 15.27
216.03 18.57
64.36 2.49
64.62 2.41
51.49 1.68
cat 52.92 1.82
44.67 1.20
dog 54.90 2.17
238.62 21.41
231.24 22.12
Table 3: Ablation study. All results are reported at 100K discriminator iterations. NICE: No Independent Component for Encoding; RA: residual connection added in the CAM attention module; the three classifier scales are local (10 × 10 receptive field), middle (70 × 70), and global (286 × 286); −1 / +1: decreasing / increasing the number of shared layers by one.

4.6 Ablation study

We conduct ablation studies on the cat↔dog datasets in Table 3 to validate the key components of our method: the NICE strategy, the multi-scale formulation, and the Residual Attention (RA) mechanism in the discriminators. We perform two groups of experiments.

NICE and RA. The first group keeps the multi-scale formulation but removes either or both of NICE and RA to measure the performance difference. The results, reported in Table 3, verify that both NICE and RA contribute to the performance improvement, with NICE being the more significant. Overall, by combining all components, NICE-GAN remarkably outperforms all other variants.

The number of shared layers. For consistent comparison, we employ the commonly-used ResNet backbone from [40] as our generator and naturally share the whole encoder therein. We also evaluate the effect of changing the number of layers shared between the discriminator and the encoder. Table 3 shows no general improvement from either decreasing or increasing the number of shared layers; thus, the best choice is to share the whole default encoder.

Multi-scale. The other group of experiments evaluates the impact of the multi-scale formulation (local, middle, and global classifiers) in the discriminators. The results are summarized in Table 3. We find that removing certain scales causes a serious detriment, that the importance of each individual scale is task-dependent, and that adding further scales does not exhibit clear enhancement on this task. Overall, all three scales are generally necessary and the multi-scale design is more robust; more discussion is given in the supplementary material.

Weight-coupling. Besides, there are existing methods of model compression, such as weight-coupling [24, 20]. Sharing a few layers between the two generators enables model compression but harms translation performance. For example, on cat↔dog, the FID increases from 48.79/44.67 to 49.67/56.32 if the first layer of the decoders is shared, and from 48.79/44.67 to 55.00/55.60 if the last layer of the encoders is shared. Similar results are observed if we reuse the first layer of the classifiers: the FID increases from 48.79/44.67 to 61.73/46.65. This implies that weight-coupling can weaken the translation power for each domain.

The visualization in Figure 5 shows the latent vectors of each domain with and without NICE on cat↔dog via t-SNE, along with the Maximum Mean Discrepancy (MMD) to measure the domain difference. Interestingly, with NICE training, the latent distributions of the two domains become more clustered and closer, yet remain separable from each other. This phenomenon helps explain why NICE-GAN performs so well: by shortening the transition path between domains in the latent space, NICE-GAN, built upon the shared-latent-space assumption, can facilitate domain translation in the image space.

Figure 5: The t-SNE visualization of the latent vectors, as well as the MMD to measure domain difference.

4.7 Decoupled Training Analysis

In our NICE framework, we decouple the training of the encoders from that of the generators. To prove the effectiveness of this strategy, we develop two additional variants: NICE-GAN-1 and NICE-GAN-2. Specifically, NICE-GAN-1 adopts the conventional training approach, where the encoders are jointly trained with the discriminators and the generators. NICE-GAN-2 is also trained in a decoupled way but acts oppositely to our method: the encoders are trained along with the generators, independent of the classifiers in the discriminators. In a more essential sense, the discriminators degenerate to the classifiers in NICE-GAN-2.

Figure 6 reports the training curves of NICE-GAN, NICE-GAN-1 and NICE-GAN-2. Clearly, the training of NICE-GAN-1 is unstable, which is consistent with our analysis. NICE-GAN-2 trains more stably and performs better than NICE-GAN-1, but is still inferior to our NICE-GAN. We conjecture that NICE-GAN-2, by using only the classifiers for discriminating, is actually aligning the distributions of the hidden vectors, whereas NICE-GAN leverages both the encoders and the classifiers for discriminating, thereby matching the distributions in the image space and capturing more precise information.

A clear disentanglement of the responsibilities of different components makes NICE-GAN simple and effective. Besides, it further supports the idea [4] that features learned by a discriminatively trained network tend to be more expressive than those learned by an encoder network trained via maximum likelihood, and are thus better suited for inference.

Figure 6: Decoupled training analysis. NICE-GAN: decoupled training, where the encoders are updated only by the discriminators' loss; NICE-GAN-1: joint training of the discriminators and generators, where the encoders are updated by both the discriminators' and the generators' losses; NICE-GAN-2: decoupled training, where the encoders are updated only by the generators' losses.

5 Conclusion

In this paper, we present NICE-GAN, a novel framework for unsupervised image-to-image translation that reuses discriminators for encoding and develops a decoupled paradigm for efficient training. Extensive experimental evaluations on several popular benchmarks reveal that NICE-GAN generally achieves superior performance over state-of-the-art methods. We expect our research to prompt a rethinking of what discriminators can actually do, and it is potentially applicable to refreshing GAN-based models in other settings.


  • [1] M. Amodio and S. Krishnaswamy (2019) TraVeLGAN: image-to-image translation by transformation vector learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8983–8992. Cited by: §2.
  • [2] A. Anoosheh, E. Agustsson, R. Timofte, and L. Van Gool (2018) Combogan: unrestrained scalability for image domain translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 783–790. Cited by: §2.
  • [3] M. Bińkowski, D. Sutherland, M. Arbel, and A. Gretton (2018) Demystifying mmd gans. In ICLR, Cited by: §4.3.
  • [4] A. Brock, T. Lim, J. M. Ritchie, and N. Weston (2016) Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093. Cited by: §2, §4.7.
  • [5] Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo (2018) Stargan: unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8789–8797. Cited by: §2.
  • [6] U. Demir and G. Unal (2018) Patch-based image inpainting with generative adversarial networks. arXiv preprint arXiv:1803.07422. Cited by: §1, §3.2.
  • [7] I. Durugkar, I. Gemp, and S. Mahadevan (2017) Generative multi-adversarial networks. In ICLR, Cited by: §1, §3.2.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §1, §3.1.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §3.2.
  • [10] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637. Cited by: §4.3.
  • [11] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In ECCV, Cited by: §1, §1, §2, §4.1.
  • [12] S. Iizuka, E. Simo-Serra, and H. Ishikawa (2017) Globally and locally consistent image completion. ACM Transactions on Graphics (ToG) 36 (4), pp. 107. Cited by: §1, §3.2.
  • [13] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, Cited by: §2.
  • [14] L. Jin, J. Lazarow, and Z. Tu (2017) Introspective classification with convolutional nets. In Advances in Neural Information Processing Systems, pp. 823–833. Cited by: §2.
  • [15] J. Kim, M. Kim, H. Kang, and K. H. Lee (2020) U-gat-it: unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation. In International Conference on Learning Representations, External Links: Link Cited by: §1, §1, §2, §3.2, §3.2, §4.1.
  • [16] T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim (2017) Learning to discover cross-domain relations with generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1857–1865. Cited by: §1, §2, §3.1, §3.3.
  • [17] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.4.
  • [18] J. Lazarow, L. Jin, and Z. Tu (2017) Introspective neural networks for generative modeling. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2774–2783. Cited by: §2.
  • [19] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681–4690. Cited by: §1.
  • [20] H. Lee, H. Tseng, J. Huang, M. K. Singh, and M. Yang (2018) Diverse image-to-image translation via disentangled representations. In European Conference on Computer Vision, Cited by: §1, §1, §2, §3.1, §4.1, §4.2, §4.6.
  • [21] H. Lee, H. Tseng, Q. Mao, J. Huang, Y. Lu, M. K. Singh, and M. Yang (2019) DRIT++: diverse image-to-image translation via disentangled representations. arXiv preprint arXiv:1905.01270. Cited by: §4.2.
  • [22] K. Lee, W. Xu, F. Fan, and Z. Tu (2018) Wasserstein introspective neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3702–3711. Cited by: §2.
  • [23] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie (2017) Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117–2125. Cited by: §3.2.
  • [24] M. Liu, T. Breuel, and J. Kautz (2017) Unsupervised image-to-image translation networks. In Advances in neural information processing systems, pp. 700–708. Cited by: §1, §1, §2, §3.1, §4.1, §4.6.
  • [25] M. Liu and O. Tuzel (2016) Coupled generative adversarial networks. In Advances in neural information processing systems, pp. 469–477. Cited by: §2, §3.1.
  • [26] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §3.2.
  • [27] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley (2017) Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802. Cited by: §3.3.
  • [28] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. In ICLR, Cited by: §3.2.
  • [29] A. Odena, V. Dumoulin, and C. Olah (2016) Deconvolution and checkerboard artifacts. Distill. External Links: Link, Document Cited by: §3.2.
  • [30] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §1.
  • [31] A. Royer, K. Bousmalis, S. Gouws, F. Bertsch, I. Mosseri, F. Cole, and K. Murphy (2017) Xgan: unsupervised image-to-image translation for many-to-many mappings. arXiv preprint arXiv:1711.05139. Cited by: §2.
  • [32] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1874–1883. Cited by: §3.2.
  • [33] J. Su (2019) O-gan: extremely concise approach for auto-encoding generative adversarial networks. External Links: 1903.01931 Cited by: §2.
  • [34] Y. Taigman, A. Polyak, and L. Wolf (2016) Unsupervised cross-domain image generation. In ICLR, Cited by: §1, §3.1.
  • [35] T. Wang, M. Liu, J. Zhu, G. Liu, A. Tao, J. Kautz, and B. Catanzaro (2018) Video-to-video synthesis. In Advances in Neural Information Processing Systems, pp. 1144–1156. Cited by: §1, §2.
  • [36] T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz, and B. Catanzaro (2018) High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8798–8807. Cited by: §1, §2, §3.2, §4.3.
  • [37] W. Wu, K. Cao, C. Li, C. Qian, and C. C. Loy (2019) Transgaga: geometry-aware unsupervised image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8012–8021. Cited by: §2.
  • [38] Z. Yi, H. Zhang, P. Tan, and M. Gong (2017) Dualgan: unsupervised dual learning for image-to-image translation. In Proceedings of the IEEE international conference on computer vision, pp. 2849–2857. Cited by: §1, §2, §3.1.
  • [39] R. Zhang, P. Isola, and A. A. Efros (2016) Colorful image colorization. In European conference on computer vision, pp. 649–666. Cited by: §1.
  • [40] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, Cited by: §1, §1, §1, §2, §3.1, §3.3, §3.3, §4.1, §4.2, §4.6.

Appendix A Appendix

A.1 Introduction of state-of-the-art models

CycleGAN uses an adversarial loss to learn the mapping between two different domains and regularizes the mapping through cycle-consistency losses. Its architecture uses two down-sampling convolution blocks, nine residual blocks, two up-sampling deconvolution blocks, and four discriminator layers. Codes are available online.

UNIT consists of two VAE-GANs with a shared latent space. The structure of UNIT is similar to CycleGAN's, but differs in that it uses multi-scale discriminators and shares the weights of the high-level layers of the encoders and decoders. Codes are available online.

MUNIT can generate various outputs for a single input image. It assumes that the image representation can be decomposed into a content code and a style code. The main differences between MUNIT's network structure and the others are that it uses AdaIN in the decoder along with a multi-scale discriminator. We generate N = 1 image for each input image in the test set and use the generated samples together with all samples in the test set to compute FID and KID. Codes are available online.

DRIT can also create different outputs for a single input image, similar to MUNIT. It decomposes the image into a content code and a style code and uses a multi-scale discriminator. The difference between DRIT and MUNIT is that the content code is shared, as in UNIT. We generate N = 1 image for each input image in the test set and use the generated samples together with all samples in the test set to compute FID and KID. Codes are available online.

U-GAT-IT is a recent work on unsupervised image-to-image translation, which incorporates a CAM (Class Activation Map) module and an AdaLIN (Adaptive Layer-Instance Normalization) function in an end-to-end manner. U-GAT-IT can translate images requiring holistic changes or large shape changes. The light version is applied due to the limited memory of our GPU. Codes are available online.

A.2 Network Architecture

The architectures of the discriminator and generator in NICE-GAN are shown in Tables 4 and 5, respectively. For the generator network, we use adaptive layer-instance normalization in the decoders except for the last output layer. For the discriminator network, Leaky-ReLU with a negative slope of 0.2 is applied, and spectral normalization is used in all layers. We apply softmax instead of clipping to limit ρ in AdaLIN. Besides, we concatenate the feature maps of global average and max pooling before Classifier0, so that the input channel of MLP-(N1) is 256. More details are presented in our source code. Notation: N is the number of output channels; K is the kernel size; S is the stride size; P is the padding size; AdaLIN is adaptive layer-instance normalization; LIN is layer-instance normalization; SN is spectral normalization; RA is adding a residual connection in the CAM attention module.
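AdaLIN blends instance-normalized and layer-normalized activations with a learned weight ρ. The sketch below, in numpy, shows one plausible reading of the softmax variant mentioned above: a per-channel two-way softmax produces the convex weights (ρ, 1 − ρ) instead of clipping ρ to [0, 1]; this reading, and all names, are our assumptions, not the exact implementation:

```python
import numpy as np

def adalin(a, gamma, beta, logits, eps=1e-5):
    """Sketch of AdaLIN on a (C, H, W) activation.

    `logits` has shape (2, C); a softmax over its first axis yields
    per-channel weights rho in (0, 1) -- an assumed reading of the
    "softmax instead of clip" trick described in the text.
    """
    # Instance norm: statistics per channel over spatial dims.
    mu_in = a.mean(axis=(1, 2), keepdims=True)
    var_in = a.var(axis=(1, 2), keepdims=True)
    a_in = (a - mu_in) / np.sqrt(var_in + eps)
    # Layer norm: statistics over all of (C, H, W).
    mu_ln = a.mean(keepdims=True)
    var_ln = a.var(keepdims=True)
    a_ln = (a - mu_ln) / np.sqrt(var_ln + eps)
    # Two-way softmax per channel -> convex combination weights.
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    rho = (e / e.sum(axis=0, keepdims=True))[0].reshape(-1, 1, 1)
    mixed = rho * a_in + (1.0 - rho) * a_ln
    # Affine parameters gamma, beta come from the MLP in Table 5.
    return gamma.reshape(-1, 1, 1) * mixed + beta.reshape(-1, 1, 1)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8, 8))
out = adalin(a, np.ones(4), np.zeros(4), rng.standard_normal((2, 4)))
assert out.shape == (4, 8, 8)
```

The softmax keeps ρ strictly inside (0, 1) and differentiable everywhere, which is the practical motivation for preferring it over a hard clip.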

Component            Layer Information
Encoder              CONV-(N64, K4, S2, P1), SN, Leaky-ReLU
  Down-sampling0     CONV-(N128, K4, S2, P1), SN, Leaky-ReLU
Encoder &            RA of Global Average & Max Pooling,
  Classifier0        MLP-(N1), Multiply the weights of MLP
                     CONV-(N128, K1, S1), RA, Leaky-ReLU
Down-sampling1       CONV-(N256, K4, S2, P1), SN, Leaky-ReLU
Classifier1          CONV-(N512, K4, S1, P1), SN, Leaky-ReLU
                     CONV-(N1, K4, S1, P1), SN
Down-sampling2       CONV-(N512, K4, S2, P1), SN, Leaky-ReLU
                     CONV-(N1024, K4, S2, P1), SN, Leaky-ReLU
Classifier2          CONV-(N2048, K4, S1, P1), SN, Leaky-ReLU
                     CONV-(N1, K4, S1, P1), SN
Table 4: Discriminator network architecture
Component        Layer Information
Sampling         CONV-(N256, K3, S1, P1), LIN, ReLU
                 Global Average Pooling
                 MLP-(N256), ReLU
                 MLP-(N256), ReLU
                 MLP-(N256), ReLU
Bottleneck       AdaResBlock-(N256, K3, S1, P1), AdaLIN, ReLU  (×6)
Up-sampling      Sub-pixel-CONV-(N128, K3, S1, P1), LIN, ReLU
                 Sub-pixel-CONV-(N64, K3, S1, P1), LIN, ReLU
                 CONV-(N3, K7, S1, P3), Tanh
Table 5: Generator network architecture
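The Sub-pixel-CONV layers in the up-sampling stage follow Shi et al. [32]: a convolution produces r²·C channels which are then rearranged into an r×-larger feature map, avoiding the checkerboard artifacts of deconvolution [29]. A minimal numpy sketch of the rearrangement (pixel shuffle) step, with names of our choosing:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)     # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)   # reorder to (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# 8 input channels with r=2 become 2 output channels at twice the size.
x = np.arange(2 * 4 * 3 * 3, dtype=float).reshape(2 * 4, 3, 3)
y = pixel_shuffle(x, 2)
assert y.shape == (2, 6, 6)
```

Each output pixel at offset (i, j) within an r×r block is taken from a distinct input channel, so the upsampling is learned by the preceding convolution rather than by interpolation.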
Object   dog       winter   photo     zebra
γ        -0.2492   0.2588   -0.0006   -0.2699
Object   cat       summer   vangogh   horse
γ        0.3023    0.0006   0.3301    0.2723
Table 6: RA Analysis. The RA output is f_out = γ · f_att + f_in, where the trainable parameter γ determines the trade-off between the attended features f_att and the original ones f_in. When γ = 0, it returns to f_in, indicating no attention is used; otherwise, the attention is activated.

A.3 Additional results

A.3.1 Discussing γ

As for the Residual Attention (RA) module, the parameter γ is task-specific (as illustrated in Table 6). For tasks like photo↔vangogh and summer↔winter, γ is close to 0, indicating that more attention is paid to global features, which is reasonable since translating the whole content of the images in these tasks matters more than focusing on local details.
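The residual connection described above can be sketched in a couple of lines; the exact form (γ scaling the attended features, added to the originals) is assumed from the description in Table 6, and the numbers in the comments come from that table:

```python
import numpy as np

def residual_attention(f_in, f_att, gamma):
    """Sketch of the RA connection: gamma trades off attended vs.
    original features; gamma = 0 bypasses attention entirely
    (assumed form, consistent with Table 6's description)."""
    return gamma * f_att + f_in

f = np.ones((4, 4))
att = 2.0 * np.ones((4, 4))
# gamma near 0 (e.g. photo<->vangogh): output stays at the original features.
assert np.allclose(residual_attention(f, att, 0.0), f)
# nonzero gamma (e.g. cat side of cat<->dog, gamma ~ 0.30): attention contributes.
out = residual_attention(f, att, 0.3023)
assert np.allclose(out, 1.0 + 0.3023 * 2.0)
```

Because γ is learned per task, the network itself decides how much local attention each translation needs.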

Method \ Dataset    dog2cat          winter2summer   photo2vangogh   zebra2horse
1-scale             216.03 / 18.57   81.12 / 1.50    135.17 / 3.92   215.79 / 12.79
2-scale             203.56 / 15.27   77.52 / 1.14    121.47 / 2.86   193.11 / 10.37
3-scale (full)      48.79 / 1.58     76.44 / 1.22    122.27 / 3.71   149.48 / 4.29
w/o local scale     45.46 / 0.85     77.50 / 1.17    131.38 / 5.38   147.24 / 3.92
w/o medium scale    54.31 / 2.20     88.02 / 2.45    130.73 / 4.87   154.13 / 5.43
Method \ Dataset    cat2dog          summer2winter   vangogh2photo   horse2zebra
1-scale             231.24 / 22.12   76.88 / 0.63    155.50 / 7.40   168.57 / 10.74
2-scale             238.62 / 21.41   77.10 / 0.67    132.08 / 4.67   104.46 / 4.60
3-scale (full)      44.67 / 1.20     76.03 / 0.67    112.00 / 2.79   65.93 / 2.09
w/o local scale     53.94 / 1.95     79.91 / 1.11    128.47 / 4.87   90.00 / 3.77
w/o medium scale    65.99 / 2.62     96.26 / 2.08    123.05 / 4.32   80.50 / 2.85
Table 7: Multi-Scale Analysis (each cell reports FID / KID×100). For both FID and KID, lower is better. All results are at 100K iterations of the discriminator. 1-scale uses a single classifier scale; 2-scale uses two; 3-scale uses all three classifier scales of the full NICE-GAN; the last two rows remove the local or the medium scale from the 3-scale model.

A.3.2 More analysis on the multi-scale discriminator

Table 7 evaluates the impact of each discriminator scale on various datasets. For the cat↔dog task, the global semantic characteristics of objects are of much importance. For colorization and stylization tasks (e.g., summer↔winter, photo↔vangogh), preserving the middle and local scales still delivers promising performance. Specifically, removing the local scale increases FID significantly from 66 to 90 on horse→zebra, and leaving out the medium scale increases it from 76/76 to 88/96 on summer↔winter. This implies that all three scales are generally necessary.
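The idea of one classifier per scale of the encoder's feature pyramid can be sketched with numpy; here average pooling stands in for the strided convolutions of Table 4, and random weights stand in for trained classifiers (both simplifications are ours):

```python
import numpy as np

def downsample2x(f):
    """2x average-pool downsampling of a (C, H, W) feature map."""
    c, h, w = f.shape
    return f.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def multi_scale_logits(feat, n_scales=3, seed=0):
    """Sketch: apply an (untrained, random) 1x1 'classifier' at each
    scale of the pyramid, mimicking the multi-scale discriminator."""
    rng = np.random.default_rng(seed)
    logits = []
    for _ in range(n_scales):
        w = rng.standard_normal(feat.shape[0])          # 1x1 conv weights
        logits.append(np.einsum('c,chw->hw', w, feat))  # per-location logit map
        feat = downsample2x(feat)                       # next (coarser) scale
    return logits

feat = np.random.default_rng(1).standard_normal((8, 16, 16))
outs = multi_scale_logits(feat)
assert [o.shape for o in outs] == [(16, 16), (8, 8), (4, 4)]
```

The fine logit map judges local texture, while the coarser maps judge increasingly global structure, matching the local/medium/global split discussed above.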

Figure 7: Translation results with linearly interpolated hidden vectors between two domains: images of Domain X generated from the hidden vectors, and images of Domain Y generated from the hidden vectors. The results show that the hidden vectors share a latent space, since reasonable images are successfully generated from hidden vectors linearly interpolated between the two domains.

Figure 8: The heat-map visualizations of the hidden vectors.

A.3.3 More visualizations of hidden vectors

The training process proceeds in terms of three kinds of losses: the adversarial loss, the identity reconstruction loss, and the cycle-consistency loss. The adversarial loss pursues domain transfer, while both the reconstruction loss and the cycle-consistency loss tackle the non-identifiability issue. As shown in Figure 7, our method enables meaningful hidden interpolations, since the shared-latent-space assumption is enforced by the NICE framework and the three kinds of losses in our training.
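The interpolation underlying Figure 7 is a plain linear blend of the two domains' hidden vectors before decoding; a short numpy sketch (variable names are ours):

```python
import numpy as np

def lerp(h_x, h_y, t):
    """Linearly interpolate between hidden vectors of the two domains;
    t = 0 recovers domain X's vector, t = 1 recovers domain Y's."""
    return (1.0 - t) * h_x + t * h_y

h_x, h_y = np.zeros(8), np.ones(8)
mid = lerp(h_x, h_y, 0.5)
assert np.allclose(mid, 0.5)
assert np.allclose(lerp(h_x, h_y, 0.0), h_x)
```

Feeding such blends to either decoder yields the in-between images of Figure 7; that they remain plausible is the evidence for a shared latent space.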

Figure 8 visualizes more heat-maps of the hidden vectors. Generally, the heat-maps from the model with NICE show more concise and distinguishable semantic encoding than those from the model without NICE (namely, with an independent encoder). This shows that NICE captures the texture and local parts of the object more clearly, exhibiting the superiority of NICE-GAN.

A.3.4 Additional comparisons with state-of-the-art methods

Due to the lack of a standard protocol so far, our experiments use the released codes to train all baselines for the same number of iterations for a fair comparison. Table 8 shows additional comparisons with state-of-the-art methods at the 200K-th iteration. Still, NICE-GAN (trained for more iterations) generally performs superiorly.
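For reference, the KID reported throughout is the unbiased squared MMD with the cubic polynomial kernel of Bińkowski et al. [3], computed on Inception features; below is a numpy sketch of that core computation, with random vectors standing in for the features:

```python
import numpy as np

def polynomial_kernel(x, y):
    """Cubic polynomial kernel used by KID: k(a, b) = (a.b / d + 1)^3."""
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** 3

def kid(feats_x, feats_y):
    """Unbiased MMD^2 estimate between two feature sets (KID)."""
    m, n = len(feats_x), len(feats_y)
    k_xx = polynomial_kernel(feats_x, feats_x)
    k_yy = polynomial_kernel(feats_y, feats_y)
    k_xy = polynomial_kernel(feats_x, feats_y)
    # Drop diagonal terms for the unbiased within-set estimators.
    sum_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    sum_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    return sum_xx + sum_yy - 2.0 * k_xy.mean()

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 32))
b = rng.standard_normal((64, 32))
# Samples from the same distribution score far lower than shifted ones.
assert kid(a, b) < kid(a, b + 3.0)
```

In practice KID is usually averaged over several random subsets of the features; this sketch computes a single estimate.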

Method \ Dataset   dog→cat          winter→summer   photo→vangogh   zebra→horse
NICE-GAN           42.22 / 0.73     77.51 / 1.37    126.29 / 4.35   138.77 / 3.26
U-GAT-IT-light     63.85 / 2.08     72.58 / 1.99    120.92 / 3.68   150.34 / 3.64
CycleGAN           93.72 / 3.46     77.01 / 1.07    115.74 / 2.90   140.65 / 3.64
UNIT               53.18 / 1.36     95.76 / 4.59    135.37 / 5.03   174.65 / 6.36
MUNIT              48.52 / 1.21     99.14 / 4.36    132.22 / 4.75   190.06 / 6.32
DRIT               63.13 / 2.75     83.30 / 2.03    126.11 / 4.28   164.92 / 6.78
Method \ Dataset   cat→dog          summer→winter   vangogh→photo   horse→zebra
NICE-GAN           34.71 / 0.61     78.87 / 0.78    107.53 / 2.99   75.64 / 1.77
U-GAT-IT-light     69.43 / 2.48     84.16 / 1.16    110.03 / 3.54   85.66 / 2.78
CycleGAN           103.95 / 5.41    78.39 / 0.82    117.88 / 3.08   68.11 / 1.52
UNIT               42.32 / 0.90     111.14 / 5.34   125.85 / 5.97   118.98 / 6.34
MUNIT              45.17 / 1.14     110.91 / 4.90   131.25 / 6.01   104.72 / 5.26
DRIT               53.19 / 1.73     81.64 / 1.10    111.46 / 3.76   92.26 / 4.58
Table 8: The FID and the KID×100 (each cell reports FID / KID×100) for different algorithms. Lower is better. All methods are trained to the 200K-th iteration.

A.3.5 More visualizations of translated images

In addition to the results presented in the paper, we show more generated images for the four datasets in Figures 9–16.

Figure 9: Examples of cat→dog translation images. As shown in these examples, the images generated by NICE-GAN, UNIT, and MUNIT have better quality.
Figure 10: Examples of dog→cat translation images. Most images look promising except those generated by CycleGAN and DRIT.
Figure 11: Examples of horse→zebra translation images. The translated images show that NICE-GAN has a better ability to add textures, apart from subtle color differences introduced during the translation process.
Figure 12: Examples of zebra→horse translation images. As shown in the examples, the images generated by U-GAT-IT gain the best results. The disadvantage of NICE-GAN still lies in subtle color differences.
Figure 13: Examples of summer→winter translation images. The images generated by the different methods gain relatively ideal and realistic results.
Figure 14: Examples of winter→summer translation images. The images generated by the different methods look promising except for those generated by CycleGAN and UNIT.
Figure 15: Examples of vangogh→photo translation images. The translation of vangogh→photo is a difficult task; most methods can barely accomplish it.
Figure 16: Examples of photo→vangogh translation images. The images generated by the different methods gain relatively ideal results except for DRIT.