1 Introduction
Generative models have come a long way. Researchers have proposed various generative models, such as the Restricted Boltzmann Machine (RBM) (Hinton and Salakhutdinov, 2006) and the Deep Boltzmann Machine (DBM) (Salakhutdinov and Hinton, 2009). Among these, the Variational Autoencoder (VAE) (Kingma and Welling, 2013) and the Generative Adversarial Network (GAN) (Goodfellow et al., 2014) are the most popular. Recently, GANs have been under the research spotlight since they produce sharper images and better image quality than other generative models. Accordingly, researchers have proposed variants of GANs, including architectural variants, e.g. the Deep Convolutional GAN (DCGAN) (Radford et al., 2015) and the Energy-Based GAN (EBGAN) (Zhao et al., 2016), and loss variants, e.g. the Least Squares GAN (LSGAN) (Mao et al., 2017) and the Wasserstein GAN (WGAN) (Arjovsky et al., 2017).
However, GAN and its variants have innate disadvantages which make GANs hard to train. It is difficult to balance the learning speed of the generator and the discriminator so the losses of these two networks oscillate. Besides, GANs often suffer from mode collapse. Among these disadvantages, what we want to emphasize is balancing the generator and the discriminator. Balancing the generator and the discriminator is hard because in many cases the discriminator converges faster than the generator.
To address this problem, Boundary Equilibrium GAN (BEGAN) (Berthelot et al., 2017) proposed an equilibrium hyperparameter that defines the ratio between the losses of the generator and the discriminator. BEGAN showed impressive performance with a simple architecture and demonstrated that balancing the generator and the discriminator is important for GAN performance. However, the BEGAN architecture is not widely applicable since it differs from that of other GANs: BEGAN uses an EBGAN-based architecture, which employs an autoencoder as the discriminator and consequently a pixelwise mean squared loss.
In this paper, we propose Unbalanced GANs, which use a pretrained variational decoder as the generator of a GAN. There have been similar attempts to combine VAE and GAN. One example is VAE-GAN (Larsen et al., 2015), which uses an end-to-end architecture configured in the order of variational encoder, decoder and discriminator. This architecture makes it possible to generate sharper and more realistic images than VAE. However, since VAE-GAN trains three networks simultaneously, it is difficult to extract the latent distribution in the variational encoder and to discriminate between real and generated images in the discriminator.
During the training process, we first train VAE with a given dataset. Then, we transfer the weights of the variational decoder to the generator. With the pretrained generator, we train GAN in the usual way. Our method can be used with any GAN: if there is a pretrained variational decoder that has the same network architecture as the generator, our method can be utilized to enhance the GAN's performance.
Since VAE is trained while assuming the prior distribution is the normal distribution, the approximate posterior distribution also becomes close to the normal distribution. If we initialize the weights of the generator using the pretrained variational decoder, we can start training GAN in a state where the generative distribution equals the variational decoder distribution. Because of this, we can prevent the fast convergence of the discriminator and stabilize the training of GAN.
Our contributions are as follows:

We combine VAE with GAN by using a pretrained variational decoder as the generator of GAN. Our method can be applied to any GAN by constructing a proper VAE architecture according to the GAN's architecture.

Using the pretrained generator, we prevent the discriminator from winning too easily (i.e., converging faster than the generator) at early epochs, while maintaining the stabilized learning process of GANs and reducing mode collapse.

We show faster convergence of the generator and the discriminator and better image quality at early epochs compared to ordinary GANs.
2 Related Work
2.1 Variational Autoencoder
VAE has two networks, an encoder and a decoder network. The encoder network $q_\phi(z \mid x)$ samples the latent distribution of given data $x$, assuming that the prior distribution over the latent variable $z$ is normal. The decoder network $p_\theta(x \mid z)$ reconstructs the data from the sampled latent variable:

$$z \sim q_\phi(z \mid x) = \mathcal{N}\left(\mu(x), \sigma^2(x)\right), \qquad \hat{x} \sim p_\theta(x \mid z) \tag{1}$$
VAE uses variational inference to approximate the posterior distribution. The VAE loss is the sum of a prior regularization term and a reconstruction loss:

$$\mathcal{L}_{\mathrm{VAE}} = -\mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] + D_{\mathrm{KL}}\left(q_\phi(z \mid x) \,\|\, p(z)\right) \tag{2}$$

where $D_{\mathrm{KL}}$ is the Kullback–Leibler (KL) divergence.
The reconstruction loss is the pixelwise Binary Cross Entropy (BCE) between real images $x$ and reconstructed images $\hat{x}$. The regularization term constrains the distribution of $z$ to be the zero-mean normal distribution by minimizing the KL divergence.
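The two terms of the VAE loss can be checked numerically. Below is a minimal numpy sketch, assuming a Bernoulli decoder (pixelwise BCE) and a diagonal-Gaussian encoder with outputs `mu` and `log_var`; the names are illustrative, not taken from the paper's code.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Sum of pixelwise BCE reconstruction loss and KL regularization."""
    eps = 1e-8  # numerical stability for the logarithms
    bce = -np.sum(x * np.log(x_recon + eps)
                  + (1 - x) * np.log(1 - x_recon + eps))
    # Closed-form KL divergence between N(mu, sigma^2) and N(0, I)
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
    return bce + kl
```

When `mu = 0` and `log_var = 0`, the KL term vanishes and the loss reduces to the reconstruction BCE alone.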
VAE is stable during training because it uses a pixelwise reconstruction error. However, since VAE is optimized for the average reconstruction loss over the given inputs, it produces blurry images.
If we use a pretrained variational decoder as a generator, we can utilize the advantages of both VAE and GAN. We can initialize the generative distribution with the variational decoder distribution, which approximates the normal distribution, and through GAN training we can make the blurry images of VAE sharp and clear. Furthermore, there is no concern about the failure of pretraining.
2.2 Generative Adversarial Network
GAN has two networks: a generator and a discriminator network. GAN is trained by a twoplayer game between the generator and the discriminator. The generator network creates samples to fool the discriminator. The discriminator network examines samples to determine whether the given input is real or generated.
Given random noise $z$ and real data $x$, the generator $G$ is trained to minimize, and the discriminator $D$ is trained to maximize,

$$V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right] \tag{3}$$

where $p_{\mathrm{data}}$ and $p_z$ are the real data distribution and the prior distribution, respectively. GAN usually uses the BCE loss for both the generator and the discriminator, although some variants of GANs use different losses such as mean squared error.
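In practice, the objective above is implemented as two BCE-style losses. A minimal numpy sketch follows, using the common non-saturating generator loss rather than the literal minimization of the value function; `d_real` and `d_fake` are illustrative names for the discriminator's outputs on real and generated samples.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # D maximizes log D(x) + log(1 - D(G(z))); we minimize the negative,
    # which is BCE against labels 1 (real) and 0 (fake).
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # Non-saturating variant: G maximizes log D(G(z))
    return -np.mean(np.log(d_fake))
```

At the balance point where the discriminator outputs 0.5 everywhere, the discriminator loss equals 2 log 2.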
When the generator is fixed, the optimal discriminator is

$$D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)} \tag{4}$$
Given the optimal discriminator, the GAN loss attains its global optimum when $p_g = p_{\mathrm{data}}$. At that point, the loss of the generator becomes

$$C(G) = -\log 4 + 2 \cdot \mathrm{JSD}\left(p_{\mathrm{data}} \,\|\, p_g\right) \tag{5}$$

where $\mathrm{JSD}$ is the Jensen–Shannon (JS) divergence.
This means that the generator is trained to minimize the distance between the generated and real data distributions. As a result, the generator learns to mimic the real data distribution.
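A quick discrete sanity check of the optimal discriminator above: when the generated distribution matches the data distribution, the optimal discriminator is 1/2 everywhere and the value function collapses to the global optimum value of $-\log 4$. A numpy sketch:

```python
import numpy as np

def optimal_discriminator(p_data, p_g):
    # D*(x) = p_data(x) / (p_data(x) + p_g(x))
    return p_data / (p_data + p_g)

# Discrete sanity check: when p_g == p_data, D* is 1/2 everywhere and
# the value function is log(1/2) + log(1/2) = -log 4.
p = np.array([0.2, 0.3, 0.5])
d_star = optimal_discriminator(p, p)
value = np.sum(p * np.log(d_star)) + np.sum(p * np.log(1 - d_star))
```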
GAN is one of the most promising generative models since it produces sharp and diverse images. However, because the GAN loss is a min-max loss and GAN trains the generator and the discriminator alternately, it is hard to reach the global optimum, which is a saddle point. To successfully reach the saddle point, balancing the generator and the discriminator is essential. But the discriminator usually converges faster than the generator, so it is hard to attain the balance between them. If GAN fails to balance them, it might fail to learn or suffer from mode collapse.
2.3 Boundary Equilibrium GAN
BEGAN proposed an equilibrium enforcing method between the losses of the generator and the discriminator. BEGAN used an autoencoder as the discriminator, an idea first proposed in EBGAN. While other GANs attempted to match the data distribution directly, BEGAN matched the autoencoder loss distributions of real and generated data by minimizing the Wasserstein distance.
To maintain the balance between the generator and the discriminator losses, BEGAN introduced an equilibrium hyperparameter $\gamma$:

$$\gamma = \frac{\mathbb{E}\left[\mathcal{L}(G(z))\right]}{\mathbb{E}\left[\mathcal{L}(x)\right]} \tag{6}$$
Besides, BEGAN borrowed the idea of Proportional Control Theory and used a variable $k_t$ to maintain the equilibrium $\gamma$. The BEGAN loss is as follows:

$$\mathcal{L}_D = \mathcal{L}(x) - k_t \cdot \mathcal{L}(G(z_D)), \qquad \mathcal{L}_G = \mathcal{L}(G(z_G)), \qquad k_{t+1} = k_t + \lambda_k\left(\gamma \mathcal{L}(x) - \mathcal{L}(G(z_G))\right) \tag{7}$$

where $\mathcal{L}$ is a pixelwise autoencoder loss

$$\mathcal{L}(v) = \left|v - D(v)\right|^{\eta}, \quad \eta \in \{1, 2\} \tag{8}$$
The discriminator of BEGAN has two objectives: autoencoding and reconstructing real images, and discriminating real images from generated images. BEGAN can balance these two objectives by adjusting $k_t$ and maintain the ratio of the generator and discriminator losses.
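The proportional-control update for BEGAN's balancing variable can be sketched in a few lines. This is a minimal sketch, assuming the common clipping of the variable to [0, 1] and an illustrative gain `lambda_k` of 0.001:

```python
def update_k(k, gamma, loss_real, loss_fake, lambda_k=0.001):
    # Proportional control: drive E[L(G(z))] toward gamma * E[L(x)].
    # If the generator's autoencoder loss is too small relative to the
    # target ratio, k grows and the discriminator focuses more on
    # penalizing generated samples.
    k = k + lambda_k * (gamma * loss_real - loss_fake)
    return min(max(k, 0.0), 1.0)  # clip to [0, 1]
```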
By introducing $\gamma$, BEGAN showed remarkable performance with a simple architecture, and pointed out that balancing the generator and the discriminator is crucial for improving GAN performance.
However, the BEGAN loss and $\gamma$ may not be applicable to other GANs, since BEGAN uses a pixelwise autoencoder loss. To use them, one is forced to use an autoencoder as the discriminator, so other discriminator structures cannot be used. Additionally, as BEGAN is trained to match the distribution of autoencoder losses of the ground truth data, it cannot solve mode collapse.
2.4 Transferring GANs
Transferring the knowledge of a pretrained network and fine-tuning are widely applied techniques to enhance the performance of discriminative models. Adopting these, Wang et al. (2018) studied transferring pretrained knowledge to GANs and proposed Transferring GANs.
They experimented with transferring knowledge on WGAN-GP (Gulrajani et al., 2017). WGAN-GP has a ResNet-based (He et al., 2016) architecture and is known to be stable and robust. Batch normalization (Ioffe and Szegedy, 2015) and layer normalization (Ba et al., 2016) are also used in the generator and the discriminator. They divided datasets into source and target domains: the source domain is the dataset that they pretrained the network on, and the target domain is the dataset that the pretrained GANs were adapted to. They first pretrained four WGAN-GP networks on four source datasets. Then, they randomly chose a target dataset and automatically estimated the most suitable pretrained network by calculating the Fréchet Inception Distance (FID) (Heusel et al., 2017) between the source and the target dataset. Next, they transferred the knowledge of the most suitable network to the chosen dataset. They experimented with every combination of initializing the generator and the discriminator with random or pretrained weights, and concluded that initializing both networks with pretrained weights obtained the best results.
Transferring GANs showed that transferring knowledge and domain adaptation can be used not only in discriminative models but also in generative models. Furthermore, they improved the performance of GANs in terms of faster convergence and better quality of generated images.
However, since Transferring GANs uses a GAN to pretrain a GAN, it is not free from the innate difficulties of GANs. If pretraining on the source datasets fails due to oscillating losses or mode collapse, it becomes difficult to transfer knowledge or adapt the pretrained network to the target domain.
3 Method
In this section, we describe how to train our model, Unbalanced GANs. We explain why we choose the variational decoder to pretrain the generator, and we provide details of the training process and the model architecture.
3.1 Variational Decoder as a Generator
There are mainly four reasons why we use the variational decoder to pretrain the generator.
First, VAE and GAN use similar losses to find the prior distribution. The VAE loss consists of two terms: a regularization loss and a reconstruction loss. The reconstruction loss is the pixelwise BCE between real and reconstructed images, and the regularization loss is the KL divergence between the prior distribution and the sampled distribution. While minimizing the VAE loss, the regularization loss is also minimized and thus the KL divergence between the two distributions decreases. The GAN loss is a min-max loss between the generator and the discriminator; when the discriminator reaches its global optimum, the GAN loss becomes the JS divergence between the real and generated data distributions. As the JS divergence and the KL divergence are both metrics of the distance between two distributions, VAE and GAN share the property that they are trained to minimize the distance between two distributions. Besides, minimizing the KL divergence also reduces the JS divergence, because the JS divergence is a symmetrized version of the KL divergence.
Second, the generator distribution can be initialized with the normal distribution. Since VAE assumes that the prior distribution is the normal distribution, the approximate posterior distribution is trained to match the normal distribution and becomes similar to it. On the other hand, GAN does not assume any distribution at the beginning of training, so the generator distribution is trained to approximate an arbitrary distribution. By initializing the generator distribution with the normal distribution, we can prevent the generator from converging to a strange distribution and suffering from mode collapse. Additionally, we can prevent the discriminator from winning too easily at early epochs. As the generator distribution is initialized with the variational decoder distribution, the generator produces blurry images at the beginning. However, after several epochs, the generator distribution approximates the ground truth data distribution and produces sharp images due to the GAN loss.
Third, VAE is stable. Unlike GAN, as VAE uses the pixelwise BCE (reconstruction) loss between real and reconstructed images, the variational encoder and decoder networks are trained to converge to a local minimum, and there is no need to balance the two networks. Hence, VAE does not suffer from mode collapse or oscillating losses. If we use a GAN to pretrain a GAN, there is a chance that pretraining fails; using VAE for pretraining fundamentally prevents such failure.
Fourth, both VAE and GAN can use the normal distribution to sample the noises that are used as inputs to the variational decoder and the generator. VAE uses the reparametrization trick to sample the latent distribution from the normal distribution, and GAN likewise uses the normal distribution to sample latent noises. If they used different types of inputs, it would be difficult to transfer weights; since they take the same type of input, we can transfer the weights of the variational decoder to the generator.
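The reparametrization trick mentioned above can be sketched as follows, assuming a diagonal Gaussian posterior; `mu` and `log_var` stand for the encoder outputs and are illustrative names:

```python
import numpy as np

def reparametrize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I); sampling stays a
    # deterministic function of (mu, log_var), so it is differentiable
    # with respect to the encoder parameters.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

With a very small variance the sample collapses onto the mean, which makes the role of the two encoder outputs easy to see.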
3.2 Training Unbalanced GANs
Training Unbalanced GANs is a sequential composition of training VAE and GAN. In Algorithm 1, we provide pseudocode of the training procedure of Unbalanced GANs.
We first set a target GAN (DCGAN, LSGAN or WGAN) to train and design its architecture. Then, based on the target GAN's architecture, we construct an architecture for VAE. We provide the details of the architecture in Section 3.3.
Next, we train and update the weights of VAE with a given dataset for several epochs. After training, VAE can generate blurry images similar to the dataset. We then transfer the weights of the variational decoder to the generator. Finally, we train and update the weights of GAN for several epochs.
The transferred weights of the variational decoder enable the generator to produce blurry but dataset-like images at the beginning of the training. This prevents the discriminator from overwhelming the generator at the beginning and thus balances the generator and the discriminator at the early stage of training.
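The weight transfer itself is a plain layer-by-layer copy, which is possible because the decoder and the generator share one architecture. A toy numpy sketch, with illustrative layer shapes that are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layers(shapes, rng):
    # Zero-centered normal initialization with standard deviation 0.02,
    # matching the initialization described in Section 4.1
    return [rng.normal(0.0, 0.02, s) for s in shapes]

shapes = [(128, 256), (256, 784)]      # toy layer shapes (illustrative)
decoder = init_layers(shapes, rng)     # stands in for the trained decoder
generator = init_layers(shapes, rng)   # freshly initialized generator

def transfer_weights(src, dst):
    # Shapes match layer by layer by construction, so transferring the
    # pretrained knowledge is a straight copy.
    assert all(s.shape == d.shape for s, d in zip(src, dst))
    return [s.copy() for s in src]

generator = transfer_weights(decoder, generator)
```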
3.3 Model Architecture
For the model architecture of GAN, we mainly follow the architectural guidelines for Deep Convolutional GANs. 3×3 convolutional layers with stride 2 are used for downsampling, and 3×3 fractional-strided convolutional layers with stride 2 are used for upsampling. The Leaky ReLU (Maas et al., 2013) activation function is used in the hidden layers of the variational decoder and the generator, and the ReLU (Nair and Hinton, 2010) activation function is used in the hidden layers of the variational encoder and the discriminator. Besides, we use the Tanh activation function for the output layer of the generator and the Sigmoid activation function for the output layer of the discriminator. We apply dropout layers (Srivastava et al., 2014) in the variational encoder and the discriminator to prevent overfitting. Additionally, we use batch normalization between all layers except for the output layer of the generator and the input layers of the variational encoder and the discriminator.
The architecture of VAE mirrors that of the target GAN. As we transfer the weights of the variational decoder to the generator, the variational decoder has the same architecture as the generator. The variational encoder also has a similar architecture to the discriminator except for the output layer: the variational encoder has a fully-connected output layer with the same dimensionality as the latent space to sample the latent distribution, whereas the discriminator has a single fully-connected output layer with the Sigmoid activation function to estimate the probability that the given input is real. We display one of the sample architectures of Unbalanced GANs in Figure 2.
The main idea of Unbalanced GANs is to pretrain the generator using the variational decoder, and the proposed method can be adopted by any GAN. The architecture of Unbalanced GANs can vary according to the generator architecture you want to pretrain; when constructing your own Unbalanced GAN, you have to construct a proper VAE to pretrain. We recommend using the same architecture for the variational decoder and the generator, and for the variational encoder and the discriminator.
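For concreteness, the spatial sizes produced by the 3×3 stride-2 layers can be checked with standard convolution arithmetic. This sketch assumes padding 1 and output padding 1, which the paper does not state explicitly; under these assumptions each downsampling layer halves the size (e.g., 28 → 14 → 7 for MNIST) and each upsampling layer doubles it:

```python
def conv_out(size, kernel=3, stride=2, pad=1):
    # Output size of a stride-2 3x3 convolution (downsampling)
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel=3, stride=2, pad=1, out_pad=1):
    # Output size of a stride-2 3x3 fractional-strided (transposed)
    # convolution (upsampling)
    return (size - 1) * stride - 2 * pad + kernel + out_pad
```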
4 Experiments
4.1 Setup
We trained three Unbalanced GANs, DCGAN, LSGAN and WGAN, on the MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2009) and LSUN Bedroom (Yu et al., 2015) datasets. We chose these GANs since, to the best of our knowledge, almost all variants of GANs are modified versions of them. We followed the main design concepts of each GAN in our implementations.
We trained our models using the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.0002, $\beta_1 = 0.5$ and $\beta_2 = 0.999$. We applied the same hyperparameters to both GAN and VAE. We used 128 for the latent dimension and 64 for the batch size. In the Leaky ReLU activation function, we used 0.2 for the slope of the leak. The dropout rate was set to 0.25. We initialized the weights of GAN and VAE with a zero-centered normal distribution with standard deviation 0.02, except for the generator, which is initialized with the transferred weights of the variational decoder.
We trained our models with 28, 32 and 64 image sizes according to the dataset, adding or removing convolutional layers. Training was done on two NVIDIA TITAN X GPUs. We pretrained VAE for about 50,000 steps and trained GAN for about the same number of steps.
4.2 MNIST
We used a 28×28 image size for the MNIST dataset. No data augmentation was applied to the images. Furthermore, we downsampled to 7×7 images in the discriminator. We made use of the labels of the dataset to train in a supervised way in order to achieve better performance.
We assessed our models in both qualitative and quantitative ways. For quantitative analysis, we used the convergence and the standard deviation of the losses, since reducing the standard deviation of the loss is one of the key aspects of designing high-performance GANs (Goodfellow, 2016).
We plot the results of the experiments in Figure 3. As shown in the loss graphs, Unbalanced GANs converge faster than ordinary GANs. The losses of ordinary GANs oscillate at the beginning of the training and become smaller as training proceeds, whereas the losses of Unbalanced GANs show only small oscillations from the start. Additionally, the losses at the early epochs differ little from those at subsequent epochs and remain almost constant. Unbalanced GANs show smaller standard deviations than ordinary GANs.
For qualitative analysis, we generated sample images of WGAN and Unbalanced WGAN at every epoch. We display some results in Figure 4. At epoch 1, WGAN generated images in which only the locations of the numbers and the boundaries of the background can be distinguished. On the other hand, Unbalanced WGAN generated images that can be identified as numbers. In later epochs, WGAN generated number-like but blurry and noisy images, while Unbalanced WGAN generated vivid and clear numbers.
[Figure: sample images generated by DCGAN, Unbalanced DCGAN, LSGAN and Unbalanced LSGAN at steps 1000, 5000, 10000 and 20000.]
4.3 CIFAR-10
For the CIFAR-10 dataset, we used a 32×32 image size and 4×4 downsampled images in the discriminator. Other conditions were the same as in the MNIST experiment. According to Transferring GANs, pretraining the generator can be harmful to the performance of GANs; however, our experiment showed a different outcome.
We did a numerical analysis using the difference in the Inception Score (Salimans et al., 2016) between ordinary GANs and Unbalanced GANs on the CIFAR-10 dataset. The Inception Score is a metric for GANs that measures sample quality and diversity using the Inception model (Szegedy et al., 2016). We display the plot of the Inception Score at each epoch in Figure 5.
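As a reference for how the metric behaves, here is a minimal numpy sketch of the Inception Score that takes precomputed class posteriors `p_yx` (one row per generated sample) instead of running the Inception model. A uniform posterior yields a score of 1, while confident and diverse predictions push the score toward the number of classes.

```python
import numpy as np

def inception_score(p_yx):
    # IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ), where p_yx holds the
    # classifier's class posteriors, one row per generated sample.
    p_y = p_yx.mean(axis=0)  # marginal class distribution over samples
    kl = np.sum(p_yx * (np.log(p_yx) - np.log(p_y)), axis=1)
    return np.exp(kl.mean())
```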
Surprisingly, Unbalanced GANs achieve higher Inception Scores at almost all epochs. In the case of DCGAN and Unbalanced DCGAN, there are some turnovers between them, but the Inception Scores of Unbalanced DCGAN stay above those of DCGAN overall. As training proceeds, the Inception Scores of both saturate and the gap between them narrows.
In the case of LSGAN and WGAN, the Inception Scores of Unbalanced GANs exceed those of the ordinary GANs. Unbalanced LSGAN shows a small enhancement compared to LSGAN, but Unbalanced WGAN significantly outperforms WGAN, with a large gap in the Inception Score.
We did not display the losses of our experiments on the CIFAR-10 dataset, but the results were similar to those on the MNIST dataset: the losses converge faster and the standard deviations are lower.
4.4 LSUN Bedroom
We also applied Unbalanced GANs to the LSUN Bedroom dataset. We used a 64×64 image size and downsampled to 4×4 images in the discriminator. We trained in an unsupervised way since the dataset has only one class.
Unlike on the MNIST and CIFAR-10 datasets, as the image size doubled, the loss of VAE increased and took a long time to converge. Besides, the VAE pretrained on the MNIST dataset produced less blurry, real-like images, but on the LSUN Bedroom dataset it produced blurry images in which only the overall structure can be recognized. The image size and the diversity of the dataset affected the quality of the images that the variational decoder generated.
Because of the low-quality variational decoder, the standard deviations of the losses of Unbalanced GANs and ordinary GANs show almost no difference, and the speed of convergence is not noticeably different either. However, the quality of the generated images at early epochs is still enhanced.
We show generated sample images of Unbalanced GANs and ordinary GANs in Figure 6. Unbalanced DCGAN showed better performance than DCGAN. At step 5000, DCGAN generated meaningless patterns unrelated to bedrooms; on the contrary, Unbalanced DCGAN generated blurry images with a structure similar to a bedroom. This trend persists in subsequent steps. At step 20000, DCGAN generated images that are similar in color but distorted, while Unbalanced DCGAN generated images with colors, structures and features similar to a bedroom.
The difference grows even bigger between LSGAN and Unbalanced LSGAN. LSGAN suffered from mode collapse at the beginning of the training and generated strange patterns. However, Unbalanced LSGAN generated bedroom-like images without suffering from mode collapse. This shows that Unbalanced LSGAN can learn in a more stable manner than LSGAN.
5 Conclusion
We introduced Unbalanced GANs, which pretrain the generator using a variational autoencoder. By pretraining the generator, we can prevent the fast convergence of the discriminator at early epochs and thus balance the generator and the discriminator. Unbalanced GANs outperform ordinary GANs with respect to faster convergence, lower loss variance and better image quality at early epochs. Furthermore, the training of Unbalanced GANs is more stable than that of ordinary GANs, since mode collapse happens in ordinary GANs but not in Unbalanced GANs. We believe that Unbalanced GANs can be widely applied to other GANs to enhance their performance: they do not require a complex training process, just sequentially training VAE and GAN.
Our approach enables stabilized transfer learning in GANs. Using VAE rather than GAN as the pretrained model eliminates concerns about the failure of pretraining, since training VAE is a matter of time rather than of failure. As with Inception models, if we construct VAEs pretrained well enough to generate real-like images, we can use them to transfer knowledge to the generator, and thus transfer learning can be used routinely in GANs.
References
M. Arjovsky, S. Chintala, and L. Bottou (2017). Wasserstein GAN. arXiv preprint arXiv:1701.07875.
J. L. Ba, J. R. Kiros, and G. E. Hinton (2016). Layer normalization. arXiv preprint arXiv:1607.06450.
D. Berthelot, T. Schumm, and L. Metz (2017). BEGAN: boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
I. Goodfellow (2016). NIPS 2016 tutorial: generative adversarial networks. arXiv preprint arXiv:1701.00160.
I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville (2017). Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5767–5777.
K. He, X. Zhang, S. Ren, and J. Sun (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637.
G. E. Hinton and R. R. Salakhutdinov (2006). Reducing the dimensionality of data with neural networks. Science 313 (5786), pp. 504–507.
S. Ioffe and C. Szegedy (2015). Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
D. P. Kingma and J. Ba (2014). Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
D. P. Kingma and M. Welling (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
A. Krizhevsky (2009). Learning multiple layers of features from tiny images. Technical report.
A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther (2015). Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
A. L. Maas, A. Y. Hannun, and A. Y. Ng (2013). Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, Vol. 30, pp. 3.
X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley (2017). Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802.
V. Nair and G. E. Hinton (2010). Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814.
A. Radford, L. Metz, and S. Chintala (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
R. Salakhutdinov and G. Hinton (2009). Deep Boltzmann machines. In Artificial Intelligence and Statistics, pp. 448–455.
T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016). Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234–2242.
N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15 (1), pp. 1929–1958.
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016). Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826.
Y. Wang, C. Wu, L. Herranz, J. van de Weijer, A. Gonzalez-Garcia, and B. Raducanu (2018). Transferring GANs: generating images from limited data. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 218–234.
F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao (2015). LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365.
J. Zhao, M. Mathieu, and Y. LeCun (2016). Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126.