Energy-based Generative Adversarial Networks (EBGAN)
We introduce the "Energy-based Generative Adversarial Network" model (EBGAN), which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similarly to the probabilistic GANs, the generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function makes it possible to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of the EBGAN framework that uses an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images.
The essence of the energy-based model (LeCun et al., 2006) is to build a function that maps each point of an input space to a single scalar, which is called "energy". The learning phase is a data-driven process that shapes the energy surface in such a way that the desired configurations get assigned low energies, while the incorrect ones are given high energies. Supervised learning falls into this framework: for each $X$ in the training set, the energy of the pair $(X, Y)$ takes low values when $Y$ is the correct label and higher values for incorrect $Y$'s. Similarly, when modeling $X$ alone within an unsupervised learning setting, lower energy is attributed to the data manifold. The term contrastive sample is often used to refer to a data point causing an energy pull-up, such as the incorrect $Y$'s in supervised learning and points from low data density regions in unsupervised learning.

Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) have led to significant improvements in image generation (Denton et al., 2015; Radford et al., 2015; Im et al., 2016; Salimans et al., 2016), video prediction (Mathieu et al., 2015) and a number of other domains. The basic idea of GAN is to simultaneously train a discriminator and a generator. The discriminator is trained to distinguish real samples of a dataset from fake samples produced by the generator. The generator uses input from an easy-to-sample random source, and is trained to produce fake samples that the discriminator cannot distinguish from real data samples. During training, the generator receives the gradient of the output of the discriminator with respect to the fake sample. In the original formulation of GAN in Goodfellow et al. (2014), the discriminator produces a probability and, under certain conditions, convergence occurs when the distribution produced by the generator matches the data distribution. From a game theory point of view, the convergence of a GAN is reached when the generator and the discriminator reach a Nash equilibrium.
In this work, we propose to view the discriminator as an energy function (or a contrast function) without explicit probabilistic interpretation. The energy function computed by the discriminator can be viewed as a trainable cost function for the generator. The discriminator is trained to assign low energy values to the regions of high data density, and higher energy values outside these regions. Conversely, the generator can be viewed as a trainable parameterized function that produces samples in regions of the space to which the discriminator assigns low energy. While it is often possible to convert energies into probabilities through a Gibbs distribution (LeCun et al., 2006), the absence of normalization in this energy-based form of GAN provides greater flexibility in the choice of architecture of the discriminator and the training procedure.
The probabilistic binary discriminator in the original formulation of GAN can be seen as one way among many to define the contrast function and loss functional, as described in LeCun et al. (2006) for the supervised and weakly supervised settings, and Ranzato et al. (2007) for unsupervised learning. We experimentally demonstrate this concept in the setting where the discriminator is an auto-encoder architecture and the energy is the reconstruction error. More details on the interpretation of EBGAN are provided in appendix B.
Our main contributions are summarized as follows:
An energy-based formulation for generative adversarial training.
A proof that under a simple hinge loss, when the system reaches convergence, the generator of EBGAN produces points that follow the underlying data distribution.
An EBGAN framework with the discriminator using an auto-encoder architecture in which the energy is the reconstruction error.
A set of systematic experiments to explore hyper-parameters and architectural choices that produce good results for both EBGANs and probabilistic GANs.
A demonstration that the EBGAN framework can be used to generate reasonable-looking high-resolution images from the ImageNet dataset at 256×256 pixel resolution, without a multi-scale approach.

Let $p_{data}$ be the underlying probability density of the distribution that produces the dataset. The generator $G$ is trained to produce a sample $G(z)$, for instance an image, from a random vector $z$, which is sampled from a known distribution $p_z$, for instance $\mathcal{N}(0,1)$. The discriminator $D$ takes either real or generated images, and estimates the energy value $E \in \mathbb{R}$ accordingly, as explained later. For simplicity, we assume that $D$ produces non-negative values, but the analysis would hold as long as the values are bounded below.

The output of the discriminator goes through an objective functional in order to shape the energy function, attributing low energy to the real data samples and higher energy to the generated ("fake") ones. In this work, we use a margin loss, but many other choices are possible as explained in LeCun et al. (2006). Similarly to what has been done with the probabilistic GAN (Goodfellow et al., 2014), we use two different losses, one to train $D$ and the other to train $G$, in order to get better quality gradients when the generator is far from convergence.
Given a positive margin $m$, a data sample $x$ and a generated sample $G(z)$, the discriminator loss $\mathcal{L}_D$ and the generator loss $\mathcal{L}_G$ are formally defined by:

$$\mathcal{L}_D(x, z) = D(x) + \big[m - D(G(z))\big]^+ \tag{1}$$
$$\mathcal{L}_G(z) = D(G(z)) \tag{2}$$

where $[\cdot]^+ = \max(0, \cdot)$. Minimizing $\mathcal{L}_G$ with respect to the parameters of $G$ is similar to maximizing the second term of $\mathcal{L}_D$. It has the same minimum but non-zero gradients when $D(G(z)) \geq m$.
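As a concrete illustration, here is a minimal PyTorch sketch of these two losses; the function name is ours (not from any released implementation), and `D` is assumed to output one non-negative energy per sample:

```python
import torch

def ebgan_losses(D, G, x, z, margin=10.0):
    """Sketch of equations 1 and 2.

    D: network mapping a batch of images to non-negative per-sample energies.
    G: network mapping a batch of noise vectors to fake images.
    The default margin of 10 mirrors the MNIST grid-search setting.
    """
    fake = G(z)
    # L_D = D(x) + [m - D(G(z))]^+  (no gradient flows into G here)
    loss_d = D(x).mean() + torch.clamp(margin - D(fake.detach()), min=0).mean()
    # L_G = D(G(z))  (gradient flows into G through the fake samples)
    loss_g = D(fake).mean()
    return loss_d, loss_g
```

In an actual training loop, `loss_d` would be minimized with respect to the discriminator parameters and `loss_g` with respect to the generator parameters, in alternating steps.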
In this section, we present a theoretical analysis of the system presented in section 2.1. We show that if the system reaches a Nash equilibrium, then the generator produces samples that are indistinguishable from the distribution of the dataset. This section is done in a non-parametric setting, i.e. we assume that $D$ and $G$ have infinite capacity.
Given a generator $G$, let $p_G$ be the density distribution of $G(z)$ where $z \sim p_z$. In other words, $p_G$ is the density distribution of the samples generated by $G$.

We define $V(G, D) = \int_{x,z} \mathcal{L}_D(x, z)\, p_{data}(x)\, p_z(z)\, \mathrm{d}x\, \mathrm{d}z$ and $U(G, D) = \int_z \mathcal{L}_G(z)\, p_z(z)\, \mathrm{d}z$. We train the discriminator $D$ to minimize the quantity $V$ and the generator $G$ to minimize the quantity $U$.
A Nash equilibrium of the system is a pair $(G^*, D^*)$ that satisfies:

$$V(G^*, D^*) \leq V(G^*, D) \quad \forall D \tag{3}$$
$$U(G^*, D^*) \leq U(G, D^*) \quad \forall G \tag{4}$$
Theorem 1. If $(D^*, G^*)$ is a Nash equilibrium of the system, then $p_{G^*} = p_{data}$ almost everywhere, and $V(D^*, G^*) = m$.
Proof. First we observe that

$$V(G^*, D) = \int_x p_{data}(x)\, D(x)\, \mathrm{d}x + \int_z p_z(z)\, \big[m - D(G^*(z))\big]^+ \mathrm{d}z \tag{5}$$
$$= \int_x \Big( p_{data}(x)\, D(x) + p_{G^*}(x)\, \big[m - D(x)\big]^+ \Big)\, \mathrm{d}x. \tag{6}$$

The analysis of the function $\varphi(y) = a\,y + b\,[m - y]^+$ (see lemma 1 in appendix A for details) shows:

(a) $D^*(x) \leq m$ almost everywhere. To verify it, let us assume that there exists a set of measure non-zero such that $D^*(x) > m$. Let $\widetilde{D}(x) = \min(D^*(x), m)$. Then $V(G^*, \widetilde{D}) < V(G^*, D^*)$, which violates equation 3.

(b) The function $\varphi$ reaches its minimum in $m$ if $a < b$ and in $0$ otherwise. So $V(G^*, D)$ reaches its minimum when we replace $D^*(x)$ by these values. We obtain

$$V(G^*, D^*) = m \int_x \mathbb{1}_{p_{data}(x) < p_{G^*}(x)}\, p_{data}(x)\, \mathrm{d}x + m \int_x \mathbb{1}_{p_{data}(x) \geq p_{G^*}(x)}\, p_{G^*}(x)\, \mathrm{d}x \tag{7}$$
$$= m \int_x \mathbb{1}_{p_{data}(x) < p_{G^*}(x)}\, p_{data}(x)\, \mathrm{d}x + m \int_x \big(1 - \mathbb{1}_{p_{data}(x) < p_{G^*}(x)}\big)\, p_{G^*}(x)\, \mathrm{d}x \tag{8}$$
$$= m \int_x \mathbb{1}_{p_{data}(x) < p_{G^*}(x)} \big(p_{data}(x) - p_{G^*}(x)\big)\, \mathrm{d}x + m \int_x p_{G^*}(x)\, \mathrm{d}x \tag{9}$$
$$= m + m \int_x \mathbb{1}_{p_{data}(x) < p_{G^*}(x)} \big(p_{data}(x) - p_{G^*}(x)\big)\, \mathrm{d}x. \tag{10}$$

The second term in equation 10 is non-positive, so $V(G^*, D^*) \leq m$.
Theorem 2. A Nash equilibrium of this system exists and is characterized by (a) $p_{G^*} = p_{data}$ (almost everywhere) and (b) there exists a constant $\gamma \in [0, m]$ such that $D^*(x) = \gamma$ (almost everywhere).¹

¹ This is assuming there is no region where $p_{data}(x) = 0$. If such a region exists, $D^*(x)$ may have any value in $[0, m]$ for $x$ in this region.
See appendix A. ∎
In our experiments, the discriminator $D$ is structured as an auto-encoder:

$$D(x) = \big\| Dec(Enc(x)) - x \big\|. \tag{13}$$
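A minimal sketch of such an auto-encoder energy function in PyTorch; the fully-connected layer sizes are placeholders for illustration, not the architectures used in the experiments:

```python
import torch.nn as nn

class AEDiscriminator(nn.Module):
    """Auto-encoder discriminator: energy = per-sample reconstruction error."""
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        recon = self.dec(self.enc(x))
        # D(x) = ||Dec(Enc(x)) - x||, one non-negative energy per sample
        return (recon - x).norm(dim=1)
```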
The diagram of the EBGAN model with an auto-encoder discriminator is depicted in figure 1. The choice of auto-encoders for $D$ may seem arbitrary at first glance, yet we postulate that it is conceptually more attractive than a binary logistic network:
Rather than using a single bit of target information to train the model, the reconstruction-based output offers diverse targets for the discriminator. With the binary logistic loss, only two targets are possible, so within a minibatch the gradients corresponding to different samples are most likely far from orthogonal. This leads to inefficient training, and reducing the minibatch size is often not an option on current hardware. The reconstruction loss, on the other hand, will likely produce very different gradient directions within the minibatch, allowing for a larger minibatch size without loss of efficiency.
Auto-encoders have traditionally been used to represent energy-based models and arise naturally in this framework. When trained with some regularization terms (see section 2.3.1), auto-encoders can learn an energy manifold without supervision or negative examples. This means that even when an EBGAN auto-encoding model is trained to reconstruct a real sample, the discriminator contributes to discovering the data manifold by itself. On the contrary, without the presence of negative examples from the generator, a discriminator trained with binary logistic loss becomes pointless.
One common issue in training auto-encoders is that the model may learn little more than an identity function, meaning that it attributes zero energy to the whole space. In order to avoid this problem, the model must be pushed to give higher energy to points outside the data manifold. Theoretical and experimental work has addressed this issue by regularizing the latent representations (Vincent et al., 2010; Rifai et al., 2011; Marc'Aurelio Ranzato & Chopra, 2007; Kavukcuoglu et al., 2010). Such regularizers aim at restricting the reconstruction power of the auto-encoder so that it can only attribute low energy to a smaller portion of the input points.
We argue that the energy function (the discriminator) in the EBGAN framework can also be seen as being regularized by having a generator produce the contrastive samples, to which the discriminator ought to assign high reconstruction energies. We further argue that the EBGAN framework allows more flexibility from this perspective, because: (i) the regularizer (generator) is fully trainable instead of being handcrafted; (ii) the adversarial training paradigm enables a direct interaction between producing contrastive samples and learning the energy function.
We propose a "repelling regularizer" which fits well into the EBGAN auto-encoder model, purposely keeping the model from producing samples that are clustered in one or only a few modes of $p_{data}$. Another technique, "minibatch discrimination", was developed by Salimans et al. (2016) from the same philosophy.
Implementing the repelling regularizer involves a pulling-away term (PT) that operates at the representation level. Formally, let $S \in \mathbb{R}^{s \times N}$ denote a batch of sample representations taken from the encoder output layer. Let us define PT as:

$$f_{PT}(S) = \frac{1}{N(N-1)} \sum_i \sum_{j \neq i} \left( \frac{S_i^\top S_j}{\|S_i\|\, \|S_j\|} \right)^2 \tag{14}$$
PT operates on a mini-batch and attempts to orthogonalize the pairwise sample representations. It is inspired by prior work showing the representational power of the encoder in auto-encoder-like models, such as Rasmus et al. (2015) and Zhao et al. (2015). The rationale for choosing the cosine similarity instead of Euclidean distance is to make the term bounded below and invariant to scale. We use the notation "EBGAN-PT" to refer to the EBGAN auto-encoder model trained with this term. Note that PT is used in the generator loss but not in the discriminator loss.
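A sketch of equation 14 in PyTorch, operating on a minibatch of encoder codes; the function name and the (N, s) layout of `S` are our conventions:

```python
import torch

def pulling_away_term(S):
    """f_PT(S): mean squared pairwise cosine similarity within a minibatch.

    S has shape (N, s): N samples, each an s-dimensional encoder code.
    The N*(N-1) ordered off-diagonal pairs are averaged.
    """
    n = S.size(0)
    S = S / S.norm(dim=1, keepdim=True)        # unit-normalize each code
    gram = S @ S.t()                           # pairwise cosine similarities
    mask = ~torch.eye(n, dtype=torch.bool, device=S.device)
    return (gram[mask] ** 2).sum() / (n * (n - 1))
```

Because the codes are unit-normalized, each squared cosine term lies in [0, 1], so the term is bounded below by 0 and invariant to the scale of the representations.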
Our work primarily casts GANs into an energy-based model scope. In this direction, approaches studying contrastive samples are relevant to EBGAN, such as the use of noisy samples (Vincent et al., 2010) and noisy gradient descent methods like contrastive divergence (Carreira-Perpinan & Hinton, 2005). From the perspective of GANs, several papers have aimed to improve the stability of GAN training (Salimans et al., 2016; Denton et al., 2015; Radford et al., 2015; Im et al., 2016; Mathieu et al., 2015). Kim & Bengio (2016) propose a probabilistic GAN and cast it into an energy-based density estimator by using the Gibbs distribution. Quite unlike EBGAN, their framework does not get rid of the computationally challenging partition function, so the choice of the energy function is required to be integrable.
In this section, we study the training stability of EBGANs compared with GANs on a simple task: MNIST digit generation with fully-connected networks. We run an exhaustive grid search over a set of architectural choices and hyper-parameters for both frameworks.
Formally, we specify the search grid in table 1. We impose the following restrictions on EBGAN models: (i) using learning rate 0.001 and Adam (Kingma & Ba, 2014) for both $G$ and $D$; (ii) nLayerD represents the total number of layers combining $Enc$ and $Dec$; for simplicity, we fix $Dec$ to be one layer and only tune the number of layers in $Enc$; (iii) the margin is set to 10 and not tuned. To analyze the results, we use the inception score (Salimans et al., 2016) as a numerical means of reflecting the generation quality. Some slight modifications of the formulation were made to make figure 2 visually more approachable while maintaining the score's original meaning² (more details in appendix C). Briefly, a higher score implies better generation quality.

² This form of the "inception score" is only used to better analyze the grid search in the scope of this work, not to compare with any other published work.
| Settings | Description | EBGANs | GANs |
|---|---|---|---|
| nLayerG | number of layers in G | [2, 3, 4, 5] | [2, 3, 4, 5] |
| nLayerD | number of layers in D | [2, 3, 4, 5] | [2, 3, 4, 5] |
| sizeG | number of neurons in G | [400, 800, 1600, 3200] | [400, 800, 1600, 3200] |
| sizeD | number of neurons in D | [128, 256, 512, 1024] | [128, 256, 512, 1024] |
| dropoutD | whether to use dropout in D | [true, false] | [true, false] |
| optimD | whether to use Adam or SGD for D | adam | [adam, sgd] |
| optimG | whether to use Adam or SGD for G | adam | [adam, sgd] |
| lr | learning rate | 0.001 | [0.01, 0.001, 0.0001] |
| #experiments: | - | 512 | 6144 |
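The experiment counts in the last row follow directly from the grid sizes; a small script to enumerate both grids (variable names are ours):

```python
from itertools import product

ebgan_grid = {
    "nLayerG": [2, 3, 4, 5], "nLayerD": [2, 3, 4, 5],
    "sizeG": [400, 800, 1600, 3200], "sizeD": [128, 256, 512, 1024],
    "dropoutD": [True, False],
    # optimD/optimG are fixed to Adam and lr to 0.001 for EBGANs
}
gan_grid = dict(ebgan_grid,
                optimD=["adam", "sgd"], optimG=["adam", "sgd"],
                lr=[0.01, 0.001, 0.0001])

for name, grid in [("EBGANs", ebgan_grid), ("GANs", gan_grid)]:
    n_configs = len(list(product(*grid.values())))
    print(name, n_configs)   # EBGANs 512, GANs 6144
```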
Histograms. We plot the histogram of scores in figure 2. We further separate out the optimization-related settings from the GAN grid (optimD, optimG and lr) and plot the histogram of each sub-grid individually, together with the EBGAN scores as a reference, in figure 3. The number of experiments for GANs and EBGANs is 512 in every subplot. The histograms clearly show that EBGANs are more reliably trained.
Digits generated from the configurations presenting the best inception score are shown in figure 4.
We explore the potential of using the EBGAN framework for semi-supervised learning on permutation-invariant MNIST, using 100, 200 and 1000 labels respectively. We utilize a bottom-layer-cost Ladder Network (LN) (Rasmus et al., 2015) within the EBGAN framework (EBGAN-LN). The Ladder Network can be categorized as an energy-based model built with both feedforward and feedback hierarchies powered by stage-wise lateral connections coupling the two pathways.

One technique we found crucial in enabling the EBGAN framework for semi-supervised learning is to gradually decay the margin value $m$ of equation 1. The rationale is to let the discriminator punish the generator less when $p_G$ gets closer to the data manifold. One can think of the extreme case where the contrastive samples are exactly pinned on the data manifold, such that they are "not contrastive anymore". This ultimate state happens when $m = 0$, and the EBGAN-LN model falls back to a normal Ladder Network. The undesirability of a non-decaying margin when using the discriminator in the GAN or EBGAN framework is also indicated by Theorem 2: at convergence, the discriminator reflects a flat energy surface. However, we posit that the trajectory of learning an EBGAN-LN model does provide the LN (discriminator) with more information by letting it see contrastive samples. The optimal way to avoid the mentioned undesirability is thus to make sure $m$ has decayed to $0$ when the Nash equilibrium is reached. The margin decay schedule is found by hyper-parameter search in our experiments (technical details in appendix D).
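One possible shape for such a schedule is a simple linear decay, sketched below; the linear form is our illustrative assumption (the actual schedule is found by hyper-parameter search), with the initial value 16 and the 60,000-step horizon taken from the EBGAN-LN details in appendix D:

```python
def margin_at_step(step, m_init=16.0, decay_steps=60_000):
    """Linearly decay the margin m from m_init to 0, then hold it at 0.

    Illustrative only: the paper searches over decay schedules rather
    than prescribing this particular one.
    """
    return max(0.0, m_init * (1.0 - step / decay_steps))
```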
Table 2 shows that positioning a bottom-layer-cost LN into an EBGAN framework profitably improves the performance of the LN itself. We postulate that, within the scope of the EBGAN framework, iteratively feeding the adversarial contrastive samples produced by the generator to the energy function acts as an effective regularizer; the contrastive samples can be thought of as an extension to the dataset that provides more information to the classifier. We noticed a discrepancy between the results reported by Rasmus et al. (2015) and Pezeshki et al. (2015), so we report both results along with our own implementation of the Ladder Network run under the same setting. The specific experimental setting and analysis are available in appendix D.
| model | 100 labels | 200 labels | 1000 labels |
|---|---|---|---|
| LN bottom-layer-cost, reported in Pezeshki et al. (2015) | 1.69±0.18 | - | 1.05±0.02 |
| LN bottom-layer-cost, reported in Rasmus et al. (2015) | 1.09±0.32 | - | 0.90±0.05 |
| LN bottom-layer-cost, reproduced in this work (see appendix D) | 1.36±0.21 | 1.24±0.09 | 1.04±0.06 |
| LN bottom-layer-cost within EBGAN framework | 1.04±0.12 | 0.99±0.12 | 0.89±0.04 |
| Relative percentage improvement | 23.5% | 20.2% | 14.4% |
We apply the EBGAN framework with a deep convolutional architecture to generate RGB images, a more realistic task, using the LSUN bedroom dataset (Yu et al., 2015) and the large-scale aligned face dataset CelebA (Liu et al., 2015). To compare EBGANs with DCGANs (Radford et al., 2015), we train a DCGAN model under the same configuration and show its generations side by side with the EBGAN model's in figures 5 and 6. The specific settings are listed in appendix C.
Finally, we trained EBGANs to generate high-resolution images on ImageNet (Russakovsky et al., 2015). Compared with the datasets we have experimented with so far, ImageNet presents a far larger and wilder space, so modeling the data distribution with a generative model becomes very challenging. We devised an experiment to generate 128×128 images, trained on the full ImageNet-1k dataset, which contains roughly 1.3 million images from 1000 different categories. We also trained a network to generate images of size 256×256 on a dog-breed subset of ImageNet, using the WordNet IDs provided by Vinyals et al. (2016). The results are shown in figures 7 and 8. Despite the difficulty of generating images at high resolution, we observe that EBGANs are able to learn that objects appear in the foreground, together with various background components resembling grass texture, sea under the horizon, mirrored mountains in the water, buildings, etc. In addition, our dog-breed generations, although far from realistic, do reflect some knowledge about the appearance of dogs, such as their body, fur and eyes.
We bridge two classes of unsupervised learning methods – GANs and auto-encoders – and revisit the GAN framework from an alternative energy-based perspective. EBGANs show a better convergence pattern and scalability for generating high-resolution images. A family of energy-based loss functionals presented in LeCun et al. (2006) can easily be incorporated into the EBGAN framework. For future work, the conditional setting (Denton et al., 2015; Mathieu et al., 2015) is a promising setup to explore. We hope future research will draw more attention to a broader view of GANs from the energy-based perspective.
We thank Emily Denton, Soumith Chintala, Arthur Szlam, Marc'Aurelio Ranzato, Pablo Sprechmann, Ross Goroshin and Ruoyu Sun for fruitful discussions. We also thank Emily Denton and Tian Jiang for their help with the manuscript.
References

Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738, 2015.

Rifai, S., Vincent, P., Muller, X., Glorot, X., and Bengio, Y. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 833–840, 2011.

Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.

Lemma 1. Let $a, b \geq 0$ and $\varphi(y) = a\,y + b\,[m - y]^+$. The minimum of $\varphi$ on $[0, +\infty)$ exists and is reached in $m$ if $a < b$, and it is reached in $0$ otherwise (the minimum may not be unique).
Proof. The function $\varphi$ is defined on $[0, +\infty)$; its derivative is defined on $[0, m) \cup (m, +\infty)$, with $\varphi'(y) = a - b$ if $y \in [0, m)$ and $\varphi'(y) = a$ if $y \in (m, +\infty)$.

So when $a < b$, the function is decreasing on $[0, m)$ and increasing on $(m, +\infty)$. Since it is continuous, it has a minimum in $m$. It may not be unique if $a = 0$ or $a = b$.

On the other hand, if $a \geq b$, the function is non-decreasing on $[0, +\infty)$, so $0$ is a minimum.
∎
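A quick numerical sanity check of lemma 1, evaluating $\varphi(y) = ay + b[m - y]^+$ on a grid (purely illustrative):

```python
import numpy as np

def phi(y, a, b, m):
    return a * y + b * np.maximum(m - y, 0.0)

y = np.linspace(0.0, 30.0, 3001)
m = 10.0
for a, b in [(1.0, 2.0), (2.0, 1.0)]:      # a < b, then a >= b
    y_min = y[np.argmin(phi(y, a, b, m))]
    print(a, b, y_min)                     # ~m when a < b, 0 otherwise
```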
Lemma 2. If $p$ and $q$ are probability densities, then $\int_x \mathbb{1}_{p(x) < q(x)} \big(p(x) - q(x)\big)\, \mathrm{d}x = 0$ if and only if $p = q$ almost everywhere.
Proof. Let us assume that $\int_x \mathbb{1}_{p(x)<q(x)} \big(p(x) - q(x)\big)\, \mathrm{d}x = 0$. Then

$$0 = \int_x \mathbb{1}_{p(x)<q(x)} \big(p(x) - q(x)\big)\, \mathrm{d}x \tag{15}$$
$$= \int_x \big(1 - \mathbb{1}_{p(x) \geq q(x)}\big) \big(p(x) - q(x)\big)\, \mathrm{d}x \tag{16}$$
$$= \int_x \big(p(x) - q(x)\big)\, \mathrm{d}x - \int_x \mathbb{1}_{p(x) \geq q(x)} \big(p(x) - q(x)\big)\, \mathrm{d}x \tag{17}$$
$$= (1 - 1) - \int_x \mathbb{1}_{p(x) \geq q(x)} \big(p(x) - q(x)\big)\, \mathrm{d}x \tag{18}$$
$$= - \int_x \mathbb{1}_{p(x) \geq q(x)} \big(p(x) - q(x)\big)\, \mathrm{d}x, \tag{19}$$

so

$$\int_x \mathbb{1}_{p(x) \geq q(x)} \big(p(x) - q(x)\big)\, \mathrm{d}x = 0. \tag{20}$$

Since the term in the integral is always non-negative, $\mathbb{1}_{p(x) \geq q(x)} \big(p(x) - q(x)\big) = 0$ for almost all $x$. And $p(x) > q(x)$ implies $\mathbb{1}_{p(x) \geq q(x)} \big(p(x) - q(x)\big) > 0$, so $p \leq q$ almost everywhere. Since $p$ and $q$ both integrate to $1$, we have $\int_x \big(q(x) - p(x)\big)\, \mathrm{d}x = 0$ with a non-negative integrand, hence $p = q$ almost everywhere, which completes the proof, given the hypothesis. The converse is immediate: if $p = q$ almost everywhere, the integrand in the hypothesis vanishes almost everywhere. ∎
Proof. The sufficient conditions are obvious. The necessary condition on $p_{G^*}$ comes from theorem 1, and the necessary condition $D^*(x) \leq m$ comes from the proof of theorem 1.

Let us now assume that $D^*(x)$ is not constant almost everywhere and find a contradiction. If it is not, then there exists a constant $C$ and a set $\mathcal{S}$ of non-zero measure such that $\forall x \in \mathcal{S},\ D^*(x) \leq C$ and $\forall x \notin \mathcal{S},\ D^*(x) > C$. In addition we can choose $\mathcal{S}$ such that there exists a subset $\mathcal{S}' \subset \mathcal{S}$ of non-zero measure such that $p_{data}(x) > 0$ on $\mathcal{S}'$ (because of the assumption in the footnote). We can build a generator $G_0$ such that $p_{G_0}(x) \geq p_{data}(x)$ over $\mathcal{S}$ and $p_{G_0}(x) \leq p_{data}(x)$ over $\mathcal{S}^c$, with strict inequality on a set of non-zero measure. We compute

$$U(G_0, D^*) - U(G^*, D^*) = \int_x D^*(x) \big(p_{G_0}(x) - p_{data}(x)\big)\, \mathrm{d}x \tag{21}$$
$$= \int_{\mathcal{S}} D^*(x) \big(p_{G_0}(x) - p_{data}(x)\big)\, \mathrm{d}x + \int_{\mathcal{S}^c} D^*(x) \big(p_{G_0}(x) - p_{data}(x)\big)\, \mathrm{d}x \tag{22}$$
$$< C \int_{\mathcal{S}} \big(p_{G_0}(x) - p_{data}(x)\big)\, \mathrm{d}x + C \int_{\mathcal{S}^c} \big(p_{G_0}(x) - p_{data}(x)\big)\, \mathrm{d}x \tag{23}$$
$$= C \int_x \big(p_{G_0}(x) - p_{data}(x)\big)\, \mathrm{d}x = 0, \tag{24}$$

which violates equation 4. ∎
GANs can be interpreted in two complementary ways. In the first interpretation, the key component is the generator, and the discriminator plays the role of a trainable objective function. Let us imagine that the data lies on a manifold. Until the generator produces samples that are recognized as being on the manifold, it gets a gradient indicating how to modify its output so that it approaches the manifold. In such a scenario, the discriminator acts to punish the generator when it produces samples that are outside the manifold. This can be understood as a way to train the generator with a set of possible desired outputs (e.g. the manifold) instead of a single desired output as in traditional supervised learning.
For the second interpretation, the key component is the discriminator, and the generator is merely trained to produce contrastive samples. In section 4.2, we show that by iteratively and interactively feeding contrastive samples, the generator enhances the semi-supervised learning performance of the discriminator (e.g., the Ladder Network).
For training both EBGANs and GANs in the grid search, we use the following settings:
Batch normalization (Ioffe & Szegedy, 2015) is applied after each weight layer, except for the generator output layer and the discriminator input layer (Radford et al., 2015).
Training images are scaled to the range [-1, 1]. Correspondingly, the generator output layer is followed by a Tanh function.
ReLU is used as the non-linearity function.
Initialization: the weights in $D$ and in $G$ are initialized from zero-mean Gaussian distributions, and the biases are initialized to 0.
We evaluate the models from the grid search by calculating a modified version of the inception score, $I' = E_x\big[\, KL\big(p(y) \,\|\, p(y|x)\big) \,\big]$, where $x$ denotes a generated sample and $y$ is the label predicted by an MNIST classifier that is trained off-line using the entire MNIST training set. Two main changes were made to its original form: (i) we swap the order of the distribution pair; (ii) we omit the $e^{(\cdot)}$ operation. The modified score condenses the histograms in figure 2 and figure 3. It is also worth noting that although we inherit the name "inception score" from Salimans et al. (2016), the evaluation is not related to the "inception" model trained on the ImageNet dataset. The classifier is a regular 3-layer ConvNet trained on MNIST.
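A sketch of this modified score, assuming a hypothetical pretrained classifier `clf` that returns per-sample class probabilities over the 10 digits:

```python
import torch

def modified_inception_score(clf, samples, eps=1e-8):
    """I' = E_x[ KL(p(y) || p(y|x)) ] over a batch of generated samples.

    Relative to Salimans et al. (2016): the distribution pair inside the
    KL divergence is swapped and the final exp(.) is omitted.
    """
    with torch.no_grad():
        p_y_given_x = clf(samples)                  # shape (N, 10)
    p_y = p_y_given_x.mean(dim=0, keepdim=True)     # marginal label dist.
    kl = (p_y * (torch.log(p_y + eps)
                 - torch.log(p_y_given_x + eps))).sum(dim=1)
    return kl.mean().item()
```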
The generations shown in figure 4 come from the best GAN and the best EBGAN from the grid search, i.e., the configurations obtaining the best inception scores.
We use a deep convolutional generator analogous to DCGAN's and a deep convolutional auto-encoder for the discriminator. The auto-encoder is composed of strided convolution modules in the feedforward pathway and fractional-strided convolution modules in the feedback pathway. We leave the usage of upsampling or switches-unpooling (Zhao et al., 2015) to future research. We also followed the guidance suggested by Radford et al. (2015) for training EBGANs. The configuration of the deep auto-encoder is:

Encoder: (64)4c2s-(128)4c2s-(256)4c2s
Decoder: (128)4c2s-(64)4c2s-(3)4c2s

where "(64)4c2s" denotes a convolution/deconvolution layer with 64 output feature maps, kernel size 4 and stride 2. The margin $m$ is set to different values for LSUN and for CelebA.
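Read literally, this notation maps onto standard strided (de)convolutions; below is a hedged PyTorch rendering of the encoder/decoder, where the ReLU activations and padding of 1 (so each layer exactly halves or doubles the spatial resolution) are our assumptions:

```python
import torch.nn as nn

# "(64)4c2s" = (de)convolution with 64 output maps, 4x4 kernel, stride 2.
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),      # (64)4c2s
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),    # (128)4c2s
    nn.Conv2d(128, 256, 4, stride=2, padding=1),              # (256)4c2s
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),        # (3)4c2s
)
```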
We built deeper models for both the 128×128 and 256×256 experiments, in a similar fashion to section 4.3.

128×128 model:
Generator: (1024)4c-(512)4c2s-(256)4c2s-(128)4c2s-(64)4c2s-(64)4c2s-(3)3c
Noise #planes: 100-64-32-16-8-4
Encoder: (64)4c2s-(128)4c2s-(256)4c2s-(512)4c2s
Decoder: (256)4c2s-(128)4c2s-(64)4c2s-(3)4c2s
Margin:

256×256 model:
Generator: (2048)4c-(1024)4c2s-(512)4c2s-(256)4c2s-(128)4c2s-(64)4c2s-(64)4c2s-(3)3c
Noise #planes: 100-64-32-16-8-4-2
Encoder: (64)4c2s-(128)4c2s-(256)4c2s-(512)4c2s
Decoder: (256)4c2s-(128)4c2s-(64)4c2s-(3)4c2s
Margin:
Note that we feed noise into every layer of the generator, where each noise component is initialized into a 4D tensor and concatenated with the current feature maps in the feature space. Such a strategy is also employed by Salimans et al. (2016).
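For one generator stage, this noise injection might look like the following sketch (the function name and the use of fresh Gaussian noise per call are our assumptions):

```python
import torch

def inject_noise(feature_maps, n_noise_planes):
    """Concatenate per-stage noise with the current generator feature maps.

    feature_maps: (B, C, H, W) activations at some generator stage.
    n_noise_planes: noise channels for this stage (e.g. 64, 32, ...).
    """
    b, _, h, w = feature_maps.shape
    z = torch.randn(b, n_noise_planes, h, w, device=feature_maps.device)
    return torch.cat([feature_maps, z], dim=1)   # channel-wise concatenation
```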
As stated in section 4.2, we chose a bottom-layer-cost Ladder Network as our baseline model. Specifically, we utilize an identical architecture to that reported in both papers (Rasmus et al., 2015; Pezeshki et al., 2015), namely a fully-connected network of size 784-1000-500-250-250-250, with batch normalization and ReLU following each linear layer. To obtain a strong baseline, we tuned the weight of the reconstruction cost over a set of four candidate values, while fixing the weight on the classification cost. In the meantime, we also tuned the learning rate with values {0.002, 0.001, 0.0005, 0.0002, 0.0001}. We adopted Adam as the optimizer with $\beta_1$ set to 0.5. The minibatch size was set to 100. All the experiments finish by 120,000 steps. We use the same learning rate decay mechanism as in the published papers – starting from two-thirds of the total steps (i.e., from step #80,000), the learning rate decays linearly to 0. The result reported in section 4.2 was obtained with the best tuned setting.
We place the same Ladder Network architecture into our EBGAN framework and train this EBGAN-LN model the same way as we train the EBGAN auto-encoder model. Technically, we started training the EBGAN-LN model from the margin value 16 and gradually decayed it to 0 within the first 60,000 steps. By that point, we found that the reconstruction error on real images had already become low, reaching the limit of the architecture (the Ladder Network itself), and the generated images exhibited good quality (shown in figure 10). Thereafter we turned off training of the generator but kept training the discriminator for another 120,000 steps. We set separate initial learning rates for the discriminator and the generator. The other settings are kept consistent with the best baseline LN model. The learning rate decay started at step #120,000 (also two-thirds of the total steps).
Notice that we used the 28×28 version (unpadded) of the MNIST dataset in the EBGAN-LN experiment. For the EBGAN auto-encoder grid search experiments, we used the zero-padded version, i.e., size 32×32. No notable difference was found due to the zero-padding.

We generally took the norm of the discrepancy between the input and its reconstruction as the loss term in the EBGAN auto-encoder model, as formally written in section 2.1. However, for the EBGAN-LN experiment, we followed the original implementation of the Ladder Network and used a vanilla form of the $L_2$ loss.

Borrowed from Salimans et al. (2016), batch normalization is adopted without the learned scale parameter $\gamma$ but merely with a bias term $\beta$. It remains unknown whether such a trick could affect learning in some non-negligible way, so this might have made our baseline model not a strict reproduction of the published models by Rasmus et al. (2015) and Pezeshki et al. (2015).
It is crucial to set a proper energy margin value $m$ in the EBGAN framework, from both a theoretical and an experimental perspective. Hereby we provide a few tips:
Delving into the formulation of the discriminator loss in equation 1, we suggest a numerical balance between its two terms, which concern the real and the fake samples respectively. The second term is bounded above by $m$ (assuming the energy function $D$ is non-negative). It is desirable to make the first term bounded in a similar range. In theory, the upper bound of the first term is essentially determined by (i) the capacity of $D$ and (ii) the complexity of the dataset.
In practice, for the EBGAN auto-encoder model, one can run $D$ (the auto-encoder) alone on the real sample dataset and monitor the loss. When it converges, the resulting loss implies a rough limit on how well such a setting of $D$ is capable of fitting the dataset. This usually suggests a good starting point for a hyper-parameter search on $m$ (see the sketch after these tips).
An overly large $m$ results in training instability/difficulty, while an overly small $m$ is prone to the mode-dropping problem. This property of $m$ is depicted in figure 9.
One successful technique, as we introduced in appendix D, is to start from a large $m$ and gradually decay it to 0 as training proceeds. Unlike the feature-matching semi-supervised learning technique proposed by Salimans et al. (2016), we show in figure 10 that not only does the EBGAN-LN model achieve good semi-supervised learning performance, it also produces satisfactory generations.
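The margin-calibration recipe from the second tip might be sketched as follows; the training-loop helpers and names are ours, and `real_loader` is assumed to yield batches of real images:

```python
from itertools import cycle

def calibrate_margin(D, real_loader, optimizer, steps=10_000):
    """Train the auto-encoder discriminator D alone on real samples and
    return its converged reconstruction loss: a rough limit on how well
    D can fit the data, and hence a starting point for searching m."""
    loss_val = None
    for _, x in zip(range(steps), cycle(real_loader)):
        loss = D(x).mean()           # D(x) is the reconstruction energy
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        loss_val = loss.item()
    return loss_val
```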
Setting aside these practical tips, the theoretical understanding of EBGAN in section 2.2 also provides some insight for setting a feasible $m$. For instance, as implied by Theorem 2, setting a large $m$ results in a broader range $[0, m]$ of constants $\gamma$ to which $D$ may converge. Instability may come with an overly large $m$ because it generates two strong gradients pointing in opposite directions from the loss in equation 1, which demands a more finicky optimization setting.
For the LSUN bedroom dataset, aside from the experiment on whole images, we also train an EBGAN auto-encoder model based on dataset augmentation by cropping patches. All the patches are of the same size and are cropped from the original images. The generations are shown in figure 11.
To further demonstrate how the pulling-away term (PT) may influence EBGAN auto-encoder model training, we chose both the whole-image and augmented-patch versions of the LSUN bedroom dataset, together with the CelebA dataset, for further experimentation. Comparisons of EBGAN and EBGAN-PT generations are shown in figure 12, figure 13 and figure 14. Note that all comparison pairs adopt identical architectural and hyper-parameter settings as in section 4.3, and the cost weight on the PT is kept fixed.