Unrolled Generative Adversarial Networks
We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
The use of deep neural networks as generative models for complex data has made great advances in recent years. This success has been achieved through a surprising diversity of training losses and model architectures, including denoising autoencoders
(Vincent et al., 2010), variational autoencoders (Kingma & Welling, 2013; Rezende et al., 2014; Gregor et al., 2015; Kulkarni et al., 2015; Burda et al., 2015; Kingma et al., 2016), generative stochastic networks (Alain et al., 2015), diffusion probabilistic models (Sohl-Dickstein et al., 2015), autoregressive models (Theis & Bethge, 2015; van den Oord et al., 2016a, b), real non-volume preserving transformations (Dinh et al., 2014, 2016), Helmholtz machines (Dayan et al., 1995; Bornschein et al., 2015), and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014).

While most deep generative models are trained by maximizing log likelihood or a lower bound on log likelihood, GANs take a radically different approach that does not require inference or explicit calculation of the data likelihood. Instead, two models are used to solve a minimax game: a generator which samples data, and a discriminator which classifies the data as real or generated. In theory these models are capable of modeling an arbitrarily complex probability distribution. When using the optimal discriminator for a given class of generators, the original GAN proposed by Goodfellow et al. minimizes the Jensen-Shannon divergence between the data distribution and the generator, and extensions generalize this to a wider class of divergences (Nowozin et al., 2016; Sonderby et al., 2016; Poole et al., 2016).

The ability to train extremely flexible generating functions, without explicitly computing likelihoods or performing inference, and while targeting more mode-seeking divergences, has made GANs extremely successful in image generation (Odena et al., 2016; Salimans et al., 2016; Radford et al., 2015)
, and image super resolution
(Ledig et al., 2016). The flexibility of the GAN framework has also enabled a number of successful extensions of the technique, for instance for structured prediction (Reed et al., 2016a, b; Odena et al., 2016), training energy based models
(Zhao et al., 2016), and combining the GAN loss with a mutual information loss (Chen et al., 2016).

In practice, however, GANs suffer from many issues, particularly during training. One common failure mode involves the generator collapsing to produce only a single sample or a small family of very similar samples. Another involves the generator and discriminator oscillating during training, rather than converging to a fixed point. In addition, if one agent becomes much more powerful than the other, the learning signal to the other agent becomes useless, and the system does not learn. To train GANs many tricks must be employed, such as careful selection of architectures (Radford et al., 2015), minibatch discrimination (Salimans et al., 2016), and noise injection (Salimans et al., 2016; Sonderby et al., 2016)
. Even with these tricks the set of hyperparameters for which training is successful is generally very small in practice.
Once converged, the generative models produced by the GAN training procedure normally do not cover the whole distribution (Dumoulin et al., 2016; Che et al., 2016)
, even when targeting a mode-covering divergence such as KL. Additionally, because it is intractable to compute the GAN training loss, and because approximate measures of performance such as Parzen window estimates suffer from major flaws
(Theis et al., 2016), evaluation of GAN performance is challenging. Currently, human judgement of sample quality is one of the leading metrics for evaluating GANs. In practice this metric does not take into account mode dropping if the number of modes is greater than the number of samples one is visualizing. In fact, the mode dropping problem generally helps visual sample quality, as the model can choose to focus on only the most common modes. These common modes correspond, by definition, to more typical samples. Additionally, the generative model is able to allocate more expressive power to the modes it does cover than it would if it attempted to cover all modes.

Many optimization schemes, including SGD, RMSProp
(Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2014), consist of a sequence of differentiable updates to parameters. Gradients can be backpropagated through unrolled optimization updates in a similar fashion to backpropagation through a recurrent neural network. The parameters output by the optimizer can thus be included, in a differentiable way, in another objective
(Maclaurin et al., 2015). This idea was first suggested for minimax problems in (Pearlmutter & Siskind, 2008), while (Zhang & Lesser, 2010) provided a theoretical analysis and experimental results on differentiating through a single step of gradient ascent for simple matrix games. Differentiating through unrolled optimization was first scaled to deep networks in (Maclaurin et al., 2015), where it was used for hyperparameter optimization. More recently, (Belanger & McCallum, 2015; Han et al., 2016; Andrychowicz et al., 2016) backpropagate through optimization procedures in contexts unrelated to GANs or minimax games.

In this work we address the challenges of unstable optimization and mode collapse in GANs by unrolling optimization of the discriminator objective during training.
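As a concrete sketch of differentiating through an unrolled optimizer (a hypothetical toy, not code from this paper), the snippet below runs T gradient descent steps on the quadratic loss L(w) = w²/2 and differentiates the final loss with respect to the learning rate by central finite differences; an automatic differentiation package would compute the same quantity by backpropagating through the update steps:

```python
def unrolled_sgd_loss(alpha, w0=3.0, T=10):
    # run T differentiable gradient descent steps on L(w) = w**2 / 2,
    # making the final loss a function of the learning rate alpha
    w = w0
    for _ in range(T):
        w = w - alpha * w  # dL/dw = w
    return 0.5 * w ** 2

def dloss_dalpha(alpha, eps=1e-6):
    # gradient of the final loss w.r.t. the hyperparameter,
    # taken through the unrolled optimization steps
    return (unrolled_sgd_loss(alpha + eps) - unrolled_sgd_loss(alpha - eps)) / (2 * eps)
```

Here the unrolled loss is 0.5·w0²·(1−α)^(2T), so the numerical hyperparameter gradient can be checked against the closed form −T·w0²·(1−α)^(2T−1).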
The GAN learning problem is to find the optimal generator parameters $\theta_G^*$ in a minimax objective,

$$\theta_G^* = \underset{\theta_G}{\operatorname{argmin}} \max_{\theta_D} f(\theta_G, \theta_D) \tag{1}$$
$$= \underset{\theta_G}{\operatorname{argmin}} f(\theta_G, \theta_D^*(\theta_G)) \tag{2}$$
$$\theta_D^*(\theta_G) = \underset{\theta_D}{\operatorname{argmax}} f(\theta_G, \theta_D), \tag{3}$$

where $f$ is commonly chosen to be

$$f(\theta_G, \theta_D) = \mathbb{E}_{x \sim p_{\text{data}}}\left[\log\left(D(x; \theta_D)\right)\right] + \mathbb{E}_{z \sim \mathcal{N}(0, I)}\left[\log\left(1 - D(G(z; \theta_G); \theta_D)\right)\right]. \tag{4}$$
Here $x$ is the data variable, $z$ is the latent variable, $p_{\text{data}}$ is the data distribution, the discriminator $D(\cdot; \theta_D)$ outputs the estimated probability that a sample comes from the data distribution, $\theta_D$ and $\theta_G$ are the discriminator and generator parameters, and the generator function $G(\cdot; \theta_G)$ transforms a sample in the latent space into a sample in the data space.
For the minimax loss in Eq. 4, the optimal discriminator $D^*(x)$ is a known smooth function of the generator probability $p_G(x)$ (Goodfellow et al., 2014),

$$D^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_G(x)}. \tag{5}$$
When the generator loss in Eq. 2 is rewritten directly in terms of $p_G(x)$ and Eq. 5 rather than $\theta_G$ and $\theta_D^*(\theta_G)$, then it is similarly a smooth function of $p_G(x)$. These smoothness guarantees are typically lost when $D(x)$ and $G(z)$ are drawn from parametric families. They nonetheless suggest that the true generator objective in Eq. 2 will often be well behaved, and is a desirable target for direct optimization.
Explicitly solving for the optimal discriminator parameters $\theta_D^*(\theta_G)$ for every update step of the generator is computationally infeasible for discriminators based on neural networks. Therefore this minimax optimization problem is typically solved by alternating gradient descent on $\theta_G$ and ascent on $\theta_D$.
The optimal solution $\theta^* = \{\theta_G^*, \theta_D^*\}$ is a fixed point of these iterative learning dynamics. Additionally, if $f$ is convex in $\theta_G$ and concave in $\theta_D$, then alternating gradient descent (ascent) trust region updates are guaranteed to converge to the fixed point, under certain additional weak assumptions (Juditsky et al., 2011). However in practice $f$ is typically very far from convex in $\theta_G$ and concave in $\theta_D$, and updates are not constrained in an appropriate way. As a result GAN training suffers from mode collapse, undamped oscillations, and other problems detailed in Section 1.1. In order to address these difficulties, we will introduce a surrogate objective function $f_K(\theta_G, \theta_D)$ for training the generator which more closely resembles the true generator objective $f(\theta_G, \theta_D^*(\theta_G))$.
A local optimum of the discriminator parameters $\theta_D^*$ can be expressed as the fixed point of an iterative optimization procedure,

$$\theta_D^0 = \theta_D \tag{6}$$
$$\theta_D^{k+1} = \theta_D^k + \eta^k \frac{\partial f(\theta_G, \theta_D^k)}{\partial \theta_D^k} \tag{7}$$
$$\theta_D^*(\theta_G) = \lim_{k \to \infty} \theta_D^k, \tag{8}$$

where $\eta^k$ is the learning rate schedule. For clarity, we have expressed Eq. 7 as a full batch steepest gradient ascent equation. More sophisticated optimizers can be similarly unrolled. In our experiments we unroll Adam (Kingma & Ba, 2014).
By unrolling for $K$ steps, we create a surrogate objective for the update of the generator,

$$f_K(\theta_G, \theta_D) = f(\theta_G, \theta_D^K(\theta_G, \theta_D)). \tag{9}$$
When $K = 0$ this objective corresponds exactly to the standard GAN objective, while as $K \to \infty$ it corresponds to the true generator objective function $f(\theta_G, \theta_D^*(\theta_G))$. By adjusting the number of unrolling steps $K$, we are thus able to interpolate between standard GAN training dynamics with their associated pathologies, and more costly gradient descent on the true generator loss.
The generator and discriminator parameter updates using this surrogate loss are

$$\theta_G \leftarrow \theta_G - \eta \frac{d f_K(\theta_G, \theta_D)}{d \theta_G} \tag{10}$$
$$\theta_D \leftarrow \theta_D + \eta \frac{\partial f(\theta_G, \theta_D)}{\partial \theta_D}. \tag{11}$$
For clarity we use full batch steepest gradient descent (ascent) with stepsize $\eta$ above, while in experiments we instead use minibatch Adam for both updates. The gradient in Eq. 10 requires backpropagating through the optimization process in Eq. 7. A clear description of differentiation through gradient descent is given as Algorithm 2 in (Maclaurin et al., 2015), though in practice the use of an automatic differentiation package means this step does not need to be programmed explicitly. A pictorial representation of these updates is provided in Figure 1.
It is important to distinguish this from an approach suggested in (Goodfellow et al., 2014), that several update steps of the discriminator parameters should be run before each single update step for the generator. In that approach, the update steps for both models are still gradient descent (ascent) with respect to fixed values of the other model's parameters, rather than the surrogate loss we describe in Eq. 9. Performing $K$ steps of discriminator update between each single step of generator update corresponds to updating the generator parameters using only the first term in Eq. 12 below.
To better understand the behavior of the surrogate loss $f_K$, we examine its gradient with respect to the generator parameters $\theta_G$,

$$\frac{d f_K(\theta_G, \theta_D)}{d \theta_G} = \frac{\partial f(\theta_G, \theta_D^K(\theta_G, \theta_D))}{\partial \theta_G} + \frac{\partial f(\theta_G, \theta_D^K(\theta_G, \theta_D))}{\partial \theta_D^K(\theta_G, \theta_D)} \frac{d \theta_D^K(\theta_G, \theta_D)}{d \theta_G}. \tag{12}$$
Standard GAN training corresponds exactly to updating the generator parameters using only the first term in this gradient, with $\theta_D^K$ being the parameters resulting from the discriminator update step. An optimal generator for any fixed discriminator is a delta function at the $x$ to which the discriminator assigns highest data probability. Therefore, in standard GAN training, each generator update step is a partial collapse towards a delta function.
The second term captures how the discriminator would react to a change in the generator. It reduces the tendency of the generator to engage in mode collapse. For instance, the second term reflects that as the generator collapses towards a delta function, the discriminator reacts and assigns lower probability to that state, increasing the generator loss. It therefore discourages the generator from collapsing, and may improve stability.
As $K \to \infty$, $\theta_D^K$ goes to a local optimum of $f$, where $\frac{\partial f}{\partial \theta_D^K} = 0$, and therefore the second term in Eq. 12 goes to 0 (Danskin, 1967). The gradient of the unrolled surrogate loss $f_K$ with respect to $\theta_G$ is thus identical to the gradient of the standard GAN loss both when $K = 0$ and when $K \to \infty$, where we take $K \to \infty$ to imply that in the standard GAN the discriminator is also fully optimized between each generator update. Between these two extremes, $f_K$ captures additional information about the response of the discriminator to changes in the generator.
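The interpolation between these two regimes can be checked numerically on a toy payoff (a hypothetical scalar game, not the GAN loss of Eq. 4): f(g, d) = gd − d²/2 is concave in d with optimum d*(g) = g, the surrogate is built by unrolling gradient ascent on d (Eq. 6-8), and its total derivative is taken through the unrolled steps by finite differences rather than backpropagation:

```python
def f(g, d):
    # toy payoff, concave in d; the optimal "discriminator" is d*(g) = g
    return g * d - 0.5 * d ** 2

def unroll_d(g, d0, K, eta=0.5):
    # K steps of gradient ascent on f with respect to d (Eq. 6-8)
    d = d0
    for _ in range(K):
        d = d + eta * (g - d)  # df/dd = g - d
    return d

def surrogate(g, d0, K):
    # f_K(g, d) = f(g, d_K(g, d))  (Eq. 9)
    return f(g, unroll_d(g, d0, K))

def surrogate_grad(g, d0, K, eps=1e-5):
    # total derivative df_K/dg through the unrolled steps (Eq. 10, 12),
    # via central finite differences instead of autodiff
    return (surrogate(g + eps, d0, K) - surrogate(g - eps, d0, K)) / (2 * eps)
```

At K = 0 the surrogate gradient equals the standard gradient ∂f/∂g = d₀; for large K it approaches g, the gradient of the true generator objective f(g, d*(g)) = g²/2, and the second term of Eq. 12 has vanished.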
GANs can be thought of as a game between the discriminator ($D$) and the generator ($G$). The agents take turns taking actions and updating their parameters until a Nash equilibrium is reached. The optimal action for $D$ is to evaluate the probability ratio in Eq. 5 for the generator's move. The optimal generator action is to move its mass to maximize this ratio.
The initial move for $G$ will be to move as much mass as its parametric family and update step permit to the single point that maximizes the ratio of probability densities. The action $D$ will then take is quite simple. It will track that point, and to the extent allowed by its own parametric family and update step, assign low data probability to it and uniform probability everywhere else. This cycle of moving and following will repeat forever or converge, depending on the rate of change of the two agents. This is similar to the situation in simple matrix games like rock-paper-scissors and matching pennies, where alternating gradient descent (ascent) with a fixed learning rate is known not to converge (Singh et al., 2000; Bowling & Veloso, 2002).
In the unrolled case, however, this undesirable behavior no longer occurs. Now $G$'s actions take into account how $D$ will respond. In particular, $G$ will try to make steps that $D$ will have a hard time responding to. This extra information helps the generator spread its mass to make the next $D$ step less effective, instead of collapsing to a point.
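This effect can be reproduced in a matching-pennies-style bilinear game f(g, d) = g·d (a hypothetical two-parameter toy, not a GAN): plain simultaneous gradient descent (ascent) spirals away from the equilibrium at the origin, while giving the generator a one-step-unrolled surrogate, f₁(g, d) = f(g, d + η·∂f/∂d) = gd + ηg² with df₁/dg = d + 2ηg, pulls the dynamics back toward it:

```python
def play(unroll, g=1.0, d=1.0, eta=0.1, steps=500):
    # simultaneous updates on the bilinear game f(g, d) = g * d;
    # the generator descends in g, the discriminator ascends in d
    for _ in range(steps):
        grad_g = d + 2 * eta * g if unroll else d  # df1/dg vs plain df/dg
        g, d = g - eta * grad_g, d + eta * g       # the d update uses the old g
    return (g * g + d * d) ** 0.5                  # distance from the equilibrium
```

Without unrolling, the distance from the equilibrium grows by a factor of sqrt(1 + η²) every step; with the one-step surrogate the spectral radius of the update drops below 1 and the iterates contract.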
In principle, a surrogate loss function could be used for both $G$ and $D$. In the case of 1-step unrolled optimization this is known to lead to convergence for games in which gradient descent (ascent) fails (Zhang & Lesser, 2010). However, the motivation for using the surrogate generator loss in Section 2.2, of unrolling the inner of two nested min and max functions, does not apply to using a surrogate discriminator loss. Additionally, it is more common for the discriminator to overpower the generator than vice-versa when training a GAN. Giving more information to $G$ by allowing it to 'see into the future' may thus help the two models be more balanced.

In this section we demonstrate improved mode coverage and stability by applying this technique to five datasets of increasing complexity. Evaluation of generative models is a notoriously hard problem (Theis et al., 2016)
. As such the de facto standard in GAN literature has become sample quality as evaluated by a human and/or evaluated by a heuristic (Inception score for example,
(Salimans et al., 2016)). While these evaluation metrics do a reasonable job of capturing sample quality, they fail to capture sample diversity. In our first two experiments diversity is easily evaluated via visual inspection. In our later experiments this is not the case, and we will use a variety of methods to quantify coverage of samples. Each of our measures is individually strongly suggestive of unrolling reducing mode collapse and improving stability, but none of them alone is conclusive. Taken together, however, we believe they provide compelling evidence for the advantages of unrolling.
When doing stochastic optimization, we must choose which minibatches to use in the unrolling updates in Eq. 7. We experimented with both a fixed minibatch and re-sampled minibatches for each unrolling step, and found it did not significantly impact the result. We use fixed minibatches for all experiments in this section.
We provide a reference implementation of this technique at github.com/poolio/unrolled_gan.
To illustrate the impact of discriminator unrolling, we train a simple GAN architecture on a 2D mixture of 8 Gaussians arranged in a circle. For a detailed list of architecture and hyperparameters see Appendix A. Figure 2 shows the dynamics of this model through time. Without unrolling, the generator rotates around the valid modes of the data distribution but is never able to spread out mass. When unrolling steps are added, G quickly learns to spread probability mass and the system converges to the data distribution.
In Appendix B we perform further experiments on this toy dataset. We explore how unrolling compares to historical averaging, and how it compares to using the unrolled discriminator to update the generator without backpropagating through the unrolling steps. In both cases we find that the unrolled objective performs better.
To evaluate the ability of this approach to improve trainability, we look to a traditionally challenging family of models to train – recurrent neural networks (RNNs). In this experiment we try to generate MNIST samples using an LSTM (Hochreiter & Schmidhuber, 1997)
. MNIST digits are 28x28 pixel images. At each timestep of the generator LSTM, it outputs one column of this image, so that after 28 timesteps it has output the entire sample. We use a convolutional neural network as the discriminator. See Appendix
C for the full model and training details. Unlike in all previously successful GAN models, there is no symmetry between the generator and the discriminator in this task, resulting in a more complex power balance. Results can be seen in Figure 3. Once again, without unrolling the model quickly collapses, and rotates through a sequence of single modes. Instead of rotating spatially, it cycles through proto-digit like blobs. When running with unrolling steps the generator disperses and appears to cover the whole data distribution, as in the 2D example.

GANs suffer from two different types of mode collapse – collapse to a subset of data modes, and collapse to a sub-manifold within the data distribution. In these experiments we isolate both effects using artificially constructed datasets, and demonstrate that unrolling can largely rescue both types of collapse.
To explore the degree to which GANs drop discrete modes in a dataset, we use a technique similar to one from (Che et al., 2016). We construct a dataset by stacking three randomly chosen MNIST digits, so as to construct an RGB image with a different MNIST digit in each color channel. This new dataset has 1,000 distinct modes, corresponding to each combination of the ten MNIST classes in the three channels.
We train a GAN on this dataset, and generate samples from the trained model (25,600 samples for all experiments). We then compute the predicted class label of each color channel using a pre-trained MNIST classifier. To evaluate performance, we use two metrics: the number of modes for which the generator produced at least one sample, and the KL divergence between the model and the expected data distribution. Within this discrete label space, a KL divergence can be estimated tractably between the generated samples and the data distribution over classes, where the data distribution is a uniform distribution over all 1,000 classes.
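Both metrics can be computed from the sampled label triples in a few lines of numpy (a sketch in our notation; `labels` is assumed to be an array of integer mode ids, e.g. 100·d₁ + 10·d₂ + d₃ from the per-channel classifier outputs):

```python
import numpy as np

def mode_metrics(labels, n_modes=1000):
    # labels: one integer mode id per generated sample (one of n_modes classes)
    counts = np.bincount(labels, minlength=n_modes)
    covered = int((counts > 0).sum())        # modes with at least one sample
    p = counts / counts.sum()                # empirical generator distribution
    q = 1.0 / n_modes                        # uniform data distribution
    mask = p > 0
    kl = float(np.sum(p[mask] * np.log(p[mask] / q)))  # KL(model || data)
    return covered, kl
```

A generator covering all modes uniformly gives a KL divergence near 0, while full collapse to one mode gives the maximum value log(1000) ≈ 6.9.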
As presented in Table 1, as the number of unrolling steps is increased, both mode coverage and reverse KL divergence improve. Contrary to (Che et al., 2016), we found that reasonably sized models (such as the one used in Section 3.4) covered all 1,000 modes even without unrolling. As such we use smaller convolutional GAN models. Details on the models used are provided in Appendix E.
We observe an additional interesting effect in this experiment. The benefits of unrolling increase as the discriminator size is reduced. We believe unrolling effectively increases the capacity of the discriminator. The unrolled discriminator can better react to any specific way in which the generator is producing non-data-like samples. When the discriminator is weak, the positive impact of unrolling is thus larger.
In addition to discrete modes, we examine the effect of unrolling when modeling continuous manifolds. To get at this quantity, we constructed a dataset consisting of colored MNIST digits. Unlike in the previous experiment, a single MNIST digit was chosen, and then assigned a single monochromatic color. With a perfect generator, one should be able to recover the distribution of colors used to generate the digits. We use colored MNIST digits so that the generator also has to model the digits, which makes the task sufficiently complex that the generator is unable to perfectly solve it. The color of each digit is sampled from a 3D normal distribution. Details of this dataset are provided in Appendix
F. We will examine the distribution of colors in the samples generated by the trained GAN. As will also be true in the CIFAR10 example in Section 3.4, the lack of diversity in generated colors is almost invisible using only visual inspection of the samples. Samples can be found in Appendix F.

In order to recover the color the GAN assigned to the digit, we used k-means with 2 clusters to pick out the foreground color from the background. We then performed this transformation for both the training data and the generated images. Next we fit a Gaussian kernel density estimator to both distributions over digit colors. Finally, we computed the JS divergence between the model and data distributions over colors. Results can be found in Table
2 for several model sizes. Details of the models are provided in Appendix F.

In general, the best performing models are unrolled for 5-10 steps, and larger models perform better than smaller models. Counter-intuitively, taking 1 unrolling step seems to hurt this measure of diversity. We suspect that this is due to it introducing oscillatory dynamics into training. Taking more unrolling steps, however, leads to improved performance.
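A minimal version of this comparison (a sketch: histogram-based rather than the Gaussian KDE used in our experiments, over a single scalar color coordinate) computes the JS divergence between two sample sets as follows:

```python
import numpy as np

def _kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(x, y, bins=30, value_range=(0.0, 1.0)):
    # JS divergence between two 1D samples (e.g. extracted digit colors),
    # estimated on a shared histogram grid
    p, _ = np.histogram(x, bins=bins, range=value_range)
    q, _ = np.histogram(y, bins=bins, range=value_range)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)
```

Identical distributions give 0; fully disjoint ones give the maximum, log 2, so mode-collapsed color distributions show up as a large divergence from the data colors.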
Here we test our technique on a more traditional convolutional GAN architecture and task, similar to those used in (Radford et al., 2015; Salimans et al., 2016). In the previous experiments we tested models where the standard GAN training algorithm would not converge. In this section we improve a standard model by reducing its tendency to engage in mode collapse. We ran 4 configurations of this model, varying the number of unrolling steps to be 0, 1, 5, or 10. Each configuration was run 5 times with different random seeds. For full training details see Appendix D. Samples from each of the 4 configurations can be found in Figure 4. There is no obvious difference in visual quality across these model configurations. Visual inspection however provides only a poor measure of sample diversity.
By training with an unrolled discriminator, we expect to generate more diverse samples which more closely resemble the underlying data distribution. We introduce two techniques to examine sample diversity: inference via optimization, and pairwise distance distributions.
Since likelihood cannot be tractably computed, over-fitting of GANs is typically tested by taking samples and computing the nearest-neighbor images in pixel space from the training data (Goodfellow et al., 2014). We will do the reverse, and measure the ability of the generative model to generate images that look like specific samples from the training data. If we did this by generating random samples from the model, we would need an exponentially large number of samples. We instead treat finding the nearest neighbor $\hat{x}$ to a target image $x$ as an optimization task,

$$z^* = \underset{z}{\operatorname{argmin}} \left\| x - G(z; \theta_G) \right\|_2^2 \tag{13}$$
$$\hat{x} = G(z^*; \theta_G). \tag{14}$$
This concept of backpropagating to generate images has been widely used in visualizing features from discriminative networks (Simonyan et al., 2013; Yosinski et al., 2015; Nguyen et al., 2016) and has been applied to explore the visual manifold of GANs in (Zhu et al., 2016).
We apply this technique to each of the models trained. We optimize with 3 random starts using LBFGS, which is the optimizer typically used in similar settings such as style transfer (Johnson et al., 2016; Champandard, 2016). Results comparing average mean squared errors between $\hat{x}$ and $x$ in pixel space can be found in Table 3. In addition, we compute the percent of images for which a given configuration achieves the lowest loss when compared to the other configurations.
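A toy version of this inference-via-optimization procedure can be sketched with a fixed linear "generator" and plain gradient descent in place of LBFGS (W, the step count, and the learning rate are all illustrative choices, not values from our experiments):

```python
import numpy as np

W = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])                   # toy linear generator: G(z) = W z

def infer_z(x_target, steps=500, lr=0.05):
    # gradient descent on ||x_target - G(z)||^2 over z (Eq. 13)
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        residual = W @ z - x_target
        z -= lr * 2.0 * (W.T @ residual)     # gradient of the squared error
    return z

def reconstruction_mse(x_target):
    x_hat = W @ infer_z(x_target)            # Eq. 14
    return float(np.mean((x_hat - x_target) ** 2))
```

Targets in the range of the generator reconstruct almost exactly, while targets off the generator's manifold retain a large residual MSE, which is the signature of a dropped mode.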
In the zero step case, reconstruction is poor, and it obtains the lowest error of the 4 configurations less than 1% of the time. Taking 1 unrolling step results in a significant improvement in MSE. Taking 10 unrolling steps results in a more modest improvement, but continues to reduce the reconstruction MSE.
To see this visually, we compare the result of the optimization process for the 0, 1, 5, and 10 step configurations in Figure 5. To select images where the difference in behavior is most apparent, we sort the data by the absolute value of the fractional difference in MSE between the 0 and 10 step models. This highlights examples where either the 0 or the 10 step model cannot accurately fit the data example but the other can. In Appendix G we show the same comparison for models initialized using different random seeds. Many of the zero step images are fuzzy and ill-defined, suggesting that these images cannot be generated by the standard GAN generative model, and come from a dropped mode. As more unrolling steps are added, the outlines become more clear and well defined – the model covers more of the distribution and thus can recreate these samples.
A second complementary approach is to compare statistics of data samples to the corresponding statistics for samples generated by the various models. One particularly simple and relevant statistic is the distribution over pairwise distances between random pairs of samples. In the case of mode collapse, greater probability mass will be concentrated in smaller volumes, and the distribution over inter-sample distances should be skewed towards smaller distances. We sample random pairs of images from each model, as well as from the training data, and compute histograms of the pixel-space distances between those sample pairs. As illustrated in Figure 6, the standard GAN, with zero unrolling steps, has its probability mass skewed towards smaller intersample distances, compared to real data. As the number of unrolling steps is increased, the histograms over intersample distances increasingly come to resemble those of the data distribution. This is further evidence that unrolling decreases the mode collapse behavior of GANs.

In this work we developed a method to stabilize GAN training and reduce mode collapse by defining the generator objective with respect to unrolled optimization of the discriminator. We then demonstrated the application of this method to several tasks, where it either rescued unstable training, or reduced the tendency of the model to drop regions of the data distribution.
The main drawback to this method is the computational cost of each training step, which increases linearly with the number of unrolling steps. There is a tradeoff between better approximating the true generator loss and the computation required to make this estimate. Depending on the architecture, one unrolling step can be enough. In other, more unstable, models such as the RNN case, more are needed to stabilize training. We have some initial positive results suggesting it may be sufficient to further perturb the training gradient in the same direction that a single unrolling step perturbs it. While this would be more computationally efficient, further investigation is required.
The method presented here bridges some of the gap between theoretical and practical results for training of GANs. We believe developing better update rules for the generator and discriminator is an important line of work for GAN training. In this work we have only considered a small fraction of the design space. For instance, the approach could be extended to unroll $G$ when updating $D$ as well – letting the discriminator react to how the generator would move. It is also possible to unroll sequences of $G$ and $D$ updates. This would make updates that are recursive: $G$ could react to maximize performance as if $G$ and $D$ had already updated.
We would like to thank Laurent Dinh, David Dohan, Vincent Dumoulin, Liam Fedus, Ishaan Gulrajani, Julian Ibarz, Eric Jang, Matthew Johnson, Marc Lanctot, Augustus Odena, Gabriel Pereyra, Colin Raffel, Sam Schoenholz, Ayush Sekhari, Jon Shlens, and Dale Schuurmans for insightful conversation, as well as the rest of the Google Brain Team.
Network architecture and experimental details for the experiment in Section 3.1 are as follows:
The dataset is sampled from a mixture of 8 Gaussians of standard deviation 0.02. The means are equally spaced around a circle of radius 2.
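For reference, the dataset can be sampled as follows (a sketch consistent with the description above; the function and argument names are our own):

```python
import numpy as np

def sample_ring_mixture(n, n_modes=8, radius=2.0, std=0.02, seed=None):
    # mixture of n_modes Gaussians with standard deviation 0.02,
    # means equally spaced around a circle of radius 2
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * rng.integers(0, n_modes, size=n) / n_modes
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return centers + std * rng.standard_normal((n, 2))
```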
The generator network consists of a fully connected network with 2 hidden layers of size 128 with relu activations followed by a linear projection to 2 dimensions. All weights are initialized to be orthogonal with scaling of 0.8.
The discriminator network first scales its input down by a factor of 4 (to roughly map it to (-1, 1)), feeds it through a fully connected layer with relu activations, and ends with a linear layer of size 1 that acts as the logit.
The generator minimizes $\log(1 - D(G(z)))$ and the discriminator minimizes $-\log(D(x)) - \log(1 - D(G(z)))$, where $x$ is sampled from the data distribution and $z \sim \mathcal{N}(0, I)$. Both networks are optimized using Adam (Kingma & Ba, 2014) with a learning rate of 1e-4 and $\beta_1$=0.5.
The network is trained by alternating updates of the generator and the discriminator. One step consists of either G or D updating.
Another comparison we looked at was with regard to historical averaging based approaches. Recently, similarly inspired approaches have been used to stabilize training (Salimans et al., 2016). For our study, we looked at taking an ensemble of discriminators over time.
First, we looked at taking an ensemble of the last N steps, as shown in Figure App.1.
To further explore this idea, we ran experiments with an ensemble of 5 discriminators, but with different periods between replacing discriminators in the ensemble. For example, at a sampling rate of 100, it would take 500 steps to replace all 5 discriminators. Results can be seen in Figure App.2.
We observe that with longer and longer time delays, the model becomes less and less stable. We hypothesize that this is due to the initial shape of the discriminator loss surface. When training, the discriminator's estimates of probability densities are only accurate on regions where it was trained. When fixing this discriminator, we remove the feedback between the generator's exploitation and the discriminator's ability to move. As a result, the generator is able to exploit these fixed areas of poor performance for older discriminators in the ensemble. New discriminators (over)compensate for this, leading the system to diverge.
A second factor we analyzed is the effect of backpropagating the learning signal through the unrolling in Equation 12. We can turn this backpropagation on or off by introducing stop_gradient calls into our computation graph between each unrolling step. With the stop_gradient in place, the update signal corresponds only to the first term in Equation 12. We looked at 3 configurations: without stop_gradients (vanilla unrolled GAN); with stop_gradients; and with stop_gradients but averaging over the unrolling steps instead of taking the final value. Results can be seen in Figure App.3.
We initially observed no difference between unrolling with and without the second gradient, as both required 3 unrolling steps to become stable. When the discriminator is unrolled to convergence, the second gradient term becomes zero. Due to the simplicity of the problem, we suspect that the discriminator nearly converged for every generator step, and the second gradient term was thus irrelevant.
To test this, we modified the dynamics to perform five generator steps for each discriminator update. Results are shown in Figure App.4. With the discriminator now kept out of equilibrium, successful training is achieved with half as many unrolling steps when both terms of the gradient are used as when only the first term is included.
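The two gradient terms can be illustrated on a toy saddle objective, where the full unrolled gradient differentiates through the inner discriminator step and the truncated (stop_gradient) variant keeps only the first term. The objective f and the step size here are illustrative, chosen so the derivatives can be written by hand:

```python
eta = 0.1  # inner (unrolled) discriminator step size

def f(g, d):
    # Toy objective: the discriminator ascends f in d, the generator
    # descends the unrolled surrogate f(g, d_1(g)).
    return g * d - 0.5 * d ** 2

def df_dg(g, d):
    return d

def df_dd(g, d):
    return g - d

def unrolled_grad(g, d0, backprop_through_unroll=True):
    d1 = d0 + eta * df_dd(g, d0)   # one unrolled discriminator ascent step
    grad = df_dg(g, d1)            # first term of Eq. 12
    if backprop_through_unroll:
        # second term: (df/dd1) * (dd1/dg); for this toy f, dd1/dg = eta
        grad += df_dd(g, d1) * eta
    return grad
```

A finite-difference check on f(g, d1(g)) confirms that only the full gradient matches the true derivative; the stop_gradient variant omits the correction term.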
The network architecture for the experiment in Section 3.2 is as follows:
The MNIST dataset is scaled to [-1, 1).
The generator first passes the 256D noise vector through a 256-unit fully connected layer with ReLU activation. This is then fed into the initial state of a 256D LSTM (Hochreiter & Schmidhuber, 1997) that runs for 28 steps, corresponding to the number of columns in MNIST. The resulting sequence of activations is projected through a fully connected layer with 28 outputs and a tanh activation function. All weights are initialized via the "Xavier" initialization (Glorot & Bengio, 2010). The forget bias of the LSTM is initialized to 1.

The discriminator network feeds the input through three convolutions: Convolution(16, stride=2), Convolution(32, stride=2), and Convolution(32, stride=2). As in (Radford et al., 2015), leaky rectifiers with a 0.3 leak are used. Batch normalization is applied after each layer (Ioffe & Szegedy, 2015). The resulting 4D tensor is flattened, and a linear projection is performed to a single scalar.
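As a quick sanity check on the discriminator's shapes (assuming 'same' padding, which the text above does not state), three stride-2 convolutions reduce the 28x28 MNIST input to 4x4, so the flattened tensor entering the final projection has 4 * 4 * 32 = 512 elements:

```python
import math

def same_pad_out(size, stride=2):
    """Spatial output size of a strided convolution with 'same' padding."""
    return math.ceil(size / stride)

size = 28                      # MNIST height/width
for channels in (16, 32, 32):  # the three discriminator convolutions
    size = same_pad_out(size)
print(size, size * size * 32)  # → 4 512
```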
The generator and the discriminator each minimize the standard GAN objectives. Both networks are trained with Adam (Kingma & Ba, 2014) with learning rates of 1e-4 and β1 = 0.5. The networks are trained by alternating generator and discriminator updates for 150k steps, where each step consists of a single update to one network.
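The alternating schedule, in which each step updates exactly one of the two networks, can be sketched as follows; the two update callables are placeholders for the actual Adam steps:

```python
def train(generator_step, discriminator_step, steps=150_000):
    """Alternate single updates: each step touches exactly one network."""
    for t in range(steps):
        if t % 2 == 0:
            generator_step()
        else:
            discriminator_step()
```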
The network architectures for the discriminator, generator, and encoder are as follows. All convolutions have a kernel size of 3x3 with batch normalization. The discriminator uses leaky ReLUs with a 0.3 leak and the generator uses standard ReLUs.
The generator network is defined as:
Layer | number outputs | stride
---|---|---
Input: | |
Fully connected | 4 * 4 * 512 |
Reshape to image 4,4,512 | |
Transposed Convolution | 256 | 2
Transposed Convolution | 128 | 2
Transposed Convolution | 64 | 2
Convolution | 1 or 3 | 1
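Each stride-2 transposed convolution doubles the spatial resolution, so the reshaped 4x4 feature map reaches a 32x32 output after the three transposed layers above. A minimal sketch, assuming each layer exactly multiplies height and width by its stride:

```python
def upsampled_size(start=4, num_layers=3, stride=2):
    """Spatial size after a stack of strided transposed convolutions,
    assuming each layer multiplies height and width by the stride."""
    size = start
    for _ in range(num_layers):
        size *= stride
    return size

# 4 -> 8 -> 16 -> 32 across the three stride-2 transposed convolutions
print(upsampled_size())  # → 32
```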
The discriminator network is defined as:
Layer | number outputs | stride
---|---|---
Input: or | |
Convolution | 64 | 2
Convolution | 128 | 2
Convolution | 256 | 2
Flatten | |
Fully Connected | 1 |
The generator and the discriminator each minimize the standard GAN objectives. The networks are trained with Adam, with a generator learning rate of 1e-4 and a discriminator learning rate of 2e-4, alternating generator and discriminator updates for 100k steps; each step consists of a single update to one network.
Layer | number outputs | stride
---|---|---
Input: | |
Fully connected | 4 * 4 * 64 |
Reshape to image 4,4,64 | |
Transposed Convolution | 32 | 2
Transposed Convolution | 16 | 2
Transposed Convolution | 8 | 2
Convolution | 3 | 1
The discriminator network is parametrized by a width multiplier X and is defined as follows. In our tests, we used X = 1/4 and X = 1/2.
Layer | number outputs | stride
---|---|---
Input: or | |
Convolution | 8*X | 2
Convolution | 16*X | 2
Convolution | 32*X | 2
Flatten | |
Fully Connected | 1 |
To generate this dataset we first take the MNIST digit, scaled between 0 and 1. For each image we sample a color, normally distributed with mean 0 and std 0.5, and combine it with the digit to produce a colored digit with values in (-1, 1). Finally, we add a small amount of pixel-independent noise sampled from a normal distribution with std 0.2, and the resulting values are clipped to (-1, 1). Examples of the resulting images and samples can be seen in Figure App.5. Once again, it is very hard to visually discern differences in sample diversity between the 128- and 512-sized models.
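The exact digit/color combination rule is elided in the text above; the sketch below assumes the colored image is the product of the [0, 1] digit and the sampled per-channel color, while the color sampling (mean 0, std 0.5), the pixel noise (std 0.2), and the final clip to (-1, 1) follow the description directly:

```python
import numpy as np

def make_colored_digit(digit, rng):
    """digit: (28, 28) array scaled to [0, 1].

    The multiplication below is an assumed combination rule; the rest
    follows the dataset description.
    """
    color = rng.normal(0.0, 0.5, size=3)          # one RGB color per image
    img = digit[:, :, None] * color               # assumed combination rule
    img += rng.normal(0.0, 0.2, size=img.shape)   # pixel-independent noise
    return np.clip(img, -1.0, 1.0)
```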
The models used in this section are parametrized by a variable X that controls capacity. X = 1 corresponds to the architecture used in the CIFAR-10 experiments. We used values of 1/4, 1/2, and 1.
The generator network is defined as:
Layer | number outputs | stride
---|---|---
Input: | |
Fully connected | 4 * 4 * 512*X |
Reshape to image 4,4,512*X | |
Transposed Convolution | 256*X | 2
Transposed Convolution | 128*X | 2
Transposed Convolution | 64*X | 2
Convolution | 3 | 1
The discriminator network is defined as:
Layer | number outputs | stride
---|---|---
Input: or | |
Convolution | 64*X | 2
Convolution | 128*X | 2
Convolution | 256*X | 2
Flatten | |
Fully Connected | 1 |
More examples of model-based optimization. We performed 5 runs with different seeds for each unrolling-step configuration. Below are comparisons for each run index. Ideally this would be a many-to-many comparison, but for space efficiency we grouped the runs by the index in which they were run.