1 Introduction
Recent years have witnessed tremendous success of deep neural networks (DNNs), especially the kind of bottom-up neural networks trained for discriminative tasks. In particular, Convolutional Neural Networks (CNNs) have achieved impressive accuracy on the challenging ImageNet classification benchmark
[30, 56, 57, 21, 52]. Interestingly, it has been shown that CNNs trained on ImageNet for classification can learn representations that are transferable to other tasks [55], and even to other modalities [20]. However, bottom-up discriminative models focus on learning useful representations from data and are incapable of capturing the data distribution. Learning top-down generative models that can explain complex data distributions is a long-standing problem in machine learning research. The expressive power of deep neural networks makes them natural candidates for generative models, and several recent works have shown promising results
[28, 17, 44, 36, 68, 38, 9]. While state-of-the-art DNNs can rival human performance in certain discriminative tasks, current best deep generative models still fail when there are large variations in the data distribution. A natural question therefore arises: can we leverage the hierarchical representations in a discriminatively trained model to help the learning of top-down generative models? In this paper, we propose a generative model named Stacked Generative Adversarial Networks (SGAN). Our model consists of a top-down stack of GANs, each trained to generate “plausible” lower-level representations conditioned on higher-level representations. Similar to the image discriminator in the original GAN model, which is trained to distinguish “fake” images from “real” ones, we introduce a set of representation discriminators that are trained to distinguish “fake” representations from “real” representations. The adversarial loss introduced by the representation discriminators forces the intermediate representations of the SGAN to lie on the manifold of the bottom-up DNN’s representation space. In addition to the adversarial loss, we also introduce a conditional loss that encourages each generator to use the higher-level conditional information, and a novel entropy loss that encourages each generator to generate diverse representations. By stacking several GANs in a top-down fashion and using the topmost GAN to receive labels and the bottommost GAN to generate images, SGAN can be trained to model the data distribution conditioned on class labels. Through extensive experiments, we demonstrate that our SGAN is able to generate images of much higher quality than a vanilla GAN. In particular, our model obtains state-of-the-art Inception scores on the CIFAR-10 dataset.
2 Related Work
Deep Generative Image Models.
There has been a large body of work on generative image modeling with deep learning. Some early efforts include Restricted Boltzmann Machines
[22, 23]. More recently, several successful paradigms of deep generative models have emerged, including autoregressive models [32, 16, 58, 44, 45, 19], Variational Autoencoders (VAEs) [28, 27, 50, 64, 18], and Generative Adversarial Networks (GANs) [17, 5, 47, 49, 53, 33]. Our work builds upon the GAN framework, which employs a generator that transforms a noise vector into an image and a discriminator that distinguishes between real and generated images. However, due to the vast variations in image content, it is still challenging for GANs to generate diverse images with sufficient details. To this end, several works have attempted to factorize a GAN into a series of GANs, decomposing the difficult task into several more tractable subtasks. Denton et al. [5] propose a LAPGAN model that factorizes the generative process into multi-resolution GANs, with each GAN generating a higher-resolution residual conditioned on a lower-resolution image. Although both LAPGAN and SGAN consist of a sequence of GANs each working at one scale, LAPGAN focuses on generating multi-resolution images from coarse to fine while our SGAN aims at modeling multi-level representations from abstract to specific. Wang and Gupta [62] factorize the generative process into two GANs, using one to generate surface normals and another to generate images conditioned on the surface normals. Surface normals can be viewed as a specific type of image representation, capturing the underlying 3D structure of an indoor scene. In contrast, our framework can leverage the more general and powerful multi-level representations in a pretrained discriminative DNN.
There are several works that use a pretrained discriminative model to aid the training of a generator. [31, 7] add a regularization term that encourages the reconstructed image to be similar to the original image in the feature space of a discriminative network. [59, 26] use an additional “style loss” based on Gram matrices of feature activations. Different from our method, all the works above only add loss terms to regularize the generator’s output, without regularizing its internal representations.
Matching Intermediate Representations Between Two DNNs. There have been some works that attempt to “match” the intermediate representations between two DNNs. [51, 20] use the intermediate representations of one pretrained DNN to guide another DNN in the context of knowledge transfer. Our method can be considered a special kind of knowledge transfer; however, we aim at transferring the knowledge in a bottom-up DNN to a top-down generative model, instead of to another bottom-up DNN. Also, some autoencoder architectures employ a layer-wise reconstruction loss [60, 48, 67, 66]. The layer-wise loss is usually accompanied by lateral connections from the encoder to the decoder. In contrast, SGAN is a generative model and does not require any information from the encoder once training completes. Another important difference is that we use an adversarial loss instead of a reconstruction loss to match intermediate representations.
Visualizing Deep Representations. Our work is also related to recent efforts in visualizing the internal representations of DNNs. One popular approach uses gradient-based optimization to find an image whose representation is close to the one we want to visualize [37]. Other approaches, such as [8], train a top-down deconvolutional network to reconstruct the input image from a feature representation by minimizing the Euclidean reconstruction error in image space. However, there is inherent uncertainty in the reconstruction process, since the representations in higher layers of the DNN are trained to be invariant to irrelevant transformations and to ignore low-level details. With a Euclidean training objective, the deconvolutional network tends to produce blurry images. To alleviate this problem, Dosovitskiy and Brox [7] further propose a feature loss and an adversarial loss that enable much sharper reconstructions. However, this still does not tackle the problem of uncertainty in reconstruction: given a high-level feature representation, the deconvolutional network deterministically generates a single image, despite the fact that there exist many images having the same representation. Also, there is no obvious way to sample images, because the feature prior distribution is unknown. Concurrent to our work, Nguyen et al. [42] incorporate the feature prior with a variant of denoising autoencoder (DAE). Their sampling relies on an iterative optimization procedure, while we focus on efficient feed-forward sampling.
3 Methods
In this section we introduce our model architecture. In Sec. 3.1 we briefly review the framework of Generative Adversarial Networks. We then describe our proposed Stacked Generative Adversarial Networks in Sec. 3.2. In Sec. 3.3 and Sec. 3.4 we focus on our two novel loss functions, the conditional loss and the entropy loss, respectively.
3.1 Background: Generative Adversarial Network
As shown in Fig. 1 (a), the original GAN [17] is trained using a two-player min-max game: a discriminator $D$ trained to distinguish generated images from real images, and a generator $G$ trained to fool $D$. The discriminator loss $\mathcal{L}_D$ and the generator loss $\mathcal{L}_G$ are defined as follows:
$$\mathcal{L}_D = \mathbb{E}_{x \sim P_{data}}[-\log D(x)] + \mathbb{E}_{z \sim P_z}[-\log\big(1 - D(G(z))\big)] \qquad (1)$$
$$\mathcal{L}_G = \mathbb{E}_{z \sim P_z}[-\log D(G(z))] \qquad (2)$$
In practice, $D$ and $G$ are usually updated alternately. The training process matches the generated image distribution with the real image distribution in the training set. In other words, the adversarial training forces $G$ to generate images that reside on the natural image manifold.
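For completeness, the sketch below shows how Eqs. (1) and (2) translate into code. It is a minimal illustration assuming PyTorch and a discriminator that outputs raw logits; it is not the implementation used in our experiments.

```python
# Minimal sketch of the GAN losses in Eqs. (1)-(2), assuming PyTorch.
# D returns raw logits; G maps noise vectors to images. Illustrative only.
import torch
import torch.nn.functional as F

def discriminator_loss(D, G, real_images, z):
    # L_D = E_x[-log D(x)] + E_z[-log(1 - D(G(z)))]
    real_logits = D(real_images)
    fake_logits = D(G(z).detach())  # detach: do not backpropagate into G here
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake

def generator_loss(D, G, z):
    # L_G = E_z[-log D(G(z))]
    fake_logits = D(G(z))
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```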
3.2 Stacked Generative Adversarial Networks
Pretrained Encoder. We first consider a bottom-up DNN pretrained for classification, which is referred to as the encoder $E$ throughout. We define a stack of bottom-up deterministic nonlinear mappings $h_{i+1} = E_i(h_i)$, where $i \in \{0, 1, \dots, N-1\}$, each $E_i$ consists of a sequence of neural layers (e.g., convolution, pooling), $N$ is the number of hierarchies (stacks), $h_i$ ($i \neq 0, N$) are intermediate representations, $h_N = y$ is the classification result, and $h_0 = x$ is the input image. Note that in our formulation, each $E_i$ can contain multiple layers, and the way of grouping layers together into each $E_i$ is determined by us. The number of stacks $N$ is therefore less than the number of layers in $E$ and is also determined by us.
Stacked Generators. Provided with a pretrained encoder $E$, our goal is to train a top-down generator $G$ that inverts $E$. Specifically, $G$ consists of a top-down stack of generators $G_i$, each trained to invert a bottom-up mapping $E_i$. Each $G_i$ takes a higher-level feature $h_{i+1}$ and a noise vector $z_i$ as inputs, and outputs the lower-level feature $\hat{h}_i$. We first train each GAN independently and then train them jointly in an end-to-end manner, as shown in Fig. 1. Each generator receives its conditional input from the encoder in the independent training stage, and from the upper generator in the joint training stage. In other words, the condition is $h_{i+1}$ during independent training and $\hat{h}_{i+1}$ during joint training. The loss equations shown in this section are for the independent training stage, but can easily be modified for joint training by replacing $h_{i+1}$ with $\hat{h}_{i+1}$.
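The sketch below illustrates this conditioning scheme for a generic stack. It is illustrative only and assumes PyTorch-style modules; `encoders`, `generators`, and `forward_stack` are hypothetical placeholders, not parts of our released code.

```python
# Illustrative sketch of how each stacked generator G_i is conditioned: on the encoder
# feature h_{i+1} during independent training, and on the output of the upper generator
# during joint training. encoders[i] maps h_i -> h_{i+1}; generators[i] maps
# (condition, z_i) -> h_i. All names and interfaces are assumptions.
import torch

def forward_stack(generators, encoders, x, y, noises, joint=True):
    """Run the top-down stack; returns [hat_h_{N-1}, ..., hat_h_0] (hat_h_0 is the image)."""
    N = len(generators)
    # Bottom-up pass through the pretrained encoder to obtain "real" features h_1, ..., h_{N-1}.
    feats = [x]
    for E_i in encoders[:-1]:
        feats.append(E_i(feats[-1]))
    cond = y                                    # top-level condition is the label
    outputs = []
    for i in reversed(range(N)):                # top-down pass: G_{N-1}, ..., G_0
        if not joint and i < N - 1:
            cond = feats[i + 1]                 # independent training: condition on h_{i+1}
        hat_h = generators[i](cond, noises[i])  # hat_h_i = G_i(condition, z_i)
        outputs.append(hat_h)
        cond = hat_h                            # joint training: condition the next stack on hat_h_{i+1}
    return outputs
```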
Intuitively, the total variation of images can be decomposed into multiple levels, with higher-level semantic variations (e.g., attributes, object categories, rough shapes) and lower-level variations (e.g., detailed contours and textures, background clutter). Our model allows using different noise variables to represent different levels of variation.
The training procedure is shown in Fig. 1 (b). Each generator $G_i$ is trained with a linear combination of three loss terms: adversarial loss, conditional loss, and entropy loss:
$$\mathcal{L}_{G_i} = \lambda_1 \mathcal{L}_{G_i}^{adv} + \lambda_2 \mathcal{L}_{G_i}^{cond} + \lambda_3 \mathcal{L}_{G_i}^{ent} \qquad (3)$$
where $\mathcal{L}_{G_i}^{adv}$, $\mathcal{L}_{G_i}^{cond}$, and $\mathcal{L}_{G_i}^{ent}$ denote the adversarial loss, conditional loss, and entropy loss, respectively, and $\lambda_1$, $\lambda_2$, $\lambda_3$ are the weights associated with the different loss terms. In practice, we find it sufficient to set the weights such that the magnitudes of the different terms are of similar scales. In this subsection we first introduce the adversarial loss $\mathcal{L}_{G_i}^{adv}$; we then introduce $\mathcal{L}_{G_i}^{cond}$ and $\mathcal{L}_{G_i}^{ent}$ in Sec. 3.3 and Sec. 3.4, respectively.
For each generator $G_i$, we introduce a representation discriminator $D_i$ that distinguishes generated representations $\hat{h}_i$ from “real” representations $h_i$. Specifically, the discriminator $D_i$ is trained with the loss function:
$$\mathcal{L}_{D_i} = \mathbb{E}_{h_i \sim P_{data,E}}[-\log D_i(h_i)] + \mathbb{E}_{z_i \sim P_{z_i},\, h_{i+1} \sim P_{data,E}}[-\log\big(1 - D_i(G_i(h_{i+1}, z_i))\big)] \qquad (4)$$
And $G_i$ is trained to “fool” the representation discriminator $D_i$, with the adversarial loss defined by:
$$\mathcal{L}_{G_i}^{adv} = \mathbb{E}_{h_{i+1} \sim P_{data,E},\, z_i \sim P_{z_i}}[-\log D_i(G_i(h_{i+1}, z_i))] \qquad (5)$$
During joint training, the adversarial loss provided by the representation discriminators can also be regarded as a type of deep supervision [35], providing intermediate supervision signals. In our current formulation, $E$ is a discriminative model and $G$ is a generative model conditioned on labels. However, it is also possible to train SGAN without using label information: $E$ can be trained with an unsupervised objective, and $G$ can be cast into an unconditional generative model by removing the label input from the top generator. We leave this for future exploration.
Sampling. To sample images, all $G_i$'s are stacked together in a top-down manner, as shown in Fig. 1 (c). Our SGAN is capable of modeling the data distribution conditioned on the class label: $p_G(\hat{x} \mid y) = p_G(\hat{h}_0 \mid \hat{h}_N) = \prod_{i=0}^{N-1} p_{G_i}(\hat{h}_i \mid \hat{h}_{i+1})$, where each $p_{G_i}(\hat{h}_i \mid \hat{h}_{i+1})$ is modeled by a generator $G_i$. From an information-theoretic perspective, SGAN factorizes the total entropy of the image distribution $H(x)$ into multiple (smaller) conditional entropy terms: $H(x) = H(h_0) = \sum_{i=0}^{N-1} H(h_i \mid h_{i+1}) + H(y)$, thereby decomposing one difficult task into multiple easier tasks.
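For concreteness, a minimal sampling sketch for the two-stack case used in our experiments is shown below. It assumes PyTorch, and `G1`, `G0`, and the noise dimensions are illustrative placeholders rather than the exact values we used.

```python
# Illustrative top-down sampling from a two-stack SGAN: the top generator maps
# (label, z1) to an intermediate feature and the bottom generator maps (feature, z0)
# to an image. Module names and noise dimensions are assumptions.
import torch

@torch.no_grad()
def sample(G1, G0, labels, z1_dim=50, z0_dim=100, device="cpu"):
    n = labels.shape[0]
    z1 = torch.randn(n, z1_dim, device=device)  # high-level noise
    z0 = torch.randn(n, z0_dim, device=device)  # low-level noise
    h1 = G1(labels, z1)                          # hat_h_1 = G_1(y, z_1)
    x = G0(h1, z0)                               # hat_x   = G_0(hat_h_1, z_0)
    return x
```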
3.3 Conditional Loss
At each stack, a generator $G_i$ is trained to capture the distribution of lower-level representations $\hat{h}_i$ conditioned on higher-level representations $h_{i+1}$. However, in the above formulation, the generator might choose to ignore $h_{i+1}$ and generate plausible $\hat{h}_i$ from scratch. Some previous works [40, 15, 5] tackle this problem by feeding the conditional information to both the generator and the discriminator. This approach, however, might introduce unnecessary complexity into the discriminator and increase model instability [46, 54].
Here we adopt a different approach: we regularize the generator by adding a loss term named the conditional loss. We feed the generated lower-level representations $\hat{h}_i$ back to the encoder $E$ and compute the recovered higher-level representations. We then enforce the recovered representations to be similar to the conditional representations. Formally:
$$\mathcal{L}_{G_i}^{cond} = \mathbb{E}_{h_{i+1} \sim P_{data,E},\, z_i \sim P_{z_i}}\big[f\big(E_i(G_i(h_{i+1}, z_i)),\, h_{i+1}\big)\big] \qquad (6)$$
where $f$ is a distance measure. We define $f$ to be the Euclidean distance for intermediate representations and the cross-entropy for labels. Our conditional loss is similar to the “feature loss” used by [7] and the “FCN loss” in [62].
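The following sketch illustrates one possible implementation of Eq. (6). It assumes PyTorch and is illustrative only; the function name and arguments are placeholders, not our released code.

```python
# Sketch of the conditional loss (Eq. 6): the generated feature hat_h_i is pushed back
# through the corresponding (frozen) encoder stack E_i and compared to the conditioning
# input h_{i+1}. Euclidean distance for intermediate features, cross-entropy for labels.
import torch
import torch.nn.functional as F

def conditional_loss(E_i, h_next, hat_h, condition_is_label=False):
    recovered = E_i(hat_h)                 # E_i(G_i(h_{i+1}, z_i))
    if condition_is_label:
        # h_next holds class indices; recovered holds pre-softmax classification scores
        return F.cross_entropy(recovered, h_next)
    # mean squared error, i.e. squared Euclidean distance up to a constant factor
    return F.mse_loss(recovered, h_next)
```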
3.4 Entropy Loss
Simply adding the conditional loss leads to another issue: the generator learns to ignore the noise $z_i$ and computes $\hat{h}_i$ deterministically from $h_{i+1}$. This problem has been encountered in various applications of conditional GANs, e.g., synthesizing future frames conditioned on previous frames [39], generating images conditioned on label maps [25], and, most related to our work, synthesizing images conditioned on feature representations [7]. All of the above works attempted to generate diverse images/videos by feeding noise to the generator, but failed because the conditional generator simply ignores the noise. To our knowledge, there is still no principled way to deal with this issue. It might be tempting to think that minibatch discrimination [53], which encourages sample diversity in each minibatch, could solve this problem. However, even if the generator computes $\hat{h}_i$ deterministically from $h_{i+1}$, the generated samples in each minibatch are still diverse since the generators are conditioned on different $h_{i+1}$. Thus, there is no obvious way minibatch discrimination could penalize a collapsed conditional generator.
Variational Conditional Entropy Maximization. To tackle this problem, we would like to encourage the generated representation $\hat{h}_i$ to be sufficiently diverse when conditioned on $h_{i+1}$, i.e., the conditional entropy $H(\hat{h}_i \mid h_{i+1})$ should be as high as possible. Since directly maximizing $H(\hat{h}_i \mid h_{i+1})$ is intractable, we propose to instead maximize a variational lower bound on the conditional entropy. Specifically, we use an auxiliary distribution $Q_i(z_i \mid \hat{h}_i)$ to approximate the true posterior $P_i(z_i \mid \hat{h}_i)$, and augment the training objective with a loss term named the entropy loss:
$$\mathcal{L}_{G_i}^{ent} = \mathbb{E}_{z_i \sim P_{z_i}}\Big[\mathbb{E}_{\hat{h}_i \sim G_i(\hat{h}_i \mid z_i)}[-\log Q_i(z_i \mid \hat{h}_i)]\Big] \qquad (7)$$
Below we give a proof that minimizing $\mathcal{L}_{G_i}^{ent}$ is equivalent to maximizing a variational lower bound on $H(\hat{h}_i \mid h_{i+1})$:
$$\begin{aligned}
H(\hat{h}_i \mid h_{i+1}) &= H(z_i) - H(z_i \mid \hat{h}_i, h_{i+1}) \\
&\geq H(z_i) - H(z_i \mid \hat{h}_i) \\
&= H(z_i) + \mathbb{E}_{\hat{h}_i}\Big[\mathbb{E}_{z_i \sim P(z_i \mid \hat{h}_i)}[\log P(z_i \mid \hat{h}_i)]\Big] \\
&\geq H(z_i) + \mathbb{E}_{\hat{h}_i}\Big[\mathbb{E}_{z_i \sim P(z_i \mid \hat{h}_i)}[\log Q_i(z_i \mid \hat{h}_i)]\Big] \\
&= H(z_i) - \mathcal{L}_{G_i}^{ent}
\end{aligned} \qquad (8)$$

The first equality holds because $\hat{h}_i$ is a deterministic function of $(h_{i+1}, z_i)$ and $z_i$ is sampled independently of $h_{i+1}$; the second inequality follows from the non-negativity of the KL divergence between the true posterior $P(z_i \mid \hat{h}_i)$ and $Q_i(z_i \mid \hat{h}_i)$; the last equality holds because the expectation in Eq. (7) is taken over the joint distribution of $(z_i, \hat{h}_i)$. Since $H(z_i)$ is a constant, minimizing $\mathcal{L}_{G_i}^{ent}$ maximizes a variational lower bound on $H(\hat{h}_i \mid h_{i+1})$.
In practice, we parameterize $Q_i$ with a deep network that predicts the posterior distribution of $z_i$ given $\hat{h}_i$. $Q_i$ shares most of its parameters with $D_i$. We treat the posterior as a diagonal Gaussian with fixed standard deviations, and use the network $Q_i$ to only predict the posterior mean, making $\mathcal{L}_{G_i}^{ent}$ equivalent to the Euclidean reconstruction error. In each iteration we update both $G_i$ and $Q_i$ to minimize $\mathcal{L}_{G_i}^{ent}$.

Our method is similar to the variational mutual information maximization technique proposed by Chen et al. [2]. A key difference is that [2] uses the auxiliary network to predict only a small set of deliberately constructed “latent codes”, while our $Q_i$ tries to predict all the noise variables $z_i$ in each stack. The loss used in [2] therefore maximizes the mutual information between the output and the latent code, while ours maximizes the entropy of the output $\hat{h}_i$ conditioned on $h_{i+1}$. [6, 10] also train a separate network to map images back to the latent space in order to perform unsupervised feature learning. Independently of our work, [4] proposes to regularize EBGAN [68] with entropy maximization in order to prevent the discriminator from degenerating to uniform prediction. Our entropy loss is instead motivated by generating multiple possible outputs from the same conditional input.
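A minimal sketch of the entropy loss under this Gaussian simplification is shown below, assuming PyTorch; `Q_i` is a placeholder for the network described above (sharing most layers with $D_i$), not the exact code we released.

```python
# Sketch of the entropy loss (Eq. 7) under the paper's simplification: Q_i predicts the
# posterior mean of z_i given hat_h_i, with a fixed diagonal Gaussian standard deviation,
# so -log Q_i(z_i | hat_h_i) reduces (up to constants) to a Euclidean reconstruction error.
import torch
import torch.nn.functional as F

def entropy_loss(Q_i, hat_h, z):
    z_mean = Q_i(hat_h)            # predicted posterior mean of z_i given hat_h_i
    # -log N(z; z_mean, sigma^2 I) = ||z - z_mean||^2 / (2 sigma^2) + const
    return F.mse_loss(z_mean, z)

# Gradients flow into both G_i (through hat_h) and Q_i, so each update step
# tightens the variational bound in Eq. (8).
```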
4 Experiments
In this section, we perform experiments on a variety of datasets, including MNIST [34], SVHN [41], and CIFAR-10 [29]. Code and pretrained models are available at: https://github.com/xunhuang1995/SGAN. Readers may refer to our code repository for more details about the experimental setup, hyperparameters, etc.
Encoder: For all datasets we use a small CNN with two convolutional layers as our encoder: conv1-pool1-conv2-pool2-fc3-fc4, where fc3 is a fully connected layer and fc4 outputs classification scores before softmax. On CIFAR-10 we apply horizontal flipping when training the encoder. No data augmentation is used on the other datasets.
Generator: We use generators with two stacks throughout our experiments. Note that our framework is generally applicable to settings with more stacks, and we hypothesize that using more stacks would be helpful for large-scale and high-resolution datasets. For all datasets, our top GAN $G_1$ generates fc3 features from random noise $z_1$, conditioned on a label $y$. The bottom GAN $G_0$ generates images from noise $z_0$, conditioned on the fc3 features generated by GAN $G_1$. We set the loss coefficient parameters such that the magnitude of each loss term is of a similar scale.
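The sketch below illustrates one way to split such an encoder into the two stacks $E_0$ (image to fc3 features) and $E_1$ (fc3 features to class scores). The layer widths and the fc3 dimension are illustrative assumptions, not the exact architecture we used.

```python
# Illustrative two-stack encoder: E_0 = conv1-pool1-conv2-pool2-fc3, E_1 = fc4.
# Channel counts and fc3_dim are assumptions made for this sketch.
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_channels=3, num_classes=10, fc3_dim=256):
        super().__init__()
        self.E0 = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(fc3_dim), nn.ReLU(),   # fc3 features condition the bottom GAN
        )
        self.E1 = nn.Linear(fc3_dim, num_classes)  # fc4: classification scores before softmax

    def forward(self, x):
        h1 = self.E0(x)        # fc3 features (conditioning signal for G_0)
        return self.E1(h1)     # class logits
```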
4.1 Datasets
We thoroughly evaluate SGAN on three widely adopted datasets: MNIST [34], SVHN [41], and CIFAR-10 [29]. The details of each dataset are described in the following.
MNIST: The MNIST dataset contains labeled images of handwritten digits, with 60,000 in the training set and 10,000 in the test set. Each image is 28×28 pixels in size.
SVHN: The SVHN dataset is composed of real-world color images of house numbers collected by Google Street View [41]. Each image is of size 32×32, and the task is to classify the digit at the center of the image. The dataset contains 73,257 training images and 26,032 test images.

CIFAR-10: The CIFAR-10 dataset consists of colored natural scene images of size 32×32 pixels. There are 50,000 training images and 10,000 test images in 10 classes.
4.2 Samples
In Fig. 2 (a), we show MNIST samples generated by SGAN. Each row corresponds to samples conditioned on a given digit class label. SGAN is able to generate diverse images with different characteristics. The samples are visually indistinguishable from real MNIST images shown in Fig. 2 (b), but still have differences compared with corresponding nearest neighbor training images.
We further examine the effect of the entropy loss. In Fig. 2 (c) we show samples generated by the bottom GAN when conditioned on a fixed fc3 feature generated by the top GAN. The samples in each row have sufficient low-level variations, which reassures us that the bottom GAN learns to generate images without ignoring the noise $z_0$. In contrast, Fig. 2 (d) shows samples generated without using the entropy loss for the bottom generator, where we observe that the bottom GAN ignores the noise $z_0$ and instead deterministically generates images from the fc3 features.
An advantage of SGAN compared with a vanilla GAN is its interpretability: it decomposes the total variations of an image into different levels. For example, on MNIST it decomposes the variations into $y$, which represents the high-level digit label, $z_1$, which captures the mid-level coarse pose of the digit, and $z_0$, which represents the low-level spatial details.
The samples generated on the SVHN and CIFAR-10 datasets are shown in Fig. 3 and Fig. 4, respectively. Provided with the same fc3 feature, we see in each row of panel (c) that SGAN is able to generate samples with similar coarse outlines but different lighting conditions and background clutter. Also, the nearest-neighbor images in the training set indicate that SGAN is not simply memorizing training data, but can truly generate novel images.
4.3 Comparison with the state of the art
Here, we compare SGAN with other state-of-the-art generative models on the CIFAR-10 dataset. The visual quality of generated images is measured by the widely used Inception score [53]. Following [53], we sample images from our model and use the code provided by [53] to compute the score; a sketch of the score computation is given after Tab. 1. As shown in Tab. 1, SGAN outperforms AC-GAN [43] and Improved GAN [53]. Also, note that the techniques introduced in [53] are not used in our implementation; incorporating these techniques might further boost the performance of our model.
[Table 1: Inception scores of different models on CIFAR-10. Compared methods include Infusion training [1], ALI [10] (as reported in [63]), GMAN [11] (best variant), EGAN-Ent-VI [4], LR-GAN [65], denoising feature matching [63], results reported in [61] (including one trained with labels), [53] (best variant), [43], the DCGAN baselines and SGAN variants from Sec. 4.5, and real data. † Trained with labels.]
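For reference, the sketch below illustrates how the Inception score is computed from the softmax class predictions of the Inception network, following the definition in [53]. It is an illustration under the assumption that predictions are already available; it is not the reference implementation we used.

```python
# Sketch of the Inception score: exp( E_x[ KL( p(y|x) || p(y) ) ] ), computed over
# several splits and reported as mean and standard deviation, following [53].
import numpy as np

def inception_score(probs, splits=10, eps=1e-12):
    """probs: (N, num_classes) array of softmax outputs for generated samples."""
    scores = []
    for chunk in np.array_split(probs, splits):
        p_y = chunk.mean(axis=0, keepdims=True)                            # marginal p(y)
        kl = (chunk * (np.log(chunk + eps) - np.log(p_y + eps))).sum(axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))
```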
4.4 Visual Turing test
To further verify the effectiveness of SGAN, we conduct a human visual Turing test in which we ask AMT workers to distinguish between real images and images generated by our networks. We exactly follow the interface used in Improved GAN [53], in which workers are shown a batch of images at a time and receive feedback about whether their answers are correct. Collecting votes for each evaluated model, our AMT workers obtained a substantially higher error rate on samples from SGAN than on samples from DCGAN. This further confirms that our stacked design can significantly improve the image quality over a GAN without stacking.
4.5 More ablation studies
In Sec. 4.2 we examined the effect of the entropy loss. In order to further understand the effect of different model components, we conduct extensive ablation studies by evaluating several baseline methods on the CIFAR-10 dataset. Unless mentioned otherwise, all models below use the same training hyperparameters as the full SGAN model.

(a) SGAN: The full model, as described in Sec. 3.

(b) SGAN-no-joint: Same architecture as (a), but each GAN is trained independently, and there is no final joint training stage.

(c) DCGAN ($\mathcal{L}^{adv} + \mathcal{L}^{cond} + \mathcal{L}^{ent}$): A single GAN model with the same architecture as the bottom GAN in SGAN, except that the generator is conditioned on labels instead of fc3 features. Note that the other techniques proposed in this paper, including the conditional loss $\mathcal{L}^{cond}$ and the entropy loss $\mathcal{L}^{ent}$, are still employed. We also tried using the full generator $G$ in SGAN as the baseline, instead of only the bottom generator $G_0$. However, we failed to make it converge, possibly because $G$ is too deep to be trained without the intermediate supervision from representation discriminators.

(d) DCGAN ($\mathcal{L}^{adv} + \mathcal{L}^{cond}$): Same architecture as (c), but trained without the entropy loss $\mathcal{L}^{ent}$.

(e) DCGAN ($\mathcal{L}^{adv} + \mathcal{L}^{ent}$): Same architecture as (c), but trained without the conditional loss $\mathcal{L}^{cond}$. This model therefore does not use label information.

(f) DCGAN ($\mathcal{L}^{adv}$): Same architecture as (c), but trained with neither the conditional loss $\mathcal{L}^{cond}$ nor the entropy loss $\mathcal{L}^{ent}$. This model also does not use label information. It can be viewed as a plain unconditional DCGAN model [47] and serves as the ultimate baseline.
We compare the generated samples (Fig. 5) and Inception scores (Tab. 1) of the baseline methods. Below we summarize some of our findings:

1) SGAN obtains a slightly higher Inception score than SGAN-no-joint. Yet SGAN-no-joint also generates very high quality samples and outperforms all previous methods in terms of Inception score.

2) SGAN, either with or without joint training, achieves a significantly higher Inception score and better sample quality than the baseline DCGANs. This demonstrates the effectiveness of the proposed stacked approach.

3) The single DCGAN ($\mathcal{L}^{adv} + \mathcal{L}^{cond} + \mathcal{L}^{ent}$) model obtains a higher Inception score than the conditional DCGAN reported in [61]. This suggests that our additional loss terms might offer some advantages compared to a plain conditional DCGAN, even without stacking.
5 Discussion and Future Work
This paper introduces a top-down generative framework named SGAN, which effectively leverages the representational information from a pretrained discriminative network. Our approach decomposes the hard problem of estimating the image distribution into multiple relatively easier tasks, each generating plausible representations conditioned on higher-level representations. The key idea is to use representation discriminators at different training hierarchies to provide intermediate supervision. We also propose a novel entropy loss to tackle the problem that conditional GANs tend to ignore the noise. Our entropy loss could be employed in other applications of conditional GANs, e.g., synthesizing different future frames given the same past frames [39], or generating a diverse set of images conditioned on the same label map [25]. We believe this is an interesting direction for future research.

Acknowledgments
We would like to thank Danlu Chen for the help with Fig. 1. Also, we want to thank Danlu Chen, Shuai Tang, Saining Xie, Zhuowen Tu, Felix Wu and Kilian Weinberger for helpful discussions. Yixuan Li is supported by US Army Research Office W911NF1410477. Serge Belongie is supported in part by a Google Focused Research Award.
References
 [1] F. Bordes, S. Honari, and P. Vincent. Learning to generate samples from noise through infusion training. In ICLR, 2017.
 [2] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
 [3] X. Chen, Y. Sun, B. Athiwaratkun, C. Cardie, and K. Weinberger. Adversarial deep averaging networks for cross-lingual sentiment classification. arXiv preprint arXiv:1606.01614, 2016.
 [4] Z. Dai, A. Almahairi, P. Bachman, E. Hovy, and A. Courville. Calibrating energy-based generative adversarial networks. In ICLR, 2017.
 [5] E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015.
 [6] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. In ICLR, 2017.
 [7] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.
 [8] A. Dosovitskiy and T. Brox. Inverting visual representations with convolutional networks. In CVPR, 2016.
 [9] A. Dosovitskiy, J. Tobias Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015.
 [10] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville. Adversarially learned inference. In ICLR, 2017.
 [11] I. Durugkar, I. Gemp, and S. Mahadevan. Generative multi-adversarial networks. In ICLR, 2017.
 [12] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. JMLR, 2016.
 [13] L. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In NIPS, 2015.
 [14] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
 [15] J. Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester 2014.
 [16] M. Germain, K. Gregor, I. Murray, and H. Larochelle. MADE: Masked autoencoder for distribution estimation. In ICML, 2015.
 [17] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
 [18] K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. In ICML, 2015.
 [19] K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. In ICML, 2014.
 [20] S. Gupta, J. Hoffman, and J. Malik. Cross modal distillation for supervision transfer. In CVPR, 2016.
 [21] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
 [22] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
 [23] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
 [24] J. Hoffman, D. Wang, F. Yu, and T. Darrell. FCNs in the wild: Pixel-level adversarial and constraint-based adaptation. arXiv preprint, 2016.
 [25] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2016.
 [26] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
 [27] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In NIPS, 2014.
 [28] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
 [29] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, 2009.
 [30] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
 [31] A. Lamb, V. Dumoulin, and A. Courville. Discriminative regularization for generative models. In ICML, 2016.
 [32] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
 [33] A. B. L. Larsen, S. K. Sønderby, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, 2016.
 [34] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 [35] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In AISTATS, 2015.
 [36] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In ICML, 2015.
 [37] A. Mahendran and A. Vedaldi. Visualizing deep convolutional neural networks using natural pre-images. IJCV, pages 1–23, 2016.
 [38] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow. Adversarial autoencoders. In NIPS, 2016.
 [39] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.
 [40] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
 [41] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
 [42] A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. In CVPR, 2017.
 [43] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2016.
 [44] A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016.
 [45] A. v. d. Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with pixelcnn decoders. In NIPS, 2016.
 [46] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
 [47] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
 [48] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semisupervised learning with ladder networks. In NIPS, 2015.
 [49] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In ICML, 2016.
 [50] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
 [51] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. FitNets: Hints for thin deep nets. In ICLR, 2015.
 [52] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
 [53] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016.
 [54] P. Sangkloy, J. Lu, C. Fang, F. Yu, and J. Hays. Scribbler: Controlling deep image synthesis with sketch and color. In CVPR, 2017.
 [55] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In CVPR, 2014.
 [56] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
 [57] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
 [58] L. Theis and M. Bethge. Generative image modeling using spatial lstms. In NIPS, 2015.
 [59] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016.
 [60] H. Valpola. From neural PCA to deep unsupervised learning. Advances in Independent Component Analysis and Learning Machines, pages 143–171, 2015.
 [61] D. Wang and Q. Liu. Learning to draw samples: With application to amortized MLE for generative adversarial learning. arXiv preprint arXiv:1611.01722, 2016.
 [62] X. Wang and A. Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.
 [63] D. Warde-Farley and Y. Bengio. Improving generative adversarial networks with denoising feature matching. In ICLR, 2017.
 [64] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2image: Conditional image generation from visual attributes. In ECCV, 2016.
 [65] J. Yang, A. Kannan, D. Batra, and D. Parikh. Lrgan: Layered recursive generative adversarial networks for image generation. In ICLR, 2017.
 [66] Y. Zhang, K. Lee, and H. Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In ICML, 2016.
 [67] J. Zhao, M. Mathieu, R. Goroshin, and Y. LeCun. Stacked what-where auto-encoders. ICLR Workshop, 2016.
 [68] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. In ICLR, 2017.