1 Introduction
Recent years have witnessed the triumphant return of feedforward neural networks, especially convolutional neural networks (CNNs) (LeCun et al., 1989; Krizhevsky et al., 2012; Girshick et al., 2014). Despite the successes of discriminative learning with CNNs, the generative aspect of CNNs has not been thoroughly investigated, even though it can be very useful for the following reasons: (1) generative pretraining has the potential to lead the network to a better local optimum; (2) samples can be drawn from the generative model to reveal the knowledge learned by the CNN. Although many generative models and learning algorithms have been proposed (Hinton et al., 2006a, b; Rifai et al., 2011; Salakhutdinov & Hinton, 2009), most of them have not been applied to learning large and deep CNNs.
In this paper, we study the generative modeling of CNNs. We start by defining probability distributions of images given the underlying object categories or class labels, such that the CNN with a final logistic regression layer serves as the corresponding conditional distribution of the class labels given the images. These distributions are in the form of exponential tilting of a reference distribution, i.e., exponential family models or energy-based models relative to a reference distribution.
With such a generative model, we proceed to study it along two related themes, which differ in how they handle the reference distribution or the null model. In the first theme, we propose a nonparametric generative gradient for pretraining the CNN, where the CNN is learned by a stochastic gradient algorithm that seeks to maximize the log-likelihood of the generative model. The gradient of the log-likelihood is approximated by the importance sampling method, which keeps reweighting the images that are sampled from a nonparametric implicit reference distribution, such as the distribution of all the training images. The generative gradient is fundamentally different from the commonly used discriminative gradient, and yet in batch training, it shares the same computational architecture and computational cost as the discriminative gradient. This generative learning scheme can be used in a pretraining stage that is to be followed by the usual discriminative training. The generative log-likelihood provides a stronger driving force than the discriminative criterion for stochastic gradient training, by requiring the learned parameters to explain the images instead of their labels. Experiments on the MNIST (LeCun et al., 1998) and ImageNet (Deng et al., 2009) classification benchmarks show that this generative pretraining scheme helps improve the performance of CNNs.
The second theme in our study of generative modeling is to assume an explicit parametric form of the reference distribution, such as the Gaussian white noise model, so that we can draw synthetic images from the resulting probability distributions of images. The sampling can be accomplished by the Hamiltonian Monte Carlo (HMC) algorithm (Neal, 2011), which iterates between a bottom-up convolution step and a top-down deconvolution step. The proposed visualization method can directly draw samples of synthetic images for any given node in a trained CNN, without resorting to any extra hold-out images. Experiments show that meaningful and varied synthetic images can be generated for nodes of a large and deep CNN discriminatively trained on ImageNet.

2 Past work
The generative model that we study is an energy-based model. Such models include fields of experts (Roth & Black, 2009), products of experts (Hinton, 2002), deep belief nets (Hinton et al., 2006a), models based on neural networks (Hinton et al., 2006b), etc. However, most of these generative models and learning algorithms have not been applied to learning large and deep CNNs. The relationship between generative models and discriminative approaches has been extensively studied (Jordan, 2002; Liang & Jordan, 2008). The usefulness of generative pretraining for deep learning has been studied by Erhan et al. (2010), among others. However, this issue has not been thoroughly investigated for CNNs.

As to visualization, our work is related to Erhan et al. (2009); Le et al. (2012); Girshick et al. (2014); Zeiler & Fergus (2013); Long et al. (2014). In Girshick et al. (2014) and Long et al. (2014), the high-scoring image patches are directly presented. In Zeiler & Fergus (2013), a top-down deconvolution process is employed to understand what contents are emphasized in the high-scoring input image patches. In Erhan et al. (2009); Le et al. (2012); Simonyan et al. (2014), images are synthesized by maximizing the response of a given node in the network. In our work, a generative model is formally defined. We sample from the well-defined probability distribution by the HMC algorithm, generating meaningful and varied synthetic images, without resorting to a large collection of hold-out images (Girshick et al., 2014; Zeiler & Fergus, 2013; Long et al., 2014).
3 Generative model based on CNN
3.1 Probability distributions on images
Suppose we observe images from many different object categories. Let $I$ be an image from an object category $c$. Consider the following probability distribution on $I$,

p(I | c; w) = \frac{1}{Z_c(w)} \exp\{f_c(I; w)\}\, q(I),   (1)

where $q(I)$ is a reference distribution common to all the categories, $f_c(I; w)$ is a scoring function for class $c$, $w$ collects the unknown parameters to be learned from the data, and $Z_c(w) = \mathrm{E}_q[\exp\{f_c(I; w)\}]$ is the normalizing constant or partition function. The distribution $p(I | c; w)$ is in the form of an exponential tilting of the reference distribution $q(I)$, and can be considered an energy-based model or an exponential family model. In model (1), the reference distribution $q$ may not be unique. If we change $q(I)$ to $\tilde{q}(I)$, then we can change $f_c(I; w)$ to $f_c(I; w) + \log(q(I)/\tilde{q}(I))$, which may correspond to $f_c(I; \tilde{w})$ for a different $\tilde{w}$ if the parametrization of $f_c$ is flexible enough. We want to choose $q$ so that either $q$ is reasonably close to $p(I | c; w)$, as in our nonparametric generative gradient method, or the resulting $p(I | c; w)$ based on $q$ is easy to sample from, as in our generative visualization method.
For an image $I$, let $c$ be the underlying object category or class label, so that $I \sim p(I | c; w)$. Suppose the prior distribution on $c$ is $\rho_c$. The posterior distribution of $c$ given $I$ is

p(c | I; w) = \frac{\exp\{f_c(I; w) + b_c\}}{\sum_{c'} \exp\{f_{c'}(I; w) + b_{c'}\}},   (2)

where $b_c = \log \rho_c - \log Z_c(w)$. $p(c | I; w)$ is in the form of a multi-class logistic regression, where $b_c$ can be treated as an intercept parameter to be estimated directly if the model is trained discriminatively. Thus for notational simplicity, we shall assume that the intercept term $b_c$ is already absorbed into $f_c(I; w)$ for the rest of the paper. Note that $f_c$ is not unique in (2). If we change $f_c(I; w)$ to $f_c(I; w) + g(I)$ for a $g(I)$ that is common to all the categories, we still have the same $p(c | I; w)$. This non-uniqueness corresponds to the non-uniqueness of $q$ in (1) mentioned above.

Given a set of labeled data $\{(I_m, c_m), m = 1, \ldots, M\}$, equations (1) and (2) suggest two different methods to estimate the parameters $w$. One is to maximize the generative log-likelihood $\sum_m \log p(I_m | c_m; w)$, which is the same as maximizing the full log-likelihood $\sum_m \log p(I_m, c_m; w)$, where the prior probability $\rho_c$ can be estimated by the class frequency of category $c$. The other is to maximize the discriminative log-likelihood $\sum_m \log p(c_m | I_m; w)$. For the discriminative model (2), a popular choice of $f_c(I; w)$ is a multi-layer perceptron or CNN, with $w$ being the connection weights, and the top layer is a multi-class logistic regression. This is the choice we adopt throughout this paper.

3.2 Generative gradient
The gradient of the discriminative log-likelihood is calculated according to

\frac{\partial}{\partial w} \log p(c | I; w) = \frac{\partial}{\partial w} f_c(I; w) - \mathrm{E}_{p(c' | I; w)}\left[\frac{\partial}{\partial w} f_{c'}(I; w)\right],   (3)

where $b_c$ is absorbed into $f_c$ as mentioned above, and the expectation for the discriminative gradient is

\mathrm{E}_{p(c' | I; w)}\left[\frac{\partial}{\partial w} f_{c'}(I; w)\right] = \sum_{c'} p(c' | I; w)\, \frac{\partial}{\partial w} f_{c'}(I; w).   (4)
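For concreteness, at the top layer the discriminative gradient in (3)-(4) reduces to the familiar softmax error signal: the observed one-hot label minus the posterior probabilities $p(c' | I; w)$. A minimal NumPy sketch (the function names are ours, not from any library):

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over the class scores f_c(I; w)."""
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

def discriminative_top_gradient(scores, label):
    """Gradient of log p(c|I; w) with respect to the class scores.

    Equations (3)-(4): the observed one-hot label minus the expectation
    of the one-hot vector under the posterior p(c'|I; w), with the
    intercepts absorbed into the scores.
    """
    grad = -softmax(scores)   # subtract the posterior expectation (4)
    grad[label] += 1.0        # add the observed term from (3)
    return grad
```

For instance, with scores (1.0, 2.0, 3.0) and label 2, the gradient is positive at the labeled class, negative elsewhere, and its entries sum to zero.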
The gradient of the generative log-likelihood is calculated according to

\frac{\partial}{\partial w} \log p(I | c; w) = \frac{\partial}{\partial w} f_c(I; w) - \mathrm{E}_{p(I' | c; w)}\left[\frac{\partial}{\partial w} f_c(I'; w)\right],   (5)

where the expectation for the generative gradient is

\mathrm{E}_{p(I' | c; w)}\left[\frac{\partial}{\partial w} f_c(I'; w)\right] = \int \frac{\partial}{\partial w} f_c(I'; w)\, p(I' | c; w)\, dI',   (6)
which can be approximated by importance sampling. Specifically, let $\{\tilde{I}_i, i = 1, \ldots, n\}$ be a set of samples from $q(I)$; for instance, $q$ is the distribution of images from all the categories. Here we do not attempt to model $q$ parametrically; instead, we treat it as an implicit nonparametric distribution. Then by importance sampling,

\mathrm{E}_{p(I' | c; w)}\left[\frac{\partial}{\partial w} f_c(I'; w)\right] \approx \sum_{i=1}^{n} W_{c,i}\, \frac{\partial}{\partial w} f_c(\tilde{I}_i; w),   (7)

where the importance weight $W_{c,i} \propto \exp\{f_c(\tilde{I}_i; w)\}$ and is normalized to have sum 1. Namely,

W_{c,i} = \frac{\exp\{f_c(\tilde{I}_i; w)\}}{\sum_{j=1}^{n} \exp\{f_c(\tilde{I}_j; w)\}}.   (8)
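The weights in (8) are simply a softmax of the class-$c$ scores over the sampled images. A small NumPy sketch, together with the standard effective-sample-size diagnostic $1/\sum_i W_{c,i}^2$ that quantifies how skewed the weights are (a helper we add for illustration, not part of the paper's code):

```python
import numpy as np

def generative_expectation_weights(scores_c):
    """Normalized importance weights W_{c,i} of equation (8).

    scores_c[i] holds f_c(I~_i; w) for samples I~_i drawn from the
    reference distribution q(I).
    """
    z = scores_c - scores_c.max()   # stabilize the exponentials
    w = np.exp(z)
    return w / w.sum()

def effective_sample_size(weights):
    """ESS = 1 / sum(W_i^2): close to n for flat weights,
    close to 1 when the weights are dominated by a few samples."""
    return 1.0 / np.sum(weights ** 2)
```

When all scores are equal the weights are uniform and the ESS equals the number of samples; as one score dominates, the ESS collapses toward 1.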
The discriminative gradient and the generative gradient differ subtly and yet fundamentally in calculating the expectation term, whose difference from the observed $\partial f_c(I; w)/\partial w$ provides the driving force for updating $w$. In the discriminative gradient, the expectation is with respect to the posterior distribution of the class label while the image is fixed, whereas in the generative gradient, the expectation is with respect to the distribution of the images while the class label is fixed. In general, it is easier to adjust the parameters to predict the class labels than to reproduce the features of the images. So it is expected that the generative gradient provides a stronger driving force for updating $w$.
The nonparametric generative gradient can be especially useful in the beginning stage of training, or what can be called pretraining, where $f_c(I; w)$ is small, so that the current $p(I | c; w)$ for each category is not very separated from $q(I)$, which is the overall distribution of all the training images. In this stage, the importance weights $W_{c,i}$ are not very skewed and the effective sample size for importance sampling can be large. So updating $w$ according to the generative gradient can provide useful pretraining with the potential to lead toward a good local optimum. If the importance weights start to become skewed and the effective sample size starts to dwindle, this indicates that the categories start to separate from $q$ as well as from each other, so we can switch to discriminative training to further separate the categories.

3.3 Batch training and generative loss layer
At first glance, the generative gradient appears computationally expensive due to the need to sample from $q$. In fact, with $q$ being the collection of images from all the categories, we may use each batch of samples as an approximation to $q$ in the batch training mode.

Specifically, let $\{(I_m, c_m), m = 1, \ldots, M\}$ be a batch of training examples, and suppose we seek to maximize $\sum_{m=1}^{M} \log p(I_m | c_m; w)$ via the generative gradient. In the calculation of (7), $\{I_m, m = 1, \ldots, M\}$ can be used as the samples from $q$. In this way, the computational cost of the generative gradient is about the same as that of the discriminative gradient.
Moreover, the computation of the generative gradient can be made to share the same back-propagation architecture as the discriminative gradient. Specifically, the calculation of the generative gradient can be decoupled into a calculation at a new generative loss layer and the calculation at the lower layers. To be more specific, by replacing the samples $\{\tilde{I}_i\}$ in (8) by the batch sample $\{I_m\}$, we can rewrite the batch generative gradient in the following form:

\frac{\partial}{\partial w} \sum_{m=1}^{M} \log p(I_m | c_m; w) \approx \sum_{m=1}^{M} \sum_{c} \frac{\partial G}{\partial f_c(I_m)}\, \frac{\partial}{\partial w} f_c(I_m; w),   (9)

where $G$ is called the generative loss layer (to be defined below, with $f_c(I_m)$ being treated here as a variable in the chain rule), while the calculation of $\partial f_c(I_m; w)/\partial w$ is exactly the same as that in the discriminative gradient. This decoupling brings simplicity to programming. We use the notation $\partial G/\partial f_c(I_m)$ for the top generative layer mainly to make it conform to the chain-rule calculation. According to (8), $\partial G/\partial f_c(I_m)$ is defined by

\frac{\partial G}{\partial f_c(I_m)} = \mathbf{1}(c_m = c) - n_c W_{c,m},   (10)

where $n_c = \#\{m : c_m = c\}$ is the number of examples of category $c$ in the batch, and $W_{c,m}$ is the importance weight in (8) computed with the batch sample.
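Under the batch approximation, the generative loss layer only needs the batch score matrix and the labels: for each class, the batch itself supplies the importance sample, and the observed terms are balanced against the importance-weighted expectation. The following NumPy sketch is our own reconstruction of this computation from equations (7)-(8); it is not the paper's released code:

```python
import numpy as np

def generative_loss_top_gradient(scores, labels):
    """Top-layer gradient of the batch generative log-likelihood.

    scores[i, c] = f_c(I_i; w) for the M batch images; labels[i] = c_i.
    Each column c carries importance weights W_{c,i} over the batch
    (a softmax of the class-c scores, as in equation (8) with the batch
    reused as samples from q). The returned entry is
    g[i, c] = 1{c_i = c} - n_c * W_{c,i},
    where n_c is the number of batch examples with label c.
    """
    M, C = scores.shape
    z = scores - scores.max(axis=0, keepdims=True)  # stabilize exp
    w = np.exp(z)
    w = w / w.sum(axis=0, keepdims=True)            # W_{c,i}; columns sum to 1
    n_c = np.bincount(labels, minlength=C)          # batch count per class
    g = -n_c[None, :] * w                           # importance-weighted expectation
    g[np.arange(M), labels] += 1.0                  # observed terms
    return g
```

A sanity check on this grouping of terms: each column of the returned gradient sums to zero, since the $n_c$ observed terms for class $c$ exactly balance $n_c$ copies of the importance-weighted expectation.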
3.4 Generative visualization
Figure 1. The visualization sequence of a category at HMC iterations 0, 10, 50, and 100.
Recently, researchers have become interested in understanding what the machine learns. Suppose we care about the node $f_c$ at the top layer (the idea can be applied to the nodes at any layer). We consider generating samples from $p(I | c; w)$ with $w$ already learned by discriminative training (or any other method). For this purpose, we need to assume a parametric reference distribution $q(I)$, such as the Gaussian white noise distribution $q(I) \propto \exp\{-\|I\|^2 / (2\sigma^2)\}$. After discriminatively learning $f_c(I; w)$ for all $c$, we can sample from the corresponding $p(I | c; w)$ by Hamiltonian Monte Carlo (HMC) (Neal, 2011).

Specifically, for any category $c$, we can write $p(I | c; w) \propto \exp\{-U(I)\}$, where $U(I) = \|I\|^2 / (2\sigma^2) - f_c(I; w)$ ($\sigma$ is the standard deviation of $q$). In the physics context, $I$ is a position vector and $U(I)$ is the potential energy function. To implement Hamiltonian dynamics, we need to introduce an auxiliary momentum vector $\phi$ and the corresponding kinetic energy function $K(\phi) = \|\phi\|^2 / (2m)$, where $m$ denotes the mass. Thus, a fictitious physical system described by the canonical coordinates $(I, \phi)$ is defined, and its total energy is $H(I, \phi) = U(I) + K(\phi)$. Each iteration of HMC draws a random sample of the momentum $\phi$ from its marginal Gaussian distribution, and then evolves $(I, \phi)$ according to the Hamiltonian dynamics that conserves the total energy.

A key step in the leapfrog algorithm is the computation of the derivative of the potential energy function $U(I)$, which includes calculating $\partial f_c(I; w) / \partial I$. This computation involves bottom-up convolution and max-pooling, followed by top-down deconvolution and arg-max unpooling. The max-pooling and arg-max unpooling are applied to the current synthesized image (not the input image, which is not needed by our method). The top-down derivative computation is derived from HMC, and is different from that of Zeiler & Fergus (2013). The visualization sequence of a category is shown in Fig. 1.

4 Experiments
4.1 Generative pretraining
In the generative pretraining experiments, three different training approaches are studied: i) discriminative gradient (DG); ii) generative gradient (GG); iii) generative gradient pretraining + discriminative gradient refining (GG+DG). We build our algorithms on the code of Caffe (Jia et al., 2014), and the experimental settings are identical to those of Jia et al. (2014). Experiments are performed on two commonly used image classification benchmarks: MNIST (LeCun et al., 1998) handwritten digit recognition and ImageNet ILSVRC-2012 (Deng et al., 2009) natural image classification.

MNIST handwritten digit recognition. We first study generative pretraining on the MNIST dataset. The "LeNet" network (LeCun et al., 1998) is utilized, which is the default for MNIST in Caffe. Although higher accuracy can be achieved by utilizing deeper networks, random image distortions, etc., here we stick to the baseline network for fair comparison and experimental efficiency. Network training and testing are performed on the train and test sets respectively. For all three training approaches, stochastic gradient descent is performed in training with a batch size of 64, a base learning rate of 0.01, a weight decay term of 0.0005, a momentum term of 0.9, and a max epoch number of 25. For GG+DG, the pretraining stage stops after 16 epochs and the discriminative gradient tuning stage starts with a base learning rate of 0.003.

The experimental results are presented in Table 1. The error rate of LeNet trained by the discriminative gradient is 1.03%. When trained by the generative gradient, the error rate reduces to 0.85%. When generative gradient pretraining and discriminative gradient refining are both applied, the error rate further reduces to 0.78%, which is 0.25% (24% relatively) lower than that of the discriminative gradient.
Table 1. Error rates (%) on the MNIST test set.

Training approaches  DG    GG    GG+DG
Error rates          1.03  0.85  0.78
ImageNet ILSVRC-2012 natural image classification. In the experiments on ImageNet ILSVRC-2012, two networks are utilized, namely "AlexNet" (Krizhevsky et al., 2012) and "ZeilerFergusNet" (fast) (Zeiler & Fergus, 2013). Network training and testing are performed on the train and val sets respectively. In training, a single network is trained by stochastic gradient descent with a batch size of 256, a base learning rate of 0.01, a weight decay term of 0.0005, a momentum term of 0.9, and a max epoch number of 70. For GG+DG, the pretraining stage stops after 45 epochs and the discriminative gradient tuning stage starts with a base learning rate of 0.003. In testing, top-1 classification error rates are reported on the val set by classifying the center and the four corner crops of the input images.
Table 2. Top-1 error rates (%) on the ImageNet ILSVRC-2012 val set.

Training approaches      DG    GG    GG+DG
AlexNet                  40.7  45.8  39.6
ZeilerFergusNet (fast)   38.4  44.3  37.4
As shown in Table 2, the error rates of discriminative gradient training applied on AlexNet and ZeilerFergusNet are 40.7% and 38.4% respectively, while the error rates of generative gradient are 45.8% and 44.3% respectively. Generative gradient pretraining followed by discriminative gradient refining achieves error rates of 39.6% and 37.4% respectively, which are 1.1% and 1.0% lower than those of discriminative gradient.
Experimental results on MNIST and ImageNet ILSVRC-2012 show that generative gradient pretraining followed by discriminative gradient refining improves the classification accuracy across different networks. At the beginning stage of training, updating the network parameters according to the generative gradient provides useful pretraining, which leads the network parameters toward a good local optimum.
As to the computational cost, the generative gradient is on par with the discriminative gradient. The computational cost of the generative loss layer itself is negligible compared to the computation at the convolutional layers and the fully-connected layers. The total epoch number of GG+DG is on par with that of DG.




4.2 Generative visualization
In the generative visualization experiments, we visualize the nodes of the LeNet network and the AlexNet network trained by discriminative gradient on MNIST and ImageNet ILSVRC2012 respectively. The algorithm can visualize networks trained by generative gradient as well.
We first visualize the nodes at the final fully-connected layer of LeNet. In the experiments, we delete the dropout layer to avoid unnecessary noise in visualization. At the beginning of visualization, the synthesized image $I$ is initialized from a Gaussian distribution with standard deviation 10. The HMC iteration number, the leapfrog step size, the leapfrog step number, the standard deviation $\sigma$ of the reference distribution $q$, and the particle mass $m$ are set to 300, 0.0001, 100, 10, and 0.0001 respectively. The visualization results are shown in Fig. 2.
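For reference, one HMC iteration of the kind used here (momentum resampling, leapfrog integration, and a Metropolis correction on the total energy $H = U + K$; Neal, 2011) can be sketched as follows. This is a generic sketch demonstrated on a toy quadratic potential, not the paper's implementation; for the visualization model, `u` and `grad_u` would wrap the CNN so that $U(I) = \|I\|^2/(2\sigma^2) - f_c(I; w)$, with the score gradient obtained by back-propagation:

```python
import numpy as np

def hmc_step(x, u, grad_u, step, n_leapfrog, mass, rng):
    """One HMC iteration targeting p(x) proportional to exp(-U(x))."""
    # Draw a fresh momentum from its marginal Gaussian distribution.
    phi = rng.normal(0.0, np.sqrt(mass), size=x.shape)
    x_new, phi_new = x.copy(), phi.copy()
    # Leapfrog integration of the Hamiltonian dynamics.
    phi_new = phi_new - 0.5 * step * grad_u(x_new)
    for _ in range(n_leapfrog - 1):
        x_new = x_new + step * phi_new / mass
        phi_new = phi_new - step * grad_u(x_new)
    x_new = x_new + step * phi_new / mass
    phi_new = phi_new - 0.5 * step * grad_u(x_new)
    # Metropolis correction on the total energy H = U + K.
    h_old = u(x) + 0.5 * np.sum(phi ** 2) / mass
    h_new = u(x_new) + 0.5 * np.sum(phi_new ** 2) / mass
    if rng.uniform() < np.exp(min(0.0, h_old - h_new)):
        return x_new
    return x
```

With a standard normal target ($U(x) = \|x\|^2/2$), repeated calls produce samples whose mean and variance approach 0 and 1.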
We further visualize the nodes in AlexNet, which is a much larger network compared to LeNet. Both nodes from the intermediate convolutional layers (conv1 to conv5) and the final fully-connected layer (fc8) are visualized. To visualize an intermediate layer, for instance the layer conv2 with 256 filters, all layers above conv2 are removed other than the generative visualization layer. The size of the synthesized images is designed to match the dimension of the response from conv2. We can visualize each filter by assigning labels from 1 to 256. The leapfrog step size, the leapfrog step number, the standard deviation $\sigma$ of the reference distribution $q$, and the particle mass $m$ are set to 0.000003, 50, 10, and 0.00001 respectively. The HMC iteration numbers are 100 and 500 for nodes from the intermediate convolutional layers and the final fully-connected layer respectively. The synthesized images for the final layer are initialized from the zero image.
The samples from the intermediate convolutional layers and the final fullyconnected layer of AlexNet are shown in Fig. 3 and 4 respectively. The HMC algorithm produces meaningful and varied samples, which reveals what is learned by the nodes at different layers of the network. Note that such samples are generated from the trained model directly, without using a large holdout collection of images as in Girshick et al. (2014); Zeiler & Fergus (2013); Long et al. (2014).
As to the computational cost, it varies for nodes at different layers within different networks. On a desktop with a GTX Titan, it takes about 0.4 minutes to draw a sample for nodes at the final fully-connected layer of LeNet. In AlexNet, for nodes at the first convolutional layer and at the final fully-connected layer, it takes about 0.5 minutes and 12 minutes respectively to draw a sample. The code can be downloaded at http://www.stat.ucla.edu/~yang.lu/Project/generativeCNN/main.html
5 Conclusion
Given the recent successes of CNNs, it is worthwhile to explore their generative aspects. In this work, we show that a simple generative model can be constructed based on the CNN. The generative model helps to pretrain the CNN. It also helps to visualize the knowledge of the learned CNN.
The proposed visualizing scheme can sample from the generative model, and it may be turned into a parametric generative learning algorithm, where the generative gradient can be approximated by samples generated by the current model.
Acknowledgement
The work is supported by NSF DMS 1310391, ONR MURI N000141010933, DARPA MSEE FA86501117149.
References
 Deng et al. (2009) Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
 Erhan et al. (2009) Erhan, Dumitru, Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Visualizing higher-layer features of a deep network. Dept. IRO, Université de Montréal, Tech. Rep, 2009.
 Erhan et al. (2010) Erhan, Dumitru, Bengio, Yoshua, Courville, Aaron, Manzagol, PierreAntoine, Vincent, Pascal, and Bengio, Samy. Why does unsupervised pretraining help deep learning? The Journal of Machine Learning Research, 11:625–660, 2010.
 Girshick et al. (2014) Girshick, Ross, Donahue, Jeff, Darrell, Trevor, and Malik, Jitendra. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pp. 580–587. IEEE, 2014.
 Hinton (2002) Hinton, Geoffrey. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
 Hinton et al. (2006a) Hinton, Geoffrey, Osindero, Simon, and Teh, Yee-Whye. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006a.
 Hinton et al. (2006b) Hinton, Geoffrey, Osindero, Simon, Welling, Max, and Teh, Yee-Whye. Unsupervised discovery of nonlinear structure using contrastive backpropagation. Cognitive Science, 30(4):725–731, 2006b.
 Jia et al. (2014) Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross, Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
 Jordan (2002) Jordan, A. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems, 14:841, 2002.
 Krizhevsky et al. (2012) Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
 Le et al. (2012) Le, Quoc V., Monga, Rajat, Devin, Matthieu, Chen, Kai, Corrado, Greg S., Dean, Jeff, and Ng, Andrew Y. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.
 LeCun et al. (1989) LeCun, Yann, Boser, Bernhard, Denker, John S, Henderson, Donnie, Howard, Richard E, Hubbard, Wayne, and Jackel, Lawrence D. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
 LeCun et al. (1998) LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Liang & Jordan (2008) Liang, Percy and Jordan, Michael I. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In Proceedings of the 25th International Conference on Machine Learning, pp. 584–591. ACM, 2008.
 Long et al. (2014) Long, Jonathan L, Zhang, Ning, and Darrell, Trevor. Do convnets learn correspondence? In Advances in Neural Information Processing Systems, pp. 1601–1609, 2014.
 Neal (2011) Neal, Radford M. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2, 2011.
 Rifai et al. (2011) Rifai, Salah, Vincent, Pascal, Muller, Xavier, Glorot, Xavier, and Bengio, Yoshua. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 833–840, 2011.
 Roth & Black (2009) Roth, Stefan and Black, Michael J. Fields of experts. International Journal of Computer Vision, 82(2):205–229, 2009.
 Salakhutdinov & Hinton (2009) Salakhutdinov, Ruslan and Hinton, Geoffrey E. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pp. 448–455, 2009.
 Simonyan et al. (2014) Simonyan, Karen, Vedaldi, Andrea, and Zisserman, Andrew. Deep inside convolutional networks: Visualising image classification models and saliency maps. Workshop at International Conference on Learning Representations, 2014.
 Zeiler & Fergus (2013) Zeiler, Matthew D and Fergus, Rob. Visualizing and understanding convolutional neural networks. arXiv preprint arXiv:1311.2901, 2013.
Supplementary Materials
A. Discriminative vs. generative log-likelihood and gradient for batch training
During training, on a batch of training examples $\{(I_m, c_m), m = 1, \ldots, M\}$, the generative log-likelihood is

L_{\text{gen}}(w) = \sum_{m=1}^{M} \log p(I_m | c_m; w) = \sum_{m=1}^{M} \left[f_{c_m}(I_m; w) - \log Z_{c_m}(w)\right] + \text{const}.

The gradient with respect to $w$, with the expectation approximated by importance sampling over the batch as in (7) and (8), is

\frac{\partial}{\partial w} L_{\text{gen}}(w) \approx \sum_{m=1}^{M} \left[\frac{\partial}{\partial w} f_{c_m}(I_m; w) - \sum_{i=1}^{M} W_{c_m,i}\, \frac{\partial}{\partial w} f_{c_m}(I_i; w)\right].

The discriminative log-likelihood is

L_{\text{dis}}(w) = \sum_{m=1}^{M} \log p(c_m | I_m; w) = \sum_{m=1}^{M} \left[f_{c_m}(I_m; w) - \log \sum_{c} \exp\{f_c(I_m; w)\}\right].

The gradient with respect to $w$ is

\frac{\partial}{\partial w} L_{\text{dis}}(w) = \sum_{m=1}^{M} \left[\frac{\partial}{\partial w} f_{c_m}(I_m; w) - \sum_{c} p(c | I_m; w)\, \frac{\partial}{\partial w} f_c(I_m; w)\right].

The two gradients are similar in form but different in the summation operations. In the discriminative gradient, the summation is over category $c$ while the image $I_m$ is fixed, whereas in the generative gradient, the summation is over example $i$ while the category $c_m$ is fixed.

In the generative gradient, we want $f_c$ to assign high scores to $I_m$ as well as to those observations that belong to category $c$, but to assign low scores to those observations that do not belong to $c$. This constraint is on the same $f_c$, regardless of what the other $f_{c'}$ do for $c' \neq c$.

In the discriminative gradient, we want the $f_c$ to work together for all the different categories, so that $f_{c_m}$ assigns a higher score to $I_m$ than the other $f_c$.

Apparently, the discriminative constraint is weaker because it involves all the $f_c$ jointly, and the generative constraint is stronger because it involves each single $f_c$. After generative learning, these $f_c$ are well behaved, and then we can continue to refine them (including the intercepts $b_c$ for different $c$) to satisfy the discriminative constraint.
B. More generative visualization examples
More generative visualization examples for the nodes at the final fullyconnected layer in the fully trained AlexNet model are shown in Fig. B1, Fig. B2 and Fig. B3.











