Generative Adversarial Networks (GANs) were first introduced in 2014 as a generative model that attempts to capture the underlying distribution of complex real-world data sets (Goodfellow et al., 2014). GANs have been applied to many real-world problems, including domain transfer, super resolution, and the generation of novel celebrity faces (Karras et al., 2018; Zhu et al., 2017; Ledig et al., 2017). The first GAN, implemented by Ian Goodfellow, used no more than 100,000 parameters (Goodfellow et al., 2014). In one of NVIDIA's most recent publications, the company's network used over 23 million parameters to generate realistic, high-resolution celebrity faces (Karras et al., 2018). Most recently, Google's BigGAN used over 158 million parameters to generate photo-realistic still-life imagery (Brock et al., 2018). We anticipate that the size of GANs will continue to increase as their applicability grows. If we want to make GANs practical for low-SWaP (size, weight, and power) hardware, such as mobile devices, and for applications with real-time requirements, then model compression becomes an important problem to solve.
There has been recent progress in the area of neural network compression (Liu et al., 2019; Ba & Caurana, 2013; Urban et al., 2017), and a variety of compression techniques have been applied to this problem (Belagiannis et al., 2018; Xu et al., 2017; Yim et al., 2017; Kim & Kim, 2017). The popular techniques can mostly be categorized into five schemes: quantization (Li et al., 2017), parameter pruning and sharing, low-rank factorization, compact convolutional filters, and knowledge distillation (Cheng et al., 2017). These techniques can reduce the number of parameters by 90-95% while retaining performance (Cheng et al., 2017). Some of them also bring corollary benefits: in the case of knowledge distillation, the compressed model may generalize better in addition to being significantly smaller (Hinton et al., 2015).
Despite the aforementioned work, to our knowledge, there are no published results on the compression of GANs. This motivates the work described in this paper, where we show the adaptations that must be made to the knowledge distillation paradigm to achieve effective compression of networks in the generative setting. These adaptations are motivated by the idea that large, over-parameterized networks have more favorable loss landscapes than smaller ones, and are thus able to learn better-quality mappings, regardless of whether an approximately equivalent mapping exists for a smaller network. We experimentally validate these methods on several datasets and via a number of objective measurements. Lastly, we discuss the limit of compression in the GAN setting and how it appears in empirical results.
2.1 Generative Adversarial Networks
The GAN was first proposed as a two-player min-max optimization problem between a discriminator D and a generator G, as in (1) (Goodfellow et al., 2014):

$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))] \quad (1)$

The generator is tasked with generating realistic examples that fool the discriminator, while the discriminator learns to differentiate between real and generated samples.
The optimization in (1) has a global minimum, and the system converges when $p_g = p_{\text{data}}$, at which point D cannot classify a sample as being generated from $p_g$ or drawn from $p_{\text{data}}$. Further, the optimal solution to (1) corresponds to minimizing the Jensen-Shannon (JS) divergence between the two distributions $p_g$ and $p_{\text{data}}$ (Goodfellow et al., 2014). However, training of GANs is often unstable because the JS divergence is not well defined when $p_g$ and $p_{\text{data}}$ do not have the same support (Arjovsky et al., 2017). To solve this problem, WGAN minimizes the Wasserstein distance between $p_g$ and $p_{\text{data}}$ in place of the JS divergence (Arjovsky et al., 2017), which is well defined even when $p_g$ and $p_{\text{data}}$ have disjoint support. Specifically, WGAN attempts to solve the optimization problem in (2), where f is a 1-Lipschitz function (Arjovsky et al., 2017):

$\min_G \max_{f \in \text{Lip}_1} \; \mathbb{E}_{x \sim p_{\text{data}}}[f(x)] - \mathbb{E}_{z \sim p_z}[f(G(z))] \quad (2)$
WGAN was used in place of regular GAN for most of our experiments due to its favorable characteristics. However, empirically, we noticed that WGAN does not work as well for the Celeb-A dataset, so we reverted to using regular GAN for all Celeb-A related experiments.
2.2 Knowledge Distillation
Knowledge distillation refers to the technique of transferring the knowledge learned by an ensemble of networks to a single network, or by a network with a large number of parameters to a network with relatively few parameters. We refer to the bigger network as the teacher network and the smaller network as the student network.
A student can learn to match any activation layer in the teacher network. Learning from the final-layer outputs, called hard targets, lends itself to shorter training time but an increased chance of over-fitting. The inputs to the softmax layer (logits) of the teacher network, referred to as soft targets, on the other hand, carry more descriptive information about the samples and give better generalization characteristics to the student network (Hinton et al., 2015), which makes training on soft targets more beneficial.
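In the classification setting, training on soft targets can be sketched as follows (a minimal NumPy illustration in the style of Hinton et al., 2015; the temperature `T` and the toy logits are our assumptions):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = logits / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_target_loss(teacher_logits, student_logits, T=4.0):
    # Cross-entropy of the student's soft predictions against the teacher's
    # soft targets, both computed at the same temperature.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.mean(np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1))

teacher = np.array([[6.0, 1.0, 0.5]])            # hypothetical confident teacher
matched = soft_target_loss(teacher, teacher)     # student mimics the teacher
mismatched = soft_target_loss(teacher, -teacher) # student disagrees
print(matched < mismatched)  # True: matching the teacher lowers the loss
```

The raised temperature spreads probability mass over the wrong classes, exposing the "dark knowledge" about class similarities that hard targets discard.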
2.3 Over-parameterization of Networks
An over-parameterized network is one whose number of hidden units is polynomially large relative to the number of training samples (Allen-Zhu et al., 2018b). It has been shown that training a significantly over-parameterized GAN yields dramatically better results than those generated from a smaller network (Brock et al., 2018). This may be explained by the finding that over-parameterization of neural networks creates loss functions with many good minima spread throughout the loss landscape, allowing for efficient training with alternating gradient descent (Allen-Zhu et al., 2018b; Allen-Zhu et al., 2018a). This theory is bolstered by recent empirical studies of loss functions using visualization methods (Li et al., 2018). It therefore appears necessary that a bigger network learn these mappings in an over-parameterized space before the model can be distilled to a simpler one. Likewise, there is empirical evidence that knowledge distillation, or model compression, is successful (Hinton et al., 2015; Buciluǎ et al., 2006; Yim et al., 2017), a success that may be attributed to the same phenomenon: although training a teacher network might require a large number of parameters, a reduced number of parameters is sufficient to describe the learned model with high fidelity.
The teacher (large, over-parameterized network) and student (small, few-parameter network) GANs used either the original DCGAN architecture or a slightly modified DCGAN architecture (Radford et al., 2015) more closely resembling the WGAN (Arjovsky et al., 2017), referenced as the W-DCGAN.
The number of parameters in our networks is controlled by a depth scale factor, referenced throughout the paper. The overall number of parameters increases approximately linearly with this factor.
3.1 Selection of Teacher Network
We first trained W-DCGANs of various sizes until convergence, and then selected the best performing model to be the teacher network. This ensures that the teacher network has converged approximately to an optimal solution. Because there currently does not exist an exact measure of visual quality, we use the Inception Score and the Frechet Inception Distance as proxies for performance. Figure 2 illustrates the Inception Score and Frechet Inception Distance (FID) performance, both discussed in Section 4, with respect to the depth scale factor.
3.2 Training of Student Networks
We train several student networks with smaller capacities than the teacher network using two training schemes. Results for all three datasets were produced using the MSE loss training scheme. Because of the complexity of the Celeb-A dataset, we additionally designed the joint loss training scheme to combat the blur artifacts observed under compression. Both training schemes are described below. For both, we monitored the convergence of the student network based on the generated outputs and the loss trajectory.
Mean Squared Error (MSE) loss. This method uses the MSE between the outputs of a pre-trained teacher W-DCGAN and the student as the student's training loss. A schematic of the training framework is illustrated in Figure 3. The MSE loss minimizes the pixel-level error between the images generated by the student and the teacher. Specifically, we train the student by solving the following optimization problem:

$\min_{G_S} \; \mathbb{E}_{z \sim p_z}\left[\lVert G_T(z) - G_S(z) \rVert_2^2\right]$

where $G_T$ and $G_S$ denote the teacher and student generators, respectively, and $p_z$ is the latent prior.
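A minimal sketch of this loss (NumPy; the toy arrays stand in for batches of generated images from the same latent vectors):

```python
import numpy as np

def mse_distill_loss(teacher_imgs, student_imgs):
    # Pixel-level mean squared error between teacher and student outputs
    # generated from the same batch of latent vectors.
    return float(np.mean((teacher_imgs - student_imgs) ** 2))

t = np.array([[0.0, 1.0], [0.5, 0.5]])    # hypothetical teacher outputs
s = np.array([[0.0, 0.5], [0.5, 0.75]])   # hypothetical student outputs
print(mse_distill_loss(t, s))  # 0.078125
```

Because every latent vector yields a fixed teacher target, this turns the student's training into ordinary supervised regression, with no adversarial game involved.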
Joint loss. The generated images tend to be slightly blurry when using the MSE loss, especially for the Celeb-A dataset. To combat the blurriness, we propose a joint loss function that supervises regular GAN training with the MSE loss. Specifically, the joint loss trains the student by solving an optimization problem of the form:

$\min_{G_S} \max_{D} \; \lambda \, \mathbb{E}_{z \sim p_z}\left[\lVert G_T(z) - G_S(z) \rVert_2^2\right] + (1 - \lambda)\left(\mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G_S(z)))]\right)$
The weighting parameter in the joint loss controls the balance between the MSE term and the regular GAN objective. A schematic of the training framework is illustrated in Figure 4.
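A sketch of the combination (NumPy; the symbol `lam` for the weighting parameter and the exact convex-combination form are our assumptions):

```python
import numpy as np

def gan_g_loss(d_fake):
    # Generator term of the regular GAN objective: push D(G_S(z)) toward 1.
    return -float(np.mean(np.log(d_fake + 1e-12)))

def joint_loss(teacher_imgs, student_imgs, d_fake, lam=0.5):
    # Weighted combination of pixel MSE (teacher supervision) and GAN training.
    mse = float(np.mean((teacher_imgs - student_imgs) ** 2))
    return lam * mse + (1.0 - lam) * gan_g_loss(d_fake)
```

With `lam=1.0` this reduces to the pure MSE scheme, and with `lam=0.0` to ordinary GAN training; intermediate values let the adversarial term sharpen the images that the MSE term would otherwise blur.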
In the case of classification networks, performance can be measured by classification accuracy. GANs, by contrast, have no explicit performance measure. The performance of GANs could be naively measured by human judgment of visual quality (Goodfellow et al., 2014). For example, one could collect visual-quality scores (1 to 10) from various subjects and average them. However, this method is expensive, and the scores can vary significantly with the design of the interface used to collect the data (Goodfellow et al., 2014). To evaluate the performance of GANs more systematically, the field has developed several quantitative metrics. Two popular metrics are the Inception Score and the Frechet Inception Distance (FID). Additionally, we used the Variance of Laplacian to evaluate the blurring artifacts inherent to compressing GANs trained on complex datasets.
4.1 Inception Score (IS)
There are two properties that we would like to see in images generated by a good GAN. First, we would like it to generate diverse images: the marginal class distribution p(y) should be relatively equal across different classes (Goodfellow et al., 2014). Second, given a generated image, we would like to be confident of the class to which the image belongs: for a generated image x, the posterior p(y|x) should be concentrated on a particular class (Goodfellow et al., 2014). To take both desired qualities into account, the Inception Score exponentiates the average KL divergence between p(y|x) and p(y).
If p(y) is similar across classes and p(y|x) is concentrated on a particular class, then the KL divergence between the two distributions will be high. Consequently, the Inception Score will be high.
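Given a matrix of classifier posteriors, the score can be computed as follows (a NumPy sketch of the standard definition, with toy inputs in place of Inception-network outputs):

```python
import numpy as np

def inception_score(p_yx):
    # p_yx: (N, C) matrix whose rows are class posteriors p(y|x).
    # IS = exp( mean over x of KL(p(y|x) || p(y)) ).
    p_y = p_yx.mean(axis=0)  # marginal class distribution p(y)
    kl = np.sum(p_yx * (np.log(p_yx + 1e-12) - np.log(p_y + 1e-12)), axis=1)
    return float(np.exp(kl.mean()))

# Diverse and confident: each sample concentrated on a different class.
print(inception_score(np.eye(3)))            # ~3.0 (upper bound = no. of classes)
# Uncertain (or class-uniform) posteriors: the minimum possible score.
print(inception_score(np.full((3, 3), 1/3))) # 1.0
```

The score is bounded between 1 and the number of classes, which is why a memorizing GAN that emits one confident image per class can still saturate it.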
The Inception Score makes a few assumptions. First, it assumes that the images can be classified, yielding p(y|x) and p(y), but not all images can be classified. For example, in our experiments with the Celeb-A dataset, we could not use the Inception Score because the dataset has no class labels. Second, the Inception Score is sensitive to the weights of the classification network: it assumes a network architecture designed for classification, and the score can change drastically with different networks. Further, the calculation of the Inception Score treats the soft-max layer of a neural network as a probability distribution, but the soft-max output is not necessarily a meaningful probability distribution despite summing to unity.
Finally, the Inception Score is not able to detect memorization of examples. For example, if a GAN remembers exactly one image from each class, the Inception Score will be very high, as p(y) will be uniform across classes and each p(y|x) will be very concentrated.
Despite these shortcomings, it is one of the most commonly used metrics for GAN evaluation, and it is found to correlate well with human judgment of image quality (Goodfellow et al., 2014). Therefore, we used the Inception Score to select the best MNIST teacher model and to evaluate the MNIST student models.
4.2 Frechet Inception Distance (FID)
To improve upon the Inception Score, the Frechet Inception Distance was introduced to identify GANs that simply memorize a few images from each class (Heusel et al., 2017). The Frechet Inception Distance assumes that when differing images are fed through the same network, their corresponding values from the same activation layer will have different distributions. If the activation distributions of the generated images and the real images differ greatly, then it is likely that the generated images look significantly different from the real images, and vice versa. Formally, the Frechet Inception Distance measures the difference between the activation distributions, each modeled as a Gaussian, with the Frechet distance (Heusel et al., 2017):

$\text{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \text{Tr}\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right)$

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the means and covariances of the activations for real and generated images, respectively.
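The distance can be computed directly from the two Gaussians' statistics; below is a NumPy-only sketch using an eigendecomposition-based matrix square root (toy statistics, not actual Inception activations):

```python
import numpy as np

def sqrtm_psd(a):
    # Square root of a symmetric positive semi-definite matrix via eigh.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(mu_r, sig_r, mu_g, sig_g):
    # Frechet distance between N(mu_r, sig_r) and N(mu_g, sig_g).
    s = sqrtm_psd(sig_g)
    # (s @ sig_r @ s)^{1/2} has the same trace as (sig_r @ sig_g)^{1/2}.
    covmean = sqrtm_psd(s @ sig_r @ s)
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(sig_r + sig_g) - 2.0 * np.trace(covmean))

# Identical distributions have distance 0; shifting the mean by 1 adds 1.
mu, sig = np.zeros(2), np.eye(2)
print(round(fid(mu, sig, mu, sig), 6))                          # 0.0
print(round(fid(mu, sig, mu + np.array([1.0, 0.0]), sig), 6))   # 1.0
```

Production implementations typically use `scipy.linalg.sqrtm` instead; the symmetrized form above avoids the non-symmetric product $\Sigma_r \Sigma_g$.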
Empirically, it has been shown that the Frechet Inception Distance almost always increases monotonically as the distortion applied to real images increases (making them look less like real images), regardless of the type of distortion (Heusel et al., 2017). Additionally, it is robust in detecting mode collapse in GANs (Lucic et al., 2018).
Though Frechet Inception Distance still suffers from similar downsides as Inception Score, its ability to identify mode collapse makes it more robust compared to Inception Score. It is used in our experiments to select the best teacher GAN and to evaluate the performance of the student GANs for CIFAR-10 and Celeb-A datasets.
4.3 Variance of Laplacian (VoL)
The variance of the Laplacian of an image gives a measure of the sharpness of the image (Pech-Pacheco et al., 2000). The Laplacian filter, when applied to an image, gives the second-order derivative of the discrete image function. It thus highlights the regions of an image containing rapid intensity changes, i.e., edges. A sharper image has more well-defined edges than a blurred image, and the Variance of Laplacian (VoL) metric quantifies this: a higher VoL corresponds to a sharper image. We use this metric to compare the outputs of student generators trained using the MSE loss versus those trained using the joint loss.
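The metric can be computed directly as follows (a NumPy sketch with an explicit 3x3 convolution over the valid region; the kernel is the standard 4-neighbor Laplacian):

```python
import numpy as np

LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def variance_of_laplacian(img):
    # Apply the Laplacian filter over the valid region and return the variance
    # of the response; flat images score 0, images with strong edges score high.
    h, w = img.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * LAPLACIAN)
    return float(out.var())

flat = np.full((8, 8), 0.5)
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0   # image with one sharp vertical edge
print(variance_of_laplacian(flat) < variance_of_laplacian(edge))  # True
```

Library implementations (e.g. an OpenCV Laplacian followed by `.var()`) do the same thing with optimized convolutions.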
5.1 MSE Loss Training
Quantitative Results. We compare the performance of the compressed GANs with the teacher WGAN and the regular WGANs of the same sizes. Ideally, the compressed GAN will perform close to the teacher WGAN and better than the regular WGANs of the same sizes. Again, because there currently does not exist an exact measure of visual quality, we use Inception Score and Frechet Inception Distance as proxies for performance.
From Figure 1 and Table 1, we can see that the student GANs consistently outperform the regular GANs across all compression levels on the MNIST dataset. The student GANs also perform comparably to the teacher GAN, which has significantly larger capacity. In the most extreme case for MNIST, we were able to compress the student model by 1,669 times while retaining 83% of the teacher's Inception Score.
Similarly, we see that the student GANs consistently outperform the regular GANs of the same sizes on the CIFAR-10 and Celeb-A datasets. However, because CIFAR-10 and Celeb-A contain images with more complex features than MNIST, we were unable to achieve the same magnitude of compression.
We found that the compression strategy is extremely robust to hyperparameter tuning, datasets, and evaluation metrics. The compressed student models consistently outperform W-DCGANs of similar size across all variations of our setup.
|GAN size (depth scale factor)| | | | | | | | |
|No. of parameters|28,351|62,077|145,657|377,329|1,098,721|2,164,177|3,573,697|12,652,417|
Qualitative Results. Because of the deficiencies of the Inception Score and Frechet Inception Distance, it is important to review the results qualitatively. In Figures 6, 7, and 8, we show a direct comparison between the teacher, the student, and a regular GAN of comparable size to the student. A visual review shows that the student is able to approximate the teacher without compromising the visual integrity of the image, despite the high compression ratio. One drawback, however, is the presence of blur in the outputs of the student generator. This blur becomes more prominent as the compression ratio increases: the student generator captures the basic structure of the image but fails to add details. This is unlike the control generator of the same size, which generates sharp images but fails to capture the basic image structure.
The student GAN thus outperforms the control GAN of the same size. This demonstrates the advantage of compressing an over-parameterized GAN over training a small generator directly with the adversarial framework. Figure 5 shows the outputs of the student and the teacher when interpolating between two input vectors. The student generates images comparable to the teacher's at each interpolation step, demonstrating that the student is learning to approximate the teacher's generation function rather than memorizing specific trained outputs.
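The interpolation experiment above can be sketched as follows (NumPy; linear interpolation between latent vectors with endpoints included is assumed here, though some works prefer spherical interpolation):

```python
import numpy as np

def interpolate_latents(z1, z2, steps=8):
    # Rows are evenly spaced points from z1 to z2; feeding each row to both
    # the teacher and the student generators lets us compare their outputs
    # along the whole path through latent space.
    ts = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - ts) * z1 + ts * z2

z1, z2 = np.zeros(4), np.ones(4)
path = interpolate_latents(z1, z2, steps=5)
print(path.shape)   # (5, 4)
print(path[2])      # midpoint: [0.5 0.5 0.5 0.5]
```

A student that merely memorized outputs for specific latents would fail on the intermediate rows, which is what makes this a test of functional approximation.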
5.2 Joint Loss Training
In Figure 9, student GANs trained with the joint loss achieve slightly better FID scores than networks trained with the MSE loss alone. In Figure 10, the joint-loss GANs perform significantly better on the VoL metric, meaning that they produce much sharper images than the MSE-loss GANs. This can also be observed visually in the comparison of generated images in Figure 11.
6.1 Limit to Compression
Through visual examination, degradation of the generated images appears at lower compression ratios for more complex datasets, assuming that complexity grows in the order MNIST, CIFAR-10, Celeb-A. The bottom row of Figure 1 shows the outputs at different compression ratios. The compressed MNIST GANs show minimal compression artifacts across all compression levels, whereas the compressed CIFAR-10 GANs suffer significant degradation beyond a moderate compression ratio, and the compressed Celeb-A GANs degrade at an even lower compression ratio. The compression ratios referenced in the abstract are based on the smallest compressed GANs without significant observable degradation in the generated images. This observation suggests a potential limit to compression that depends on the complexity of the dataset.
6.2 Our Contributions
Our work contributes to the topic of GAN compression. To summarize, we have made the following contributions in this paper:
We have evaluated the proposed compression methods over MNIST, CIFAR-10, and Celeb-A datasets. Our results show that the quality of generated imagery is maintained at high compression rates (1669:1, 58:1, 87:1 respectively) as measured by the Inception Score and Frechet Inception Distance metrics.
We show that training a GAN of the same size without knowledge distillation produces comparatively diminished results, supporting the conjecture that over-parameterization is both helpful and necessary for neural networks to learn a good generative function.
We observe a qualitative limit to GAN compression for all the aforementioned datasets. We conjecture that there exists a fundamental compression limit for GANs, analogous to Shannon's theory of compression (MacKay, 2002).
Overall, we have demonstrated that applying the knowledge distillation method to GAN training can produce compressed generators without loss of quality or generalization. More specifically, we demonstrated that the student generators are able to outperform a traditionally trained GAN of the same size and approximate the underlying function of the teacher generator for the whole latent space. This further supports the necessity for over-parameterization when training an effective generator prior to distillation. Further, a qualitative limit to GAN compression has been observed for MNIST, CIFAR-10 and Celeb-A datasets.
- Allen-Zhu et al. (2018a) Allen-Zhu, Z., Li, Y., and Liang, Y. Learning and generalization in overparameterized neural networks, going beyond two layers. CoRR, abs/1811.04918, 2018a. URL http://arxiv.org/abs/1811.04918.
- Allen-Zhu et al. (2018b) Allen-Zhu, Z., Li, Y., and Song, Z. A convergence theory for deep learning via over-parameterization. CoRR, abs/1811.03962, 2018b. URL http://arxiv.org/abs/1811.03962.
- Arjovsky et al. (2017) Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
- Ba & Caurana (2013) Ba, L. J. and Caurana, R. Do deep nets really need to be deep? CoRR, abs/1312.6184, 2013. URL http://arxiv.org/abs/1312.6184.
- Belagiannis et al. (2018) Belagiannis, V., Farshad, A., and Galasso, F. Adversarial network compression. CoRR, abs/1803.10750, 2018. URL http://arxiv.org/abs/1803.10750.
- Brock et al. (2018) Brock, A., Donahue, J., and Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. CoRR, abs/1809.11096, 2018. URL http://arxiv.org/abs/1809.11096.
- Buciluǎ et al. (2006) Buciluǎ, C., Caruana, R., and Niculescu-Mizil, A. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 535–541. ACM, 2006.
- Cheng et al. (2017) Cheng, Y., Wang, D., Zhou, P., and Zhang, T. A survey of model compression and acceleration for deep neural networks. arXiv preprint arXiv:1710.09282, 2017.
- Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.
- Heusel et al. (2017) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637, 2017.
- Hinton et al. (2015) Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
- Karras et al. (2018) Karras, T., Aila, T., Laine, S., and Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hk99zCeAb.
- Kim & Kim (2017) Kim, S. W. and Kim, H.-E. Transferring knowledge to smaller network with class-distance loss. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=ByXrfaGFe.
- Ledig et al. (2017) Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A. P., Tejani, A., Totz, J., Wang, Z., et al. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, volume 2, pp. 4, 2017.
- Li et al. (2017) Li, H., De, S., Xu, Z., Studer, C., Samet, H., and Goldstein, T. Training quantized nets: A deeper understanding. CoRR, abs/1706.02379, 2017. URL http://arxiv.org/abs/1706.02379.
- Li et al. (2018) Li, H., Xu, Z., Taylor, G., Studer, C., and Goldstein, T. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, pp. 6391–6401, 2018.
- Liu et al. (2019) Liu, R., Fusi, N., and Mackey, L. Model compression with generative adversarial networks, 2019. URL https://openreview.net/forum?id=Bz4n09tQ.
- Lucic et al. (2018) Lucic, M., Kurach, K., Michalski, M., Gelly, S., and Bousquet, O. Are gans created equal? a large-scale study. In Advances in neural information processing systems, pp. 697–706, 2018.
- MacKay (2002) MacKay, D. J. C. Information Theory, Inference & Learning Algorithms. Cambridge University Press, New York, NY, USA, 2002. ISBN 0521642981.
- Pech-Pacheco et al. (2000) Pech-Pacheco, J., Cristobal, G., Chamorro-Martinez, J., and Fernandez-Valdivia, J. Diatom autofocusing in brightfield microscopy: a comparative study. In Proceedings 15th International Conference on Pattern Recognition, 2000. URL https://ieeexplore.ieee.org/document/903548.
- Radford et al. (2015) Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
- Urban et al. (2017) Urban, G., Geras, K. J., Kahou, S. E., Aslan, O., Wang, S., Mohamed, A., Philipose, M., Richardson, M., and Caruana, R. Do deep convolutional nets really need to be deep and convolutional? CoRR, abs/1603.05691, 2017. URL https://arxiv.org/abs/1603.05691.
- Xu et al. (2017) Xu, Z., Hsu, Y., and Huang, J. Learning loss for knowledge distillation with conditional adversarial networks. CoRR, abs/1709.00513, 2017. URL http://arxiv.org/abs/1709.00513.
- Yim et al. (2017) Yim, J., Joo, D., Bae, J., and Kim, J. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7130–7138, July 2017. doi: 10.1109/CVPR.2017.754.
- Zhu et al. (2017) Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017.