Distilling portable Generative Adversarial Networks for Image Translation

Although Generative Adversarial Networks (GANs) have been widely used in various image-to-image translation tasks, they can hardly be applied on mobile devices due to their heavy computation and storage cost. Traditional network compression methods focus on visual recognition tasks and rarely deal with generation tasks. Inspired by knowledge distillation, a student generator with fewer parameters is trained by inheriting the low-level and high-level information from the original heavy teacher generator. To promote the capability of the student generator, we include a student discriminator to measure the distances between real images and the images generated by the student and teacher generators. An adversarial learning process is therefore established to optimize the student generator and the student discriminator. Qualitative and quantitative experiments on benchmark datasets demonstrate that the proposed method can learn portable generative models with strong performance.






Generative Adversarial Networks (GANs) have been successfully applied to a number of image-to-image translation tasks such as image synthesis [19], domain translation [34, 17, 7, 16, 21], image denoising [3] and image super-resolution [20]. The success of generative networks relies not only on the careful design of adversarial strategies but also on the growth of the computational capacities of neural networks. Executing most of the widely used GANs requires enormous computational resources, which restricts them to PCs with modern GPUs. For example, CycleGAN [34] uses a heavy model that needs about 47.19G FLOPs for high-fidelity image synthesis. However, many appealing applications of GANs such as style transfer [22] and image enhancement [5] are urgently required on portable devices, e.g. mobile phones and cameras. Considering the limited storage and CPU performance of mainstream mobile devices, it is essential to compress and accelerate generative networks.

Tremendous efforts have been made recently to compress and speed up heavy deep models. For example, [11] utilized a vector quantization approach to represent similar weights as cluster centers. [29] introduced versatile filters to replace conventional filters and achieve a high speed-up ratio. [10] exploited low-rank decomposition to process the weight matrices of fully-connected layers. [4] proposed a hashing-based method to encode parameters in CNNs. [32] proposed packing neural networks in the frequency domain. [13] employed pruning, quantization and Huffman coding to obtain a compact deep CNN with lower computational complexity. [30] introduced circulant matrices to learn compact feature maps of CNNs. [9, 24] explored neural networks with binary weights, which drastically reduce memory usage. Although these approaches can provide very high compression and speed-up ratios with slight degradation in performance, most of them are devoted to processing neural networks for image classification and object detection tasks.

Existing neural network compression methods cannot be straightforwardly applied to compress GAN models, for the following major reasons. First, compared with classification models, it is more challenging to identify redundant weights in generative networks, as the generator requires a large number of parameters to establish a high-dimensional mapping of extremely complex structure (e.g. image-to-image translation [34]). Second, different from visual recognition and detection tasks, which usually have ground-truth (e.g. labels and bounding boxes) for the training data, a GAN is a generative model that usually has no specific ground-truth for evaluating the output images, e.g. in super-resolution and style transfer. Thus, conventional methods cannot easily excavate redundant weights or filters in GANs. Finally, GANs have a more complex framework that consists of a generator and a discriminator, and the two networks are trained simultaneously following a minimax two-player game, which is fundamentally different from the training procedure of ordinary deep neural networks for classification. To this end, it is necessary to develop a specific framework for compressing and accelerating GANs. [1] proposed to minimize the MSE loss between teacher and student to compress GANs, but dealt only with the noise-to-image task, whereas most uses of GANs on mobile devices are based on image-to-image translation. Moreover, they do not distill knowledge into the discriminator, which plays an important part in GAN training.

In this paper, we propose a novel framework for learning portable generative networks by utilizing the knowledge distillation scheme. In practice, the teacher generator is utilized to minimize the pixel-wise and perceptual differences between images generated by the student and teacher networks. The discriminator in the student GAN is then optimized by learning the relationships between true samples and samples generated by the teacher and student networks. By following a minimax optimization, the student GAN can fully inherit knowledge from the teacher GAN. Extensive experiments conducted on several benchmark datasets and generative models demonstrate that generators learned by the proposed method achieve comparable performance with significantly lower memory usage and computational cost than the original heavy networks.

Figure 1: The diagram of the proposed framework for learning an efficient generative network by distilling knowledge from the original heavy network. Images generated by the student generator are compared with those generated by the teacher generator through several metrics to fully inherit useful information from the teacher GAN.


To illustrate the proposed method, here we focus on the image-to-image translation problem and take pix2pix [17] as an example framework. Note that the proposed algorithm does not rely on any component specific to image translation and can therefore be easily embedded into any generative adversarial network.

In practice, the image translation problem aims to convert an input image in the source domain to an output image in the target domain (e.g. a semantic label map to an RGB image). The goal of pix2pix is to learn mapping functions between domains $X$ and $Y$. Denoting the training samples in $X$ as $x$ and the corresponding samples in $Y$ as $y$, the generator $G$ is optimized to map $x$ to $y$ (i.e. $G(x) \approx y$) such that the generated images cannot be distinguished by the discriminator $D$. The discriminator is trained to detect the fake images generated by $G$. The objective of the GAN can be expressed as:

$$\mathcal{L}_{GAN}(G, D) = \mathbb{E}_{y}[\log D(y)] + \mathbb{E}_{x}[\log(1 - D(G(x)))]. \quad (1)$$

Besides fooling the discriminator, the generator should produce images that are close to the ground-truth output. Therefore, the MSE loss is introduced for $G$:

$$\mathcal{L}_{MSE}(G) = \mathbb{E}_{x,y}\big[\| y - G(x) \|_2^2\big]. \quad (2)$$

The entire objective of pix2pix is

$$G^* = \arg\min_G \max_D \mathcal{L}_{GAN}(G, D) + \lambda \mathcal{L}_{MSE}(G), \quad (3)$$

where $\lambda$ is a trade-off hyper-parameter.

To optimize the generator and discriminator in an adversarial manner, the training of the GAN follows a two-player minimax game. We alternate between optimizing $D$ with $G$ fixed and optimizing $G$ with $D$ fixed. With the help of the discriminator and the loss in Fcn. (3), the generator can translate images from the source domain to the target domain.
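The terms of this objective can be made concrete with a small numpy sketch. The generator `G`, discriminator `D`, and sample images below are hypothetical toy stand-ins (not the paper's networks); the sketch only evaluates the adversarial term of Fcn. (1), the MSE term of Fcn. (2), and their combination as in Fcn. (3) on a single sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: any image->image and image->probability callables.
def G(x):                       # "generator": identity plus a small shift
    return x + 0.1

def D(img):                     # "discriminator": squashes mean intensity to (0, 1)
    return 1.0 / (1.0 + np.exp(-img.mean()))

x = rng.random((8, 8))          # source-domain image
y = x + 0.05                    # corresponding target-domain image

# Adversarial term of Fcn. (1): log D(y) + log(1 - D(G(x))) on one sample
l_gan = np.log(D(y)) + np.log(1.0 - D(G(x)))

# Reconstruction term of Fcn. (2): mean squared error between G(x) and y
l_mse = np.mean((G(x) - y) ** 2)

# Entire pix2pix objective value (Fcn. (3)); lambda = 1 as used in the experiments
lam = 1.0
total = l_gan + lam * l_mse
```

During training, $D$ would ascend this objective while $G$ descends it; here we only evaluate the scalar value once.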

Although GANs have already achieved satisfactory performance on domain translation tasks, the generators are designed with a large number of parameters to generate images with high-dimensional semantic information, which prevents the application of these networks on edge devices. Therefore, an effective method to learn portable GANs is urgently required.

However, a GAN, consisting of a generator and a discriminator, has a completely different architecture and training procedure from a vanilla CNN. It is therefore difficult to directly adopt existing model compression algorithms, which were developed for image recognition tasks, to handle heavy GAN models. Moreover, the aim of GANs is to generate images with complex structures instead of classification or detection results. Thus, we are motivated to develop a novel framework for compressing generative models.

There are a variety of schemes for network compression such as pruning and quantization. However, these methods need special hardware or software support to achieve satisfactory compression ratios and speed improvements, and so cannot be directly deployed on mobile devices. Besides eliminating redundancy in pre-trained deep models, knowledge distillation presents an alternative approach that learns a portable student network with comparable performance and fewer parameters by inheriting knowledge from the teacher network [15, 25, 33, 31, 14], i.e. the pre-trained heavy network. Therefore, we introduce the teacher-student learning paradigm (i.e. knowledge distillation) to learn portable GANs with fewer parameters and FLOPs.

However, the existing teacher-student learning paradigm applies only to classification tasks and needs to be redesigned for generative models, which have no ground truth. Denoting the pre-trained teacher generator as $G_T$ and the portable student generator as $G_S$, a straightforward way to adopt knowledge distillation for the student generator, proposed in [1], can be formulated as:

$$\mathcal{L}_{van}(G_S) = \mathbb{E}_{x}\big[\| G_S(x) - G_T(x) \|_2^2\big], \quad (4)$$

where $\|\cdot\|_2$ is the conventional $\ell_2$-norm. By minimizing Fcn. (4), images produced by the student generator become similar to those of the teacher generator in a pixel-wise manner. However, this vanilla approach, which minimizes the Euclidean distance between the images synthesized by the teacher and the student, tends to produce blurry results [17], because the Euclidean distance is minimized by averaging all plausible outputs. Moreover, a GAN consists of a generator and a discriminator; considering only the generator is not enough. Therefore, it is necessary to advance knowledge distillation to learn efficient generators.
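The vanilla distillation objective of Fcn. (4) can be sketched in a few lines of numpy. `teacher_G` and `student_G` are hypothetical stand-in functions for the two generators; note that no ground-truth image $y$ is needed, only the teacher's output.

```python
import numpy as np

rng = np.random.default_rng(1)

def teacher_G(x):               # hypothetical heavy teacher generator
    return 2.0 * x

def student_G(x):               # hypothetical portable student generator
    return 1.8 * x

x = rng.random((4, 4))          # source-domain image

# Vanilla distillation (Fcn. (4)): pixel-wise Euclidean distance between
# the student's and the teacher's outputs on the same input.
l_vanilla = np.mean((student_G(x) - teacher_G(x)) ** 2)
```

Minimizing this drives `student_G` toward the teacher's per-pixel average output, which is exactly why it alone produces blurry images.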

Knowledge Distillation for GANs

In this section, we propose a novel algorithm for obtaining portable GANs through the teacher-student paradigm. To transfer useful information from the teacher GAN to the student GAN, we introduce loss functions that excavate the relationships between samples and features in the generators and discriminators.

Distilling Generator

As mentioned above, the straightforward way of utilizing the knowledge of the teacher generator is to minimize the Euclidean distance between the images generated by the teacher and student generators (i.e. Fcn. (4)). However, the solutions of MSE optimization problems often lose high-frequency content, which results in images with over-smooth textures. Instead of optimizing this pixel-wise objective, [18] defined a perceptual loss function based on the 19th activation layer of the pretrained VGG network [28]. Motivated by this distance measure, we ask the teacher discriminator to help the student generator produce high-level features similar to those of the teacher generator. Compared with the VGG network, which is trained for image classification, the discriminator is more relevant to the task of the generator. Therefore, we extract features of the images generated by the teacher and student generators using the teacher discriminator and introduce an objective function guided by the teacher discriminator for training $G_S$:

$$\mathcal{L}_{perc}(G_S) = \mathbb{E}_{x}\big[\| D_T^f(G_S(x)) - D_T^f(G_T(x)) \|_2^2\big], \quad (5)$$
where $D_T^f$ denotes the first several layers of the teacher discriminator. Since $D_T$ has been well trained to discriminate true and fake samples, it can capture the manifold of the target domain. The above function acts more like a "soft target" in knowledge distillation than directly matching the generated images of the teacher and student generators, and is therefore more flexible for transferring knowledge of the teacher generator. In order to learn not only low-level but also high-level information from the teacher generator, we merge the two loss functions above. The knowledge distillation function of the proposed method for $G_S$ is therefore

$$\mathcal{L}_{KD}^{G}(G_S) = \mathcal{L}_{van}(G_S) + \gamma \mathcal{L}_{perc}(G_S), \quad (6)$$

where $\gamma$ is a trade-off parameter balancing the two terms of the objective.
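The combined generator distillation loss can be sketched as follows. All three networks here are hypothetical toy stand-ins; in particular, `teacher_D_features` is a fixed linear map standing in for the first several layers of the teacher discriminator, and `gamma` is an assumed trade-off value.

```python
import numpy as np

rng = np.random.default_rng(2)

def teacher_G(x):               # hypothetical teacher generator
    return 2.0 * x

def student_G(x):               # hypothetical student generator
    return 1.9 * x

def teacher_D_features(img):
    # Stand-in for the first layers of the teacher discriminator:
    # a fixed linear projection of the flattened image to 3 features.
    W = np.ones((3, img.size)) / img.size
    return W @ img.ravel()

x = rng.random((4, 4))

# Low-level term (Fcn. (4)): pixel-wise distance between generated images.
l_pixel = np.mean((student_G(x) - teacher_G(x)) ** 2)

# High-level term (Fcn. (5)): distance in the teacher discriminator's
# feature space -- the "soft target" described above.
f_s = teacher_D_features(student_G(x))
f_t = teacher_D_features(teacher_G(x))
l_feat = np.mean((f_s - f_t) ** 2)

# Combined generator distillation loss (Fcn. (6)).
gamma = 0.5
l_kd_G = l_pixel + gamma * l_feat
```

Because the feature extractor is shared between both branches, gradients with respect to the student only flow through `student_G`, as intended.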

Input: A given teacher GAN consisting of a generator $G_T$ and a discriminator $D_T$, the training sets $X$ (source domain) and $Y$ (target domain), and the hyper-parameters for knowledge distillation.
1:  Initialize the student generator $G_S$ and the student discriminator $D_S$, where the number of parameters in $G_S$ is significantly smaller than that in $G_T$;
2:  repeat
3:     Randomly select a batch of paired samples $x$ from $X$ and $y$ from $Y$;
4:     Employ $G_T$ and $G_S$ on the mini-batch to obtain $G_T(x)$ and $G_S(x)$;
5:     Employ $D_T$ and $D_S$ to compute features and outputs of the generated images;
6:     Calculate the generator losses in Fcn. (4) and Fcn. (5);
7:     Update the weights in $G_S$ using back-propagation;
8:     Calculate the discriminator losses in Fcn. (7) and Fcn. (8);
9:     Update the weights in $D_S$ according to the gradient;
10:  until convergence
Output: The portable generative model $G_S$.
Algorithm 1: Portable GAN learning via distillation.

Distilling Discriminator

Besides the generator, the discriminator also plays an important role in GAN training, so it is necessary to distill the student discriminator to assist the training of the student generator. Different from vanilla knowledge distillation algorithms, which directly match the outputs of the teacher and student networks, we introduce an adversarial teacher-student learning paradigm in which the student discriminator is trained under the supervision of the teacher network. Given a well-trained GAN model, images generated by the teacher generator are nearly indistinguishable from genuine ones and can be seen as an expansion of the target domain $Y$. Moreover, the capability of the teacher network certainly exceeds that of the student network. Therefore, images from the teacher generator can be regarded as real samples for the student discriminator, and the loss function for $D_S$ can be defined as:

$$\mathcal{L}_{KD}^{D}(D_S) = \mathbb{E}_{x}\big[\log D_S(G_T(x))\big]. \quad (7)$$
In the training of traditional GANs, the discriminator aims to classify the real images as true samples and the fake images as false samples, while the goal of the generator is to generate images that the discriminator classifies as true (i.e. to generate realistic images). By considering images from the teacher generator as real samples, Fcn. (7) encourages the student generator to imitate both real images and the images generated by the teacher network, which makes the training of $G_S$ much easier with abundant data.
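As a minimal sketch of Fcn. (7), the term below is the negative log-likelihood that the student discriminator assigns the "real" label to teacher-generated images, written as a loss to minimize. `student_D` and the teacher outputs are hypothetical stand-ins.

```python
import numpy as np

def student_D(img):             # hypothetical student discriminator
    return 1.0 / (1.0 + np.exp(-img.mean()))

rng = np.random.default_rng(3)
# Stand-ins for G_T(x): teacher-generated images on a mini-batch.
teacher_images = [rng.random((4, 4)) + 0.5 for _ in range(3)]

# Fcn. (7), negated so it is minimized: the teacher's outputs are treated
# as *real* samples, i.e. D_S is pushed to output 1 on them.
l_kd_D = -np.mean([np.log(student_D(img)) for img in teacher_images])
```

This is exactly the real-sample half of the standard GAN discriminator loss, only applied to teacher outputs instead of dataset images.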

As mentioned above, we regard the true images and the images generated by the teacher generator as the same class (i.e. true labels) for $D_S$. The distance between true images and images generated by the teacher generator should then be smaller than that between true images and images generated by the student generator, and it is natural to use a triplet loss to enforce this constraint. The triplet loss, proposed by [2], optimizes the embedding space such that samples with the same identity are closer to each other than those with different identities. It has been widely used in various fields of computer vision such as face recognition [27] and person re-identification [6]. Therefore, we propose the triplet loss for $D_S$:

$$\mathcal{L}_{tri}(D_S) = \mathbb{E}_{x,y}\Big[\max\big(\| f_S(y) - f_S(G_T(x)) \|_2^2 - \| f_S(y) - f_S(G_S(x)) \|_2^2 + m,\; 0\big)\Big], \quad (8)$$

where $m$ is the triplet margin deciding the distance between different classes, and $f_S$ is obtained by removing the last layer of the discriminator $D_S$. The advantage of this formulation is that the discriminator can construct a more specific manifold for the true samples than with the traditional GAN loss, and the generator then achieves higher performance with the help of the stronger discriminator.
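The triplet computation of Fcn. (8) can be sketched on toy data. Here `features` is a hypothetical stand-in for $f_S$ (the student discriminator without its last layer), the anchor is a true image, the positive is a teacher-generated image assumed close to it, and the negative is a student-generated image assumed farther away.

```python
import numpy as np

def features(img):
    # Stand-in for f_S(.): a 2-d embedding of the image.
    return np.array([img.mean(), img.std()])

rng = np.random.default_rng(4)
real = rng.random((4, 4))            # anchor: a true target-domain image
pos = real + 0.01                    # positive: teacher-generated image (close)
neg = real + 0.30                    # negative: student-generated image (farther)

margin = 0.1
d_pos = np.sum((features(real) - features(pos)) ** 2)
d_neg = np.sum((features(real) - features(neg)) ** 2)

# Triplet loss (Fcn. (8)): keep teacher images closer to true images than
# student images, by at least the margin.
l_tri = max(d_pos - d_neg + margin, 0.0)
```

Because the triplet is active only when `d_pos - d_neg + margin > 0`, well-separated triplets contribute zero loss, so the discriminator focuses on the hard cases.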

By exploiting knowledge distillation for both the student generator and the student discriminator, we can learn strong and efficient GANs. The overall structure of the proposed method is illustrated in Fig. (1). Specifically, the objective function for the student GAN can be written as:

$$\mathcal{L}(G_S, D_S) = \mathcal{L}_{GAN}(G_S, D_S) + \lambda_1 \mathcal{L}_{KD}^{G}(G_S) + \lambda_2 \mathcal{L}_{KD}^{D}(D_S) + \lambda_3 \mathcal{L}_{tri}(D_S), \quad (9)$$

where $\mathcal{L}_{GAN}$ denotes the traditional GAN loss for the generator and discriminator, and $\lambda_1$, $\lambda_2$ and $\lambda_3$ are trade-off hyper-parameters balancing the different objectives. Note that this teacher-student learning paradigm does not require any specific GAN architecture, and it can be easily adapted to other variants of GANs.

Following the optimization of GANs [12], $G_S$ and $D_S$ are trained alternately. The objective of the proposed method is:

$$G_S^* = \arg\min_{G_S} \max_{D_S} \mathcal{L}(G_S, D_S). \quad (10)$$

By optimizing this minimax problem, the student generator can not only work cooperatively with the teacher generator but also compete adversarially with the student discriminator. The procedure is formally presented in Alg. (1).

Proposition 1.

Denote the teacher generator, the student generator trained with the teacher-student learning paradigm, and the student generator trained without the guidance of the teacher as $G_T$, $G_S^{KD}$ and $G_S$, the numbers of parameters in $G_T$ and $G_S$ as $N_T$ and $N_S$, and the number of training samples as $n$. The upper bound of the expected error of $G_S^{KD}$ is smaller than that of $G_S$ when $n$ is sufficiently large.

The proof of Proposition (1) can be found in the supplementary materials. The required inequality holds easily in deep learning, where the number of training samples is large. For example, in our experiments the teachers have 2 or 4 times as many channels as the students, and the number of training samples is large (e.g. about 3,000 in Cityscapes).


In this section, we evaluate the proposed method on several benchmark datasets with two mainstream generative models for domain translation: CycleGAN and pix2pix. To demonstrate the superiority of the proposed algorithm, we not only show the generated images for perceptual studies but also adopt the "FCN-score" introduced by [17] for quantitative evaluation. Note that [1] corresponds to vanilla distillation in our experiments.

Input Ground truth Scratch Aguinaldo et al. Ours Teacher

(a) Student GANs with 1/2 channels of the teacher GAN.

(b) Student GANs with 1/4 channels of the teacher GAN.

Figure 2: Different methods for mapping labels→photos trained on Cityscapes images using pix2pix.

We first conducted the semantic label→photo task on the Cityscapes dataset [8] using pix2pix; the dataset consists of street scenes from different cities with high-quality pixel-level annotations. It is divided into about 3,000 training images, 500 validation images and about 1,500 test images, all paired.

We followed the settings in [17] and use U-net [26] as the generator. The hyper-parameter $\lambda$ in Fcn. (3) is set to 1. For the discriminator networks, we use PatchGANs, which classify image patches instead of the whole image. When optimizing the networks, the objective value is divided by 2 while optimizing $D$. The networks are trained for 200 epochs using the Adam solver with a learning rate of 0.0002. When testing the GANs, the generator is run in the same manner as in training but without dropout.

To demonstrate the effectiveness of the proposed method, we used a U-net with 64 channels as the teacher network. We evaluated two different sizes of the student generator to obtain comprehensive results: student generators with 1/2 and with 1/4 of the channels of the teacher generator. Since the discriminator is not required at inference time, we kept the structure of the student discriminator the same as that of the teacher discriminator. We studied the performance of different generators: the teacher generator, the student generator trained from scratch, the student generator optimized with vanilla distillation (i.e. Fcn. (4)), and the student generator trained with the proposed method.
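The parameter counts in Table 1 follow from a simple scaling rule: a conv layer has roughly $C_{in} \cdot C_{out} \cdot k^2$ weights, so halving every channel width shrinks the network by about 4x. The sketch below checks this on a hypothetical conv stack (the widths are illustrative, not the actual U-net configuration).

```python
# A conv layer's parameter count scales with C_in * C_out, so shrinking
# every layer's channel width by a factor s shrinks the network by about s^2.
def conv_params(c_in, c_out, k=3):
    return c_in * c_out * k * k + c_out   # weights + biases

def stack_params(widths, k=3):
    # widths: channel counts of a hypothetical conv stack, e.g. [3, 64, ..., 3]
    return sum(conv_params(a, b, k) for a, b in zip(widths, widths[1:]))

teacher = [3, 64, 128, 256, 128, 64, 3]
student = [3, 32, 64, 128, 64, 32, 3]     # every internal width halved

ratio = stack_params(teacher) / stack_params(student)
# ratio is close to 4 (slightly less, since the 3-channel input/output
# layers only shrink linearly)
```

This matches the roughly 4x gap between the 54.41M-parameter teacher and the 13.61M-parameter half-channel student in Table 1.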

Fig. (2) shows the qualitative results of these variants on the labels→photos task. The teacher generator achieved satisfactory results but required enormous parameters and computational resources. The student generator trained from scratch, although it has fewer FLOPs and parameters, generated simplistic images with repeated patches, which look fake. Using vanilla distillation to minimize the $\ell_2$-norm improved the performance of the student generator but caused blurry results. The images generated by the proposed method are much sharper and more realistic, which demonstrates that the proposed method can learn a portable generative model producing high-quality images.

Algorithm | FLOPs | Parameters | Per-pixel acc. | Per-class acc. | Class IOU
Teacher | 18.15G | 54.41M | 52.17 | 12.39 | 8.20
Student from scratch | 4.65G | 13.61M | 51.62 | 12.10 | 7.93
Student [1] | 4.65G | 13.61M | 50.42 | 12.30 | 8.00
Student (Ours) | 4.65G | 13.61M | 52.22 | 12.37 | 8.11
Student from scratch | 1.22G | 3.40M | 50.80 | 11.86 | 7.95
Student [1] | 1.22G | 3.40M | 50.32 | 11.98 | 7.96
Student (Ours) | 1.22G | 3.40M | 51.57 | 11.98 | 8.06
Ground truth | - | - | 80.42 | 26.72 | 21.13
Table 1: FCN-scores for different methods on the Cityscapes dataset using pix2pix.
Input Student (Scratch) Aguinaldo et al. Student (Ours) Teacher

Figure 3: Different methods for mapping horse→zebra trained on ImageNet images using CycleGAN.

Quantitative Evaluation Besides the qualitative experiments, we also conducted a quantitative evaluation of the proposed method. Evaluating the quality of images generated by GANs is a difficult problem: naive metrics such as the $\ell_2$ error cannot assess the visual quality of the images. We therefore used the "FCN-score" metrics following [17], which use a pretrained semantic segmentation model to classify the synthesized images as a pseudo metric. The intuition is that if the generated images have the same manifold structure as the true images, a segmentation model trained on true samples should achieve comparable performance on them. Therefore, we applied the FCN-8s [23] model pretrained on the Cityscapes dataset to the generated images. The results include per-pixel accuracy, per-class accuracy and mean class IOU.
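The three FCN-score metrics are all derived from the confusion matrix between the segmentation of a generated image and the ground-truth label map. A minimal numpy sketch (the 2-class label maps below are toy data, not Cityscapes):

```python
import numpy as np

def fcn_scores(pred, gt, n_classes):
    """Per-pixel accuracy, per-class accuracy, and mean class IoU
    computed from predicted and ground-truth label maps."""
    conf = np.zeros((n_classes, n_classes), dtype=np.int64)
    for p, g in zip(pred.ravel(), gt.ravel()):
        conf[g, p] += 1                     # rows: ground truth, cols: prediction
    tp = np.diag(conf)
    per_pixel = tp.sum() / conf.sum()
    per_class = np.mean(tp / conf.sum(axis=1))   # mean per-class recall
    iou = np.mean(tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp))
    return per_pixel, per_class, iou

gt   = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 0],               # one class-1 pixel mislabeled as 0
                 [0, 0, 1, 1]])

scores = fcn_scores(pred, gt, n_classes=2)
```

For Table 1, `pred` would come from running FCN-8s on a generated photo and `gt` from the Cityscapes annotation.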

Tab. (1) reports the quantitative results of different methods. The teacher GAN achieved high performance, but the huge FLOPs and heavy parameters of this generator prevent its application on real-world edge devices. Therefore, we constructed a portable GAN model with fewer parameters by removing half of the filters in the teacher generator. As expected, the student generator trained from scratch suffered degradation on all three FCN-scores. To maintain the performance of the generator, we minimized the Euclidean distance between the images generated by the teacher and student networks, shown as vanilla distillation in Tab. (1). However, vanilla distillation performed worse than the student generator trained from scratch, which suggests that the MSE loss cannot be directly used in GANs. The proposed method utilized not only low-level but also high-level information from the teacher network and achieved a 52.22% per-pixel accuracy, even higher than that of the teacher generator.

Loss | Per-pixel acc. | Per-class acc. | IOU
baseline (from scratch) | 51.62 | 12.10 | 7.93
Fcn. (4) | 51.22 | 12.20 | 8.01
Fcn. (4) + Fcn. (5) | 51.82 | 12.32 | 8.06
Fcn. (7) | 51.66 | 12.12 | 8.05
Fcn. (7) + Fcn. (8) | 52.05 | 12.15 | 8.08
all | 52.22 | 12.37 | 8.11
Table 2: FCN-scores for different losses on the Cityscapes dataset.

Ablation Study We have verified the effectiveness of the proposed method for learning portable GANs qualitatively and quantitatively. Since the proposed approach contains a number of components, we further conducted ablation experiments for a more explicit understanding. The settings are the same as in the above experiments.

The loss functions of the proposed method can be divided into two parts, i.e. the objective functions of the generator and of the discriminator, and we first evaluated the two objectives separately. As shown in Tab. (2), the generator using the vanilla distillation loss performed better than the baseline student trained from scratch. By adding the perceptual loss, the student generator learns high-level semantic information from the teacher network and achieved a higher score. For the discriminator, treating the images generated by the teacher network as real samples helps the student discriminator learn a better manifold of the target domain. Moreover, the triplet loss further improves the performance of the student GAN. Finally, by exploiting all the proposed loss functions, the student network achieved the highest scores. The results of the ablation study demonstrate the effectiveness of each component of the proposed objective functions.

Generalization Ability In the above experiments, we verified the performance of the proposed method on paired image-to-image translation using pix2pix. To illustrate the generalization ability of the proposed algorithm, we further applied it to unpaired image-to-image translation, which is more complex than paired translation, using CycleGAN [34]. We evaluated two tasks for CycleGAN: horse→zebra and summer→winter.

Input Student (Scratch) Aguinaldo et al. Student (Ours) Teacher

Figure 4: Different methods for mapping summer→winter using CycleGAN.

For the teacher-student learning paradigm, the structure of the teacher generator followed [34]. Note that CycleGAN has two generators, translating from domain $X$ to $Y$ and from $Y$ to $X$; the number of filters of both student generators was set to half or a quarter of that of the teacher generator. We used the same discriminator for the teacher and student networks.

Fig. 3 presents the images generated by different methods on the horse→zebra task. Since the task is not very hard, we used extremely portable student generators with only 1/4 of the channels of the teacher generator. The teacher generator has about 11.38M parameters and 47.19G FLOPs, while the student generator has only about 715.65K parameters and 3.19G FLOPs. The teacher network performed well, while the student network trained from scratch gave poor results. The student network using vanilla distillation achieved better performance, but the images were blurry. With the proposed method, the student network learned abundant information from the teacher network and generated better images than the other methods with the same architecture. The proposed method achieved performance comparable to the teacher network with far fewer parameters, which demonstrates the effectiveness of the proposed algorithm.

We also conducted experiments translating summer to winter. The student generator trained with the proposed algorithm achieved performance similar to the teacher network with only about 1/16 of the parameters. Therefore, the proposed method can learn effectively from the teacher network and generate images that are nearly indistinguishable from real ones with relatively few parameters.


Various algorithms have achieved good performance in compressing deep neural networks, but these works ignore the adaptations required for GANs. We therefore propose a novel framework for learning efficient generative models with significantly fewer parameters and computations by exploiting the teacher-student learning paradigm. The overall framework consists of two parts: knowledge distillation for the student generator and for the student discriminator. We utilize the information of the teacher GAN as much as possible to guide the training of both the student generator and the student discriminator. Experiments on several benchmark datasets demonstrate the effectiveness of the proposed method for learning portable generative models.

Acknowledgement This work is supported by the National Natural Science Foundation of China under Grants No. 61876007 and 61872012, and Australian Research Council Project DE-180101438.


  • [1] A. Aguinaldo, P. Chiang, A. Gain, A. Patil, K. Pearson, and S. Feizi (2019) Compressing gans using knowledge distillation. arXiv preprint arXiv:1902.00159. Cited by: Introduction, Preliminaries, Table 1, Experiments.
  • [2] V. Balntas, E. Riba, D. Ponsa, and K. Mikolajczyk (2016) Learning local feature descriptors with triplets and shallow convolutional neural networks. In BMVC, pp. 3. Cited by: Distilling Discriminator.
  • [3] J. Chen, J. Chen, H. Chao, and M. Yang (2018) Image blind denoising with generative adversarial network based noise modeling. In CVPR, pp. 3155–3164. Cited by: Introduction.
  • [4] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen (2015) Compressing neural networks with the hashing trick. In ICML, Cited by: Introduction.
  • [5] Y. Chen, Y. Wang, M. Kao, and Y. Chuang (2018) Deep photo enhancer: unpaired learning for image enhancement from photographs with gans. In CVPR, pp. 6306–6314. Cited by: Introduction.
  • [6] D. Cheng, Y. Gong, S. Zhou, J. Wang, and N. Zheng (2016) Person re-identification by multi-channel parts-based cnn with improved triplet loss function. In CVPR, pp. 1335–1344. Cited by: Distilling Discriminator.
  • [7] Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo (2018) Stargan: unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, pp. 8789–8797. Cited by: Introduction.
  • [8] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In CVPR, pp. 3213–3223. Cited by: Experiments.
  • [9] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio (2016) Binarized neural networks: training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830. Cited by: Introduction.
  • [10] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus (2014) Exploiting linear structure within convolutional networks for efficient evaluation. In NeurIPS, Cited by: Introduction.
  • [11] Y. Gong, L. Liu, M. Yang, and L. Bourdev (2014) Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115. Cited by: Introduction.
  • [12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In NeurIPS, pp. 2672–2680. Cited by: Distilling Discriminator.
  • [13] S. Han, H. Mao, and W. J. Dally (2015) Deep compression: compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149. Cited by: Introduction.
  • [14] B. Heo, M. Lee, S. Yun, and J. Y. Choi (2019) Knowledge transfer via distillation of activation boundaries formed by hidden neurons. In AAAI, Vol. 33, pp. 3779–3787. Cited by: Preliminaries.
  • [15] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Cited by: Preliminaries.
  • [16] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In ECCV, pp. 172–189. Cited by: Introduction.
  • [17] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. arXiv preprint. Cited by: Introduction, Preliminaries, Preliminaries, Experiments, Experiments, Experiments.
  • [18] J. Johnson, A. Alahi, and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In ECCV, pp. 694–711. Cited by: Distilling Generator.
  • [19] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2017) Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196. Cited by: Introduction.
  • [20] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network.. In CVPR, pp. 4. Cited by: Introduction.
  • [21] H. Lee, H. Tseng, J. Huang, M. Singh, and M. Yang (2018) Diverse image-to-image translation via disentangled representations. In ECCV, pp. 35–51. Cited by: Introduction.
  • [22] C. Li and M. Wand (2016) Precomputed real-time texture synthesis with markovian generative adversarial networks. In ECCV, pp. 702–716. Cited by: Introduction.
  • [23] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In CVPR, pp. 3431–3440. Cited by: Experiments.
  • [24] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi (2016) Xnor-net: imagenet classification using binary convolutional neural networks. In ECCV, pp. 525–542. Cited by: Introduction.
  • [25] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio (2014) Fitnets: hints for thin deep nets. arXiv preprint arXiv:1412.6550. Cited by: Preliminaries.
  • [26] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: Experiments.
  • [27] F. Schroff, D. Kalenichenko, and J. Philbin (2015) Facenet: a unified embedding for face recognition and clustering. In CVPR, pp. 815–823. Cited by: Distilling Discriminator.
  • [28] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: Distilling Generator.
  • [29] Y. Wang, C. Xu, X. Chunjing, C. Xu, and D. Tao (2018) Learning versatile filters for efficient convolutional neural networks. In NeurIPS, pp. 1608–1618. Cited by: Introduction.
  • [30] Y. Wang, C. Xu, C. Xu, and D. Tao (2017) Beyond filters: compact feature map for portable deep model. In ICML, pp. 3703–3711. Cited by: Introduction.
  • [31] Y. Wang, C. Xu, C. Xu, and D. Tao (2018) Adversarial learning of portable student networks. In AAAI, Cited by: Preliminaries.
  • [32] Y. Wang, C. Xu, C. Xu, and D. Tao (2018) Packing convolutional neural networks in the frequency domain. IEEE transactions on pattern analysis and machine intelligence. Cited by: Introduction.
  • [33] S. You, C. Xu, C. Xu, and D. Tao (2017) Learning from multiple teacher networks. In ACM SIGKDD, Cited by: Preliminaries.
  • [34] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint. Cited by: Introduction, Introduction, Experiments, Experiments.