Compressing GANs using Knowledge Distillation

02/01/2019
by Angeline Aguinaldo, et al.

Generative Adversarial Networks (GANs) have been used in several machine learning tasks such as domain transfer, super-resolution, and synthetic data generation. State-of-the-art GANs often use tens of millions of parameters, making them expensive to deploy on low-SWaP (size, weight, and power) hardware, such as mobile devices, and for applications with real-time requirements. To our knowledge, no prior work has addressed reducing the number of parameters used in GANs. We therefore propose a method to compress GANs using knowledge distillation, in which a smaller "student" GAN learns to mimic a larger "teacher" GAN. We show that this distillation, applied to the MNIST, CIFAR-10, and Celeb-A datasets, can compress teacher GANs at ratios of 1669:1, 58:1, and 87:1, respectively, while retaining the quality of the generated images. From our experiments, we observe a qualitative limit to GAN compression. Moreover, we observe that, with a fixed parameter budget, compressed GANs outperform GANs trained with standard methods. We conjecture that this is partly due to the optimization landscape of over-parameterized GANs, which allows efficient training using alternating gradient descent. Thus, training an over-parameterized GAN and then applying our proposed compression scheme yields a high-quality generative model with a small number of parameters.
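As a rough illustration of the student-teacher setup described above, the sketch below trains a small student generator to mimic a frozen, pretrained teacher generator by minimizing a per-pixel MSE between their outputs on shared latent codes. The network sizes, optimizer settings, and the choice of a plain MSE mimic loss are assumptions for illustration, not the paper's exact architecture or objective.

# Minimal sketch of GAN knowledge distillation (PyTorch).
# Architectures and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, hidden=256, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

latent_dim = 100
teacher = Generator(latent_dim, hidden=1024)  # large "teacher" (pretrained weights assumed)
student = Generator(latent_dim, hidden=64)    # much smaller "student"
teacher.eval()                                # teacher stays frozen during distillation

optimizer = torch.optim.Adam(student.parameters(), lr=2e-4)
mse = nn.MSELoss()

for step in range(10_000):
    z = torch.randn(128, latent_dim)          # shared latent codes for both generators
    with torch.no_grad():
        target = teacher(z)                   # teacher outputs act as regression targets
    loss = mse(student(z), target)            # student learns to mimic the teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In practice, the mimic loss could also be combined with an adversarial term against a discriminator, but the per-pixel objective above captures the core idea of distilling a large generator into a smaller one.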

