ACCV_TinyGAN
BigGAN; Knowledge Distillation; Black-Box; Fast Training; 16x compression
Generative Adversarial Networks (GANs) have become a powerful approach for generative image modeling. However, GANs are notorious for their training instability, especially on large-scale, complex datasets. While the recent work of BigGAN has significantly improved the quality of image generation on ImageNet, it requires a huge model, making it hard to deploy on resource-constrained devices. To reduce the model size, we propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process. Given BigGAN as the teacher network, we train a much smaller student network to mimic its functionality, achieving competitive Inception and FID scores with a generator that has 16x fewer parameters.
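The framework treats the teacher purely as a black box: BigGAN is only queried for output images on sampled noise and class inputs, and the compact student generator is trained to reproduce those outputs. Below is a minimal PyTorch sketch of that idea; the names (StudentGenerator, distill_step, query_biggan), the tiny architecture, and the 32x32 output resolution are illustrative assumptions rather than the authors' released code, and a full distillation setup would typically pair this mimic loss with adversarial and feature-level terms that are omitted here for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentGenerator(nn.Module):
    # Tiny class-conditional generator standing in for the compressed model.
    # It emits 32x32 images only to keep the sketch short; a real student
    # would target the teacher's full resolution.
    def __init__(self, z_dim=128, n_classes=1000, img_ch=3):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256 * 4 * 4),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, img_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z, y):
        # Condition on the class label by concatenating its embedding with z.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def distill_step(student, optimizer, z, y, teacher_images):
    # One black-box distillation step: pull the student's output toward the
    # teacher's image for the same (z, y) pair with a pixel-level L1 loss.
    optimizer.zero_grad()
    loss = F.l1_loss(student(z, y), teacher_images)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch. query_biggan is a hypothetical black-box call that returns the
# pretrained teacher's images for the same latent/class batch, resized to the
# student's resolution:
#   student = StudentGenerator()
#   optimizer = torch.optim.Adam(student.parameters(), lr=2e-4)
#   z = torch.randn(8, 128); y = torch.randint(0, 1000, (8,))
#   teacher_images = query_biggan(z, y)
#   loss = distill_step(student, optimizer, z, y, teacher_images)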