TinyGAN: Distilling BigGAN for Conditional Image Generation

09/29/2020 ∙ by Ting-Yun Chang, et al.

Generative Adversarial Networks (GANs) have become a powerful approach to generative image modeling. However, GANs are notorious for their training instability, especially on large-scale, complex datasets. While the recent BigGAN has significantly improved the quality of image generation on ImageNet, it requires a huge model, making it hard to deploy on resource-constrained devices. To reduce the model size, we propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process. With BigGAN as the teacher network, we manage to train a much smaller student network to mimic its functionality, achieving competitive Inception and FID scores with a generator that has 16× fewer parameters.
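The sketch below illustrates, under simplifying assumptions, the black-box distillation idea from the abstract: a small student generator is trained to reproduce the teacher's output image for the same noise vector and class label, here using only a pixel-level L1 loss. The TinyGenerator architecture and the teacher_sample stub are hypothetical placeholders (a real setup would query a pretrained BigGAN, typically via a pre-collected set of noise/class/image triples), and the paper's full training objective is not reduced to this single term.

# Minimal black-box distillation sketch (PyTorch). Names are illustrative,
# not taken from the paper; the teacher is treated as an opaque sampler.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Small conditional generator: maps (noise z, class y) to an image."""
    def __init__(self, z_dim=128, n_classes=1000, img_ch=3):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 4 * 4 * 256),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),   # 8x8
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),    # 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, img_ch, 4, stride=2, padding=1), # 32x32
            nn.Tanh(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def teacher_sample(z, y):
    """Placeholder for querying the black-box teacher (e.g., a pretrained BigGAN).
    In practice this would look up pre-generated (z, y, image) triples."""
    return torch.rand(z.size(0), 3, 32, 32) * 2 - 1  # dummy images in [-1, 1]

def distill_step(student, optimizer, batch_size=16, z_dim=128, n_classes=1000):
    z = torch.randn(batch_size, z_dim)
    y = torch.randint(0, n_classes, (batch_size,))
    with torch.no_grad():
        target = teacher_sample(z, y)           # teacher output for the same inputs
    pred = student(z, y)                        # student tries to reproduce it
    loss = nn.functional.l1_loss(pred, target)  # pixel-level distillation loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    student = TinyGenerator()
    opt = torch.optim.Adam(student.parameters(), lr=2e-4, betas=(0.0, 0.999))
    for step in range(3):
        print(f"step {step}: L1 distillation loss = {distill_step(student, opt):.4f}")

Because the teacher is treated as a black box, only its sampled outputs are needed; no gradients or internal activations of BigGAN are required, which is what allows the distillation training loop to stay simple and stable.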


Code Repositories

ACCV_TinyGAN

BigGAN; Knowledge Distillation; Black-Box; Fast Training; 16x compression

