Mind the Gap in Distilling StyleGANs

08/18/2022
by Guodong Xu, et al.

The StyleGAN family is one of the most popular Generative Adversarial Networks (GANs) for unconditional generation. Despite its impressive performance, its high storage and computation demands impede deployment on resource-constrained devices. This paper provides a comprehensive study of distilling from the popular StyleGAN-like architecture. Our key insight is that the main challenge of StyleGAN distillation lies in the output discrepancy issue, where the teacher and student models yield different outputs given the same input latent code. Standard knowledge distillation losses typically fail under this heterogeneous distillation scenario. We conduct a thorough analysis of the causes and effects of this discrepancy, and identify that the mapping network plays a vital role in determining the semantic information of generated images. Based on this finding, we propose a novel initialization strategy for the student model that preserves output consistency to the maximum extent. To further enhance semantic consistency between the teacher and student models, we present a latent-direction-based distillation loss that preserves semantic relations in latent space. Extensive experiments demonstrate the effectiveness of our approach in distilling StyleGAN2 and StyleGAN3, outperforming existing GAN distillation methods by a large margin.
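The abstract does not spell out the implementation, but the two ideas it describes can be illustrated with a minimal sketch: initialize the student by reusing the teacher's mapping network (since that network governs the semantics of generated images), and add a distillation term that matches how images change when latent codes are moved along a direction. Everything below is an assumption for illustration only: the names `teacher`, `student`, `mapping`, the pixel-level L1 terms, and the exact relation term are hypothetical and may differ from the paper's actual formulation.

```python
import copy
import torch
import torch.nn.functional as F

# Hypothetical stand-ins: `teacher` and `student` are StyleGAN2-like generators
# exposing a `mapping` sub-network (z -> w); calling the generator on z returns an image.

def init_student_from_teacher(student, teacher):
    """Initialization sketch: copy the teacher's mapping network into the student,
    so both models share the same z -> w semantics. The paper's exact strategy
    is not given here; this is one plausible reading."""
    student.mapping.load_state_dict(copy.deepcopy(teacher.mapping.state_dict()))
    return student

def latent_direction_distill_loss(teacher, student, z_dim=512, batch=8,
                                  step=1.0, device="cuda"):
    """Sketch of a latent-direction-based distillation loss (assumed form):
    perturb latent codes along random directions and encourage the student's
    image change to match the teacher's, preserving semantic relations in
    latent space."""
    z = torch.randn(batch, z_dim, device=device)
    d = F.normalize(torch.randn(batch, z_dim, device=device), dim=1)
    z_shift = z + step * d

    with torch.no_grad():
        t_img, t_img_shift = teacher(z), teacher(z_shift)
    s_img, s_img_shift = student(z), student(z_shift)

    # Pixel-level distillation at both latent points, plus a relation term that
    # matches the direction-induced image change between teacher and student.
    pixel = F.l1_loss(s_img, t_img) + F.l1_loss(s_img_shift, t_img_shift)
    relation = F.l1_loss(s_img_shift - s_img, t_img_shift - t_img)
    return pixel + relation
```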
