KD-DLGAN: Data Limited Image Generation via Knowledge Distillation

03/30/2023
by Kaiwen Cui, et al.

Generative Adversarial Networks (GANs) rely heavily on large-scale training data to learn high-quality image generation models. With limited training data, the GAN discriminator often suffers from severe overfitting, which directly degrades generation quality, especially generation diversity. Inspired by recent advances in knowledge distillation (KD), we propose KD-DLGAN, a knowledge-distillation-based generation framework that introduces pre-trained vision-language models into the training of effective data-limited generation models. KD-DLGAN consists of two innovative designs. The first, aggregated generative KD, mitigates discriminator overfitting by challenging the discriminator with harder learning tasks and distilling more generalizable knowledge from the pre-trained models. The second, correlated generative KD, improves generation diversity by distilling and preserving the diverse image-text correlations within the pre-trained models. Extensive experiments over multiple benchmarks show that KD-DLGAN achieves superior image generation with limited training data. In addition, KD-DLGAN complements the state of the art with consistent and substantial performance gains.
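To make the two distillation objectives concrete, below is a minimal, hypothetical PyTorch sketch of what they could look like: an aggregated KD term that pulls discriminator features toward a frozen pre-trained teacher (standing in for a vision-language image encoder such as CLIP), and a correlated KD term that preserves the similarity structure the teacher encodes. All module names, architectures, and loss weights here are illustrative assumptions, not the paper's implementation; in particular, the correlated term below uses batch-wise image-image similarity as a simplified stand-in for the image-text correlations the abstract describes.

```python
# Hypothetical sketch of the two KD losses described in the abstract.
# TeacherEncoder and DiscriminatorBackbone are placeholder modules, not
# the authors' architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TeacherEncoder(nn.Module):
    """Stand-in for a frozen pre-trained image encoder (e.g. CLIP's)."""
    def __init__(self, out_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )
        for p in self.parameters():  # the teacher stays frozen
            p.requires_grad_(False)

    def forward(self, x):
        return self.net(x)


class DiscriminatorBackbone(nn.Module):
    """Student: the discriminator's feature extractor plus a projection
    head that maps its features into the teacher's embedding space."""
    def __init__(self, out_dim=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.proj(self.features(x))


def aggregated_kd_loss(student_feat, teacher_feat):
    # Pull discriminator features toward the more generalizable teacher
    # features: a harder auxiliary task than real/fake prediction alone,
    # which counteracts discriminator overfitting.
    return F.mse_loss(student_feat, teacher_feat)


def correlated_kd_loss(student_feat, teacher_feat):
    # Match the pairwise similarity structure within a batch so the
    # diversity encoded by the teacher is preserved by the student.
    s = F.normalize(student_feat, dim=-1)
    t = F.normalize(teacher_feat, dim=-1)
    return F.mse_loss(s @ s.t(), t @ t.t())


if __name__ == "__main__":
    teacher = TeacherEncoder().eval()
    student = DiscriminatorBackbone()
    images = torch.randn(8, 3, 64, 64)  # a batch of (real or fake) images
    with torch.no_grad():
        t_feat = teacher(images)
    s_feat = student(images)
    # The 0.1 weighting is an arbitrary illustrative choice.
    loss = aggregated_kd_loss(s_feat, t_feat) + 0.1 * correlated_kd_loss(s_feat, t_feat)
    loss.backward()
    print(f"total KD loss: {loss.item():.4f}")
```

In this sketch the KD losses would be added to the discriminator's usual adversarial objective during training; how the aggregated and correlated terms are actually combined and scheduled is specified in the paper itself.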


