Learning Efficient GANs via Differentiable Masks and co-Attention Distillation

11/17/2020
by Shaojie Li, et al.

Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computational and storage costs impede their deployment on mobile devices. Prevalent CNN compression methods cannot be directly applied to GANs because of the complicated generator architecture and the unstable adversarial training. To address these issues, this paper introduces a novel GAN compression method, termed DMAD, built on a Differentiable Mask and a co-Attention Distillation. The former searches for a light-weight generator architecture in a training-adaptive manner; to overcome channel inconsistency when pruning residual connections, an adaptive cross-block group sparsity is further incorporated. The latter simultaneously distills informative attention maps from both the generator and the discriminator of a pre-trained model into the searched generator, effectively stabilizing the adversarial training of the light-weight model. Experiments show that DMAD reduces the Multiply Accumulate Operations (MACs) of CycleGAN by 13x and those of Pix2Pix by 4x while retaining performance comparable to that of the full model. Code is available at https://github.com/SJLeo/DMAD.
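To make the two ingredients concrete, below is a minimal PyTorch sketch of what a differentiable channel mask and an attention-distillation loss can look like. It is an illustration based on the abstract, not the authors' implementation (see the linked repository for that): the names `DifferentiableMask`, `attention_map`, and `co_attention_distill_loss` are hypothetical, the sigmoid gate and squared-activation attention maps are common choices assumed here, and feature maps are assumed to share spatial size.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableMask(nn.Module):
    """Learnable soft channel mask: a per-channel sigmoid gate driven toward
    zero by a sparsity penalty, so the pruned architecture is discovered
    during training. For channels tied together by residual connections,
    the same mask (or a shared group penalty) would be applied across blocks
    so that channel indices stay consistent (hypothetical sketch)."""
    def __init__(self, num_channels, init=2.0):
        super().__init__()
        self.logits = nn.Parameter(torch.full((num_channels,), init))

    def forward(self, x):
        gate = torch.sigmoid(self.logits)           # soft mask in (0, 1)
        return x * gate.view(1, -1, 1, 1)           # scale NCHW features

    def sparsity_loss(self):
        # L1-style penalty pushing gates toward zero (prunable channels)
        return torch.sigmoid(self.logits).sum()

def attention_map(feat):
    """Spatial attention map: sum of squared activations over channels,
    flattened and L2-normalized (attention-transfer-style, assumed here)."""
    att = feat.pow(2).sum(dim=1)                    # (N, H, W)
    return F.normalize(att.flatten(1), dim=1)       # (N, H*W)

def co_attention_distill_loss(student_feats, teacher_gen_feats, teacher_disc_feats):
    """Match the student's attention maps to those of both the pre-trained
    generator and discriminator. Feature maps in each triple are assumed to
    share spatial size; interpolate beforehand otherwise."""
    loss = 0.0
    for s, tg, td in zip(student_feats, teacher_gen_feats, teacher_disc_feats):
        loss = loss + F.mse_loss(attention_map(s), attention_map(tg))
        loss = loss + F.mse_loss(attention_map(s), attention_map(td))
    return loss
```

In training, the sparsity penalty from each mask would be added to the adversarial objective, and the distillation loss would be summed over a few chosen feature layers; channels whose gates collapse to zero are removed to obtain the final light-weight generator.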

