Region-aware Knowledge Distillation for Efficient Image-to-Image Translation

05/25/2022
by   Linfeng Zhang, et al.

Recent progress in image-to-image translation has witnessed the success of generative adversarial networks (GANs). However, GANs usually contain a huge number of parameters, which leads to intolerable memory and computation consumption and limits their deployment on edge devices. To address this issue, knowledge distillation has been proposed to transfer the knowledge from a cumbersome teacher model to an efficient student model. However, most previous knowledge distillation methods are designed for image classification and yield limited gains on image-to-image translation. In this paper, we propose Region-aware Knowledge Distillation (ReKo) to compress image-to-image translation models. First, ReKo adaptively locates the crucial regions in the images with an attention module. Then, patch-wise contrastive learning is adopted to maximize the mutual information between student and teacher features in these crucial regions. Experiments against eight comparison methods on nine datasets demonstrate the substantial effectiveness of ReKo on both paired and unpaired image-to-image translation. For instance, our 7.08X compressed and 6.80X accelerated CycleGAN student outperforms its teacher by 1.33 and 1.04 FID on Horse to Zebra and Zebra to Horse, respectively. Code will be released on GitHub.
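The two stages described above (selecting crucial regions, then pulling matched student and teacher patch features together with a contrastive objective) can be sketched roughly as follows. This is a minimal illustration, not the authors' released implementation: the attention-based region scoring is replaced here with a simple feature-norm proxy, and `select_crucial_patches` / `patch_infonce` are hypothetical helper names.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def select_crucial_patches(feat, k):
    # feat: (num_patches, dim). Score each patch and keep the top-k.
    # The L2-norm score is an assumption for illustration; the paper
    # instead learns these region weights with an attention module.
    scores = np.linalg.norm(feat, axis=1)
    return np.argsort(scores)[-k:]

def patch_infonce(student, teacher, idx, tau=0.07):
    # Patch-wise InfoNCE: for each selected student patch, the teacher
    # patch at the same location is the positive, and the other selected
    # teacher patches serve as negatives. Minimizing this loss maximizes
    # a lower bound on student-teacher mutual information.
    s = student[idx]
    t = teacher[idx]
    s = s / np.linalg.norm(s, axis=1, keepdims=True)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    logits = s @ t.T / tau                     # (k, k) similarity matrix
    log_probs = np.log(softmax(logits, axis=1))
    return -np.mean(np.diag(log_probs))        # positives on the diagonal

# Toy usage: a student whose features track the teacher's gets a low loss.
rng = np.random.default_rng(0)
teacher_feat = rng.standard_normal((64, 128))
student_feat = teacher_feat + 0.1 * rng.standard_normal((64, 128))
idx = select_crucial_patches(teacher_feat, k=16)
loss = patch_infonce(student_feat, teacher_feat, idx)
```

In training, this loss would be computed on intermediate generator feature maps and added to the usual translation objectives; restricting it to the selected regions focuses the distillation signal on the image areas that matter most for translation quality.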

Related research

- 04/30/2021, Semantic Relation Preserving Knowledge Distillation for Image-to-Image Translation: "Generative adversarial networks (GANs) have shown significant potential ..."
- 03/10/2022, Membership Privacy Protection for Image Translation Models via Adversarial Knowledge Distillation: "Image-to-image translation models are shown to be vulnerable to the Memb..."
- 03/12/2022, Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation: "Remarkable achievements have been attained with Generative Adversarial N..."
- 03/07/2020, Distilling portable Generative Adversarial Networks for Image Translation: "Despite Generative Adversarial Networks (GANs) have been widely used in ..."
- 08/17/2021, Transferring Knowledge with Attention Distillation for Multi-Domain Image-to-Image Translation: "Gradient-based attention modeling has been used widely as a way to visua..."
- 03/05/2021, Teachers Do More Than Teach: Compressing Image-to-Image Models: "Generative Adversarial Networks (GANs) have achieved huge success in gen..."
- 07/25/2019, Co-Evolutionary Compression for Unpaired Image Translation: "Generative adversarial networks (GANs) have been successfully used for c..."
