Feature-domain Adaptive Contrastive Distillation for Efficient Single Image Super-Resolution

11/29/2022
by HyeonCheol Moon, et al.

Recent CNN-based single image super-resolution (SISR) networks achieve strong performance at the cost of numerous parameters and heavy computation, which limits their applicability to resource-constrained devices such as mobile phones. Knowledge Distillation (KD), which transfers a teacher network's useful knowledge to a student network, is one approach currently being studied to make such networks efficient. More recently, KD for SISR has adopted Feature Distillation (FD), which minimizes the Euclidean distance between the feature maps of the teacher and student networks; however, this does not sufficiently consider how to deliver the teacher's knowledge effectively and meaningfully so as to improve student performance under given network capacity constraints. In this paper, we propose a Feature-domain Adaptive Contrastive Distillation (FACD) method for efficiently training lightweight student SISR networks. We show the limitations of existing FD methods that rely on a Euclidean distance loss, and propose a feature-domain contrastive loss that lets the student network learn richer information from the teacher's representation in the feature domain. In addition, we propose an adaptive distillation that selectively applies distillation depending on the conditions of the training patches. Experimental results show that student EDSR and RCAN networks trained with the proposed FACD scheme improve not only PSNR across all benchmark datasets and scales, but also subjective image quality, compared to conventional FD approaches.
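
The abstract does not specify the exact form of the contrastive loss or the patch-selection criterion, so the following PyTorch sketch is only a plausible illustration of the general idea: an InfoNCE-style loss in which the student's feature embedding of a patch is pulled toward the teacher's embedding of the same patch (positive) and pushed away from the teacher's embeddings of other patches in the batch (negatives), gated by a per-patch mask. The projection heads, temperature, and the error-based selection rule in `adaptive_mask` are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of feature-domain contrastive distillation with
# per-patch adaptive gating; all design choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureContrastiveDistillLoss(nn.Module):
    """InfoNCE-style contrastive loss between teacher and student feature maps.

    For each patch in the batch, the teacher feature of the same patch is the
    positive; teacher features of the other patches serve as negatives.
    """

    def __init__(self, student_channels: int, teacher_channels: int,
                 embed_dim: int = 128, temperature: float = 0.1):
        super().__init__()
        # 1x1 conv projections map both feature maps into a shared embedding space.
        self.proj_s = nn.Conv2d(student_channels, embed_dim, kernel_size=1)
        self.proj_t = nn.Conv2d(teacher_channels, embed_dim, kernel_size=1)
        self.temperature = temperature

    def forward(self, feat_s, feat_t, apply_mask=None):
        feat_t = feat_t.detach()  # no gradients flow back into the teacher
        # Global-average-pool projected features into one embedding per patch.
        z_s = F.normalize(self.proj_s(feat_s).mean(dim=(2, 3)), dim=1)  # (B, D)
        z_t = F.normalize(self.proj_t(feat_t).mean(dim=(2, 3)), dim=1)  # (B, D)

        # Similarity of every student embedding to every teacher embedding;
        # the matching (diagonal) entries are the positives.
        logits = z_s @ z_t.t() / self.temperature                        # (B, B)
        targets = torch.arange(z_s.size(0), device=z_s.device)

        loss_per_patch = F.cross_entropy(logits, targets, reduction="none")

        if apply_mask is not None:
            # Adaptive distillation: only patches flagged by the mask contribute.
            loss_per_patch = loss_per_patch * apply_mask.float()
            return loss_per_patch.sum() / apply_mask.float().sum().clamp(min=1.0)
        return loss_per_patch.mean()


def adaptive_mask(sr_student, sr_teacher, hr):
    """Hypothetical patch-selection rule: distill only where the teacher
    reconstructs the patch better (lower L1 error) than the student."""
    err_s = (sr_student - hr).abs().mean(dim=(1, 2, 3))
    err_t = (sr_teacher - hr).abs().mean(dim=(1, 2, 3))
    return err_t < err_s  # (B,) boolean mask
```

In a typical training step, such a distillation term would be added, with a small weight, to the ordinary L1 reconstruction loss on the student's super-resolved output.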

