Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution

09/07/2021
by   Chuanguang Yang, et al.

Knowledge distillation (KD) is an effective framework that aims to transfer meaningful information from a large teacher to a smaller student. Generally, KD involves deciding what knowledge to define and how to transfer it. Previous KD methods often focus on mining various forms of knowledge, for example, feature maps and refined information. However, such knowledge is derived from the primary supervised task and is therefore highly task-specific. Motivated by the recent success of self-supervised representation learning, we propose an auxiliary self-supervision augmented task to guide networks to learn more meaningful features. From this task we derive soft self-supervision augmented distributions as richer dark knowledge for KD. Unlike previous forms of knowledge, this distribution encodes joint knowledge from supervised and self-supervised feature learning. Beyond knowledge exploration, another crucial aspect is how to learn and distill the proposed knowledge effectively. To take full advantage of hierarchical feature maps, we append several auxiliary branches at various hidden layers. Each auxiliary branch is guided to learn the self-supervision augmented task and to distill this distribution from teacher to student. We therefore call our method Hierarchical Self-Supervision Augmented Knowledge Distillation (HSSAKD). Experiments on standard image classification show that both offline and online HSSAKD achieve state-of-the-art performance in the field of KD. Further transfer experiments on object detection verify that HSSAKD guides the network to learn better features, which can be attributed to learning and distilling the auxiliary self-supervision augmented task effectively.
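To make the described setup concrete, below is a minimal sketch (not the authors' code) of one auxiliary branch and its loss. It assumes the self-supervised task is 4-way image rotation, the auxiliary branch is a single pooled linear classifier producing a joint (class x rotation) distribution, and knowledge is transferred with temperature-scaled KL divergence; the names and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the idea described above (not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 100      # primary supervised classes (e.g. CIFAR-100), assumed
NUM_ROTATIONS = 4      # self-supervised rotations: 0, 90, 180, 270 degrees

class AuxiliaryBranch(nn.Module):
    """Maps a hidden feature map to joint (class x rotation) logits."""
    def __init__(self, in_channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_channels, NUM_CLASSES * NUM_ROTATIONS)

    def forward(self, feat):                      # feat: [B, C, H, W]
        x = self.pool(feat).flatten(1)            # [B, C]
        return self.fc(x)                         # [B, N*M] joint logits

def rotate_batch(images):
    """Build the self-supervision augmented batch: four rotated copies."""
    views = [torch.rot90(images, k, dims=(2, 3)) for k in range(NUM_ROTATIONS)]
    return torch.cat(views, dim=0)                # [4B, C, H, W]

def joint_targets(labels):
    """Joint label index = class index combined with rotation index."""
    B = labels.size(0)
    rot = torch.arange(NUM_ROTATIONS, device=labels.device).repeat_interleave(B)
    return labels.repeat(NUM_ROTATIONS) * NUM_ROTATIONS + rot

def hssakd_branch_loss(student_logits, teacher_logits, labels, T=3.0):
    """Supervised CE on the joint task plus KL distillation of the soft joint distribution."""
    ce = F.cross_entropy(student_logits, joint_targets(labels))
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    return ce + kd
```

A typical training step under these assumptions would rotate the batch, feed it through both teacher and student, collect joint logits from each auxiliary branch attached to a hidden layer, and sum hssakd_branch_loss over branches alongside the usual classification loss on the original task.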
