Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition

07/23/2022
by   Chuanguang Yang, et al.

Teacher-free online Knowledge Distillation (KD) trains an ensemble of student models collaboratively so that they distill knowledge from one another. Although existing online KD methods achieve desirable performance, they often take class probabilities as the core knowledge type and ignore the valuable representational information carried by the features. We present a Mutual Contrastive Learning (MCL) framework for online KD. The core idea of MCL is to perform mutual interaction and transfer of contrastive distributions among a cohort of networks in an online manner. MCL aggregates cross-network embedding information and maximizes a lower bound on the mutual information between two networks. Each network can therefore learn extra contrastive knowledge from its peers, leading to better feature representations and improved performance on visual recognition tasks. Beyond the final layer, we extend MCL to several intermediate layers with the help of auxiliary feature refinement modules, which further strengthens representation learning for online KD. Experiments on image classification and on transfer learning to downstream visual recognition tasks show that MCL yields consistent gains over state-of-the-art online KD approaches, indicating that MCL guides each network to produce better feature representations. Our code is publicly available at https://github.com/winycg/MCL.
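
As a rough illustration of the idea described above, the PyTorch-style sketch below pairs a cross-network InfoNCE term with a mutual KL term that transfers contrastive distributions between two peer networks. The function and parameter names (mutual_contrastive_loss, temperature, beta) are placeholders and the exact loss composition is an assumption, not the authors' implementation; see https://github.com/winycg/MCL for the official code.

```python
import torch
import torch.nn.functional as F


def contrastive_logits(query, keys, temperature=0.07):
    """Cosine-similarity logits of a batch of queries against a batch of keys."""
    query = F.normalize(query, dim=1)
    keys = F.normalize(keys, dim=1)
    return query @ keys.t() / temperature


def mutual_contrastive_loss(emb_a, emb_b, temperature=0.07, beta=1.0):
    """Hedged sketch of mutual contrastive learning between two peer networks.

    emb_a, emb_b: (N, D) embeddings of the same N samples from two networks.
    The matching index is the positive; all other samples act as negatives.
    """
    targets = torch.arange(emb_a.size(0), device=emb_a.device)

    # Cross-network InfoNCE: each anchor in one network must identify its own
    # sample among the other network's embeddings, which maximizes a lower
    # bound on the mutual information between the two networks.
    logits_ab = contrastive_logits(emb_a, emb_b, temperature)
    logits_ba = contrastive_logits(emb_b, emb_a, temperature)
    infonce = F.cross_entropy(logits_ab, targets) + F.cross_entropy(logits_ba, targets)

    # Mutual transfer of contrastive distributions: each network's similarity
    # distribution is softly aligned with its peer's (peer detached as target).
    kl_ab = F.kl_div(F.log_softmax(logits_ab, dim=1),
                     F.softmax(logits_ba, dim=1).detach(), reduction="batchmean")
    kl_ba = F.kl_div(F.log_softmax(logits_ba, dim=1),
                     F.softmax(logits_ab, dim=1).detach(), reduction="batchmean")

    return infonce + beta * (kl_ab + kl_ba)
```

In training, one would compute emb_a = net_a(images) and emb_b = net_b(images) for the same batch and add this loss to each network's task loss; the same pattern can in principle be applied at intermediate layers via projection heads, in the spirit of the auxiliary refinement modules mentioned above.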


Related research

04/26/2021 · Mutual Contrastive Learning for Visual Representation Learning
We present a collaborative learning method called Mutual Contrastive Lea...

07/01/2021 · Revisiting Knowledge Distillation: An Inheritance and Exploration Framework
Knowledge Distillation (KD) is a popular technique to transfer knowledge...

08/18/2020 · Knowledge Transfer via Dense Cross-Layer Mutual-Distillation
Knowledge Distillation (KD) based methods adopt the one-way Knowledge Tr...

10/29/2021 · Estimating and Maximizing Mutual Information for Knowledge Distillation
In this work, we propose Mutual Information Maximization Knowledge Disti...

04/28/2023 · Ensemble Modeling with Contrastive Knowledge Distillation for Sequential Recommendation
Sequential recommendation aims to capture users' dynamic interest and pr...

06/07/2020 · Multi-view Contrastive Learning for Online Knowledge Distillation
Existing Online Knowledge Distillation (OKD) aims to perform collaborati...

10/25/2021 · MUSE: Feature Self-Distillation with Mutual Information and Self-Information
We present a novel information-theoretic approach to introduce dependenc...
