MOMA: Distill from Self-Supervised Teachers

02/04/2023
by Yuchong Yao, et al.

Contrastive Learning and Masked Image Modelling have demonstrated exceptional performance in self-supervised representation learning, with Momentum Contrast (MoCo) and the Masked AutoEncoder (MAE) as the respective state of the art. In this work, we propose MOMA, which distills from pre-trained MoCo and MAE in a self-supervised manner to combine the knowledge from both paradigms. We introduce three knowledge-transfer mechanisms in the proposed MOMA framework: (1) distill pre-trained MoCo to MAE; (2) distill pre-trained MAE to MoCo; (3) distill pre-trained MoCo and MAE to a randomly initialized student. During distillation, the teacher and the student are fed the original inputs and masked inputs, respectively. Learning proceeds by aligning the normalized representations from the teacher with the projected representations from the student. This simple design enables efficient computation with an extremely high mask ratio and dramatically reduced training epochs, and requires no extra considerations for the distillation target. Experiments show that MOMA delivers compact student models with performance comparable to existing state-of-the-art methods, combining the power of both self-supervised learning paradigms, and presents competitive results across computer vision benchmarks. We hope our method provides insight into transferring and adapting knowledge from large-scale pre-trained models in a computationally efficient way.
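The core recipe in the abstract (teacher sees the full input, student sees a heavily masked input, and the loss aligns the teacher's normalized representation with the student's projected representation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the single-linear-map "encoders", the mean-pooled features, the 75% mask ratio, and the negative-cosine loss are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, eps=1e-8):
    # Normalize a feature vector to unit length before alignment.
    return x / (np.linalg.norm(x) + eps)

# Hypothetical tiny encoders: a frozen teacher (standing in for pre-trained
# MoCo or MAE) and a trainable student, each a single linear map here.
num_patches, patch_dim, embed_dim = 16, 32, 8
W_teacher = rng.normal(size=(patch_dim, embed_dim))   # frozen pre-trained teacher
W_student = rng.normal(size=(patch_dim, embed_dim))   # randomly initialized student
W_proj    = rng.normal(size=(embed_dim, embed_dim))   # student projection head

patches = rng.normal(size=(num_patches, patch_dim))   # one image as patch tokens

# Student sees only a small visible subset (extremely high mask ratio).
mask_ratio = 0.75
num_visible = int(num_patches * (1 - mask_ratio))
visible_idx = rng.choice(num_patches, size=num_visible, replace=False)

# Teacher encodes the full, unmasked input; its normalized pooled feature
# is the distillation target.
teacher_repr = l2_normalize((patches @ W_teacher).mean(axis=0))

# Student encodes only the visible patches, then projects and normalizes.
student_repr = l2_normalize((patches[visible_idx] @ W_student).mean(axis=0) @ W_proj)

# Alignment loss: negative cosine similarity between the two representations.
loss = -float(teacher_repr @ student_repr)
print(f"visible patches: {num_visible}/{num_patches}, loss: {loss:.4f}")
```

Because the student processes only the visible tokens, its forward pass scales with the number of unmasked patches, which is what makes the high mask ratio computationally attractive.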


