Simple Distillation Baselines for Improving Small Self-supervised Models

06/21/2021
by Jindong Gu, et al.

While large self-supervised models have rivalled the performance of their supervised counterparts, small models still struggle. In this report, we explore simple baselines for improving small self-supervised models via distillation, called SimDis. Specifically, we present an offline-distillation baseline, which establishes a new state-of-the-art, and an online-distillation baseline, which achieves similar performance with minimal computational overhead. We hope these baselines will serve as useful reference points for future research. Code is available at: https://github.com/JindongGu/SimDis/
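For orientation, the sketch below shows what offline distillation of a small self-supervised model can look like: a frozen large teacher provides target embeddings, and a small student (plus a projection head) regresses them. This is not the SimDis recipe itself; the ResNet-50 teacher, ResNet-18 student, linear projection head, and normalized-regression loss are illustrative assumptions.

# Minimal sketch of offline feature distillation for a small self-supervised
# model. Illustrative only: backbones, projection head, and loss are assumptions,
# not the method described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Frozen large teacher (assume self-supervised weights are loaded here) and a small student.
teacher = models.resnet50(weights=None)
teacher.fc = nn.Identity()
teacher.eval()

student = models.resnet18(weights=None)
student.fc = nn.Identity()

# Small projection head mapping student features (512-d) to the teacher's dimension (2048-d).
proj = nn.Linear(512, 2048)

optimizer = torch.optim.SGD(list(student.parameters()) + list(proj.parameters()), lr=0.05)

def distill_step(images):
    """One distillation step: the student regresses the teacher's normalized embedding."""
    with torch.no_grad():
        t = F.normalize(teacher(images), dim=1)      # target embedding from the frozen teacher
    s = F.normalize(proj(student(images)), dim=1)    # student embedding after projection
    loss = 2 - 2 * (s * t).sum(dim=1).mean()         # equals MSE between L2-normalized vectors
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage example (random inputs in place of an image batch):
# loss = distill_step(torch.randn(8, 3, 224, 224))

An online variant would instead train the teacher and student jointly in one run, so the student distills from the teacher's current embeddings rather than from a fully pretrained checkpoint.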


Related research

10/03/2022 · Attention Distillation: self-supervised vision transformer students need more guidance
Self-supervised learning has been widely applied to train high-quality v...

07/30/2021 · On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals
It is a consensus that small models perform quite poorly under the parad...

07/04/2021 · Bag of Instances Aggregation Boosts Self-supervised Learning
Recent advances in self-supervised learning have experienced remarkable ...

07/20/2023 · SLPD: Slide-level Prototypical Distillation for WSIs
Improving the feature representation ability is the foundation of many w...

07/24/2023 · CLIP-KD: An Empirical Study of Distilling CLIP Models
CLIP has become a promising language-supervised visual pre-training fram...

04/04/2023 · VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution
Since the introduction of deep learning, a wide scope of representation ...

12/02/2021 · InsCLR: Improving Instance Retrieval with Self-Supervision
This work aims at improving instance retrieval with self-supervision. We...
