A Simple Recipe for Competitive Low-compute Self-supervised Vision Models

01/23/2023
by Quentin Duval, et al.

Self-supervised methods in vision have mostly focused on large architectures, as they seem to suffer a significant performance drop for smaller architectures. In this paper, we propose a simple self-supervised distillation technique that can train high-performance low-compute neural networks. Our main insight is that existing joint-embedding-based SSL methods can be repurposed for knowledge distillation from a large self-supervised teacher to a small student model. Thus, we call our method Replace one Branch (RoB), as it simply replaces one branch of the joint-embedding training with a large teacher model. RoB is widely applicable to a number of architectures such as small ResNets, MobileNets and ViTs, and to pretrained models such as DINO, SwAV or iBOT. When pretraining on the ImageNet dataset, RoB yields models that compete with supervised knowledge distillation. When applied to MSN, RoB produces students with strong semi-supervised capabilities. Finally, our best ViT-Tiny models improve over the prior SSL state-of-the-art on ImageNet by 2.3% and are on par with or better than a supervised distilled DeiT on five downstream transfer tasks (iNaturalist, CIFAR, Clevr/Count, Clevr/Dist and Places). We hope RoB enables practical self-supervision at smaller scale.
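To make the core idea concrete: RoB takes an existing joint-embedding SSL setup and replaces one of its two branches with a large, frozen self-supervised teacher, so the small student is trained to match the teacher's embeddings on augmented views. The sketch below illustrates only this general idea and is not the authors' implementation; the specific choices (a frozen ResNet-50 standing in for a DINO/SwAV/iBOT checkpoint, a MobileNetV3-Small student, a cosine-similarity loss, and the projection head `proj`) are assumptions for illustration.

```python
# Minimal sketch of the "Replace one Branch" (RoB) idea: a frozen self-supervised
# teacher replaces one branch of a joint-embedding setup, and a small student is
# trained to match its embeddings. Loss, heads and augmentations are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision.models import resnet50, mobilenet_v3_small

# Frozen teacher branch (in practice a pretrained DINO/SwAV/iBOT checkpoint;
# a randomly initialised ResNet-50 here so the sketch runs standalone).
teacher = resnet50(weights=None)
teacher.fc = nn.Identity()          # expose 2048-d features
teacher.eval()
for p in teacher.parameters():
    p.requires_grad = False

# Low-compute student branch (e.g. a MobileNet or ViT-Tiny).
student = mobilenet_v3_small(weights=None)
student.classifier = nn.Identity()  # expose 576-d features

# Hypothetical projection head mapping student features to the teacher's dimension.
proj = nn.Sequential(nn.Linear(576, 1024), nn.GELU(), nn.Linear(1024, 2048))

# Placeholder augmentations; a real run would use the SSL method's own recipe.
augment = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
])

optimizer = torch.optim.AdamW(
    list(student.parameters()) + list(proj.parameters()), lr=1e-3
)

def rob_step(images):
    """One distillation step: teacher and student each see a different view."""
    view_t, view_s = augment(images), augment(images)
    with torch.no_grad():
        target = teacher(view_t)     # frozen branch, no gradients
    pred = proj(student(view_s))     # trainable branch
    loss = 1 - F.cosine_similarity(pred, target, dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on random tensors standing in for unlabeled ImageNet images.
loss = rob_step(torch.randn(8, 3, 224, 224))
print(f"distillation loss: {loss:.4f}")
```

In a real run the teacher would load a pretrained checkpoint and the loss and augmentations would follow whichever joint-embedding method is being repurposed; the only point the sketch makes is that the teacher branch stays frozen while the student branch keeps the usual SSL training loop.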

Related research

10/03/2022
Attention Distillation: self-supervised vision transformer students need more guidance
Self-supervised learning has been widely applied to train high-quality v...

10/06/2022
Effective Self-supervised Pre-training on Low-compute networks without Distillation
Despite the impressive progress of self-supervised learning (SSL), its a...

03/25/2023
Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression
Recent vision architectures and self-supervised training methods enable ...

01/13/2022
SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation
Feature regression is a simple way to distill large neural network model...

07/30/2021
On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals
It is a consensus that small models perform quite poorly under the parad...

07/13/2022
DSPNet: Towards Slimmable Pretrained Networks based on Discriminative Self-supervised Learning
Self-supervised learning (SSL) has achieved promising downstream perform...

04/19/2021
DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning
While self-supervised representation learning (SSL) has received widespr...
