SEED: Self-supervised Distillation For Visual Representation

01/12/2021
by Zhiyuan Fang, et al.

This paper is concerned with self-supervised learning for small models. The problem is motivated by our empirical studies that, while the widely used contrastive self-supervised learning method has shown great progress on large model training, it does not work well for small models. To address this problem, we propose a new learning paradigm, named SElf-SupErvised Distillation (SEED), in which we leverage a larger network (as Teacher) to transfer its representational knowledge into a smaller architecture (as Student) in a self-supervised fashion. Instead of directly learning from unlabeled data, we train a student encoder to mimic the similarity score distribution inferred by a teacher over a set of instances. We show that SEED dramatically boosts the performance of small networks on downstream tasks. Compared with self-supervised baselines, SEED improves the top-1 accuracy from 42.2% to 67.6% on EfficientNet-B0 and from 36.3% to 68.2% on MobileNet-v3-Large on the ImageNet-1k dataset.
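The core objective can be summarized in a few lines: both encoders score the same image against a maintained queue of instance features, and the student is trained to match the teacher's softened similarity distribution. Below is a minimal sketch of that loss, assuming a PyTorch-style setup; the function name `seed_loss`, the queue, and the temperature values are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def seed_loss(student_feat, teacher_feat, queue, t_student=0.2, t_teacher=0.07):
    """Cross-entropy between the teacher's and student's similarity
    distributions, computed over a queue of instance features.
    Shapes: student_feat/teacher_feat (B, D), queue (K, D)."""
    # L2-normalize embeddings so dot products are cosine similarities.
    s = F.normalize(student_feat, dim=1)
    t = F.normalize(teacher_feat, dim=1)
    q = F.normalize(queue, dim=1)

    # Similarity of each image to every instance in the queue,
    # scaled by (assumed) per-encoder temperatures.
    logits_s = s @ q.t() / t_student   # (B, K)
    logits_t = t @ q.t() / t_teacher   # (B, K)

    # The student mimics the teacher's softened score distribution.
    p_teacher = F.softmax(logits_t, dim=1)
    log_p_student = F.log_softmax(logits_s, dim=1)
    return -(p_teacher * log_p_student).sum(dim=1).mean()

# Usage with random tensors: batch of 4 images, 128-d embeddings,
# queue of 4096 instance features (all sizes are illustrative).
loss = seed_loss(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4096, 128))
```

Because the target distribution comes from the teacher rather than from instance labels or augmentation-based positives, no annotation is needed, and the student inherits the structure of the teacher's embedding space.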


Related research

10/28/2020 · CompRess: Self-Supervised Learning by Compressing Representations
Self-supervised learning aims to learn good representations with unlabel...

07/05/2021 · Continual Contrastive Self-supervised Learning for Image Classification
For artificial learning systems, continual learning over time from a str...

11/17/2022 · Self-Supervised Visual Representation Learning via Residual Momentum
Self-supervised learning (SSL) approaches have shown promising capabilit...

05/28/2023 · LowDINO – A Low Parameter Self Supervised Learning Model
This research aims to explore the possibility of designing a neural netw...

04/26/2023 · Hopfield model with planted patterns: a teacher-student self-supervised learning model
While Hopfield networks are known as paradigmatic models for memory stor...

02/18/2022 · Masked prediction tasks: a parameter identifiability view
The vast majority of work in self-supervised learning, both theoretical ...

10/26/2020 · Refactoring Policy for Compositional Generalizability using Self-Supervised Object Proposals
We study how to learn a policy with compositional generalizability. We p...
