Slimmable Networks for Contrastive Self-supervised Learning

09/30/2022
by Shuai Zhao, et al.

Self-supervised learning has made great progress in pre-training large models, but it struggles when training small models. Previous solutions to this problem rely mainly on knowledge distillation, which entails a two-stage learning procedure: first train a large teacher model, then distill it to improve the generalization of small ones. In this work, we present a one-stage solution that yields pre-trained small models without an extra teacher: slimmable networks for contrastive self-supervised learning (SlimCLR). A slimmable network contains a full network and several weight-sharing sub-networks; we pre-train only once and obtain a family of networks, including small ones with low computation cost. In the self-supervised setting, however, interference between the weight-sharing networks leads to severe performance degradation. One piece of evidence of this interference is gradient imbalance: a small proportion of parameters produces dominant gradients during backpropagation, so the main parameters may not be fully optimized. Divergence in the gradient directions of the various networks can also cause interference. To overcome these problems, we make the main parameters produce dominant gradients and provide consistent guidance to the sub-networks via three techniques: slow start training of sub-networks, online distillation, and loss re-weighting according to model size. In addition, a switchable linear probe layer is applied during linear evaluation to avoid interference between weight-sharing linear layers. We instantiate SlimCLR with typical contrastive learning frameworks and achieve better performance than previous methods with fewer parameters and FLOPs.
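The abstract describes three remedies for gradient imbalance: slow start training of the sub-networks, online distillation from the full network, and loss re-weighting by model size. The sketch below illustrates how such a training step could look in PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: the width multipliers, warm-up length, MSE-based distillation term, and helper names (SlimmableLinear, set_width, contrastive_loss, train_step) are all illustrative choices.

```python
# Minimal PyTorch sketch of a SlimCLR-style training step. Everything below
# (width multipliers, warm-up length, the MSE distillation term, and the
# helper names) is an illustrative assumption, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlimmableLinear(nn.Linear):
    """Linear layer whose sub-networks reuse a leading slice of the full weight."""

    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        self.width_mult = 1.0  # switched externally before each forward pass

    def forward(self, x):
        out_dim = int(self.out_features * self.width_mult)
        in_dim = x.shape[-1]  # input may already be slimmed by earlier layers
        return F.linear(x, self.weight[:out_dim, :in_dim], self.bias[:out_dim])


def set_width(model, width_mult):
    """Hypothetical helper: tell every slimmable layer which slice to use."""
    for m in model.modules():
        if isinstance(m, SlimmableLinear):
            m.width_mult = width_mult


def contrastive_loss(z1, z2, temperature=0.2):
    """Simplified one-directional InfoNCE between two views of a batch."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def train_step(encoder, views, width_mults=(1.0, 0.5, 0.25),
               epoch=0, warmup_epochs=10):
    """One training step combining the three techniques from the abstract.

    Assumes the encoder's projection head keeps a fixed output dimension
    across widths, so teacher and student embeddings are comparable.
    """
    # Slow start: only the full network is trained during the early epochs,
    # so the main parameters produce the dominant gradients first.
    active = width_mults if epoch >= warmup_epochs else (1.0,)

    total_loss, teacher = 0.0, None
    for w in active:  # the full network (w == 1.0) must come first
        set_width(encoder, w)
        z1, z2 = encoder(views[0]), encoder(views[1])
        loss = contrastive_loss(z1, z2)

        if w == 1.0:
            # The full network acts as an online teacher for the sub-networks.
            teacher = (z1.detach(), z2.detach())
        else:
            # Online distillation: pull sub-network embeddings toward the full
            # network's (an MSE term here; the paper's exact form may differ).
            loss = loss + F.mse_loss(z1, teacher[0]) + F.mse_loss(z2, teacher[1])

        # Loss re-weighting by model size: smaller sub-networks contribute less.
        total_loss = total_loss + w * loss
    return total_loss
```

In this sketch the returned loss would be back-propagated once per batch, so the shared weights receive a single, re-weighted gradient from all active widths rather than separate, conflicting updates.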

Related research

- Distilling Visual Priors from Self-Supervised Learning (08/01/2020): Convolutional Neural Networks (CNNs) are prone to overfit small training...
- On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals (07/30/2021): It is a consensus that small models perform quite poorly under the parad...
- EnSiam: Self-Supervised Learning With Ensemble Representations (05/22/2023): Recently, contrastive self-supervised learning, where the proximity of r...
- Continual Contrastive Self-supervised Learning for Image Classification (07/05/2021): For artificial learning systems, continual learning over time from a str...
- Establishing a stronger baseline for lightweight contrastive models (12/14/2022): Recent research has reported a performance degradation in self-supervise...
- Effective Self-supervised Pre-training on Low-compute networks without Distillation (10/06/2022): Despite the impressive progress of self-supervised learning (SSL), its a...
- Interference and Generalization in Temporal Difference Learning (03/13/2020): We study the link between generalization and interference in temporal-di...
