Effective Self-supervised Pre-training on Low-compute Networks without Distillation

10/06/2022
by Fuwen Tan, et al.

Despite the impressive progress of self-supervised learning (SSL), its applicability to low-compute networks has received limited attention. Reported performance has trailed behind standard supervised pre-training by a large margin, barring self-supervised learning from making an impact on models that are deployed on device. Most prior works attribute this poor performance to the capacity bottleneck of low-compute networks and opt to bypass the problem through the use of knowledge distillation (KD). In this work, we revisit SSL for efficient neural networks, taking a closer look at the detrimental factors behind its practical limitations and asking whether they are intrinsic to the self-supervised low-compute setting. We find that, contrary to accepted knowledge, there is no intrinsic architectural bottleneck; instead, we diagnose the performance bottleneck as stemming from the trade-off between model complexity and regularization strength. In particular, we start by empirically observing that the use of local views can have a dramatic impact on the effectiveness of SSL methods. This hints at view sampling being one of the performance bottlenecks for SSL on low-capacity networks. We hypothesize that the view sampling strategy designed for large neural networks, which requires matching views across very diverse spatial scales and contexts, is too demanding for low-capacity architectures. We systematize the design of the view sampling mechanism, leading to a new training methodology that consistently improves performance across different SSL methods (e.g. MoCo-v2, SwAV, DINO), different low-compute networks (e.g. MobileNetV2, ResNet18, ResNet34, ViT-Ti), and different tasks (linear probing, object detection, instance segmentation, and semi-supervised learning). Our best models establish a new state of the art for SSL methods on low-compute networks despite not using a KD loss term.
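To make the view-sampling discussion concrete, the sketch below shows a generic multi-crop augmentation pipeline in the style of SwAV/DINO, built with torchvision. The class name, crop counts, resolutions, and scale ranges are illustrative assumptions for this sketch only; they are not the configuration proposed in the paper, whose contribution is precisely to re-tune this mechanism for low-capacity networks.

```python
# Minimal sketch of a SwAV/DINO-style multi-crop view sampler.
# All hyperparameters below are illustrative placeholders, NOT the
# paper's proposed recipe.
import torchvision.transforms as T


class MultiCropTransform:
    """Produce several augmented views of one image: a few large
    'global' crops plus many small 'local' crops."""

    def __init__(self,
                 global_crops=2, global_size=224, global_scale=(0.3, 1.0),
                 local_crops=6, local_size=96, local_scale=(0.05, 0.3)):
        # Shared photometric augmentations.
        color = T.Compose([
            T.RandomApply([T.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),
            T.RandomGrayscale(p=0.2),
        ])
        normalize = T.Compose([
            T.ToTensor(),
            T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
        ])
        # Global views: large crops covering most of the image.
        self.global_t = T.Compose([
            T.RandomResizedCrop(global_size, scale=global_scale),
            T.RandomHorizontalFlip(p=0.5), color, normalize,
        ])
        # Local views: small crops covering a narrow region.
        self.local_t = T.Compose([
            T.RandomResizedCrop(local_size, scale=local_scale),
            T.RandomHorizontalFlip(p=0.5), color, normalize,
        ])
        self.global_crops = global_crops
        self.local_crops = local_crops

    def __call__(self, img):
        views = [self.global_t(img) for _ in range(self.global_crops)]
        views += [self.local_t(img) for _ in range(self.local_crops)]
        return views  # list of tensors fed to the SSL objective
```

In practice, the list of views returned by such a transform is consumed by the chosen SSL objective (e.g. the contrastive loss of MoCo-v2 or the self-distillation loss of DINO). The abstract's hypothesis is that matching views drawn from very different scale ranges and contexts, as in the default large-network setup above, is too demanding for small backbones, which motivates redesigning these sampling parameters for the low-compute setting.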


Related research

01/23/2023 · A Simple Recipe for Competitive Low-compute Self-supervised Vision Models
Self-supervised methods in vision have been mostly focused on large arch...

04/14/2022 · DeiT III: Revenge of the ViT
A Vision Transformer (ViT) is a simple neural architecture amenable to s...

07/22/2020 · Instance-aware Self-supervised Learning for Nuclei Segmentation
Due to the wide existence and large morphological variances of nuclei, a...

03/13/2023 · Three Guidelines You Should Know for Universally Slimmable Self-Supervised Learning
We propose universally slimmable self-supervised learning (dubbed as US3...

09/30/2022 · Slimmable Networks for Contrastive Self-supervised Learning
Self-supervised learning makes great progress in large model pre-trainin...

07/07/2022 · An Embedding-Dynamic Approach to Self-supervised Learning
A number of recent self-supervised learning methods have shown impressiv...

12/03/2020 · Using Cross-Loss Influence Functions to Explain Deep Network Representations
As machine learning is increasingly deployed in the real world, it is ev...
