Rethinking Self-Supervised Learning: Small is Beautiful

03/25/2021
by Yun-Hao Cao, et al.

Self-supervised learning (SSL), in particular contrastive learning, has made great progress in recent years. However, a common theme in these methods is that they inherit the learning paradigm from the supervised deep learning scenario. Current SSL methods are often pretrained for many epochs on large-scale datasets using high-resolution images, which brings heavy computational cost and lacks flexibility. In this paper, we demonstrate that the learning paradigm for SSL should be different from that of supervised learning, and that the information encoded by the contrastive loss is expected to be much less than that encoded by the labels in supervised learning via the cross-entropy loss. Hence, we propose scaled-down self-supervised learning (S3L), which consists of three parts: small resolution, small architecture, and small data. Across a diverse set of datasets, SSL methods, and backbone architectures, S3L consistently achieves higher accuracy at much lower training cost than the previous SSL learning paradigm. Furthermore, we show that even without a large pretraining dataset, S3L can achieve impressive results on small data alone. Our code has been made publicly available at https://github.com/CupidJay/Scaled-down-self-supervised-learning.
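The three "smalls" are easy to sketch in code. Below is a minimal PyTorch illustration of contrastive pretraining under the S3L recipe, using a SimCLR-style InfoNCE loss. The 112x112 resolution, ResNet-18 backbone, 10% data subset, and all hyperparameters are illustrative assumptions for this sketch, not the paper's exact settings; see the repository linked above for the authors' implementation.

import torch
import torch.nn.functional as F
from torch.utils.data import Subset, DataLoader
import torchvision
from torchvision import transforms

SMALL_RES = 112        # "small resolution": half the usual 224x224 (assumed value)
SUBSET_FRACTION = 0.1  # "small data": pretrain on a fraction of the dataset (assumed value)

# Two randomly augmented views per image, cropped at the reduced resolution.
augment = transforms.Compose([
    transforms.RandomResizedCrop(SMALL_RES, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

class TwoViews:
    """Wrap a transform so each image yields a positive pair of views."""
    def __init__(self, t): self.t = t
    def __call__(self, x): return self.t(x), self.t(x)

def info_nce(z1, z2, tau=0.2):
    """Standard NT-Xent / InfoNCE loss over a batch of paired views."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float('-inf'))  # exclude self-similarity
    n = z1.size(0)
    # Row i's positive is row i+n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(sim.device)
    return F.cross_entropy(sim, targets)

def pretrain(root, epochs=100, batch_size=256, device='cuda'):
    data = torchvision.datasets.ImageFolder(root, transform=TwoViews(augment))
    # "small data": keep a random subset of the training set.
    idx = torch.randperm(len(data))[:int(SUBSET_FRACTION * len(data))]
    loader = DataLoader(Subset(data, idx.tolist()), batch_size=batch_size,
                        shuffle=True, num_workers=4, drop_last=True)
    # "small architecture": ResNet-18 instead of ResNet-50; its final fc
    # layer doubles as the projection head in this sketch.
    model = torchvision.models.resnet18(num_classes=128).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.5,
                          momentum=0.9, weight_decay=1e-4)
    for epoch in range(epochs):
        for (v1, v2), _ in loader:
            loss = info_nce(model(v1.to(device)), model(v2.to(device)))
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f'epoch {epoch}: last-batch loss {loss.item():.3f}')

Because every component is scaled down at once, the per-step cost drops on three axes: smaller inputs shrink activations quadratically in resolution, the smaller backbone cuts parameters and FLOPs, and the data subset shortens each epoch.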



Related research

12/04/2020  Super-Selfish: Self-Supervised Learning on Images with PyTorch
Super-Selfish is an easy to use PyTorch framework for image-based self-s...

11/27/2020  Self supervised contrastive learning for digital histopathology
Unsupervised learning has been a long-standing goal of machine learning ...

04/01/2022  WavFT: Acoustic model finetuning with labelled and unlabelled data
Unsupervised and self-supervised learning methods have leveraged unlabel...

09/12/2022  Action-based Early Autism Diagnosis Using Contrastive Feature Learning
Autism, also known as Autism Spectrum Disorder (or ASD), is a neurologic...

09/05/2023  Prototype-based Dataset Comparison
Dataset summarisation is a fruitful approach to dataset inspection. Howe...

09/06/2022  Robust and Efficient Imbalanced Positive-Unlabeled Learning with Self-supervision
Learning from positive and unlabeled (PU) data is a setting where the le...

02/17/2021  S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration
Previous studies dominantly target at self-supervised learning on real-v...
