ScatSimCLR: self-supervised contrastive learning with pretext task regularization for small-scale datasets

08/31/2021
by   Vitaliy Kinakh, et al.

In this paper, we consider the problem of self-supervised learning for small-scale datasets based on a contrastive loss between multiple views of the data, an approach that demonstrates state-of-the-art performance on classification tasks. Despite the reported results, several factors remain understudied: the complexity of training, which requires complex architectures; the number of views produced by data augmentation; and the impact of both on classification accuracy. To establish the role of these factors, we consider a contrastive-loss architecture such as SimCLR in which the baseline model is replaced by the geometrically invariant "hand-crafted" network ScatNet with a small trainable adapter network, and we argue that the number of parameters of the whole system and the number of views can be considerably reduced while practically preserving the same classification accuracy. In addition, we investigate the impact of regularization strategies based on pretext task learning, in which the parameters of an augmentation transform such as rotation or jigsaw permutation are estimated, for both traditional baseline models and ScatNet-based models. Finally, we demonstrate that the proposed architecture with pretext task regularization achieves state-of-the-art classification performance with a smaller number of trainable parameters and a reduced number of views.
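The contrastive objective mentioned in the abstract is SimCLR's NT-Xent (normalized temperature-scaled cross-entropy) loss over pairs of augmented views. Below is a minimal NumPy sketch of that loss, not the authors' implementation (which additionally involves the ScatNet backbone, the adapter network, and the pretext-task heads); the function name and the temperature default are illustrative.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss for two batches of embeddings z1, z2 of shape (N, d),
    where row i of z1 and row i of z2 come from two views of the same image.
    All other rows in the combined batch serve as negatives."""
    z = np.concatenate([z1, z2], axis=0)                 # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # L2-normalize
    sim = z @ z.T / temperature                          # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                       # exclude self-similarity
    n = z1.shape[0]
    # The positive partner of sample i is i+n (and of i+n is i).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))          # softmax normalizer
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)     # -log softmax of positive
    return loss.mean()
```

With a low temperature, embeddings whose two views coincide yield a much smaller loss than embeddings paired with unrelated views, which is the training signal that pulls views of the same image together.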


