An Embedding-Dynamic Approach to Self-supervised Learning

07/07/2022
by Suhong Moon, et al.

A number of recent self-supervised learning methods have shown impressive performance on image classification and other tasks. A somewhat bewildering variety of techniques have been used, not always with a clear understanding of the reasons for their benefits, especially when used in combination. Here we treat the embeddings of images as point particles and consider model optimization as a dynamic process on this system of particles. Our dynamic model combines an attractive force for similar images, a locally dispersive force to avoid local collapse, and a global dispersive force to achieve a globally homogeneous distribution of particles. The dynamic perspective highlights the advantage of using a delayed-parameter image embedding (à la BYOL) together with multiple views of the same image. It also uses a purely dynamic local dispersive force (Brownian motion) that improves performance over other methods and does not require knowledge of other particles' coordinates. The method is called MSBReg, which stands for: (i) a Multiview centroid loss, which applies an attractive force to pull different image-view embeddings toward their centroid; (ii) a Singular value loss, which pushes the particle system toward a spatially homogeneous density; and (iii) a Brownian diffusive loss, which applies a locally dispersive force to prevent local collapse. We evaluate the downstream classification performance of MSBReg on ImageNet as well as on transfer learning tasks including fine-grained classification, multi-class object classification, object detection, and instance segmentation. We also show that applying our regularization term to other methods further improves their performance and stabilizes training by preventing mode collapse.
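The three loss terms act directly on the embedding coordinates, so their roles are easy to picture in code. Below is a minimal PyTorch sketch of how such terms might be implemented; the function names, the log-singular-value penalty, and the noise-injection form of the Brownian term are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def multiview_centroid_loss(views):
    """Attractive force: pull each view's embedding toward the centroid
    of all augmented views of the same image (hypothetical formulation)."""
    z = torch.stack(views, dim=0)            # (n_views, batch, dim)
    centroid = z.mean(dim=0, keepdim=True)   # (1, batch, dim)
    return ((z - centroid) ** 2).sum(dim=-1).mean()

def singular_value_loss(z, eps=1e-6):
    """Global dispersive force: penalize small singular values of the
    centered embedding matrix, pushing toward a homogeneous density."""
    z = z - z.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(z)              # singular values of (batch, dim)
    return -torch.log(s + eps).mean()

def brownian_diffusion(z, sigma=0.1):
    """Local dispersive force: inject isotropic Gaussian noise, which depends
    only on the particle itself, not on other particles' coordinates."""
    return z + sigma * torch.randn_like(z)

# Illustrative training step with a hypothetical encoder `f` and two views x1, x2:
# z1, z2 = brownian_diffusion(f(x1)), brownian_diffusion(f(x2))
# loss = multiview_centroid_loss([z1, z2]) + 0.1 * singular_value_loss(torch.cat([z1, z2]))
```

In this sketch the Brownian term is applied as a stochastic perturbation of the embeddings before the attractive loss, which is one way a purely dynamic, coordinate-free dispersive force could enter training.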
