ReSSL: Relational Self-Supervised Learning with Weak Augmentation

07/20/2021
by Mingkai Zheng, et al.

Self-supervised learning (SSL), including mainstream contrastive learning, has achieved great success in learning visual representations without data annotations. However, most methods focus mainly on instance-level information (i.e., the different augmented images of the same instance should have the same feature or cluster into the same class), while the relationships between different instances receive little attention. In this paper, we introduce a novel SSL paradigm, which we term the relational self-supervised learning (ReSSL) framework, that learns representations by modeling the relationships between different instances. Specifically, our proposed method employs a sharpened distribution of pairwise similarities among different instances as a relation metric, which is then used to match the feature embeddings of different augmentations. Moreover, to boost performance, we argue that weak augmentations matter for representing a more reliable relation, and we leverage a momentum strategy for practical efficiency. Experimental results show that our proposed ReSSL significantly outperforms previous state-of-the-art algorithms in terms of both performance and training efficiency. Code is available at <https://github.com/KyleZheng1997/ReSSL>.
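To make the relation-matching idea above concrete, here is a minimal PyTorch sketch of such a loss: a momentum ("teacher") encoder embeds the weakly augmented view, the student encoder embeds the strongly augmented view, both are compared against a memory queue of past embeddings, and the student's similarity distribution is pushed toward the teacher's sharpened one. The names (`ressl_loss`, `queue`, `tau_s`, `tau_t`) and the temperature values are illustrative assumptions, not the authors' exact implementation; see the linked repository for the real code.

```python
import torch
import torch.nn.functional as F

def ressl_loss(z_student: torch.Tensor,  # (N, D) embeddings of strong views
               z_teacher: torch.Tensor,  # (N, D) embeddings of weak views
               queue: torch.Tensor,      # (K, D) memory queue of embeddings
               tau_s: float = 0.1,       # student temperature (assumed value)
               tau_t: float = 0.04       # teacher temperature, lower => sharper
               ) -> torch.Tensor:
    # Normalize embeddings so dot products are cosine similarities.
    z_s = F.normalize(z_student, dim=1)
    z_t = F.normalize(z_teacher, dim=1)
    q = F.normalize(queue, dim=1)

    # Pairwise similarities of each view against the queue: (N, K).
    logits_s = z_s @ q.t() / tau_s
    logits_t = z_t @ q.t() / tau_t

    # The sharpened teacher distribution is the relation target; the
    # student distribution is matched to it via cross-entropy. The target
    # is detached so no gradient flows into the momentum encoder.
    p_t = F.softmax(logits_t, dim=1).detach()
    log_p_s = F.log_softmax(logits_s, dim=1)
    return -(p_t * log_p_s).sum(dim=1).mean()
```

In this sketch, using a lower temperature for the teacher (`tau_t < tau_s`) is what sharpens the target distribution, and the weak augmentation on the teacher side keeps that target relation reliable.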
