MSVQ: Self-Supervised Learning with Multiple Sample Views and Queues

05/09/2023
by Chen Peng, et al.

Self-supervised methods based on contrastive learning have achieved great success in unsupervised visual representation learning. However, most methods under this framework suffer from the problem of false negative samples. Inspired by mean shift for self-supervised learning, we propose a simple new framework, Multiple Sample Views and Queues (MSVQ). We construct a soft label on the fly through two complementary and symmetric mechanisms: multiple augmented positive views, and two momentum encoders that produce diverse semantic features for the negative samples. Two teacher networks compute similarity relationships with the negative samples and then distill this knowledge to the student, letting the student mimic the inter-sample similarity relationships and thereby identify false negative samples in the dataset more flexibly. Classification results on four benchmark image datasets demonstrate the effectiveness and efficiency of our approach compared to several classical methods. Source code and pretrained models are available at https://github.com/pc-cp/MSVQ.
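The core mechanism described above, teacher networks producing similarity distributions over queued negatives that serve as a soft label for the student, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the function names, temperature values, and the simple averaging of the two teachers' distributions are all illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project features onto the unit hypersphere, as is standard in contrastive learning."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def softmax(logits, axis=-1):
    """Numerically stable softmax."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def soft_label_distillation_loss(student_feats, teacher_feats_list, queue,
                                 t_student=0.1, t_teacher=0.04):
    """Hypothetical sketch of similarity distillation with a queue of negatives.

    Each teacher's features are compared against the queue to form a similarity
    distribution (sharpened by a lower temperature); the teachers' distributions
    are averaged into a soft label. The student is trained to mimic that soft
    label via cross-entropy on its own similarity distribution.
    """
    q = l2_normalize(queue)
    # Soft label: average of the teachers' similarity distributions over the queue.
    teacher_probs = np.mean(
        [softmax(l2_normalize(t) @ q.T / t_teacher) for t in teacher_feats_list],
        axis=0,
    )
    # Student's similarity distribution over the same queue (softer temperature).
    student_log_probs = np.log(softmax(l2_normalize(student_feats) @ q.T / t_student))
    # Cross-entropy between teacher soft labels and student distribution.
    return -(teacher_probs * student_log_probs).sum(axis=1).mean()
```

Because the soft label spreads probability mass over queue entries that the teachers find similar to the query, entries that are semantically close (potential false negatives) are no longer pushed away as hard as in a one-hot contrastive objective.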


