Self-Supervised Visual Representation Learning via Residual Momentum

11/17/2022
by Trung X. Pham, et al.

Self-supervised learning (SSL) approaches have shown promising capabilities in learning representations from unlabeled data. Among them, momentum-based frameworks have attracted significant attention. Despite their great success, these momentum-based SSL frameworks suffer from a large gap in representation between the online encoder (student) and the momentum encoder (teacher), which hinders performance on downstream tasks. This paper is the first to investigate and identify this invisible gap as a bottleneck that has been overlooked in existing SSL frameworks, potentially preventing the models from learning good representations. To solve this problem, we propose "residual momentum" to directly reduce this gap, encouraging the student to learn a representation as close to that of the teacher as possible, narrowing the performance gap with the teacher, and significantly improving existing SSL. Our method is straightforward, easy to implement, and can be easily plugged into other SSL frameworks. Extensive experimental results on numerous benchmark datasets and diverse network architectures demonstrate the effectiveness of our method over state-of-the-art contrastive learning baselines.
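The two ingredients the abstract describes can be sketched in a few lines: the standard exponential-moving-average (EMA) teacher update used by momentum-based SSL frameworks such as MoCo and BYOL, and a loss that directly measures the student-teacher representation gap. This is a minimal NumPy illustration, not the paper's implementation; the function names `ema_update` and `residual_gap` and the cosine-distance form of the gap are assumptions for illustration.

```python
import numpy as np

def ema_update(teacher_w, student_w, m=0.99):
    """Standard momentum (EMA) teacher update in momentum-based SSL:
    teacher <- m * teacher + (1 - m) * student."""
    return m * teacher_w + (1.0 - m) * student_w

def residual_gap(z_student, z_teacher):
    """Illustrative student-teacher gap: mean cosine distance between
    L2-normalized representations. Minimizing a term like this pulls
    the student's representation toward the teacher's, in the spirit
    of the proposed "residual momentum" (exact form is an assumption)."""
    zs = z_student / np.linalg.norm(z_student, axis=-1, keepdims=True)
    zt = z_teacher / np.linalg.norm(z_teacher, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(zs * zt, axis=-1)))

# Identical representations give a gap of 0; orthogonal ones give 1.
z = np.array([[1.0, 0.0], [0.0, 1.0]])
print(residual_gap(z, z))  # 0.0
```

In a training loop, a weighted `residual_gap` term would simply be added to the existing contrastive objective before backpropagating through the student, while the teacher continues to be updated only via `ema_update`.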


Related research

01/12/2021 · SEED: Self-supervised Distillation For Visual Representation
This paper is concerned with self-supervised learning for small models. ...

08/11/2022 · On the Pros and Cons of Momentum Encoder in Self-Supervised Visual Representation Learning
Exponential Moving Average (EMA or momentum) is widely used in modern se...

01/19/2021 · Momentum^2 Teacher: Momentum Teacher with Momentum Statistics for Self-Supervised Learning
In this paper, we present a novel approach, Momentum^2 Teacher, for stud...

05/28/2023 · LowDINO: A Low Parameter Self Supervised Learning Model
This research aims to explore the possibility of designing a neural netw...

04/19/2021 · DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning
While self-supervised representation learning (SSL) has received widespr...

11/22/2020 · Run Away From your Teacher: Understanding BYOL by a Novel Self-Supervised Approach
Recently, a newly proposed self-supervised framework Bootstrap Your Own ...

07/17/2022 · Fast-MoCo: Boost Momentum-based Contrastive Learning with Combinatorial Patches
Contrastive-based self-supervised learning methods achieved great succes...
