Self-Distilled Self-Supervised Representation Learning

11/25/2021
by Jiho Jang et al.

State-of-the-art frameworks in self-supervised learning have recently shown that fully utilizing transformer-based models can lead to a performance boost over conventional CNN models. Striving to maximize the mutual information between two views of an image, existing works apply a contrastive loss to the final representations. In our work, we go further by allowing the intermediate representations to learn from the final layers via the contrastive loss, which maximizes an upper bound of the original objective as well as the mutual information between the two layers. Our method, Self-Distilled Self-Supervised Learning (SDSSL), outperforms competitive baselines (SimCLR, BYOL, and MoCo v3) using ViT on various tasks and datasets. Under the linear evaluation and k-NN protocols, SDSSL leads to superior performance not only in the final layers but also in most of the lower layers. Furthermore, we use positive and negative alignment to explain how representations are formed more effectively. Code will be available.
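The core idea above can be sketched in a few lines: in addition to the usual contrastive (InfoNCE-style) term between the final representations of two augmented views, each intermediate layer of one view is also contrasted against the final representation of the other view. The sketch below is illustrative only, written with NumPy under assumed details (the `lam` weighting, temperature, and per-layer averaging are placeholders, not the paper's exact formulation):

```python
import numpy as np

def info_nce(queries, keys, temperature=0.1):
    """InfoNCE loss: the positive for each query is the key at the same index;
    all other keys in the batch act as negatives."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                     # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # cross-entropy on the diagonal

def sdssl_style_loss(layers_view1, final_view2, lam=1.0):
    """Illustrative SDSSL-style objective: the standard contrastive term on the
    final layer, plus self-distillation terms pulling each intermediate layer
    of view 1 toward the final representation of view 2. `lam` is an assumed
    balancing weight, not taken from the paper."""
    loss = info_nce(layers_view1[-1], final_view2)
    n_inter = max(len(layers_view1) - 1, 1)
    distill = sum(info_nce(z, final_view2) for z in layers_view1[:-1])
    return loss + lam * distill / n_inter
```

Because every intermediate layer receives a direct contrastive signal from the final layer, lower layers are trained toward the same target that the final representation optimizes, which is consistent with the improved lower-layer linear/k-NN results reported above.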


