A Study of the Generalizability of Self-Supervised Representations

09/19/2021
by Atharva Tendle, et al.

Recent advances in self-supervised learning (SSL) have made it possible to learn generalizable visual representations from unlabeled data. The performance of deep learning models fine-tuned on pretrained SSL representations is on par with that of models fine-tuned on state-of-the-art supervised learning (SL) representations. Despite this progress, the generalizability of SSL representations has not been studied extensively. In this article, we analyze the generalizability of pretrained SSL and SL representations more deeply by conducting a domain-based study of transfer-learning classification tasks. The representations are learned from the ImageNet source data and then fine-tuned on two types of target datasets: ones similar to the source dataset and ones significantly different from it. We study the generalizability of the SSL- and SL-based models via their prediction accuracy as well as their prediction confidence. In addition, we analyze the attribution of the final convolutional layer of these models to understand how they reason about the semantic identity of the data. We show that the SSL representations are more generalizable than the SL representations, and we explain this by investigating their invariance properties, which are shown to be stronger than those observed in the SL representations.
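To make the evaluation protocol concrete, here is a minimal sketch of the two measurements described above: fine-tuning a pretrained backbone on a target dataset and reporting prediction accuracy alongside mean prediction confidence (maximum softmax probability). It assumes a torchvision ResNet-50 backbone; the SSL checkpoint path and target data loader are hypothetical placeholders, and the paper's exact architectures and training settings may differ.

```python
# A minimal sketch (PyTorch), not the authors' exact pipeline.
# `ssl_ckpt` and the evaluation loader are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


def build_model(num_classes, ssl_ckpt=None):
    """Pretrained ImageNet (SL) backbone; optionally overwrite with SSL weights."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    if ssl_ckpt is not None:
        # Assumes the SSL checkpoint was converted to torchvision's key layout.
        state = torch.load(ssl_ckpt, map_location="cpu")
        model.load_state_dict(state, strict=False)  # backbone weights only
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new target head
    return model


@torch.no_grad()
def evaluate(model, loader):
    """Return (accuracy, mean max-softmax confidence) on a target dataset."""
    model.eval()
    correct, total, conf_sum = 0, 0, 0.0
    for images, labels in loader:
        probs = F.softmax(model(images), dim=1)
        conf, preds = probs.max(dim=1)
        correct += (preds == labels).sum().item()
        conf_sum += conf.sum().item()
        total += labels.size(0)
    return correct / total, conf_sum / total
```

Comparing the mean confidence separately on in-domain and out-of-domain targets is what surfaces the generalization gap: a model that stays accurate but becomes overconfident on wrong predictions is generalizing poorly. The attribution analysis of the final convolutional layer can likewise be approximated with a standard Grad-CAM pass over `layer4`, the last convolutional stage of a torchvision ResNet-50; the abstract does not name the authors' exact attribution method, so treat this as one plausible instantiation.

```python
def gradcam(model, image, target_layer):
    """Grad-CAM heatmap for the predicted class; `image` is a CHW tensor."""
    acts, grads = {}, {}
    fh = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    bh = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    model.eval()
    logits = model(image.unsqueeze(0))
    cls = logits.argmax(dim=1).item()          # attribute the top prediction
    logits[0, cls].backward()
    fh.remove()
    bh.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP over H, W
    cam = F.relu((weights * acts["a"]).sum(dim=1))       # weighted activation sum
    return cam / (cam.max() + 1e-8)                      # normalize to [0, 1]


# Usage: heatmap = gradcam(model, img, model.layer4)
```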

Related research

05/01/2018 - Boosting Self-Supervised Learning via Knowledge Transfer
"In self-supervised learning, one trains a model to solve a so-called pre..."

10/11/2020 - MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models
"Self-supervised approaches such as Momentum Contrast (MoCo) can leverage..."

03/13/2023 - Self-supervised based general laboratory progress pretrained model for cardiovascular event detection
"Regular surveillance is an indispensable aspect of managing cardiovascul..."

11/29/2022 - BARTSmiles: Generative Masked Language Models for Molecular Representations
"We discover a robust self-supervised strategy tailored towards molecular..."

03/14/2023 - Feature representations useful for predicting image memorability
"Predicting image memorability has attracted interest in various fields. ..."

04/07/2023 - Rethinking Evaluation Protocols of Visual Representations Learned via Self-supervised Learning
"Linear probing (LP) (and k-NN) on the upstream dataset with labels (e.g...."

02/12/2023 - Policy-Induced Self-Supervision Improves Representation Finetuning in Visual RL
"We study how to transfer representations pretrained on source tasks to t..."
