Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks

11/22/2021
by Linus Ericsson, et al.

Self-supervised learning is a powerful paradigm for representation learning on unlabelled images. A wealth of effective new methods based on instance matching rely on data augmentation to drive learning, and these have reached a rough agreement on an augmentation scheme that optimises popular recognition benchmarks. However, there is strong reason to suspect that different tasks in computer vision require features to encode different (in)variances, and therefore likely require different augmentation strategies. In this paper, we measure the invariances learned by contrastive methods and confirm that they do learn invariance to the augmentations used and further show that this invariance largely transfers to related real-world changes in pose and lighting. We show that learned invariances strongly affect downstream task performance and confirm that different downstream tasks benefit from polar opposite (in)variances, leading to performance loss when the standard augmentation strategy is used. Finally, we demonstrate that a simple fusion of representations with complementary invariances ensures wide transferability to all the diverse downstream tasks considered.
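The two operations the abstract describes, measuring invariance to an augmentation and fusing representations with complementary invariances, can be sketched in a few lines. The linear "encoder", the feature dimensions, and the noise "augmentation" below are hypothetical stand-ins for illustration, not the paper's actual models or protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Stand-in encoder: a fixed linear map in place of a pretrained backbone."""
    return W @ x

def invariance_score(feats_a, feats_b):
    """Cosine similarity between two feature vectors.

    A score near 1 means the representation barely changes under the
    augmentation, i.e. it is (approximately) invariant to it.
    """
    a = feats_a / np.linalg.norm(feats_a)
    b = feats_b / np.linalg.norm(feats_b)
    return float(a @ b)

# Hypothetical data: 64-dim "images", 32-dim features.
W = rng.normal(size=(32, 64))
x = rng.normal(size=64)
x_aug = x + 0.05 * rng.normal(size=64)  # a mild synthetic "augmentation"

score = invariance_score(encode(x, W), encode(x_aug, W))
print(f"invariance score: {score:.3f}")

# Fusing two representations with complementary invariances can be as
# simple as concatenating their feature vectors before a downstream head.
W2 = rng.normal(size=(32, 64))  # a second, independently trained encoder
fused = np.concatenate([encode(x, W), encode(x, W2)])
print("fused feature dim:", fused.shape[0])
```

In practice the score would be averaged over many images and many samples of a real augmentation (crop, colour jitter, rotation), and the fused representation fed to a linear probe on each downstream task.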

Related research

05/29/2023 · MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations
Contrastive self-supervised learning has gained attention for its abilit...

04/11/2022 · Learning Downstream Task by Selectively Capturing Complementary Knowledge from Multiple Self-supervisedly Learning Pretexts
Self-supervised learning (SSL), as a newly emerging unsupervised represe...

06/01/2022 · Rethinking the Augmentation Module in Contrastive Learning: Learning Hierarchical Augmentation Invariance with Expanded Views
A data augmentation module is utilized in contrastive learning to transf...

06/16/2022 · Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning
By leveraging contrastive learning, clustering, and other pretext tasks,...

03/07/2023 · MAST: Masked Augmentation Subspace Training for Generalizable Self-Supervised Priors
Recent Self-Supervised Learning (SSL) methods are able to learn feature ...

03/07/2022 · Comparing representations of biological data learned with different AI paradigms, augmenting and cropping strategies
Recent advances in computer vision and robotics enabled automated large-...

11/18/2021 · Improving Transferability of Representations via Augmentation-Aware Self-Supervision
Recent unsupervised representation learning methods have shown to be eff...
