EquiMod: An Equivariance Module to Improve Self-Supervised Learning

11/02/2022
by Alexandre Devillers, et al.

Self-supervised visual representation methods are closing the gap with supervised learning performance. These methods rely on maximizing the similarity between embeddings of related synthetic inputs created through data augmentations. This can be seen as a task that encourages embeddings to leave out the factors modified by these augmentations, i.e., to be invariant to them. However, this addresses only one side of the trade-off in the choice of augmentations: they need to strongly modify the images to avoid shortcut learning of trivial solutions (e.g., relying only on color histograms), yet the augmentation-related information they discard may be needed by some downstream tasks (e.g., color matters for bird and flower classification). A few recent works have proposed to mitigate this invariance-only objective by exploring some form of equivariance to augmentations. This has been done by learning additional embedding space(s) where some augmentation(s) cause embeddings to differ, yet in a non-controlled way. In this work, we introduce EquiMod, a generic equivariance module that structures the learned latent space, in the sense that our module learns to predict the displacement in the embedding space caused by the augmentations. We show that applying this module to state-of-the-art invariance models, such as SimCLR and BYOL, improves performance on the CIFAR10 and ImageNet datasets. Moreover, while our model could collapse to a trivial equivariance, i.e., invariance, we observe that it instead automatically learns to retain some augmentation-related information beneficial to the representations.
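To make the idea concrete, below is a minimal sketch, in PyTorch, of what an EquiMod-style equivariance head could look like. All names here (EquivariancePredictor, equivariance_loss, aug_params) are illustrative assumptions, not the authors' implementation: the predictor receives the embedding of a source view together with a vector describing the sampled augmentation, and is trained to output the embedding of the augmented view.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EquivariancePredictor(nn.Module):
    # Hypothetical module: predicts where a given augmentation
    # moves an embedding in the latent space.
    def __init__(self, embed_dim, aug_dim, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + aug_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, z, aug_params):
        # Condition the prediction on the augmentation parameters
        # (e.g., crop coordinates, flip flag, color-jitter strengths).
        return self.net(torch.cat([z, aug_params], dim=-1))

def equivariance_loss(predictor, z_src, z_aug, aug_params):
    # Pull the predicted embedding of the augmented view toward the
    # actual one; the stop-gradient (detach) on the target is an
    # assumed design choice, in the spirit of BYOL.
    z_pred = predictor(z_src, aug_params)
    return -F.cosine_similarity(z_pred, z_aug.detach(), dim=-1).mean()

# Usage sketch: z_src and z_aug come from the shared encoder/projector.
batch, embed_dim, aug_dim = 32, 128, 8
predictor = EquivariancePredictor(embed_dim, aug_dim)
z_src = torch.randn(batch, embed_dim)
z_aug = torch.randn(batch, embed_dim)
aug_params = torch.rand(batch, aug_dim)
loss = equivariance_loss(predictor, z_src, z_aug, aug_params)
```

Note that this only shows the shape of the computation: the actual EquiMod objective and predictor architecture may differ (e.g., a contrastive formulation over predicted versus actual embeddings), and the equivariance loss is optimized jointly with the base invariance loss of SimCLR or BYOL.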


Related research

06/21/2022 · TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning
We present Transformation Invariance and Covariance Contrast (TiCo) for ...

11/18/2021 · Improving Transferability of Representations via Augmentation-Aware Self-Supervision
Recent unsupervised representation learning methods have shown to be eff...

02/16/2022 · Planckian jitter: enhancing the color quality of self-supervised visual representations
Several recent works on self-supervised learning are trained by mapping ...

03/07/2023 · MAST: Masked Augmentation Subspace Training for Generalizable Self-Supervised Priors
Recent Self-Supervised Learning (SSL) methods are able to learn feature ...

04/07/2023 · Rethinking Evaluation Protocols of Visual Representations Learned via Self-supervised Learning
Linear probing (LP) (and k-NN) on the upstream dataset with labels (e.g....

06/13/2022 · Virtual embeddings and self-consistency for self-supervised learning
Self-supervised Learning (SSL) has recently gained much attention due to...

06/24/2023 · Structuring Representation Geometry with Rotationally Equivariant Contrastive Learning
Self-supervised learning converts raw perceptual data such as images to ...
