TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning

06/21/2022
by   Jiachen Zhu, et al.

We present Transformation Invariance and Covariance Contrast (TiCo) for self-supervised visual representation learning. Similar to other recent self-supervised learning methods, our method is based on maximizing the agreement among embeddings of different distorted versions of the same image, which pushes the encoder to produce transformation invariant representations. To avoid the trivial solution where the encoder generates constant vectors, we regularize the covariance matrix of the embeddings from different images by penalizing low rank solutions. By jointly minimizing the transformation invariance loss and covariance contrast loss, we get an encoder that is able to produce useful representations for downstream tasks. We analyze our method and show that it can be viewed as a variant of MoCo with an implicit memory bank of unlimited size at no extra memory cost. This makes our method perform better than alternative methods when using small batch sizes. TiCo can also be seen as a modification of Barlow Twins. By connecting the contrastive and redundancy-reduction methods together, TiCo gives us new insights into how joint embedding methods work.
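As a concrete illustration of the two terms described in the abstract, below is a minimal PyTorch-style sketch of an invariance loss plus a covariance-contrast loss computed against a running covariance estimate. The function name tico_loss and the hyperparameter names beta (EMA momentum) and rho (contrast weight) are illustrative assumptions, not taken from this page.

import torch
import torch.nn.functional as F

def tico_loss(z1, z2, C_prev, beta=0.9, rho=8.0):
    """Sketch of a TiCo-style objective.

    z1, z2: (B, d) embeddings of two augmented views of the same images.
    C_prev: (d, d) running covariance estimate carried across steps.
    beta, rho: illustrative hyperparameters (EMA momentum, loss weight).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    B = z1.shape[0]

    # Update the exponential moving-average covariance from the current batch.
    C = beta * C_prev + (1.0 - beta) * (z1.T @ z1) / B

    # Transformation-invariance term: pull the two views of each image together.
    invariance = -(z1 * z2).sum(dim=1).mean()

    # Covariance-contrast term: penalize embeddings that concentrate along
    # dominant directions of the running covariance, discouraging low-rank collapse.
    contrast = rho * ((z1 @ C) * z1).sum(dim=1).mean()

    return invariance + contrast, C.detach()

In use, one would initialize C = torch.zeros(d, d), compute z1 and z2 from two augmentations of the same batch, and carry the returned covariance estimate into the next training step.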


research · 05/11/2021
VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning
Recent self-supervised methods for image representation learning are bas...

research · 11/02/2022
EquiMod: An Equivariance Module to Improve Self-Supervised Learning
Self-supervised visual representation methods are closing the gap with s...

research · 09/29/2022
Towards General-Purpose Representation Learning of Polygonal Geometries
Neural network representation learning for spatial data is a common need...

research · 07/15/2022
HOME: High-Order Mixed-Moment-based Embedding for Representation Learning
Minimum redundancy among different elements of an embedding in a latent ...

research · 12/09/2022
Predictor networks and stop-grads provide implicit variance regularization in BYOL/SimSiam
Self-supervised learning (SSL) learns useful representations from unlabe...

research · 10/18/2021
TLDR: Twin Learning for Dimensionality Reduction
Dimensionality reduction methods are unsupervised approaches which learn...

research · 02/14/2022
A Generic Self-Supervised Framework of Learning Invariant Discriminative Features
Self-supervised learning (SSL) has become a popular method for generatin...
