The SSL Interplay: Augmentations, Inductive Bias, and Generalization

02/06/2023
by Vivien Cabannes et al.

Self-supervised learning (SSL) has emerged as a powerful framework to learn representations from raw data without supervision. Yet in practice, engineers face issues such as instability in tuning optimizers and collapse of representations during training. Such challenges motivate the need for a theory that sheds light on the complex interplay between the choice of data augmentation, network architecture, and training algorithm. We study this interplay with a precise analysis of generalization performance on both pretraining and downstream tasks in a theory-friendly setup, and highlight several insights for SSL practitioners that arise from our theory.
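The collapse phenomenon mentioned above can be illustrated with a minimal sketch. The snippet below is not the paper's setup: the toy augmentation, the linear encoder, and all names (toy_augment, encode, invariance_loss) are illustrative assumptions. It only shows why an augmentation-invariance loss, taken alone, admits a degenerate solution, which is the failure mode that augmentation design, architecture, and regularization are meant to rule out.

```python
# Minimal sketch (not the paper's method): a joint-embedding SSL objective on toy data.
import numpy as np

rng = np.random.default_rng(0)

def toy_augment(x, noise=0.1):
    """Stand-in data augmentation: add small isotropic noise to the input."""
    return x + noise * rng.standard_normal(x.shape)

def encode(W, x):
    """A linear encoder; real SSL pipelines use deep networks, this is just a sketch."""
    return x @ W

def invariance_loss(W, X):
    """Mean squared distance between embeddings of two augmented views."""
    z1 = encode(W, toy_augment(X))
    z2 = encode(W, toy_augment(X))
    return np.mean(np.sum((z1 - z2) ** 2, axis=1))

X = rng.standard_normal((256, 8))          # raw data, no labels
W = rng.standard_normal((8, 4)) * 0.1      # encoder weights

print("loss with a random encoder:   ", invariance_loss(W, X))
print("loss with a collapsed encoder:", invariance_loss(np.zeros_like(W), X))
# The all-zero encoder drives the invariance loss to exactly 0 while encoding nothing:
# this is representation collapse, and avoiding it is part of the interplay the paper studies.
```

In practice, SSL methods avoid this degenerate minimum with contrastive terms, variance/covariance regularizers, or architectural asymmetries; the abstract's point is that their effectiveness depends jointly on the augmentations and the architecture.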


