MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations

05/29/2023
by Calum Heggan, et al.

Contrastive self-supervised learning has gained attention for its ability to create high-quality representations from large unlabelled datasets. A key reason that these powerful features enable data-efficient learning of downstream tasks is that they provide augmentation invariance, which is often a useful inductive bias. However, the amount and type of invariance preferred are not known a priori, and vary across different downstream tasks. We therefore propose a multi-task self-supervised framework (MT-SLVR) that learns both variant and invariant features in a parameter-efficient manner. Our multi-task representation provides a strong and flexible feature that benefits diverse downstream tasks. We evaluate our approach on few-shot classification tasks drawn from a variety of audio domains and demonstrate improved classification performance on all of them.
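
The abstract describes a single encoder that yields both augmentation-invariant and augmentation-variant features. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation: one head is trained with a standard contrastive (NT-Xent) loss to encourage invariance to augmentation, while a second head predicts which augmentation was applied, encouraging variant features. The module names, dimensions, toy data, and equal loss weighting are illustrative assumptions; the paper's parameter-efficient design is not reproduced here.

```python
# Minimal sketch of a multi-task SSL objective combining invariant (contrastive)
# and variant (augmentation-prediction) heads on a shared encoder.
# All names, sizes, and the toy data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskSSL(nn.Module):
    def __init__(self, in_dim=256, feat_dim=128, proj_dim=64, num_augs=4):
        super().__init__()
        # Placeholder encoder; an audio backbone would be used in practice.
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.invariant_head = nn.Linear(feat_dim, proj_dim)  # contrastive projection
        self.variant_head = nn.Linear(feat_dim, num_augs)    # predicts applied augmentation

    def forward(self, x):
        h = self.encoder(x)
        return self.invariant_head(h), self.variant_head(h)


def nt_xent(z1, z2, temperature=0.5):
    """Standard NT-Xent contrastive loss over two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    # Mask self-similarity; the positive for sample i is its other view (i +/- n).
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Toy training step: random tensors stand in for two augmented views of audio features.
model = MultiTaskSSL()
x1, x2 = torch.randn(8, 256), torch.randn(8, 256)
aug_labels = torch.randint(0, 4, (8,))          # which augmentation was applied to view 1
z1, p1 = model(x1)
z2, _ = model(x2)
# Equal weighting of the invariant and variant objectives (an assumption).
loss = nt_xent(z1, z2) + F.cross_entropy(p1, aug_labels)
loss.backward()
```
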

Related research

11/22/2021 - Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks
Self-supervised learning is a powerful paradigm for representation learn...

03/07/2023 - MAST: Masked Augmentation Subspace Training for Generalizable Self-Supervised Priors
Recent Self-Supervised Learning (SSL) methods are able to learn feature ...

06/14/2021 - Self-Supervised Metric Learning in Multi-View Data: A Downstream Task Perspective
Self-supervised metric learning has been a successful approach for learn...

02/06/2023 - The SSL Interplay: Augmentations, Inductive Bias, and Generalization
Self-supervised learning (SSL) has emerged as a powerful framework to le...

09/07/2023 - Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction
The choice of the objective function is crucial in emerging high-quality...

08/04/2023 - Efficient Labelling of Affective Video Datasets via Few-Shot Multi-Task Contrastive Learning
Whilst deep learning techniques have achieved excellent emotion predicti...

03/10/2023 - Ignorance is Bliss: Robust Control via Information Gating
Informational parsimony – i.e., using the minimal information required f...
