On the Informativeness of Supervision Signals

11/02/2022
by Ilia Sucholutsky, et al.

Learning transferable representations by training a classifier is a well-established technique in deep learning (e.g., ImageNet pre-training), but it remains an open theoretical question why this kind of task-specific pre-training should result in "good" representations that actually capture the underlying structure of the data. We conduct an information-theoretic analysis of several commonly used supervision signals from contrastive learning and classification to determine how they contribute to representation learning performance and how the dynamics of learning are affected by training parameters such as the number of labels, classes, and dimensions in the training dataset. We validate these results empirically in a series of simulations and conduct a cost-benefit analysis to establish a tradeoff curve that enables users to optimize the cost of supervising representation learning on their own datasets.
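The cost-benefit framing above can be illustrated with a back-of-envelope information budget: a hard class label over C classes conveys at most log2(C) bits, while a single contrastive same/different comparison conveys at most 1 bit. The sketch below is an illustrative assumption for intuition only, not the paper's actual analysis; the function names and the bit-accounting are ours.

```python
import math

def bits_per_hard_label(num_classes: int) -> float:
    # A hard label picks one of C classes, so it carries
    # at most log2(C) bits of information per example.
    return math.log2(num_classes)

def bits_per_pairwise_comparison() -> float:
    # A contrastive "same/different" judgment is binary,
    # so it carries at most 1 bit per comparison.
    return 1.0

def signals_needed(target_bits: float, bits_per_signal: float) -> int:
    # Upper-bound count of supervision signals required to
    # convey a target amount of information about the data.
    return math.ceil(target_bits / bits_per_signal)

# Example: conveying 100 bits of structure with 1024-way labels
# vs. pairwise comparisons (upper bounds, ignoring redundancy).
labels = signals_needed(100, bits_per_hard_label(1024))   # 10 labels
pairs = signals_needed(100, bits_per_pairwise_comparison())  # 100 pairs
```

Under this crude accounting, richer signals reduce the number of annotations needed but may cost more per annotation, which is exactly the kind of tradeoff curve the paper quantifies.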

