About contrastive unsupervised representation learning for classification and its convergence

12/02/2020
by Ibrahim Merad, et al.

Contrastive representation learning has recently proven to be very effective for self-supervised training. These methods have been successfully used to train encoders that perform comparably to supervised training on downstream classification tasks. A few works have started to build a theoretical framework around contrastive learning in which guarantees for its performance can be proven. We extend these results to training with multiple negative samples and to multiway classification. Furthermore, we provide convergence guarantees for the minimization of the contrastive training error with gradient descent for an overparametrized deep neural encoder, and present numerical experiments that complement our theoretical findings.
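The abstract refers to contrastive training with multiple negative samples per anchor. The snippet below is a minimal sketch of such a loss in PyTorch, not the paper's code: the function name, temperature value, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    """Contrastive loss with k negative samples per anchor (illustrative sketch).

    anchor, positive: (batch, dim) embeddings of two views of the same input.
    negatives:        (batch, k, dim) embeddings of k negative samples.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Similarity with the positive sample: shape (batch, 1)
    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True)
    # Similarities with the k negative samples: shape (batch, k)
    neg_sim = torch.einsum("bd,bkd->bk", anchor, negatives)

    # The positive occupies index 0, so the loss is a (k+1)-way cross-entropy.
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

In a full pipeline, the anchor, positive, and negative embeddings would all be produced by the same encoder applied to (augmented) inputs, and this loss would be minimized by gradient descent, which is the regime in which the paper studies convergence.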

10/06/2021

The Power of Contrast for Feature Learning: A Theoretical Analysis

Contrastive learning has achieved state-of-the-art performance in variou...

02/25/2022

Raman Spectrum Matching with Contrastive Representation Learning

Raman spectroscopy is an effective, low-cost, non-intrusive technique of...

02/25/2019

A Theoretical Analysis of Contrastive Unsupervised Representation Learning

Recent empirical works have successfully used unlabeled data to learn fe...

10/05/2020

A Simple Framework for Uncertainty in Contrastive Learning

Contrastive approaches to representation learning have recently shown gr...

02/17/2020

Convergence of End-to-End Training in Deep Unsupervised Contrastive Learning

Unsupervised contrastive learning has gained increasing attention in the...

09/29/2022

Understanding Collapse in Non-Contrastive Learning

Contrastive methods have led a recent surge in the performance of self-s...

10/10/2019

PAC-Bayesian Contrastive Unsupervised Representation Learning

Contrastive unsupervised representation learning (CURL) is the state-of-...