About contrastive unsupervised representation learning for classification and its convergence

12/02/2020
by Ibrahim Merad et al.

Contrastive representation learning has recently proven to be very effective for self-supervised training. These methods have been successfully used to train encoders that perform comparably to supervised training on downstream classification tasks. A few works have started to build a theoretical framework around contrastive learning in which guarantees for its performance can be proven. We extend these results to training with multiple negative samples and to multiway classification. Furthermore, we provide convergence guarantees for the minimization of the contrastive training error by gradient descent for an overparametrized deep neural encoder, and present numerical experiments that complement our theoretical findings.
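
The contrastive objective analyzed in this line of work pairs each anchor sample with one positive (drawn from the same latent class) and K negative samples, and minimizes a surrogate loss over the similarities computed from the learned representation. The sketch below is a minimal PyTorch illustration of such a multi-negative contrastive loss, assuming inner-product similarities and a softmax (cross-entropy) surrogate; it is not taken from the paper, and all names and shapes are illustrative.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(f_x, f_pos, f_negs):
        """Multi-negative contrastive loss (softmax surrogate, illustrative only).

        f_x:    (B, d)    encoder outputs for the anchor samples
        f_pos:  (B, d)    encoder outputs for the positive samples
        f_negs: (B, K, d) encoder outputs for K negative samples per anchor
        """
        pos = (f_x * f_pos).sum(dim=-1, keepdim=True)     # (B, 1) similarity to the positive
        neg = torch.einsum("bd,bkd->bk", f_x, f_negs)      # (B, K) similarities to the negatives
        logits = torch.cat([pos, neg], dim=1)              # (B, 1 + K)
        labels = torch.zeros(f_x.size(0), dtype=torch.long,
                             device=f_x.device)            # the positive sits at index 0
        return F.cross_entropy(logits, labels)

    # Illustrative usage with random embeddings: batch B = 8, K = 5 negatives, dimension d = 32.
    B, K, d = 8, 5, 32
    loss = contrastive_loss(torch.randn(B, d), torch.randn(B, d), torch.randn(B, K, d))
    print(loss.item())

Minimizing a loss of this form by gradient descent over the encoder's parameters is the kind of training procedure whose convergence the paper studies for an overparametrized deep neural encoder.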

Related research

The Power of Contrast for Feature Learning: A Theoretical Analysis (10/06/2021)
Contrastive learning has achieved state-of-the-art performance in variou...

Representation Learning Dynamics of Self-Supervised Models (09/05/2023)
Self-Supervised Learning (SSL) is an important paradigm for learning rep...

Raman Spectrum Matching with Contrastive Representation Learning (02/25/2022)
Raman spectroscopy is an effective, low-cost, non-intrusive technique of...

Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression (05/25/2023)
Contrastive learning (CL) has emerged as a powerful technique for repres...

A Simple Framework for Uncertainty in Contrastive Learning (10/05/2020)
Contrastive approaches to representation learning have recently shown gr...

Convergence of End-to-End Training in Deep Unsupervised Contrastive Learning (02/17/2020)
Unsupervised contrastive learning has gained increasing attention in the...

LoCo: Local Contrastive Representation Learning (08/04/2020)
Deep neural nets typically perform end-to-end backpropagation to learn t...
