
About contrastive unsupervised representation learning for classification and its convergence

12/02/2020
by Ibrahim Merad, et al.

Contrastive representation learning has recently proved to be very effective for self-supervised training. These methods have been used successfully to train encoders that perform comparably to supervised training on downstream classification tasks. A few works have begun to build a theoretical framework around contrastive learning in which guarantees for its performance can be proven. We extend these results to training with multiple negative samples and to multiway classification. Furthermore, we provide convergence guarantees for the minimization of the contrastive training error by gradient descent on an overparametrized deep neural encoder, and present numerical experiments that complement our theoretical findings.
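To make the training objective mentioned above concrete, the sketch below implements a logistic contrastive loss with k negative samples per anchor, in the spirit of the framework the abstract refers to. This is an illustrative assumption about the loss form, not the authors' code: the function contrastive_loss, the placeholder encoder, and the toy linear encoder in the usage example are all hypothetical names introduced here.

# Illustrative sketch (not the authors' code): a contrastive loss with k
# negative samples per anchor. The loss is log(1 + sum_i exp(s_neg_i - s_pos)),
# where s_pos and s_neg_i are inner-product similarities of encoded samples.
import torch
import torch.nn.functional as F

def contrastive_loss(encoder, anchor, positive, negatives):
    # anchor, positive: (batch, dim_in); negatives: (batch, k, dim_in)
    z_a = encoder(anchor)                                   # (batch, d)
    z_p = encoder(positive)                                 # (batch, d)
    b, k, _ = negatives.shape
    z_n = encoder(negatives.reshape(b * k, -1)).reshape(b, k, -1)  # (batch, k, d)

    pos = (z_a * z_p).sum(dim=-1, keepdim=True)             # (batch, 1) similarity to positive
    neg = torch.einsum("bd,bkd->bk", z_a, z_n)              # (batch, k) similarities to negatives

    # softplus(logsumexp(.)) = log(1 + sum_i exp(neg_i - pos)), computed stably
    return F.softplus(torch.logsumexp(neg - pos, dim=-1)).mean()

# Minimal usage with a toy linear encoder (hypothetical setup for illustration)
if __name__ == "__main__":
    encoder = torch.nn.Linear(32, 16)
    x, x_pos = torch.randn(8, 32), torch.randn(8, 32)
    x_neg = torch.randn(8, 5, 32)                           # k = 5 negatives per anchor
    print(contrastive_loss(encoder, x, x_pos, x_neg).item())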
