Contrasting the landscape of contrastive and non-contrastive learning

03/29/2022
by Ashwini Pokle et al.

Many recent advances in unsupervised feature learning are based on designing features that are invariant under semantic data augmentations. A common way to achieve this is contrastive learning, which uses both positive and negative samples. However, some recent works have shown promising results for non-contrastive learning, which does not require negative samples. Non-contrastive losses do have obvious "collapsed" minima, in which the encoder outputs a constant feature embedding independent of the input. A folk conjecture holds that as long as these collapsed solutions are avoided, the learned feature representations should be good. In our paper, we cast doubt on this story: we show through theoretical results and controlled experiments that, even on simple data models, non-contrastive losses have a preponderance of non-collapsed bad minima. Moreover, we show that the training process does not avoid these minima.
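To make the collapse phenomenon concrete, here is a minimal sketch contrasting a non-contrastive alignment loss with a contrastive InfoNCE loss. This is not the paper's experimental setup; the function names, the toy random embeddings, and the particular loss variants are illustrative assumptions. The point it demonstrates: a constant encoder output exactly minimizes the non-contrastive loss, while the contrastive loss penalizes it, because the negatives then coincide with the positives.

```python
import numpy as np

def normalize(z, eps=1e-8):
    # L2-normalize each feature vector (row) of the batch.
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + eps)

def non_contrastive_loss(z1, z2):
    # Negative cosine similarity between embeddings of two augmented
    # views: the alignment term of BYOL/SimSiam-style objectives.
    z1, z2 = normalize(z1), normalize(z2)
    return -np.mean(np.sum(z1 * z2, axis=1))

def info_nce_loss(z1, z2, temperature=0.5):
    # Contrastive InfoNCE: row i of z2 is the positive for row i of z1;
    # every other row in the batch serves as a negative.
    z1, z2 = normalize(z1), normalize(z2)
    logits = z1 @ z2.T / temperature                 # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
n, d = 8, 4
z1 = rng.normal(size=(n, d))                        # embeddings of view 1
z2 = rng.normal(size=(n, d))                        # embeddings of view 2
collapsed = np.tile([1.0, 0.0, 0.0, 0.0], (n, 1))   # constant encoder output

print(non_contrastive_loss(z1, z2))                 # roughly 0 for random features
print(non_contrastive_loss(collapsed, collapsed))   # exactly -1: the global minimum
print(info_nce_loss(collapsed, collapsed))          # log(n) ~ 2.08: collapse penalized
```

The collapsed embedding attains the non-contrastive optimum of -1 without using the input at all, whereas InfoNCE assigns it the loss log(n). The paper's claim is stronger than this toy contrast suggests: the bad minima it studies are not of this collapsed form.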

Related research

05/03/2022 · Do More Negative Samples Necessarily Hurt in Contrastive Learning?
Recent investigations in noise contrastive estimation suggest, both empi...

01/11/2022 · Feature Extraction Framework based on Contrastive Learning with Adaptive Positive and Negative Samples
In this study, we propose a feature extraction framework based on contra...

12/08/2021 · Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework
As a seminal tool in self-supervised representation learning, contrastiv...

11/19/2022 · Local Contrastive Feature Learning for Tabular Data
Contrastive self-supervised learning has been successfully used in many ...

10/27/2021 · Robust Contrastive Learning Using Negative Samples with Diminished Semantics
Unsupervised learning has recently made exceptional progress because of ...

04/21/2023 · Learn What NOT to Learn: Towards Generative Safety in Chatbots
Conversational models that are generative and open-domain are particular...

04/28/2022 · Keep the Caption Information: Preventing Shortcut Learning in Contrastive Image-Caption Retrieval
To train image-caption retrieval (ICR) methods, contrastive loss functio...
