Understanding Contrastive Learning Requires Incorporating Inductive Biases

02/28/2022
by Nikunj Saunshi, et al.

Contrastive learning is a popular form of self-supervised learning that encourages augmentations (views) of the same input to have more similar representations than augmentations of different inputs. Recent attempts to theoretically explain the success of contrastive learning on downstream classification tasks prove guarantees that depend on properties of the augmentations and on the value of the contrastive loss of the learned representations. We demonstrate that such analyses, which ignore the inductive biases of the function class and training algorithm, cannot adequately explain the success of contrastive learning, and even provably lead to vacuous guarantees in some settings. Extensive experiments on image and text domains highlight the ubiquity of this problem – different function classes and algorithms behave very differently on downstream tasks, despite having the same augmentations and contrastive losses. Theoretical analysis is presented for the class of linear representations, where incorporating inductive biases of the function class allows contrastive learning to work under less stringent conditions than prior analyses require.
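The mechanism the abstract describes – pulling together two views of the same input while pushing apart views of different inputs – can be sketched with an InfoNCE-style loss. This is a minimal illustrative sketch, not the paper's own implementation; the function name, batch layout (row i of `z1` and `z2` are two views of input i), and the temperature value are assumptions for the example:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss (illustrative sketch).

    z1[i] and z2[i] are representations of two augmentations (views) of
    the same input; z2[j], j != i, serve as negatives for z1[i].
    """
    # L2-normalize rows so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # Pairwise similarities between every view in z1 and every view in z2.
    logits = (z1 @ z2.T) / temperature
    # Row-wise log-softmax (max subtracted for numerical stability);
    # the diagonal entries are the positive (same-input) pairs.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Feeding identical views (perfectly aligned representations) yields a lower loss than feeding unrelated vectors, which is exactly the behavior the loss is designed to reward; the paper's point is that the *value* of this loss alone, without the function class that produced the representations, does not determine downstream performance.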

Related research:
- A Theoretical Study of Inductive Biases in Contrastive Learning (11/27/2022)
- Feature Dropout: Revisiting the Role of Augmentations in Contrastive Learning (12/16/2022)
- Unraveling Projection Heads in Contrastive Learning: Insights from Expansion and Shrinkage (06/06/2023)
- Executive Function: A Contrastive Value Policy for Resampling and Relabeling Perceptions via Hindsight Summarization? (04/27/2022)
- GANORCON: Are Generative Models Useful for Few-shot Segmentation? (12/01/2021)
- Contrastive Learning Inverts the Data Generating Process (02/17/2021)
- Leveraging the Third Dimension in Contrastive Learning (01/27/2023)
