A Theoretical Study of Inductive Biases in Contrastive Learning

11/27/2022
by Jeff Z. HaoChen, et al.

Understanding self-supervised learning is important but challenging. Previous theoretical works study the role of pretraining losses while treating neural networks as generic black boxes. However, the recent work of Saunshi et al. argues that the model architecture, a component largely ignored by previous works, also significantly influences the downstream performance of self-supervised learning. In this work, we provide the first theoretical analysis of self-supervised learning that incorporates the effect of inductive biases originating from the model class. In particular, we focus on contrastive learning, a popular self-supervised learning method that is widely used in the vision domain. We show that when the model has limited capacity, contrastive representations recover certain special clustering structures that are compatible with the model architecture but ignore many other clustering structures in the data distribution. As a result, our theory captures the more realistic setting in which contrastive representations have much lower dimensionality than the number of clusters in the data distribution. We instantiate our theory on several synthetic data distributions and provide empirical evidence supporting it.
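For readers unfamiliar with the objective being analyzed, the following is a minimal sketch of a standard contrastive (InfoNCE-style) loss of the kind studied in this line of work. The function name, shapes, and temperature value are illustrative assumptions, not the authors' code.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive loss for a batch of positive pairs.

    z1, z2: (n, d) arrays holding representations of two augmented
    views; row i of z1 and row i of z2 form a positive pair, while
    all other rows in the batch serve as negatives.
    (Illustrative sketch; not the paper's implementation.)
    """
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (n, n) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    # Cross-entropy with the diagonal (the positive pair) as the target
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls representations of positive pairs together and pushes apart representations of other examples in the batch, which is the mechanism through which, per the abstract, the learned representations can encode clustering structure in the data.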


Related research

02/28/2022 · Understanding Contrastive Learning Requires Incorporating Inductive Biases
Contrastive learning is a popular form of self-supervised learning that ...

10/06/2021 · The Power of Contrast for Feature Learning: A Theoretical Analysis
Contrastive learning has achieved state-of-the-art performance in variou...

06/02/2022 · Understanding the Role of Nonlinearity in Training Dynamics of Contrastive Learning
While the empirical success of self-supervised learning (SSL) heavily re...

04/23/2021 · Inductive biases and Self Supervised Learning in modelling a physical heating system
Model Predictive Controllers (MPC) require a good model for the controll...

06/03/2022 · On the duality between contrastive and non-contrastive self-supervised learning
Recent approaches in self-supervised learning of image representations c...

02/17/2021 · Contrastive Learning Inverts the Data Generating Process
Contrastive learning has recently seen tremendous success in self-superv...

05/23/2022 · Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods
Self-Supervised Learning (SSL) surmises that inputs and pairwise positiv...
