
Deep Contrastive Learning is Provably (almost) Principal Component Analysis

01/29/2022
by   Yuandong Tian, et al.

We show that Contrastive Learning (CL) under a family of loss functions (including InfoNCE) has a game-theoretic formulation, in which the max player finds representations that maximize contrastiveness, while the min player puts weights on pairs of samples with similar representations. We show that the max player, who does the representation learning, reduces to Principal Component Analysis (PCA) for deep linear networks, and that almost all local minima are global, recovering optimal PCA solutions. Experiments show that this formulation yields comparable (or better) performance on CIFAR10 and STL-10 when extended beyond InfoNCE, yielding novel contrastive losses. Furthermore, we extend our theoretical analysis to 2-layer ReLU networks, showing how they differ from linear ones, and proving that feature composition is preferred over picking a single dominant feature under strong augmentation.
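The PCA connection in the linear case can be illustrated with a minimal numerical sketch. This is not the paper's exact algorithm: we assume a one-dimensional linear encoder f(x) = wᵀx and uniform pairwise weights, under which maximizing contrastiveness reduces to maximizing the projected variance wᵀΣw, whose optimum is the top principal component of the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Anisotropic data: the dominant direction lies along the first axis.
X = rng.normal(size=(1000, 5)) * np.array([3.0, 1.0, 0.5, 0.3, 0.1])
X -= X.mean(axis=0)
Sigma = X.T @ X / len(X)  # sample covariance

# Reference solution: top eigenvector of the covariance (classical PCA).
eigvals, eigvecs = np.linalg.eigh(Sigma)
pca_dir = eigvecs[:, -1]

# "Max player" sketch: projected gradient ascent on the unit sphere,
# maximizing w^T Sigma w, i.e. the variance of the learned representation.
w = rng.normal(size=5)
w /= np.linalg.norm(w)
for _ in range(500):
    w += 0.1 * (Sigma @ w)   # gradient of w^T Sigma w is 2 * Sigma @ w
    w /= np.linalg.norm(w)   # project back onto the unit sphere

# Alignment of 1.0 means the learned direction matches PCA up to sign.
alignment = abs(w @ pca_dir)
print(round(alignment, 4))
```

The normalized gradient-ascent update is equivalent to power iteration on I + 0.1Σ, so the learned direction converges to the top principal component, consistent with the "almost all local minima are global" claim in the linear setting.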

Related research:
11/14/2022

An online algorithm for contrastive Principal Component Analysis

Finding informative low-dimensional representations that can be computed...
12/08/2021

Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework

As a seminal tool in self-supervised representation learning, contrastiv...
06/01/2022

Contrastive Principal Component Learning: Modeling Similarity by Augmentation Overlap

Traditional self-supervised contrastive learning methods learn embedding...
09/20/2017

Contrastive Principal Component Analysis

We present a new technique called contrastive principal component analys...
12/14/2020

Probabilistic Contrastive Principal Component Analysis

Dimension reduction is useful for exploratory data analysis. In many app...
10/01/2020

EigenGame: PCA as a Nash Equilibrium

We present a novel view on principal component analysis (PCA) as a compe...