Deep Contrastive Learning is Provably (almost) Principal Component Analysis

by   Yuandong Tian, et al.

We show that Contrastive Learning (CL) under a family of loss functions (including InfoNCE) has a game-theoretic formulation: the max player finds representations that maximize contrastiveness, while the min player puts weights on pairs of samples with similar representations. We show that the max player, who performs the representation learning, reduces to Principal Component Analysis for deep linear networks, and that almost all local minima are global, recovering optimal PCA solutions. Experiments show that, when extended beyond InfoNCE to novel contrastive losses, the formulation yields comparable (or better) performance on CIFAR10 and STL-10. Furthermore, we extend the theoretical analysis to 2-layer ReLU networks, showing how they differ from linear ones and proving that under strong augmentation, feature composition is preferred over picking a single dominant feature.
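The linear case of the abstract's claim can be checked numerically. The sketch below is my own illustration (not the paper's code or its exact objective): for a linear encoder with orthonormal rows, ascending a simple contrastiveness surrogate, the trace of the projected covariance tr(W Σ Wᵀ), recovers the top-k PCA subspace, which is the reduction the abstract describes for deep linear networks.

```python
# Hypothetical illustration: gradient ascent on tr(W @ Sigma @ W.T)
# with orthonormal rows converges to the span of the top-k
# eigenvectors of the data covariance Sigma, i.e. the PCA solution.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic centered data with a clear spectral gap after component 2.
n, d, k = 2000, 8, 2
scales = np.array([5.0, 3.0, 1.0, 0.5, 0.3, 0.2, 0.1, 0.05])
X = rng.standard_normal((n, d)) * scales
X -= X.mean(axis=0)
Sigma = X.T @ X / n

# Projected gradient ascent: the gradient of tr(W Sigma W^T) is
# 2 W Sigma; after each step we re-orthonormalize the rows of W
# (QR retraction onto the Stiefel manifold).
W = np.linalg.qr(rng.standard_normal((d, k)))[0].T
for _ in range(500):
    W = W + 0.1 * (W @ Sigma)
    W = np.linalg.qr(W.T)[0].T

# Reference answer: top-k eigenvectors of Sigma (eigh returns
# eigenvalues in ascending order, so reverse the columns).
eigvecs = np.linalg.eigh(Sigma)[1]
U = eigvecs[:, ::-1][:, :k]

# Principal-angle check: the singular values of U^T W^T are all ~1
# exactly when the two k-dimensional subspaces coincide.
overlap = np.linalg.svd(U.T @ W.T, compute_uv=False)
print(np.allclose(overlap, 1.0, atol=1e-3))
```

Note the design point this mirrors from the abstract: the objective is rotation-invariant within the subspace, so only the span (not the individual components) is identified, which is why the check uses principal angles rather than comparing W to U entrywise.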


Related papers:

- "An online algorithm for contrastive Principal Component Analysis": Finding informative low-dimensional representations that can be computed...
- "Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework": As a seminal tool in self-supervised representation learning, contrastiv...
- "Contrastive Principal Component Learning: Modeling Similarity by Augmentation Overlap": Traditional self-supervised contrastive learning methods learn embedding...
- "Contrastive Principal Component Analysis": We present a new technique called contrastive principal component analys...
- "Probabilistic Contrastive Principal Component Analysis": Dimension reduction is useful for exploratory data analysis. In many app...
- "EigenGame: PCA as a Nash Equilibrium": We present a novel view on principal component analysis (PCA) as a compe...