Towards understanding neural collapse in supervised contrastive learning with the information bottleneck method

05/19/2023
by Siwei Wang, et al.

Neural collapse describes the geometry of activations in the final layer of a deep neural network trained beyond its performance plateau. Open questions include whether neural collapse leads to better generalization and, if so, why and how training beyond the plateau helps. We model neural collapse as an information bottleneck (IB) problem in order to investigate whether this compact representation exists and to uncover its connection to generalization. We demonstrate that neural collapse leads to good generalization specifically when it approaches an optimal IB solution of the classification problem. Recent research has shown that two deep neural networks independently trained with the same contrastive loss objective are linearly identifiable, meaning that the resulting representations are equivalent up to a linear transformation. We leverage linear identifiability to approximate an analytical solution of the IB problem. This approximation demonstrates that when the class means exhibit K-simplex Equiangular Tight Frame (ETF) behavior (e.g., K=10 for CIFAR10 and K=100 for CIFAR100), this geometry coincides with the critical phase transitions of the corresponding IB problem. The performance plateau occurs once the optimal solution for the IB problem includes all of these phase transitions. We also show that the resulting K-simplex ETF can be packed into a K-dimensional Gaussian distribution using supervised contrastive learning with a ResNet50 backbone. This geometry suggests that the K-simplex ETF learned by supervised contrastive learning approximates the optimal features for source coding. Hence, there is a direct correspondence between optimal IB solutions and generalization in contrastive learning.
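
As a rough illustration of the geometry the abstract refers to (not code from the paper): in the standard IB formulation one minimizes I(X;T) - beta * I(T;Y) over encoders p(t|x), and a K-simplex ETF places K unit-norm class means at the maximal equal pairwise angle, with cosine -1/(K-1). Below is a minimal NumPy sketch using the common centering-based construction M = sqrt(K/(K-1)) * (I - (1/K) * 11^T); this formula is an assumed standard construction for illustration, not the authors' implementation.

    import numpy as np

    def simplex_etf(K):
        # Rows are K unit vectors at equal pairwise angles with
        # cosine -1/(K-1); the frame has rank K-1, so it lives in a
        # (K-1)-dimensional subspace of R^K.
        return np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

    K = 10                   # e.g. CIFAR10; use K = 100 for CIFAR100
    M = simplex_etf(K)
    G = M @ M.T              # Gram matrix of the class-mean directions

    assert np.allclose(np.diag(G), 1.0)           # unit-norm class means
    off_diag = G[~np.eye(K, dtype=bool)]
    assert np.allclose(off_diag, -1.0 / (K - 1))  # equiangular
    print(f"pairwise cosine {off_diag[0]:.4f}, rank {np.linalg.matrix_rank(M)}")

For K=10 the pairwise cosine is -1/9, approximately -0.111, and the rank check confirms the frame occupies K-1 dimensions, consistent with the low-dimensional class-mean geometry the abstract describes.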
