An Unconstrained Layer-Peeled Perspective on Neural Collapse

10/06/2021
by Wenlong Ji, et al.

Neural collapse is a highly symmetric geometric pattern of neural networks that emerges during the terminal phase of training, with profound implications for the generalization performance and robustness of the trained networks. To understand how the last-layer features and classifiers exhibit this recently discovered implicit bias, in this paper we introduce a surrogate model called the unconstrained layer-peeled model (ULPM). We prove that gradient flow on this model converges to critical points of a minimum-norm separation problem that exhibits neural collapse at its global minimizer. Moreover, we show that the ULPM with the cross-entropy loss has a benign global loss landscape, which allows us to prove that all critical points are strict saddle points except the global minimizers, which exhibit the neural collapse phenomenon. Empirically, we show that our results also hold during the training of neural networks on real-world tasks when explicit regularization or weight decay is not used.
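The ULPM idea can be illustrated numerically: treat both the classifier weights and the last-layer features as free variables, run plain gradient descent on the cross-entropy loss with no regularization, and observe that features of the same class align in direction. The sketch below is illustrative only and is not the paper's experimental setup; the dimensions, learning rate, and step count are assumptions chosen for a quick demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, n = 3, 5, 4                      # classes, feature dim, samples per class
labels = np.repeat(np.arange(K), n)
N = K * n

# Unconstrained layer-peeled model: the classifier W and the
# last-layer features H are both free optimization variables.
W = rng.normal(scale=0.1, size=(K, d))
H = rng.normal(scale=0.1, size=(N, d))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(20000):
    P = softmax(H @ W.T)               # (N, K) predicted class probabilities
    G = P.copy()
    G[np.arange(N), labels] -= 1.0     # gradient of cross-entropy w.r.t. logits
    H -= lr * (G @ W) / N              # gradient step on the free features
    W -= lr * (G.T @ H) / N            # gradient step on the classifier

# Without regularization the norms grow, but the *directions* converge:
# within-class features collapse toward a common direction.
P = softmax(H @ W.T)
acc = float((P.argmax(axis=1) == labels).mean())
Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
within = float(np.mean([Hn[labels == k] @ Hn[labels == k].T for k in range(K)]))
```

With these settings, `acc` reaches 1.0 and `within` (the mean within-class cosine similarity of normalized features) approaches 1, consistent with the directional convergence to neural collapse described above; the class means also tend toward a simplex equiangular tight frame.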


Related research

05/06/2021: A Geometric Analysis of Neural Collapse with Unconstrained Features
We provide the first global optimization landscape analysis of Neural Co...

11/23/2020: Neural collapse with unconstrained features
Neural collapse is an emergent phenomenon in deep learning that was rece...

01/01/2023: Neural Collapse in Deep Linear Network: From Balanced to Imbalanced Data
Modern deep neural networks have achieved superhuman performance in task...

09/19/2022: Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold
When training overparameterized deep networks for classification tasks, ...

10/04/2022: Are All Losses Created Equal: A Neural Collapse Perspective
While cross entropy (CE) is the most commonly used loss to train deep ne...

10/05/2017: Porcupine Neural Networks: (Almost) All Local Optima are Global
Neural networks have been used prominently in several machine learning a...

06/11/2022: Memorization-Dilation: Modeling Neural Collapse Under Noise
The notion of neural collapse refers to several emergent phenomena that ...
