A Geometric Analysis of Neural Collapse with Unconstrained Features

05/06/2021
by Zhihui Zhu, et al.

We provide the first global optimization landscape analysis of Neural Collapse, an intriguing empirical phenomenon that arises in the last-layer classifiers and features of neural networks during the terminal phase of training. As recently reported by Papyan et al., this phenomenon implies that (i) the class means and the last-layer classifiers all collapse to the vertices of a Simplex Equiangular Tight Frame (ETF) up to scaling, and (ii) cross-example within-class variability of last-layer activations collapses to zero. We study the problem based on a simplified unconstrained feature model, which isolates the topmost layers from the classifier of the neural network. In this context, we show that the classical cross-entropy loss with weight decay has a benign global landscape, in the sense that the only global minimizers are the Simplex ETFs, while all other critical points are strict saddles whose Hessians exhibit negative curvature directions. In contrast to existing landscape analyses for deep neural networks, which are often disconnected from practice, our analysis of the simplified model not only explains what kind of features are learned in the last layer, but also shows why they can be efficiently optimized in the simplified setting, matching the empirical observations in practical deep network architectures. These findings could have profound implications for optimization, generalization, and robustness, and are of broad interest. For example, our experiments demonstrate that one may set the feature dimension equal to the number of classes and fix the last-layer classifier to be a Simplex ETF for network training, which reduces memory cost by over 20% on ResNet18 without sacrificing generalization performance.
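For concreteness, the sketch below constructs a K-dimensional Simplex ETF and freezes a last-layer classifier to it, mirroring the experiment described in the abstract (feature dimension equal to the number of classes, classifier fixed to a Simplex ETF). This is a minimal PyTorch illustration under those assumptions, not the authors' code; the names `simplex_etf` and `FixedETFClassifier` are hypothetical.

```python
# Minimal sketch (hypothetical helpers, not the paper's code) of the
# memory-saving trick: feature dimension d = number of classes K, with the
# last-layer classifier frozen to a Simplex ETF.
import torch
import torch.nn as nn


def simplex_etf(num_classes: int) -> torch.Tensor:
    """K x K Simplex ETF: K unit vectors with pairwise inner product -1/(K-1)."""
    K = num_classes
    M = torch.eye(K) - torch.ones(K, K) / K   # center the identity (rows sum to 0)
    M = M * (K / (K - 1)) ** 0.5              # rescale so each row has unit norm
    return M                                  # rows are the ETF vertices


class FixedETFClassifier(nn.Module):
    """Last layer whose weights are frozen to a Simplex ETF (no trainable parameters)."""

    def __init__(self, num_classes: int):
        super().__init__()
        # A buffer moves with the model (device/dtype) but is never updated by the optimizer.
        self.register_buffer("weight", simplex_etf(num_classes))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return features @ self.weight.t()     # logits = W f, with W the fixed ETF


if __name__ == "__main__":
    W = simplex_etf(10)
    # Gram matrix: ~1 on the diagonal, ~ -1/9 off-diagonal, as the ETF geometry requires.
    print(W @ W.t())
```

Because the classifier weights live in a fixed buffer rather than in trainable parameters, neither their gradients nor their optimizer states need to be stored, which is the source of the memory savings the abstract reports.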


