Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold

09/19/2022
by Can Yaras, et al.

When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon: for the output features of the penultimate layer, the within-class features of each class converge to their class mean, and the class means form a certain tight frame structure that is also aligned with the last layer's classifier. As feature normalization in the last layer has become common practice in modern representation learning, in this work we theoretically justify the neural collapse phenomenon for normalized features. Based on an unconstrained feature model, we simplify the empirical loss function of a multi-class classification task into a nonconvex optimization problem over a Riemannian manifold by constraining all features and classifiers to the unit sphere. In this setting, we analyze the nonconvex landscape of the resulting Riemannian optimization problem over a product of spheres, showing that the landscape is benign: the only global minimizers are the neural collapse solutions, while all other critical points are strict saddles with directions of negative curvature. Experimental results on practical deep networks corroborate our theory and demonstrate that better representations can be learned faster via feature normalization.
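As a concrete illustration of the setting described above, the sketch below (not the authors' code) instantiates the unconstrained feature model with normalized features in PyTorch: the penultimate-layer features and the classifier rows are treated as free optimization variables, both are projected onto the unit sphere before computing cosine-similarity logits, and ordinary gradient descent on the unnormalized variables stands in for Riemannian optimization over the product of spheres. The dimensions, temperature, and training schedule are arbitrary choices for illustration; after training, the script checks the neural collapse predictions (within-class variability collapse, a tight-frame arrangement of the class means, and classifier/mean alignment).

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    num_classes, feat_dim, samples_per_class = 4, 16, 32
    temperature = 0.1   # scaling applied to the cosine logits of normalized features

    # In the unconstrained feature model, the penultimate-layer features are free
    # optimization variables rather than outputs of a backbone network.
    features = torch.randn(num_classes * samples_per_class, feat_dim, requires_grad=True)
    classifier = torch.randn(num_classes, feat_dim, requires_grad=True)
    labels = torch.arange(num_classes).repeat_interleave(samples_per_class)

    optimizer = torch.optim.SGD([features, classifier], lr=0.5, momentum=0.9)

    for step in range(3000):
        # Normalize both features and classifier rows onto the unit sphere, so the
        # effective variables live on a product of spheres.
        h = F.normalize(features, dim=1)
        w = F.normalize(classifier, dim=1)
        logits = (h @ w.T) / temperature
        loss = F.cross_entropy(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Check the neural collapse predictions on the trained, normalized features.
    with torch.no_grad():
        h = F.normalize(features, dim=1)
        means = F.normalize(torch.stack([h[labels == c].mean(0) for c in range(num_classes)]), dim=1)
        within = torch.stack([(h[labels == c] - means[c]).norm(dim=1).mean()
                              for c in range(num_classes)]).mean()
        off_diag = (means @ means.T)[~torch.eye(num_classes, dtype=torch.bool)]
        align = (F.normalize(classifier, dim=1) * means).sum(dim=1).mean()
        print(f"mean within-class deviation: {within:.3e}")                   # ~0: variability collapse
        print(f"mean pairwise cosine of class means: {off_diag.mean():.3f}")  # ~ -1/(K-1) for a simplex ETF
        print(f"classifier / class-mean alignment: {align:.3f}")              # ~1: self-duality

Differentiating through the normalization is a common practical stand-in for true Riemannian gradient steps on the sphere; a manifold-optimization library could instead be used to take retraction-based steps directly on the product of spheres.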
