
Memorization-Dilation: Modeling Neural Collapse Under Noise

by Duc Anh Nguyen et al.

The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embeddings of all examples of the same class tend to collapse to a single representation, and the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the unconstrained feature representation, in which the model is assumed to have "infinite expressivity" and can map each data point to any arbitrary representation. In this work, we propose a more realistic variant of the unconstrained feature representation that takes the limited expressivity of the network into account. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different losses lead to different performance of the trained network on noisy data. Our proofs reveal why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks.
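As a concrete illustration of the collapse described above (not taken from the paper), the degree to which same-class features have collapsed toward their class means can be measured by comparing within-class scatter to between-class scatter on penultimate-layer features. A minimal NumPy sketch follows; the function name `collapse_ratio` and its interface are illustrative choices, and the scalar trace ratio is a rough proxy for the NC1 statistic used in the neural collapse literature.

```python
import numpy as np

def collapse_ratio(features, labels):
    """Within-class scatter divided by between-class scatter.
    Values near 0 indicate that same-class features have
    collapsed toward their class means (an NC1-style proxy)."""
    global_mean = features.mean(axis=0)
    within = between = 0.0
    for c in np.unique(labels):
        class_feats = features[labels == c]
        class_mean = class_feats.mean(axis=0)
        within += ((class_feats - class_mean) ** 2).sum()
        between += len(class_feats) * ((class_mean - global_mean) ** 2).sum()
    return within / between
```

Similarly, the label smoothing variant of cross-entropy referenced in the abstract has a standard form: the one-hot target is mixed with the uniform distribution over the K classes using a smoothing parameter eps. A minimal sketch, assuming uniform smoothing and illustrative function names:

```python
import numpy as np

def softmax(logits):
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def label_smoothing_ce(logits, labels, eps=0.1):
    """Cross-entropy against the smoothed target
    q = (1 - eps) * one_hot(y) + eps / K, which expands to
    -(1 - eps) * log p_y - (eps / K) * sum_k log p_k."""
    log_probs = np.log(softmax(logits))
    nll_true = -log_probs[np.arange(len(labels)), labels]
    nll_uniform = -log_probs.mean(axis=-1)  # = -(1/K) * sum_k log p_k
    return ((1 - eps) * nll_true + eps * nll_uniform).mean()
```

With eps = 0, `label_smoothing_ce` reduces to plain cross-entropy; the abstract's claim concerns how the eps > 0 case changes the trained network's behavior on noisy labels.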




Related research:

- "Extended Unconstrained Features Model for Exploring Deep Neural Collapse": The modern strategy for training deep neural networks for classification...
- "On the Optimization Landscape of Neural Collapse under MSE Loss: Global Optimality with Unconstrained Features": When training deep neural networks for classification tasks, an intrigui...
- "Neural Collapse in Deep Linear Network: From Balanced to Imbalanced Data": Modern deep neural networks have achieved superhuman performance in task...
- "Neural collapse with unconstrained features": Neural collapse is an emergent phenomenon in deep learning that was rece...
- "Learning to Combat Noisy Labels via Classification Margins": A deep neural network trained on noisy labels is known to quickly lose i...
- "Are All Losses Created Equal: A Neural Collapse Perspective": While cross entropy (CE) is the most commonly used loss to train deep ne...
- "On the Behavior of Convolutional Nets for Feature Extraction": Deep neural networks are representation learning techniques. During trai...