Are All Losses Created Equal: A Neural Collapse Perspective

10/04/2022
by Jinxin Zhou, et al.

While cross entropy (CE) is the most commonly used loss for training deep neural networks on classification tasks, many alternative losses have been developed to obtain better empirical performance. Which one is best to use remains a mystery, because multiple factors appear to affect the answer, such as the properties of the dataset, the choice of network architecture, and so on. This paper studies the choice of loss function by examining the last-layer features of deep networks, drawing inspiration from a recent line of work showing that the globally optimal solutions of the CE and mean squared error (MSE) losses exhibit a Neural Collapse phenomenon. That is, for sufficiently large networks trained until convergence, (i) all features of the same class collapse to the corresponding class mean and (ii) the means of different classes are in a configuration where their pairwise distances are all equal and maximized. We extend these results and show, through global solution and landscape analyses, that a broad family of loss functions, including the commonly used label smoothing (LS) and focal loss (FL), exhibits Neural Collapse. Hence, all relevant losses (i.e., CE, LS, FL, MSE) produce equivalent features on training data. Under the unconstrained feature model assumption, we provide a global landscape analysis for the LS loss and a local landscape analysis for the FL loss, showing that the only global minimizers are Neural Collapse solutions, while all other critical points are strict saddles whose Hessians exhibit negative curvature directions, globally for the LS loss and locally near the optimal solution for the FL loss. Our experiments further show that Neural Collapse features obtained from all relevant losses lead to largely identical performance on test data as well, provided that the network is sufficiently large and trained until convergence.
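The configuration in property (ii) is known as a simplex equiangular tight frame (ETF): K unit-norm class means whose pairwise inner products all equal -1/(K-1). Below is a minimal NumPy sketch of constructing such a frame and checking the equal-pairwise-distance property; the helper name simplex_etf and the dimensions are illustrative, not from the paper.

```python
import numpy as np

def simplex_etf(K, d, seed=0):
    """Return a d x K matrix whose columns are K unit-norm class means
    forming a simplex ETF: all pairwise inner products equal -1/(K-1)."""
    assert d >= K
    rng = np.random.default_rng(seed)
    # Orthonormal columns U, then center and rescale to unit norm.
    U, _ = np.linalg.qr(rng.standard_normal((d, K)))
    return np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)

K, d = 5, 16
M = simplex_etf(K, d)
print(np.round(M.T @ M, 3))  # ~1 on the diagonal, ~-1/(K-1) = -0.25 off it

# Property (ii): pairwise distances between class means are all equal.
dists = [np.linalg.norm(M[:, i] - M[:, j])
         for i in range(K) for j in range(i + 1, K)]
print(np.allclose(dists, dists[0]))  # True
```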
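For concreteness, here is a brief sketch (PyTorch assumed) of the two alternative losses named in the abstract, label smoothing and focal loss; the hyperparameter defaults (eps=0.1, gamma=2.0) are common choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, target, eps=0.1):
    """CE against a smoothed target: 1 - eps + eps/K on the true class, eps/K elsewhere."""
    K = logits.size(-1)
    logp = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(logp, eps / K)
    smooth.scatter_(-1, target.unsqueeze(-1), 1.0 - eps + eps / K)
    return -(smooth * logp).sum(dim=-1).mean()

def focal_loss(logits, target, gamma=2.0):
    """CE reweighted by (1 - p_true)^gamma, down-weighting easy examples."""
    logp = F.log_softmax(logits, dim=-1)
    logp_true = logp.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    return -((1.0 - logp_true.exp()) ** gamma * logp_true).mean()

logits = torch.randn(8, 5)
target = torch.randint(0, 5, (8,))
print(label_smoothing_ce(logits, target), focal_loss(logits, target))
```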


Related research

03/02/2022 · On the Optimization Landscape of Neural Collapse under MSE Loss: Global Optimality with Unconstrained Features

05/06/2021 · A Geometric Analysis of Neural Collapse with Unconstrained Features

09/19/2022 · Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold

10/06/2021 · An Unconstrained Layer-Peeled Perspective on Neural Collapse

05/31/2020 · Global Convergence of MAML for LQR

04/27/2022 · Gleo-Det: Deep Convolution Feature-Guided Detector with Local Entropy Optimization for Salient Points

06/11/2022 · Memorization-Dilation: Modeling Neural Collapse Under Noise
