Do We Really Need a Learnable Classifier at the End of Deep Neural Network?

03/17/2022
by Yibo Yang, et al.

Modern deep neural networks for classification usually jointly learn a backbone for representation and a linear classifier that outputs the logit of each class. A recent study has shown a phenomenon called neural collapse, in which the within-class means of features and the classifier vectors converge to the vertices of a simplex equiangular tight frame (ETF) at the terminal phase of training on a balanced dataset. Since the ETF geometric structure maximally separates the pair-wise angles of all classes in the classifier, it is natural to ask: why do we spend effort learning a classifier when we already know its optimal geometric structure? In this paper, we study the potential of learning a neural network for classification with the classifier randomly initialized as an ETF and fixed during training. Our analysis based on the layer-peeled model indicates that feature learning with a fixed ETF classifier naturally leads to the neural collapse state, even when the dataset is imbalanced among classes. We further show that in this case the cross-entropy (CE) loss is not necessary and can be replaced by a simple squared loss that shares the same global optimality but enjoys a more accurate gradient and better convergence properties. Our experimental results show that our method achieves comparable performance on image classification with balanced datasets, and brings significant improvements in long-tailed and fine-grained classification tasks.
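The fixed-classifier idea described in the abstract can be made concrete with a short sketch. The following is a minimal PyTorch illustration, not the authors' released code: a classifier head whose weights are initialized as a simplex ETF and then frozen, plus a plain squared loss on one-hot targets. The names simplex_etf, FixedETFClassifier, and squared_loss are illustrative, the construction assumes the feature dimension is at least the number of classes, and the loss is a simple stand-in that may differ from the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    """Build a (feat_dim x num_classes) simplex ETF: unit-norm columns whose
    pair-wise cosine similarity is -1/(K-1), the maximally separated geometry."""
    assert feat_dim >= num_classes, "this construction assumes feat_dim >= K"
    # Random partial orthogonal matrix U (feat_dim x K) with U^T U = I_K.
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    k = num_classes
    center = torch.eye(k) - torch.ones(k, k) / k        # I_K - (1/K) 1 1^T
    return (k / (k - 1)) ** 0.5 * (u @ center)


class FixedETFClassifier(nn.Module):
    """Classifier head fixed as an ETF: no bias and no learnable parameters."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        # Registered as a buffer, so it is saved with the model but never updated.
        self.register_buffer("weight", simplex_etf(num_classes, feat_dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return features @ self.weight                    # (batch, num_classes) logits


def squared_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Simple squared loss against one-hot targets; an illustrative stand-in for
    the squared-loss objective mentioned in the abstract."""
    one_hot = F.one_hot(targets, num_classes=logits.size(-1)).float()
    return 0.5 * ((logits - one_hot) ** 2).sum(dim=-1).mean()
```

With a head like this, only the backbone is trained: the classifier contributes no learnable parameters, so gradients can only shape the features toward the fixed ETF directions, which is the setting the paper analyzes.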


Related research

03/10/2023  Inducing Neural Collapse to a Fixed Hierarchy-Aware Frame for Reducing Mistake Severity
09/18/2023  Neural Collapse for Unconstrained Feature Model under Cross-entropy Loss with Imbalanced Data
01/03/2023  Understanding Imbalanced Semantic Segmentation Through Neural Collapse
02/16/2022  Extended Unconstrained Features Model for Exploring Deep Neural Collapse
01/29/2021  Layer-Peeled Model: Toward Understanding Well-Trained Deep Neural Networks
02/27/2019  Fix Your Features: Stationary and Maximally Discriminative Embeddings using Regular Polytope (Fixed Classifier) Networks
03/29/2021  Regular Polytope Networks
