Trap of Feature Diversity in the Learning of MLPs

12/02/2021
by Dongrui Liu et al.
Shanghai Jiao Tong University

In this paper, we discover a two-phase phenomenon in the learning of multi-layer perceptrons (MLPs): in the first phase, the training loss does not decrease significantly, but the similarity of features between different samples keeps increasing, which hurts feature diversity. We explain this two-phase phenomenon in terms of the learning dynamics of the MLP. Furthermore, we propose two normalization operations that eliminate the two-phase phenomenon, avoiding the loss of feature diversity and speeding up training.
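The first-phase diagnostic described above, a nearly flat training loss alongside rising cross-sample feature similarity, is straightforward to monitor during training. The abstract does not include the authors' implementation; the following is a minimal sketch, assuming PyTorch and a toy classification setup, that logs the average pairwise cosine similarity of a hidden layer's features each epoch. The MLP class, the feature_similarity helper, and the random data are illustrative stand-ins, not the paper's code.

# Minimal sketch (assumption: PyTorch; toy random data stands in for a real dataset).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Plain MLP; features() exposes the penultimate-layer activations."""
    def __init__(self, d_in=64, d_hidden=128, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        )
        self.head = nn.Linear(d_hidden, n_classes)

    def features(self, x):
        return self.body(x)

    def forward(self, x):
        return self.head(self.body(x))

def feature_similarity(feats):
    """Mean pairwise cosine similarity between features of different samples."""
    f = F.normalize(feats, dim=1)            # unit-norm feature vectors
    sim = f @ f.t()                          # cosine-similarity matrix
    n = sim.size(0)
    off_diag = sim.sum() - sim.diag().sum()  # exclude self-similarity
    return (off_diag / (n * (n - 1))).item()

torch.manual_seed(0)
x = torch.randn(512, 64)                     # toy inputs
y = torch.randint(0, 10, (512,))             # toy labels

model = MLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(50):
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    with torch.no_grad():
        sim = feature_similarity(model.features(x))
    # In the first phase, loss stays nearly flat while sim keeps rising.
    print(f"epoch {epoch:02d}  loss {loss.item():.4f}  feature sim {sim:.4f}")

Tracking the off-diagonal mean of the cosine-similarity matrix, rather than the full matrix, keeps the per-epoch cost at a single matrix product and makes the trend easy to plot against the loss curve.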

03/12/2023

Phase Diagram of Initial Condensation for Two-layer Neural Networks

The phenomenon of distinct behaviors exhibited by neural networks under ...
05/31/2022

Polarization Diversity-enabled LOS/NLOS Identification via Carrier Phase Measurements

Provision of accurate localization is an increasingly important feature ...
12/08/2020

Two-Phase Learning for Overcoming Noisy Labels

To counter the challenge associated with noisy labels, the learning stra...
09/15/2017

Improving the Diversity of Top-N Recommendation via Determinantal Point Process

Recommender systems take the key responsibility to help users discover i...
06/26/2018

Phase transition in the knapsack problem

We examine the phase transition phenomenon for the Knapsack problem from...
04/18/2023

A Study of Neural Collapse Phenomenon: Grassmannian Frame, Symmetry, Generalization

In this paper, we extend the original Neural Collapse phenomenon by proving...
04/09/2023

RD-DPP: Rate-Distortion Theory Meets Determinantal Point Process to Diversify Learning Data Samples

In some practical learning tasks, such as traffic video analysis, the nu...
