Phase diagram for two-layer ReLU neural networks at infinite-width limit

07/15/2020
by Tao Luo et al.

How a neural network behaves during training over different choices of hyperparameters is an important question in the study of neural networks. However, except for specific examples with particular choices of hyperparameters, e.g., the neural tangent kernel (NTK) and the mean-field model, this question remains largely unanswered. In this work, inspired by phase diagrams in statistical mechanics, we draw the phase diagram for two-layer ReLU neural networks at the infinite-width limit, giving a complete characterization of their dynamical regimes and the dependence of these regimes on hyperparameters. Through both experimental and theoretical approaches, we identify three regimes in the phase diagram, i.e., the linear regime, the critical regime and the condensed regime, based on the relative change of the input weights as the width approaches infinity, which tends to 0, O(1) and +∞, respectively. In the linear regime, the NN training dynamics is approximately linear, similar to that of a random feature model, with an exponential loss decay. In the condensed regime, we demonstrate through experiments that active neurons condense at several discrete orientations. The critical regime serves as the boundary between the above two regimes and exhibits an intermediate nonlinear behavior, with the mean-field model as a typical example. Overall, our phase diagram for two-layer ReLU NNs serves as a map for future studies and is a first step towards a more systematic investigation of the training behavior and implicit regularization of NNs with different structures.
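The regime classification rests on a concrete diagnostic: the relative change of the input weights from their initialization as the width m grows. Below is a minimal sketch (plain NumPy, not the authors' code) of that diagnostic for a two-layer ReLU network; the toy data, learning rate, and NTK-style 1/sqrt(m) output scaling are illustrative assumptions, not the paper's exact setup. Under this scaling the relative change shrinks as m increases, consistent with the linear regime described above.

# Sketch: measure the relative change of input weights W after training
# a two-layer ReLU network f(x) = (1/sqrt(m)) * sum_k a_k * ReLU(w_k . x)
# by full-batch gradient descent, at several widths m. All hyperparameter
# choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 5))      # toy inputs, n=64 samples in d=5
y = np.sin(X.sum(axis=1))             # toy regression targets

def relative_weight_change(m, steps=500, lr=0.1):
    W = rng.standard_normal((m, X.shape[1]))   # input weights, one row per neuron
    a = rng.standard_normal(m)                 # output weights
    W0 = W.copy()
    n = len(y)
    for _ in range(steps):
        H = np.maximum(X @ W.T, 0.0)           # ReLU features, shape (n, m)
        f = H @ a / np.sqrt(m)                 # NTK-style output scaling
        r = f - y                              # residuals of MSE loss (1/2n)||r||^2
        grad_a = H.T @ r / (np.sqrt(m) * n)
        mask = (H > 0).astype(float)           # ReLU derivative
        grad_W = ((np.outer(r, a) * mask).T @ X) / (np.sqrt(m) * n)
        a -= lr * grad_a
        W -= lr * grad_W
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)

for m in (100, 1000, 10000):
    print(f"m={m:6d}  relative change of W: {relative_weight_change(m):.4f}")

With other scaling choices of the output layer and initialization, the same diagnostic would instead stay O(1) (critical regime) or blow up (condensed regime), which is what the phase diagram organizes.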

Related research

05/24/2022 · Empirical Phase Diagram for Three-layer Neural Networks with Infinite Width
Substantial work indicates that the dynamics of neural networks (NNs) is...

10/28/2022 · A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks
To understand the training dynamics of neural networks (NNs), prior stud...

02/23/2023 · Phase diagram of training dynamics in deep neural networks: effect of learning rate, depth, and width
We systematically analyze optimization dynamics in deep neural networks ...

06/25/2020 · The Quenching-Activation Behavior of the Gradient Descent Dynamics for Two-layer Neural Network Models
A numerical and phenomenological study of the gradient descent (GD) algo...

03/09/2023 · Statistical mechanics of the maximum-average submatrix problem
We study the maximum-average submatrix problem, in which given an N × N ...

06/18/2020 · On Sparsity in Overparametrised Shallow ReLU Networks
The analysis of neural network training beyond their linearization regim...

12/02/2019 · Interpolating between boolean and extremely high noisy patterns through Minimal Dense Associative Memories
Recently, Hopfield and Krotov introduced the concept of dense associati...
