Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Renormalization Group

12/07/2020
by Qianyi Li, et al.

The success of deep learning in many real-world tasks has triggered an effort to theoretically understand the power and limitations of deep learning in the training and generalization of complex tasks, so far with limited progress. In this work, we study the statistical mechanics of learning in Deep Linear Neural Networks (DLNNs), in which the input-output function of an individual unit is linear. Despite the linearity of the units, learning in DLNNs is highly nonlinear, hence studying their properties reveals some of the essential features of nonlinear Deep Neural Networks (DNNs). We solve exactly for the network properties following supervised learning, using an equilibrium Gibbs distribution in the weight space. To do this, we introduce the Back-Propagating Renormalization Group (BPRG), which allows for the incremental integration of the network weights layer by layer, starting from the output layer and progressing backward. This procedure allows us to evaluate important network properties such as the generalization error, the role of network width and depth, the impact of the size of the training set, and the effects of weight regularization and learning stochasticity. Furthermore, by performing partial integration of layers, BPRG allows us to compute the emergent properties of the neural representations across the different hidden layers. We also propose a heuristic extension of the BPRG to nonlinear DNNs with rectified linear units (ReLU). Surprisingly, our numerical simulations reveal that despite the nonlinearity, the predictions of our theory are largely shared by ReLU networks of modest depth, in a wide regime of parameters. Our work is the first exact statistical-mechanical study of learning in a family of Deep Neural Networks, and the first development of the Renormalization Group approach to the weight space of these systems.
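As a rough illustration of the setup described above (a minimal sketch in generic notation, which may differ from the paper's own conventions), learning in a depth-$L$ linear network with weight matrices $W_1,\dots,W_L$ can be described by a Gibbs distribution over the weights,

$$P(\{W_l\}) \propto e^{-\beta E(\{W_l\})}, \qquad E(\{W_l\}) = \tfrac{1}{2}\sum_{\mu=1}^{P} \bigl\| y^{\mu} - W_L W_{L-1}\cdots W_1 x^{\mu} \bigr\|^2 + \tfrac{\lambda}{2}\sum_{l=1}^{L} \| W_l \|_F^2 ,$$

where $(x^{\mu}, y^{\mu})$ are the $P$ training examples, $\beta$ is an inverse temperature controlling the stochasticity of learning, and $\lambda$ sets the strength of the (here assumed L2) weight regularization. A back-propagating renormalization step then integrates out one layer at a time, beginning with the output weights,

$$P(\{W_l\}_{l<L}) \propto \int dW_L \, e^{-\beta E(\{W_l\})} ,$$

which yields an effective energy for the remaining layers; iterating this step backward through the network gives the renormalization-group flow referred to in the abstract.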


