Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility

05/17/2022
by Hoil Lee, et al.

This article studies the infinite-width limit of deep feedforward neural networks whose weights are dependent and modelled via a mixture of Gaussian distributions. Each hidden node of the network is assigned a non-negative random variable that controls the variance of the outgoing weights of that node. We make minimal assumptions on these per-node random variables: they are iid and their sum, in each layer, converges to some finite random variable in the infinite-width limit. Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a Lévy measure on the positive reals. If the scalar parameters are strictly positive and the Lévy measures are trivial at all hidden layers, then one recovers the classical Gaussian process (GP) limit, obtained with iid Gaussian weights. More interestingly, if the Lévy measure of at least one layer is non-trivial, we obtain a mixture of Gaussian processes (MoGP) in the large-width limit. The behaviour of the neural network in this regime is very different from the GP regime: one obtains correlated outputs with non-Gaussian distributions, possibly with heavy tails. Additionally, we show that, in this regime, the weights are compressible and feature learning is possible. Many sparsity-promoting neural network models can be recast as special cases of our approach, and we discuss their infinite-width limits; we also present an asymptotic analysis of the pruning error. We illustrate some of the benefits of the MoGP regime over the GP regime in terms of representation learning and compressibility on simulated data and on the MNIST and Fashion-MNIST datasets.
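The following is a minimal NumPy sketch of the general construction described in the abstract, not the authors' exact model: each node is given a non-negative scale variable that sets the variance of all of its outgoing weights, so those weights are dependent through the shared scale. The 1/n variance scaling, the inverse-gamma mixing distribution, the tanh activation and the network widths are illustrative assumptions only; with constant scales the sketch falls back to iid Gaussian weights (the GP-limit setting), while heavy-tailed scales mimic the dependent, heavy-tailed weights of the MoGP regime.

```python
import numpy as np

def sample_layer(n_in, n_out, scale_sampler, rng):
    """Sample one layer's weights with per-node (dependent) scales.

    Each input node j gets a non-negative scale lam[j] drawn from
    `scale_sampler`; all weights leaving node j share the variance
    lam[j] / n_in, so they are dependent through lam[j].
    """
    lam = scale_sampler(n_in, rng)            # per-node variances, shape (n_in,)
    std = np.sqrt(lam / n_in)                 # 1/n_in scaling (illustrative choice)
    return rng.normal(size=(n_in, n_out)) * std[:, None]

def forward(x, widths, scale_sampler, rng, act=np.tanh):
    """Forward pass through a feedforward net with mixture-of-Gaussian weights."""
    h = x
    for i, (n_in, n_out) in enumerate(zip(widths[:-1], widths[1:])):
        h = h @ sample_layer(n_in, n_out, scale_sampler, rng)
        if i < len(widths) - 2:               # activation on hidden layers only
            h = act(h)
    return h

rng = np.random.default_rng(0)

# Trivial mixing: constant scales recover iid Gaussian weights (GP-limit setting).
gp_like = lambda n, rng: np.ones(n)

# Non-trivial mixing (hypothetical choice): inverse-gamma scales give
# heavy-tailed, compressible weights in the spirit of the MoGP regime.
heavy = lambda n, rng: 1.0 / rng.gamma(shape=0.5, scale=2.0, size=n)

x = rng.normal(size=(4, 128))                 # 4 inputs of dimension 128
print(forward(x, [128, 512, 512, 1], gp_like, rng).ravel())
print(forward(x, [128, 512, 512, 1], heavy, rng).ravel())
```

Under this kind of mixing, the outputs for the heavy-tailed sampler are no longer jointly Gaussian across draws of the weights; they behave as a scale mixture of Gaussians, which is the qualitative effect the paper studies in the infinite-width limit.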


Related research

09/04/2023
Les Houches Lectures on Deep Learning at Large & Infinite Width
These lectures, presented at the 2022 Les Houches Summer School on Stati...

05/18/2023
Posterior Inference on Infinitely Wide Bayesian Neural Networks under Weights with Unbounded Variance
From the classical and influential works of Neal (1996), it is known tha...

01/03/2020
Wide Neural Networks with Bottlenecks are Deep Gaussian Processes
There is recently much work on the "wide limit" of neural networks, wher...

06/11/2021
The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
Large width limits have been a recent focus of deep learning research: m...

02/14/2021
Double-descent curves in neural networks: a new perspective using Gaussian processes
Double-descent curves in neural networks describe the phenomenon that th...

04/02/2020
Predicting the outputs of finite networks trained with noisy gradients
A recent line of studies has focused on the infinite width limit of deep...

08/19/2020
Neural Networks and Quantum Field Theory
We propose a theoretical understanding of neural networks in terms of Wi...
