A Rainbow in Deep Network Black Boxes

05/29/2023
by Florentin Guth et al.

We introduce rainbow networks as a probabilistic model of trained deep neural networks. The model cascades random feature maps whose weight distributions are learned. It assumes that dependencies between weights at different layers reduce to rotations which align the input activations; neuron weights within a layer are independent after this alignment. Their activations define kernels which become deterministic in the infinite-width limit. This is verified numerically for ResNets trained on the ImageNet dataset. We also show that the learned weight distributions have low-rank covariances. Rainbow networks thus alternate between linear dimension reductions and non-linear high-dimensional embeddings with white random features. Gaussian rainbow networks are defined with Gaussian weight distributions. These models are validated numerically on CIFAR-10 image classification with wavelet scattering networks. We further show that during training, SGD updates the weight covariances while mostly preserving the Gaussian initialization.
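As a minimal illustration of the Gaussian rainbow construction described in the abstract, the sketch below (not the authors' code; the dimensions, variable names, and the ReLU non-linearity are assumptions) samples one layer whose neuron weights are i.i.d. Gaussian with a low-rank covariance, so the layer composes a linear dimension reduction with a high-dimensional random-feature embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, rank, width = 64, 8, 512   # input dim, covariance rank, layer width (illustrative)

# Low-rank covariance C = A A^T, with A of shape (d_in, rank).
A = rng.standard_normal((d_in, rank)) / np.sqrt(d_in)

# Each neuron weight w_i = A g_i with g_i ~ N(0, I_rank), i.e. w_i ~ N(0, C).
G = rng.standard_normal((rank, width))
W = A @ G                          # weight matrix, shape (d_in, width)

def rainbow_layer(x, W):
    """Random-feature map: non-linearity applied to projections onto random weights."""
    return np.maximum(W.T @ x, 0.0)  # ReLU random features

x, y = rng.standard_normal(d_in), rng.standard_normal(d_in)
hx, hy = rainbow_layer(x, W), rainbow_layer(y, W)

# Empirical kernel defined by the activations; as width grows it concentrates
# around a deterministic kernel that depends only on the covariance C.
k_xy = hx @ hy / width
```

This is only a single-layer sketch under the stated assumptions; the paper's model cascades such layers and aligns successive layers by rotations of the input activations, which the code does not attempt to reproduce.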

