On Random Matrices Arising in Deep Neural Networks: General I.I.D. Case

11/20/2020
by L. Pastur, et al.

We study the distribution of singular values of products of random matrices pertinent to the analysis of deep neural networks. The matrices resemble products of sample covariance matrices; an important difference, however, is that the population covariance matrices, which in statistics and random matrix theory are assumed to be non-random, or random but independent of the random data matrix, are now certain functions of the random data matrices (synaptic weight matrices, in deep neural network terminology). The problem has been treated in recent work [25, 13] using the techniques of free probability theory. Since, however, free probability theory deals with population covariance matrices that are independent of the data matrices, its applicability here has to be justified. The justification was given in [22] for Gaussian data matrices with independent entries, a standard analytical model of free probability, using a version of the techniques of random matrix theory. In this paper we use another, more streamlined, version of the techniques of random matrix theory to generalize the results of [22] to the case where the entries of the synaptic weight matrices are just independent identically distributed random variables with zero mean and finite fourth moment. This, in particular, extends the property of so-called macroscopic universality to the considered random matrices.
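As a rough numerical illustration of the macroscopic universality discussed above, the following NumPy sketch compares the empirical singular values of a product W_L ... W_1 X when the i.i.d. entries of the weight matrices W_l are Gaussian versus Rademacher (both zero mean, unit variance, finite fourth moment). The product form, the square matrix sizes, the depth, and the quantile-based comparison are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (not the paper's method): compare the singular value
    # distribution of W_L ... W_1 X for two choices of i.i.d. weight entries.
    # Sizes, depth, and normalization are illustrative assumptions.
    import numpy as np

    def product_singular_values(n, depth, weight_sampler, rng):
        """Singular values of W_depth ... W_1 X, each factor scaled by
        1/sqrt(n) so the spectrum stays of order one as n grows."""
        M = rng.standard_normal((n, n)) / np.sqrt(n)  # data matrix X
        for _ in range(depth):
            W = weight_sampler(rng, (n, n)) / np.sqrt(n)  # synaptic weight matrix
            M = W @ M
        return np.linalg.svd(M, compute_uv=False)

    def gaussian(rng, shape):
        return rng.standard_normal(shape)

    def rademacher(rng, shape):
        # Zero mean, unit variance, finite fourth moment: the paper's i.i.d. setting.
        return rng.choice([-1.0, 1.0], size=shape)

    rng = np.random.default_rng(0)
    n, depth = 500, 3
    qs = [0.1, 0.25, 0.5, 0.75, 0.9]
    for name, sampler in [("gaussian", gaussian), ("rademacher", rademacher)]:
        sv = product_singular_values(n, depth, sampler, rng)
        print(name, np.round(np.quantile(sv, qs), 3))

For large n the two rows of quantiles should nearly coincide: in the macroscopic (global) regime the limiting singular value distribution is insensitive to the particular entry distribution beyond its first two moments, which is the universality property the paper extends.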


Related research

01/12/2022 · Eigenvalue Distribution of Large Random Matrices Arising in Deep Neural Networks: Orthogonal Case
The paper deals with the distribution of singular values of the input-ou...

03/28/2022 · Random matrix analysis of deep neural network weight matrices
Neural networks have been used successfully in a variety of fields, whic...

05/11/2021 · Analysis of One-Hidden-Layer Neural Networks via the Resolvent Method
We compute the asymptotic empirical spectral distribution of a non-linea...

10/01/2013 · Graph connection Laplacian and random matrices with random blocks
Graph connection Laplacian (GCL) is a modern data analysis technique tha...

07/21/2023 · What can a Single Attention Layer Learn? A Study Through the Random Features Lens
Attention layers – which map a sequence of inputs to a sequence of outpu...

03/24/2021 · Asymptotic Freeness of Layerwise Jacobians Caused by Invariance of Multilayer Perceptron: The Haar Orthogonal Case
Free Probability Theory (FPT) provides rich knowledge for handling mathe...

04/05/2019 · Eigenvalue distribution of nonlinear models of random matrices
This paper is concerned with the asymptotic empirical eigenvalue distrib...
