A Johnson–Lindenstrauss Framework for Randomly Initialized CNNs

11/03/2021
by Ido Nachum, et al.

How does the geometric representation of a dataset change after the application of each randomly initialized layer of a neural network? The celebrated Johnson–Lindenstrauss lemma answers this question for linear fully-connected neural networks (FNNs), stating that the geometry is essentially preserved. For FNNs with the ReLU activation, the angle between two inputs contracts according to a known mapping. The question for non-linear convolutional neural networks (CNNs) becomes much more intricate. To answer this question, we introduce a geometric framework. For linear CNNs, we show that the Johnson–Lindenstrauss lemma continues to hold, namely, that the angle between two inputs is preserved. For CNNs with ReLU activation, on the other hand, the behavior is richer: The angle between the outputs contracts, where the level of contraction depends on the nature of the inputs. In particular, after one layer, the geometry of natural images is essentially preserved, whereas for Gaussian correlated inputs, CNNs exhibit the same contracting behavior as FNNs with ReLU activation.
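The "known mapping" for ReLU fully-connected layers referenced above is the arccosine-kernel angle map: for a wide random Gaussian layer, two unit inputs at angle θ are mapped to outputs whose angle θ' satisfies cos θ' = ((π − θ) cos θ + sin θ)/π. Below is a minimal numerical sketch (not taken from the paper; dimensions, seed, and He-style weight scaling are illustrative assumptions) that contrasts the two FNN behaviours described in the abstract: a random linear layer essentially preserves the angle, while a random ReLU layer contracts it towards the arccosine-kernel prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def angle(u, v):
    """Angle in radians between two vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def relu_angle_map(theta):
    """Arccosine-kernel angle map for a wide random ReLU layer:
    cos(theta_out) = ((pi - theta) * cos(theta) + sin(theta)) / pi."""
    return np.arccos(((np.pi - theta) * np.cos(theta) + np.sin(theta)) / np.pi)

d_in, d_out = 200, 20000   # wide output layer so empirical averages concentrate
theta_in = np.pi / 3       # 60 degrees between the two inputs

# Two unit-norm inputs with the prescribed angle between them.
x = np.zeros(d_in); x[0] = 1.0
y = np.zeros(d_in); y[0] = np.cos(theta_in); y[1] = np.sin(theta_in)

# Random Gaussian weights with He-style scaling (illustrative assumption).
W = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_out, d_in))

# Linear layer: the angle is essentially preserved (JL-type behaviour).
print("linear layer angle (deg):", np.degrees(angle(W @ x, W @ y)))

# ReLU layer: the angle contracts towards the arccosine-kernel prediction.
print("ReLU layer angle (deg)  :", np.degrees(angle(np.maximum(W @ x, 0),
                                                    np.maximum(W @ y, 0))))
print("predicted angle (deg)   :", np.degrees(relu_angle_map(theta_in)))
```

For θ = 60°, the predicted post-ReLU angle is roughly 52.5°, and the empirical linear-layer angle stays close to 60°; repeating the map layer after layer is what drives the angle collapse for deep ReLU FNNs, whereas the abstract's point is that one random ReLU convolutional layer barely moves the angles of natural images.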


