Deep ReLU Networks Preserve Expected Length

02/21/2021
by Boris Hanin et al.

Assessing the complexity of functions computed by a neural network helps us understand how the network will learn and generalize. One natural measure of complexity is how the network distorts length – if the network takes a unit-length curve as input, what is the length of the resulting curve of outputs? It has been widely believed that this length grows exponentially in network depth. We prove that in fact this is not the case: the expected length distortion does not grow with depth, and indeed shrinks slightly, for ReLU networks with standard random initialization. We also generalize this result by proving upper bounds both for higher moments of the length distortion and for the distortion of higher-dimensional volumes. These theoretical results are corroborated by our experiments, which indicate that length distortion remains modest even after training.
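The claim above can be checked numerically. The sketch below is a minimal illustration, not the paper's construction: it builds a random ReLU network with standard (He-style) initialization, feeds a unit-length straight segment through it, and measures the length of the output curve via a polyline approximation. Function names (`random_relu_net`, `length_distortion`) and all parameter choices (width, number of sample points) are illustrative assumptions.

```python
import numpy as np

def random_relu_net(depth, width, rng):
    # He-style initialization: entries ~ N(0, 2/fan_in), zero biases.
    return [rng.normal(0.0, np.sqrt(2.0 / width), size=(width, width))
            for _ in range(depth)]

def forward(weights, x):
    # Apply each layer followed by the ReLU nonlinearity.
    for W in weights:
        x = np.maximum(W @ x, 0.0)
    return x

def length_distortion(depth, width=100, n_pts=1000, seed=0):
    rng = np.random.default_rng(seed)
    weights = random_relu_net(depth, width, rng)
    # A unit-length straight segment along a random direction in input space.
    direction = rng.normal(size=width)
    direction /= np.linalg.norm(direction)
    base = rng.normal(size=width)
    ts = np.linspace(0.0, 1.0, n_pts)
    curve = np.stack([forward(weights, base + t * direction) for t in ts])
    # Polyline length of the image of the segment.
    return float(np.sum(np.linalg.norm(np.diff(curve, axis=0), axis=1)))

for d in (1, 5, 20):
    print(f"depth {d:2d}: output length ≈ {length_distortion(d):.3f}")
```

Under the result stated above, the printed lengths should stay of order one rather than growing exponentially in `d`; individual realizations will of course fluctuate around the expectation.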


Related research

- 01/17/2023 — Expected Gradients of Maxout Networks and Consequences to Parameter Initialization
  We study the gradients of a maxout network with respect to inputs and pa...
- 08/18/2023 — Capacity Bounds for Hyperbolic Neural Network Representations of Latent Tree Structures
  We study the representation capacity of deep hyperbolic neural networks ...
- 05/24/2022 — Approximation speed of quantized vs. unquantized ReLU neural networks and beyond
  We consider general approximation families encompassing ReLU neural netw...
- 06/03/2019 — Deep ReLU Networks Have Surprisingly Few Activation Patterns
  The success of deep networks has been attributed in part to their expres...
- 06/16/2016 — On the Expressive Power of Deep Neural Networks
  We propose a new approach to the problem of neural network expressivity,...
- 11/25/2019 — Trajectory growth lower bounds for random sparse deep ReLU networks
  This paper considers the growth in the length of one-dimensional traject...
- 04/08/2019 — On the Learnability of Deep Random Networks
  In this paper we study the learnability of deep random networks from bot...
