Survey of Expressivity in Deep Neural Networks

11/24/2016
by Jojo Yun, et al.

We survey results on neural network expressivity described in "On the Expressive Power of Deep Neural Networks". The paper motivates and develops three natural measures of expressiveness, all of which display an exponential dependence on the depth of the network. In fact, all of these measures are related to a fourth quantity, trajectory length. Trajectory length grows exponentially in the depth of the network and is responsible for the observed depth sensitivity. These results have consequences for networks both during and after training. They suggest that parameters earlier in a network have greater influence on its expressive power; in particular, the influence of a given layer on expressivity is determined by the remaining depth of the network after that layer. This is verified with experiments on MNIST and CIFAR-10. We also explore the effect of training on the input-output map, and find that it trades off between stability and expressivity.
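Trajectory length here is simply the arc length of the image of an input curve as it passes through successive layers. Below is a minimal sketch of how one might measure it, assuming a random fully connected tanh network; the width, depth, and weight variance are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def trajectory_length(points):
    """Arc length of a piecewise-linear trajectory (one point per row)."""
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

rng = np.random.default_rng(0)
width, depth, sigma_w = 100, 10, 2.0  # illustrative hyperparameters

# Input trajectory: a circle embedded in a random 2D subspace of the input space.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
basis = rng.standard_normal((2, width))
x = np.stack([np.cos(t), np.sin(t)], axis=1) @ basis

print(f"layer  0: length = {trajectory_length(x):.1f}")
for layer in range(1, depth + 1):
    # Random weights scaled by sigma_w / sqrt(width), as in mean-field-style analyses.
    W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
    x = np.tanh(x @ W)  # hidden activations for every point on the trajectory
    print(f"layer {layer:2d}: length = {trajectory_length(x):.1f}")
```

With the weight variance set large enough, the printed lengths should grow roughly geometrically with layer index, which is the exponential depth dependence the abstract refers to.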

Related research

06/16/2016  On the Expressive Power of Deep Neural Networks
We propose a new approach to the problem of neural network expressivity,...

11/22/2015  Gradual DropIn of Layers to Train Very Deep Neural Networks
We introduce the concept of dynamically growing a neural network during ...

08/16/2017  BitNet: Bit-Regularized Deep Neural Networks
We present a novel regularization scheme for training deep neural networ...

06/24/2020  Time for a Background Check! Uncovering the impact of Background Features on Deep Neural Networks
With increasing expressive power, deep neural networks have significantl...

05/29/2019  On the Expressive Power of Deep Polynomial Neural Networks
We study deep neural networks with polynomial activations, particularly ...

12/12/2017  Transportation analysis of denoising autoencoders: a novel method for analyzing deep neural networks
The feature map obtained from the denoising autoencoder (DAE) is investi...

06/17/2021  Layer Folding: Neural Network Depth Reduction using Activation Linearization
Despite the increasing prevalence of deep neural networks, their applica...
