Rethink Depth Separation with Intra-layer Links

05/11/2023
by Feng-Lei Fan et al.

Depth separation theory is now widely accepted as an explanation for the power of depth. It consists of two parts: i) there exists a function representable by a deep network; and ii) such a function cannot be represented by a shallow network whose width is below a certain threshold. However, this theory is established for feedforward networks, and few studies, if any, have considered depth separation in the context of shortcut connections, even though networks with shortcuts are the most common architectures for solving real-world problems. Here, we find that adding intra-layer links can modify the depth separation theory. First, we show that adding intra-layer links can greatly improve a network's representation capability, via bound estimation, explicit construction, and functional space analysis. Then, we modify the depth separation theory by showing that a shallow network with intra-layer links does not need to be as wide as before to express certain hard functions constructed by a deep network, including the renowned "sawtooth" functions. Moreover, the width saving can be up to linear. Our results supplement the existing depth separation theory by probing its limits in the shortcut regime. The mechanism we identify also carries over to the expressivity analysis of popular shortcut networks such as ResNet and DenseNet; e.g., residual connections empower a network to represent a sawtooth function efficiently.
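The "sawtooth" functions mentioned above are the classic hard instances from Telgarsky's depth-separation argument. As a minimal sketch (not code from the paper; the names `tent` and `sawtooth_deep` are ours), the following shows how a depth-k, width-2 ReLU network builds a sawtooth with 2^(k-1) teeth, whereas a one-hidden-layer ReLU network with w units produces at most w+1 linear pieces and so needs width exponential in k to match it:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def tent(x):
    # Tent map on [0, 1], exactly representable by a width-2 ReLU layer:
    # tent(x) = 2*relu(x) - 4*relu(x - 0.5)
    # = 2x on [0, 0.5] and 2 - 2x on [0.5, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def sawtooth_deep(x, depth):
    # Composing the tent map `depth` times yields a sawtooth with
    # 2**(depth - 1) teeth (2**depth linear pieces) using only
    # O(depth) neurons in total -- the depth side of the separation.
    for _ in range(depth):
        x = tent(x)
    return x

xs = np.linspace(0.0, 1.0, 1001)
ys = sawtooth_deep(xs, depth=4)  # 8 teeth from a depth-4, width-2 network
```

The paper's claim, per the abstract, is that intra-layer links let a shallow network reproduce such functions at substantially smaller width than the feedforward lower bound requires, with the saving up to linear.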

research
04/03/2023

Depth Separation with Multilayer Mean-Field Networks

Depth separation – why a deeper network is more powerful than a shallowe...
research
01/30/2021

Size and Depth Separation in Approximating Natural Functions with Neural Networks

When studying the expressive power of neural networks, a main challenge ...
research
12/12/2015

The Power of Depth for Feedforward Neural Networks

We show that there is a simple (approximately radial) function on ℝ^d, ex...
research
02/06/2020

Duality of Width and Depth of Neural Networks

Here, we report that the depth and the width of a neural network are dua...
research
07/18/2023

How Many Neurons Does it Take to Approximate the Maximum?

We study the size of a neural network needed to approximate the maximum ...
research
10/26/2020

Provable Memorization via Deep Neural Networks using Sub-linear Parameters

It is known that Θ(N) parameters are sufficient for neural networks to m...
research
08/05/2022

Towards Antisymmetric Neural Ansatz Separation

We study separations between two fundamental models (or Ansätze) of anti...
