On the Expressive Power of Neural Networks

05/31/2023
by   Jan Holstermann, et al.

In 1989, George Cybenko proved in a landmark paper that wide shallow neural networks can approximate arbitrary continuous functions on a compact set. This universal approximation theorem sparked a large body of follow-up research. Shen, Yang and Zhang determined optimal approximation rates for ReLU-networks in L^p-norms with p ∈ [1,∞). Kidger and Lyons proved a universal approximation theorem for deep narrow ReLU-networks. Telgarsky gave an example of a deep narrow ReLU-network that cannot be approximated by a wide shallow ReLU-network unless it has exponentially many neurons. However, several questions remain unresolved. Are there wide shallow ReLU-networks that cannot be approximated well by deep narrow ReLU-networks? Is the universal approximation theorem still true for other norms like the Sobolev norm W^1,1? Do these results hold for activation functions other than ReLU? We will answer all of those questions and more with a framework of two expressive powers. The first one is well-known and counts the maximal number of linear regions of a function computed by a ReLU-network. We will improve the best-known bounds for this expressive power. The second one is entirely new.
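To make the depth separation concrete, here is a minimal numerical sketch (not taken from the paper) of the construction behind Telgarsky's result: the tent map on [0, 1] is computed exactly by a width-2 ReLU layer, and composing it k times yields a sawtooth with 2^k linear regions, a number a shallow network can only match with exponentially many neurons. The helper names tent, sawtooth and count_linear_regions are illustrative, not from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def tent(x):
    # Width-2 ReLU representation of the tent map on [0, 1]:
    # g(x) = 2*relu(x) - 4*relu(x - 0.5), so g(0) = g(1) = 0 and g(0.5) = 1.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def sawtooth(x, depth):
    # Composing the tent map `depth` times yields Telgarsky's sawtooth,
    # a piecewise-linear function with 2**depth linear pieces on [0, 1].
    for _ in range(depth):
        x = tent(x)
    return x

def count_linear_regions(f, resolution=12):
    # Sample on a dyadic grid fine enough that every breakpoint of the
    # sawtooth lands exactly on a grid point, then count slope changes.
    xs = np.linspace(0.0, 1.0, 2**resolution + 1)
    slopes = np.diff(f(xs)) / np.diff(xs)
    return int(np.sum(np.abs(np.diff(slopes)) > 1e-6)) + 1

for k in range(1, 6):
    print(f"depth {k}: {count_linear_regions(lambda x: sawtooth(x, k))} linear regions")
```

Running the loop prints 2, 4, 8, 16 and 32 regions for depths 1 through 5, matching the exponential growth in depth that the first expressive power, the linear-region count, measures.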


