Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons

07/06/2021
by   Zuowei Shen, et al.

This paper develops simple feed-forward neural networks that achieve the universal approximation property for all continuous functions with a fixed finite number of neurons. These networks are simple because they use a simple, computable continuous activation function σ built from a triangular-wave function and the softsign function. We prove that σ-activated networks with width 36d(2d+1) and depth 11 can approximate any continuous function on a d-dimensional hypercube within an arbitrarily small error. Hence, for supervised learning and its related regression problems, the hypothesis space generated by these networks with size at least 36d(2d+1)×11 is dense in the space of continuous functions. Furthermore, classification functions arising from image and signal classification lie in the hypothesis space generated by σ-activated networks with width 36d(2d+1) and depth 12, provided there exist pairwise disjoint, closed, bounded subsets of ℝ^d such that samples of the same class lie in the same subset.
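The abstract names the two ingredients of the activation σ (a triangular wave and a softsign) but not their exact combination. Below is a minimal, hedged sketch: `softsign` is the standard bounded function x/(1+|x|), `triangular_wave` is a generic periodic triangle, and `sigma` pieces them together in one plausible way (triangular wave on the nonnegative axis, softsign on the negative axis); the precise piecewise definition used in the paper may differ.

```python
def triangular_wave(x, period=2.0):
    # Periodic triangular wave with the given period and peak value 1.
    t = x % period
    half = period / 2.0
    return t / half if t < half else (period - t) / half

def softsign(x):
    # Standard softsign: x / (1 + |x|), bounded in (-1, 1).
    return x / (1.0 + abs(x))

def sigma(x):
    # Illustrative composite activation: triangular wave for x >= 0,
    # softsign for x < 0. The exact split is an assumption, not the
    # paper's verbatim definition.
    return triangular_wave(x) if x >= 0 else softsign(x)
```

The point of such a construction is that σ is continuous and cheap to evaluate, while the periodic piece gives the oscillatory behavior that lets a fixed-size network reach arbitrary accuracy.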


research
05/21/2019

Universal Approximation with Deep Narrow Networks

The classical Universal Approximation Theorem certifies that the univers...
research
05/26/2023

Universal approximation with complex-valued deep narrow neural networks

We study the universality of complex-valued neural networks with bounded...
research
06/06/2018

The effect of the choice of neural network depth and breadth on the size of its hypothesis space

We show that the number of unique function mappings in a neural network ...
research
12/21/2018

On the Relative Expressiveness of Bayesian and Neural Networks

A neural network computes a function. A central property of neural netwo...
research
11/10/2020

Expressiveness of Neural Networks Having Width Equal or Below the Input Dimension

The expressiveness of deep neural networks of bounded width has recently...
research
11/05/2018

Generalization Bounds for Neural Networks: Kernels, Symmetry, and Sample Compression

Though Deep Neural Networks (DNNs) are widely celebrated for their pract...
research
11/25/2022

LU decomposition and Toeplitz decomposition of a neural network

It is well-known that any matrix A has an LU decomposition. Less well-kn...
