A Unified and Constructive Framework for the Universality of Neural Networks

12/30/2021
by Tan Bui-Thanh, et al.

One of the reasons many neural networks are capable of replicating complicated tasks or functions is their universal approximation property. Though the past few decades have seen tremendous advances in the theory of neural networks, a single constructive framework for neural network universality has remained unavailable. This paper is an effort to provide a unified and constructive framework for the universality of a large class of activations, including most existing ones. At the heart of the framework is the concept of a neural network approximate identity (nAI). The main result is: any nAI activation function is universal. It turns out that most existing activations are nAI, and thus universal in the space of continuous functions on compacta. The framework has the following main properties. First, it is constructive, using elementary means from functional analysis, probability theory, and numerical analysis. Second, it is the first unified attempt that is valid for most existing activations. Third, as a by-product, the framework provides the first universality proofs for several existing activation functions, including Mish, SiLU, ELU, and GELU. Fourth, it provides new proofs for most activation functions. Fifth, it discovers new activations with a guaranteed universality property. Sixth, for a given activation and error tolerance, the framework provides precisely the architecture of the corresponding one-hidden-layer neural network, with a predetermined number of neurons and the values of the weights and biases. Seventh, the framework allows us to abstractly present the first universal approximation with a favorable non-asymptotic rate.
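To make the flavor of such constructions concrete, below is a minimal NumPy sketch of the general approximate-identity idea the abstract alludes to: the difference of two shifted sigmoids forms a bump that, suitably rescaled, acts as an approximate identity, and discretizing the convolution f * phi_delta by a Riemann sum yields a one-hidden-layer network whose weights and biases are written down explicitly, with no training. The function name nai_network, the choice of sigmoid, and all constants here are illustrative assumptions for this sketch, not the paper's actual construction or its rates.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nai_network(f, a, b, n_centers=400, delta=0.02):
    """Explicit one-hidden-layer sigmoid network approximating f on [a, b].

    Built from the bump phi(u) = sigmoid(u + 1) - sigmoid(u - 1), which
    integrates to 2 over the real line, so phi(u / delta) / (2 * delta)
    is an approximate identity as delta -> 0.
    """
    centers, dx = np.linspace(a, b, n_centers, retstep=True)
    # Quadrature weights of the discretized convolution f * phi_delta.
    coeff = f(centers) * dx / (2.0 * delta)

    def net(x):
        x = np.atleast_1d(x)[:, None]
        z = (x - centers[None, :]) / delta
        # Each bump is a pair of sigmoid neurons, so 2 * n_centers in total.
        hidden = sigmoid(z + 1.0) - sigmoid(z - 1.0)
        return hidden @ coeff

    return net

# Usage: approximate f(x) = sin(2*pi*x) on [0, 1] without any training.
f = lambda x: np.sin(2 * np.pi * x)
net = nai_network(f, 0.0, 1.0)
xs = np.linspace(0.1, 0.9, 9)
print(np.max(np.abs(net(xs) - f(xs))))  # small sup-norm error away from the boundary

The two resolution parameters trade off against each other: the grid spacing dx must be small relative to the bump width delta for the Riemann sum to resolve the convolution, while delta itself controls how closely the smoothed function f * phi_delta tracks f.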


