On the Approximation Properties of Neural Networks

04/04/2019
by Jonathan W. Siegel, et al.

We prove two new results concerning the approximation properties of neural networks. Our first result gives conditions under which the outputs of the neurons in a two-layer neural network are linearly independent functions. Our second result concerns the rate of approximation of a two-layer neural network as the number of neurons increases. We improve upon existing results in the literature by significantly relaxing the required assumptions on the activation function and by providing a better rate of approximation. We also provide a simplified proof that the class of functions represented by a two-layer neural network is dense in the space of continuous functions on any compact set, provided the activation function is not a polynomial.
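For orientation, here is a minimal sketch of the objects involved, in standard notation that is assumed here rather than taken from the paper. A two-layer neural network with n neurons and activation function \sigma computes functions of the form

\[ f_n(x) = \sum_{i=1}^{n} a_i \, \sigma(w_i \cdot x + b_i), \qquad w_i \in \mathbb{R}^d, \; a_i, b_i \in \mathbb{R}. \]

The density statement then says that the collection of all such f_n is dense in C(K) for every compact K \subset \mathbb{R}^d whenever \sigma is not a polynomial; the classical version of this result is due to Leshno et al. (1993). The classical benchmark for the rate of approximation, which results of this kind sharpen, is the Maurey-Barron bound

\[ \inf_{f_n} \| f - f_n \|_{L^2(K)} \le C \, n^{-1/2} \]

for target functions f in a suitable Barron class (Barron, 1993); the improved rates described in the abstract should be read against this baseline.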


