Neural Network Approximation

12/28/2020
by Ronald DeVore, et al.

Neural Networks (NNs) are the method of choice for building learning algorithms. Their popularity stems from their empirical success on several challenging learning problems. However, most scholars agree that a convincing theoretical explanation for this success is still lacking. This article surveys the known approximation properties of the outputs of NNs with the aim of uncovering properties that are not present in the more traditional methods of approximation used in numerical analysis, such as approximation by polynomials, splines, or wavelets. Comparisons with these traditional methods are made from the viewpoint of rate distortion, i.e., the error achievable with a given budget of parameters describing the approximant. Another major component in the analysis of numerical approximation is the computational time needed to construct the approximation, and this in turn is intimately connected with the stability of the approximation algorithm. So the stability of numerical approximation using NNs is a large part of the analysis put forward. The survey, for the most part, is concerned with NNs using the popular ReLU activation function. In this case, the outputs of the NNs are piecewise linear functions on rather complicated partitions of the domain of the target function f into cells that are convex polytopes. When the architecture of the NN is fixed and its parameters are allowed to vary, the set of output functions of the NN is a parameterized nonlinear manifold. It is shown that this manifold has certain space-filling properties, which lead to an increased ability to approximate (better rate distortion) but at the expense of numerical stability. This space filling creates a challenge for numerical methods when searching for best or near-best parameter choices in approximating a given function.
