Approximation Power of Deep Neural Networks: an explanatory mathematical survey

07/19/2022
by   Mohammad Motamed, et al.

The goal of this survey is to present an explanatory review of the approximation properties of deep neural networks. Specifically, we aim to understand how and why deep neural networks outperform other classical linear and nonlinear approximation methods. This survey consists of three chapters. In Chapter 1 we review the key ideas and concepts underlying deep networks and their compositional nonlinear structure. We formalize the neural network problem by formulating it as an optimization problem for regression and classification tasks. We briefly discuss the stochastic gradient descent algorithm and the back-propagation formulas used in solving the optimization problem, and address a few issues related to the performance of neural networks, including the choice of activation functions, cost functions, overfitting, and regularization. In Chapter 2 we shift our focus to the approximation theory of neural networks. We start with an introduction to the concept of density in polynomial approximation and, in particular, study the Stone-Weierstrass theorem for real-valued continuous functions. Then, within the framework of linear approximation, we review a few classical results on the density and convergence rate of feedforward networks, followed by more recent developments on the complexity of deep networks in approximating Sobolev functions. In Chapter 3, utilizing nonlinear approximation theory, we further elaborate on the power of depth and the approximation superiority of deep ReLU networks over other classical methods of nonlinear approximation.
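To make the Chapter 1 ingredients concrete, the following is a minimal illustrative sketch (not code from the survey): a one-hidden-layer ReLU network trained with mini-batch stochastic gradient descent and hand-derived back-propagation under a mean-squared-error cost, fitting a smooth 1D target. The width, learning rate, and target function are arbitrary choices for illustration.

```python
import numpy as np

# Hypothetical setup: fit f(x) = sin(pi * x) on [-1, 1] with a
# one-hidden-layer ReLU network trained by mini-batch SGD.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(256, 1))
y = np.sin(np.pi * x)

width = 32                                   # hidden-layer width (arbitrary)
W1 = rng.normal(0.0, 1.0, (1, width))
b1 = np.zeros(width)
W2 = rng.normal(0.0, 1.0 / np.sqrt(width), (width, 1))
b2 = np.zeros(1)
lr = 0.05                                    # learning rate (arbitrary)

for step in range(2000):
    idx = rng.choice(len(x), size=32, replace=False)  # sample a mini-batch
    xb, yb = x[idx], y[idx]
    # forward pass
    z1 = xb @ W1 + b1
    a1 = np.maximum(z1, 0.0)                 # ReLU activation
    pred = a1 @ W2 + b2
    err = pred - yb
    # back-propagation of the mean-squared-error cost
    grad_pred = 2.0 * err / len(xb)
    gW2 = a1.T @ grad_pred
    gb2 = grad_pred.sum(axis=0)
    grad_a1 = grad_pred @ W2.T
    grad_z1 = grad_a1 * (z1 > 0)             # ReLU derivative
    gW1 = xb.T @ grad_z1
    gb1 = grad_z1.sum(axis=0)
    # SGD parameter update
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.maximum(x @ W1 + b1, 0.0) @ W2 + b2 - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

Because the hidden layer uses ReLU, the trained network is a continuous piecewise-linear function of x, which is exactly the function class whose nonlinear approximation power Chapter 3 analyzes.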


