Nonlinear Approximation via Compositions

02/26/2019
by Zuowei Shen, et al.

We study the approximation efficiency of function compositions in nonlinear approximation, especially the case in which compositions are implemented using multi-layer feed-forward neural networks (FNNs) with ReLU activation functions. The central questions are: what advantages do function compositions offer in generating dictionaries, and what is the optimal way to implement function compositions via ReLU FNNs, especially on modern computing architectures? These questions are answered by studying, for the first time, the N-term approximation rate, i.e., the decrease in error versus the number of computational nodes (neurons) in the approximant, together with parallel efficiency. First, for an arbitrary function f on [0,1], regardless of its smoothness or even its continuity, if f can be approximated via nonlinear approximation using one-hidden-layer ReLU FNNs with an approximation rate O(N^-η), we quantitatively show that dictionaries built from function compositions via deep ReLU FNNs can improve the approximation rate to O(N^-2η). Second, for Hölder continuous functions of order α with a uniform Lipschitz constant ω on a d-dimensional cube, we show that N-term approximation via ReLU FNNs with two or three function compositions can achieve an approximation rate O(N^-2α/d). The rate can be improved to O(L^-2α/d) by composing L times, if N is fixed and sufficiently large; but further compositions cannot achieve the rate O(N^-αL/d). Finally, considering the computational efficiency per training iteration in parallel computing, FNNs with O(1) hidden layers are an optimal choice for approximating Hölder continuous functions when sufficient computing resources are available.
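
As a purely illustrative, minimal numerical sketch (not the construction used in the paper), the following Python snippet realizes the classical one-hidden-layer baseline: it builds a ReLU network with N hidden neurons that piecewise-linearly interpolates the Hölder-1/2 function f(x) = sqrt(x) on [0,1] and reports how the sup-norm error decays as N grows, roughly the O(N^-α) shallow rate that the paper's composition-based constructions improve upon. The function and parameter choices here (f, the breakpoints, the values of N) are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: one-hidden-layer ReLU network as a piecewise-linear
# interpolant of a Hölder-1/2 function, with error measured against N.
import numpy as np

def f(x):
    return np.sqrt(x)  # Hölder continuous of order 1/2 on [0,1]

def relu(x):
    return np.maximum(x, 0.0)

def one_layer_relu_interpolant(N):
    """One-hidden-layer ReLU net with N neurons that linearly
    interpolates f at the uniform breakpoints 0, 1/N, ..., 1."""
    knots = np.linspace(0.0, 1.0, N + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)  # slope on each subinterval
    # Piecewise-linear interpolant written as
    #   f(0) + sum_i c_i * relu(x - knots[i]),
    # where c_0 is the first slope and c_i (i >= 1) is the slope change at knot i.
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
    biases = knots[:-1]
    return lambda x: vals[0] + relu(np.subtract.outer(x, biases)) @ coeffs

x_test = np.linspace(0.0, 1.0, 100_001)
for N in (8, 16, 32, 64, 128):
    g = one_layer_relu_interpolant(N)
    err = np.max(np.abs(f(x_test) - g(x_test)))
    print(f"N = {N:4d}   sup-norm error = {err:.3e}")  # decays roughly like N^(-1/2)
```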
