On the Approximation and Complexity of Deep Neural Networks to Invariant Functions

10/27/2022
by Gao Zhang, et al.

Recent years have witnessed a surge of deep neural networks across various domains; however, their behavior is not yet well understood theoretically. A theoretical characterization of deep neural networks should describe their approximation ability and complexity, i.e., show which architectures and sizes suffice for the tasks at hand. This work takes a step in this direction by theoretically studying the approximation ability and complexity of deep neural networks for invariant functions. We first prove that invariant functions can be universally approximated by deep neural networks. We then show that a broad range of invariant functions can be asymptotically approximated by various types of neural network models, including complex-valued neural networks, convolutional neural networks, and Bayesian neural networks, using a polynomial number of parameters or optimization iterations. We also present a practical application connecting our theoretical conclusions to the parameter estimation and forecasting of high-resolution signals. Empirical results from simulation experiments demonstrate the effectiveness of our method.
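To make the notion of an invariant function concrete, the sketch below builds a permutation-invariant network in the DeepSets style (invariance enforced by sum pooling over set elements). This is a minimal illustration of the kind of architecture such results cover, not the specific construction used in the paper; all names and sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PermutationInvariantNet(nn.Module):
    """Computes f(x_1, ..., x_n) = rho(sum_i phi(x_i)).

    Because summation is commutative, the output is identical for any
    reordering of the n input elements, i.e., f is permutation-invariant.
    """
    def __init__(self, in_dim=1, hidden=64, out_dim=1):
        super().__init__()
        self.phi = nn.Sequential(              # per-element encoder
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        self.rho = nn.Sequential(              # decoder on the pooled feature
            nn.ReLU(), nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        # x: (batch, n, in_dim); sum pooling over the set dimension n
        return self.rho(self.phi(x).sum(dim=1))

# Quick invariance check on random data (hypothetical toy example).
net = PermutationInvariantNet()
x = torch.randn(2, 5, 1)
x_perm = x[:, torch.randperm(5), :]            # reorder the 5 set elements
assert torch.allclose(net(x), net(x_perm), atol=1e-5)
```

Under this assumed setting, the universal-approximation question the abstract refers to asks how large `phi` and `rho` must be (e.g., polynomially many parameters) to approximate a target invariant function to a given accuracy.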


