On the rate of convergence of fully connected very deep neural network regression estimates

by Michael Kohler et al.

Recent results in nonparametric regression show that deep learning, i.e., neural network estimates with many hidden layers, can circumvent the so-called curse of dimensionality, provided suitable restrictions on the structure of the regression function hold. One key feature of the neural networks used in these results is that they are not fully connected. In this paper we show that similar results also hold for fully connected multilayer feedforward neural networks with ReLU activation function, provided the number of neurons per hidden layer is fixed and the number of hidden layers tends to infinity as the sample size tends to infinity. The proof is based on new approximation results for fully connected deep neural networks.
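The architecture studied here is concrete enough to sketch in code: a fully connected feedforward ReLU network whose width (neurons per hidden layer) is held fixed while its depth grows with the sample size n. The sketch below, in plain NumPy, is only illustrative: the specific depth schedule `depth = int(log(n))` and the random weight initialization are assumptions for the demo, not the paper's construction.

```python
import numpy as np

def relu(x):
    # ReLU activation, applied componentwise
    return np.maximum(x, 0.0)

def build_network(d, width, depth, rng):
    """Random parameters for a fully connected ReLU network:
    input dimension d, `depth` hidden layers of fixed `width`,
    and a single scalar output."""
    dims = [d] + [width] * depth + [1]
    return [(rng.standard_normal((m, n)) / np.sqrt(n), np.zeros(m))
            for n, m in zip(dims[:-1], dims[1:])]

def forward(params, x):
    # All hidden layers use ReLU; the output layer is linear
    for W, b in params[:-1]:
        x = relu(W @ x + b)
    W, b = params[-1]
    return float((W @ x + b).item())

# Width stays fixed while depth grows with the sample size n,
# mirroring the regime analyzed in the paper; the growth rate
# used here is a hypothetical choice for illustration.
rng = np.random.default_rng(0)
n = 1000
width = 8                  # fixed number of neurons per hidden layer
depth = int(np.log(n))     # illustrative depth schedule, tends to infinity with n
params = build_network(d=4, width=width, depth=depth, rng=rng)
y = forward(params, np.ones(4))
```

Note that every hidden layer has the same width, so the total number of parameters grows only linearly in the depth; it is the depth, not the width, that scales with n in this regime.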
