Nonparametric regression using deep neural networks with ReLU activation function

08/22/2017 · by Johannes Schmidt-Hieber, et al.

Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to log n factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks whose number of potential parameters is much larger than the sample size. The analysis gives some insight into why multilayer feedforward neural networks perform well in practice. Interestingly, the depth (number of layers) of the neural network architecture plays an important role, and our theory suggests that scaling the network depth with the logarithm of the sample size is natural.
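
As an illustration of the architecture the abstract describes, the sketch below builds a sparsely connected feedforward ReLU network whose depth scales with the logarithm of the sample size. It is a minimal sketch, not the paper's estimator: the width, the depth constant, and the sparsity level are illustrative assumptions, and the weights are drawn at random rather than fitted to data.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def build_sparse_relu_net(n, d_in, width=32, c_depth=1.0, sparsity=0.9, seed=None):
    """Return a list of (weight, bias) pairs for a deep ReLU network.

    Depth is taken proportional to log(n), as the theory suggests; a
    fraction `sparsity` of the weights is zeroed out, so the number of
    active parameters stays small even though the full architecture
    (width, depth, seed, and sparsity level are illustrative) is large.
    """
    rng = np.random.default_rng(seed)
    depth = max(2, int(np.ceil(c_depth * np.log(n))))
    dims = [d_in] + [width] * (depth - 1) + [1]
    layers = []
    for fan_in, fan_out in zip(dims[:-1], dims[1:]):
        W = rng.normal(scale=np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
        mask = rng.random(W.shape) > sparsity  # keep ~(1 - sparsity) of weights
        layers.append((W * mask, np.zeros(fan_out)))
    return layers

def forward(layers, x):
    """Evaluate the network: ReLU on hidden layers, identity on the output."""
    h = x
    for W, b in layers[:-1]:
        h = relu(W @ h + b)
    W, b = layers[-1]
    return W @ h + b

# Example: with n = 1000 samples the depth is ceil(log(1000)) = 7.
net = build_sparse_relu_net(n=1000, d_in=5, seed=0)
print(len(net), forward(net, np.ones(5)))
```

With sparsity = 0.9, roughly 10% of the entries in each weight matrix are active, mimicking the regime in the abstract where the number of potential parameters far exceeds the number of active ones.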


Related research

10/12/2020 · Analysis of the rate of convergence of fully connected deep neural network regression estimates with smooth activation function
This article contributes to the current statistical theory of deep neura...

09/26/2018 · Deep Neural Networks for Estimation and Inference: Application to Causal Effects and Other Semiparametric Estimands
We study deep neural networks and their use in semiparametric inference....

06/07/2021 · Calibrating multi-dimensional complex ODE from noisy data via deep neural networks
Ordinary differential equations (ODEs) are widely used to model complex ...

03/20/2022 · How do noise tails impact on deep ReLU networks?
This paper investigates the stability of deep ReLU neural networks for n...

01/30/2019 · On Correlation of Features Extracted by Deep Neural Networks
Redundancy in deep neural network (DNN) models has always been one of th...

10/05/2022 · Factor Augmented Sparse Throughput Deep ReLU Neural Networks for High Dimensional Regression
This paper introduces a Factor Augmented Sparse Throughput (FAST) model ...

02/13/2020 · A Unifying Network Architecture for Semi-Structured Deep Distributional Learning
We propose a unifying network architecture for deep distributional learn...
