How do noise tails impact on deep ReLU networks?

03/20/2022
by   Jianqing Fan, et al.

This paper investigates the stability of deep ReLU neural networks for nonparametric regression under the assumption that the noise has only a finite p-th moment. We unveil how the optimal rate of convergence depends on p, the degree of smoothness, and the intrinsic dimension for a class of nonparametric regression functions with hierarchical composition structure when both the adaptive Huber loss and deep ReLU neural networks are used. This optimal rate of convergence cannot be attained by ordinary least squares, but can be achieved by the Huber loss with a parameter chosen to adapt to the sample size, smoothness, and moment parameters. A concentration inequality for the adaptive Huber ReLU neural network estimators with allowable optimization errors is also derived. To establish a matching lower bound within the class of neural network estimators using the Huber loss, we employ a strategy different from the traditional route: we construct a deep ReLU network estimator that has a better empirical loss than the true function, and the difference between these two functions furnishes a lower bound. This step is related to the Huberization bias, but more critically to the approximability of deep ReLU networks. As a byproduct, we also contribute new results on the approximation theory of deep ReLU neural networks.
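To make the key ingredient concrete, the Huber loss is quadratic for small residuals (like least squares) and linear for large ones, which caps the influence of heavy-tailed noise. Below is a minimal NumPy sketch of the loss; the truncation parameter `tau` is a free argument here, whereas the paper's adaptive choice (not reproduced here) depends on the sample size, smoothness, and the moment order p.

```python
import numpy as np

def huber_loss(residual, tau):
    """Huber loss: 0.5*r^2 for |r| <= tau, else tau*|r| - 0.5*tau^2.

    `tau` controls where the loss switches from quadratic to linear;
    larger residuals contribute only linearly, limiting outlier influence.
    """
    r = np.abs(residual)
    return np.where(r <= tau, 0.5 * r**2, tau * r - 0.5 * tau**2)

# A small residual is penalized quadratically, a large one only linearly:
small = huber_loss(0.5, tau=1.0)   # 0.5 * 0.5^2 = 0.125
large = huber_loss(10.0, tau=1.0)  # 1.0 * 10 - 0.5 = 9.5 (vs 50 for squared loss)
```

In the regression setting, one would minimize the average of this loss over network outputs instead of the squared error; the adaptivity of `tau` to n, the smoothness, and p is what drives the optimal rate in the paper.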


Related research

- 07/21/2021: Robust Nonparametric Regression with Deep Neural Networks. In this paper, we study the properties of robust nonparametric estimatio...
- 08/12/2021: Statistical Learning using Sparse Deep Neural Networks in Empirical Risk Minimization. We consider a sparse deep ReLU network (SDRN) estimator obtained from em...
- 02/27/2023: Nonparametric regression for repeated measurements with deep neural networks. Analysis of repeated measurements for a sample of subjects has been inte...
- 05/31/2023: Optimal Estimates for Pairwise Learning with Deep ReLU Networks. Pairwise learning refers to learning tasks where a loss takes a pair of ...
- 07/01/2023: Partial Linear Cox Model with Deep ReLU Networks for Interval-Censored Failure Time Data. The partial linear Cox model for interval-censoring is well-studied unde...
- 10/26/2018: Size-Noise Tradeoffs in Generative Networks. This paper investigates the ability of generative networks to convert th...
- 08/22/2017: Nonparametric regression using deep neural networks with ReLU activation function. Consider the multivariate nonparametric regression model. It is shown th...
